Closed Bug 781534 Opened 12 years ago Closed 12 years ago

Create basic automated test coverage for the mozGetUserMedia function for desktop

Categories

(Core :: WebRTC, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla20

People

(Reporter: jsmith, Assigned: jsmith)

References


Details

(Whiteboard: [getUserMedia], [blocking-gum+] [qa-])

Attachments

(1 file, 19 obsolete files)

15.57 KB, patch
jsmith: review+
We need some basic automated test coverage for the mozGetUserMedia DOM API. This bug tracks laying the foundation for writing unit tests, along with landing at least one unit test that adds basic automated coverage for mozGetUserMedia. This is especially important for this DOM API for a few reasons:

- Getting automated test coverage sooner rather than later will likely lead to catching many API-oriented bugs while building the test automation (examples like bug 780811 and bug 780790)
- Getting automated test coverage will help catch regressions on patches before they land
- etc.

This test automation should focus heavily on the API pieces, testing scenarios such as the following (a sketch of one such check appears after the list):

- Passing a null parameter for the success and error callbacks
- Providing incorrect parameters in the constraints dictionary for gUM
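
For illustration, a minimal sketch of one such check (checkNullCallbacks is a hypothetical helper, and whether the call throws synchronously is an assumption to be verified):

    function checkNullCallbacks() {
      try {
        // Passing null for both callbacks; the API is expected to reject this
        navigator.mozGetUserMedia({video: true, audio: true}, null, null);
        ok(false, "mozGetUserMedia with null callbacks should throw");
      } catch (e) {
        ok(true, "mozGetUserMedia with null callbacks threw: " + e);
      }
    }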
Flagging for blocking+ - it's a requirement to have unit tests for a feature on mozilla-central, so this automatically blocks shipping.
Whiteboard: [getUserMedia], [blocking-gum+]
I should note that for the test we'll need to use a fake backend, not real camera/mic data.  The fake backend already exists.
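
Roughly, a test would opt into the fake backend like this (a sketch; I'm assuming the fake backend is selected via a flag in the constraints, though the exact hook may differ):

    navigator.mozGetUserMedia({video: true, audio: true, fake: true},
      function (stream) {
        // Assertions run against the fake stream; no real devices needed
        ok(stream, "received a fake media stream");
      },
      function (err) {
        ok(false, "mozGetUserMedia failed: " + err);
      });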
Comment on attachment 663788 [details] [diff] [review]
mozGetUserMedia Errors Basic Test WIP

Here's a first take at writing an automated test specifically drilling at common error cases at the API level that doesn't need the fake backend hooked up. There's one odd snag I'm hitting in this test: the check in checkNullValues appears to be failing, even though I know an exception is being thrown, and exercising the test-specific code in http://ted.mielczarek.org/code/mozilla/mochitest-maker/ doesn't reproduce the problem. Am I forgetting something obvious here?

General feedback would be much appreciated.
Attachment #663788 - Flags: feedback?(anant)
Blocks: webrtc-tests
Jason, is this bug for your own purpose or can we hijack it as a tracking bug for all the smoketests we have to deliver by Oct 8th?
(In reply to Henrik Skupin (:whimboo) from comment #5)
> Jason, is this bug for your own purpose or can we hijack it as a tracking
> bug for all the smoketests we have to deliver by Oct 8th?

I was planning on using this originally for any tests needed to get basic smoke test coverage for the gUM portion of the WebRTC-based APIs. Feel free to hijack this bug for any additional implementation that's necessary. I'll probably want to get the attached patch reviewed, cleaned up, and submitted as part of this bug, though (some of it falls under the "basic" error checks you do with the API). When the patch lands, I'll flag this bug as [leave open] to make sure it doesn't go to resolved fixed, so that we can land the additional pieces needed for the gUM portion of the automation.

Does that work?
Oh btw - my test plan for the getUserMedia piece is here - https://etherpad.mozilla.org/camera-api-testing. That at least gives you the ideas I've thought about for the gUM piece. I'm currently working on the other pieces now.
(In reply to Jason Smith [:jsmith] from comment #6)
> I was planning on using this originally for any tests needed to get basic
> smoke test coverage for the gUM portion of the WebRTC-based APIs. Feel free
> to hijack this bug for any additional implementation that's necessary. I'll

Thanks Jason. Yes, let's use this bug for our initial effort in getting a good number of Mochitest smoketests in for the API. I will put our project page into the URL field of this bug.

> probably want to get the attached patch though reviewed, cleaned up, and
> submitted as part of this bug though (some of it falls under some "basic"
> checks you do with the API for errors). When the patch does land, I'll flag
> this bug as [leave open] to make sure it doesn't go to resolved fixed so
> that we can land the additional pieces needed for the gUM portion of the
> automation.

It's great to see that there is already a patch attached for the basic tests of the mozGetUserMedia API. It certainly will help us. But given that we haven't had time to look at WebRTC yet, would you mind if we first check that test and figure out how it plays together with all the other smoketests we have to write? IMO we should avoid code duplication, and it could be that parts of it are better suited to the other tests.

So my proposal would be that we start to collect tests in the following etherpad: https://etherpad.mozilla.org/automation-webrtc

Once the list is ready we should file bugs and make them block this tracking bug. How does that sound?
(In reply to Henrik Skupin (:whimboo) from comment #8)
> (In reply to Jason Smith [:jsmith] from comment #6)
> > I was planning on using this originally for any tests needed to get basic
> > smoke test coverage for the gUM portion of the WebRTC-based APIs. Feel free
> > to hijack this bug for any additional implementation that's necessary. I'll
> 
> Thanks Jason. Yes lets use this bug for our initial effort in getting in a
> good number of Mochitest smoketests for the API. I will put our project page
> into the URL field of this bug.
> 
> > probably want to get the attached patch though reviewed, cleaned up, and
> > submitted as part of this bug though (some of it falls under some "basic"
> > checks you do with the API for errors). When the patch does land, I'll flag
> > this bug as [leave open] to make sure it doesn't go to resolved fixed so
> > that we can land the additional pieces needed for the gUM portion of the
> > automation.
> 
> It's great to see that there is already a patch attached for the basic tests
> of the mozGetUserMedia API. It certainly will help us. But given that we
> haven't had time to look at WebRTC yet, would you mind when we first check
> that test and figure out how it plays together with all the other smoketests
> we have to write? IMO we should avoid code duplication and it could be that
> parts of it are better suited for the other tests.
> 
> So my proposal would be that we start to collect tests in the following
> etherpad: https://etherpad.mozilla.org/automation-webrtc
> 
> Once the list is ready we should file bugs and make them blocking this
> tracking bug. How does it sound?

Yeah, that works for me.
No longer blocks: webrtc-tests
Comment on attachment 663788 [details] [diff] [review]
mozGetUserMedia Errors Basic Test WIP

Moving this to bug 795367.
Attachment #663788 - Attachment is obsolete: true
Attachment #663788 - Flags: feedback?(anant)
Depends on: 795367
Keywords: meta
Attachment #671283 - Attachment is patch: true
Assignee: nobody → jsmith
Status: NEW → ASSIGNED
Attachment #671283 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v2 (obsolete) — Splinter Review
Keywords: meta
Summary: Create basic automated test coverage for the mozGetUserMedia function for desktop/android → Create basic automated test coverage for the mozGetUserMedia function for desktop
Attachment #671297 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v3 (obsolete) — Splinter Review
Attachment #671304 - Attachment is patch: true
Comment on attachment 671304 [details] [diff] [review]
Basic Automated gUM coverage v3

First pass of some very bare bones test automation for getUserMedia. Not ready for review, but I'm looking to get feedback on the following areas:

Why is the current implementation of MediaStreams, when placed in a media element, not 100% in alignment with the spec - http://dev.w3.org/2011/webrtc/editor/getusermedia.html? Specifically, I'm trying to understand why some of the bug XXXXXX comments (I'll file if it's an actual bug and clean up the comment here) and todos aren't working. Specifically:

- defaultPlaybackRate is undefined in video and audio scenarios
- playbackRate is undefined in video and audio scenarios
- buffered length for video and audio scenarios right now is 0, but spec says 1
- currentTime for the HTMLMediaElement isn't greater than zero after an audio element has played for a second 
- The played.length value is 0 for an audio element, but spec says 1
- The ready state for an audio element is HAVE_METADATA, but spec says HAVE_ENOUGH_DATA
- The MediaStream's currentTime isn't greater than zero for an audio element that has been played for a second

Other DOM questions:

- Why isn't LocalMediaStream defined? If I try to do an instanceof call on LocalMediaStream, nothing ends up being found.
- Why do I get a null value back if I try to read mozSrcObject on a HTMLMediaElement with a stream already set? Example:
*** var value = video.mozSrcObject;

Code questions:

- Did I set up the Makefile correctly to only run the mochitests for video and audio on desktop?
- Am I on the right track in general with what's being tested here for a bare bones test case with video in one test case, audio in the other?
- Should I try doing more refactoring to eliminate the slightly similar logic in the basicVideo vs. basicAudio tests? Or is the current level of breakout into helper functions sufficient? Any ideas?
- Any other general suggestions on what I should take care of in general before putting this out for review?
Attachment #671304 - Flags: feedback?(rjesup)
(In reply to Jason Smith [:jsmith] from comment #14)
> - Why isn't LocalMediaStream defined? If I try to do an instanceof call on
> LocalMediaStream, nothing ends up being found.

See bug 798037.

> - Did I setup the Makefile correctly to only run the mochitests for video
> and audio on desktop only?

Why would we have to? With faked streams we should still be able to run them, or not?

> - Am I on the right track in general with what's being tested here for a
> bare bones test case with video in one test case, audio in the other?

I would say we should talk about which code will land in heads.js and what's not necessary. Also I will add enhancements in my patch for bug 796890 which you could benefit from.

> - Any other general suggestions on what I should take care of in general
> before putting this out for review?

First you should fix your editor so it does not save the files in DOS mode. I can check details of the patch later once the majority of API questions have been answered by Randell.
(In reply to Henrik Skupin (:whimboo) from comment #15)
> 
> > - Did I setup the Makefile correctly to only run the mochitests for video
> > and audio on desktop only?
> 
> Why would we have to? With faked streams we should still be able to, or not?

These tests in particular test desktop-only functionality (which will likely be the case with many WebRTC tests). I don't know what will happen on Android with the fake media streams for functionality that was originally intended for desktop. Maybe Randell knows more on this: if we use fake media streams, do we need to disable these tests on Android?

> 
> > - Am I on the right track in general with what's being tested here for a
> > bare bones test case with video in one test case, audio in the other?
> 
> I would say we should talk about which code will land in heads.js and what's
> not necessary. Also I will add enhancements in my patch for bug 796890 which
> you could benefit from.

Yeah, we should. The original intention for the couple of functions added was that starting and stopping media will be common functionality across many webrtc tests. I put in the common checks too just to show we're slamming home that we're following the DOM spec in multiple use cases.

What I'm trying to avoid is duplication of common areas of functionality and tests. Even what I have in my basic video and basic audio tests has some code duplication in some ways, which I'm probably going to clean up after getting feedback from Randell.

> 
> > - Any other general suggestions on what I should take care of in general
> > before putting this out for review?
> 
> First you should fix your editor so it does not save the files in DOS mode.

Yeah, I fixed that in my editor. I'll keep a lookout for that...
(In reply to Jason Smith [:jsmith] from comment #14)
> Comment on attachment 671304 [details] [diff] [review]
> Basic Automated gUM coverage v3
> 
> First pass of some very bare bones test automation for getUserMedia. Not
> ready for review, but I'm looking to get feedback on the following areas:
> 
> Why is the current implemenation of MediaStreams when placed in a
> MediaStream not 100% in alignment with the spec -
> http://dev.w3.org/2011/webrtc/editor/getusermedia.html? Specifically I'm
> trying to understand why some of the bug XXXXXX comments (will file if it's
> actual bug and will cleanup the comment here) and todos aren't working.
> Specifially:
> 
> - defaultPlaybackRate is undefined in video and audio scenarios
> - playbackRate is undefined in video and audio scenarios
> - buffered length for video and audio scenarios right now is 0, but spec
> says 1
> - currentTime for the HTMLMediaElement isn't greater than zero after an
> audio element has played for a second 
> - The played.length value is 0 for an audio element, but spec says 1
> - The ready state for an audio element is HAVE_METADATA, but spec says
> HAVE_ENOUGH_DATA
> - The MediaStream's currentTime isn't greater than zero for an audio element
> that has been played for a second

Those are probably questions for roc, and generally apply to MediaStreams sourced from a realtime element.

> Other DOM questions:
> 
> - Why isn't LocalMediaStream defined? If I try to do an instanceof call on
> LocalMediaStream, nothing ends up being found.

See bug 803976

> - Why do I get a null value back if I try to read mozSrcObject on a
> HTMLMediaElement with a stream already set? Example:
> *** var value = video.mozSrcObject;

That sounds like a bug.  Please file and CC roc and myself.
Attachment #671304 - Flags: feedback?(rjesup) → feedback+
(In reply to Randell Jesup [:jesup] from comment #17)
> (In reply to Jason Smith [:jsmith] from comment #14)
> > Comment on attachment 671304 [details] [diff] [review]
> > Basic Automated gUM coverage v3
> > 
> > First pass of some very bare bones test automation for getUserMedia. Not
> > ready for review, but I'm looking to get feedback on the following areas:
> > 
> > Why is the current implemenation of MediaStreams when placed in a
> > MediaStream not 100% in alignment with the spec -
> > http://dev.w3.org/2011/webrtc/editor/getusermedia.html? Specifically I'm
> > trying to understand why some of the bug XXXXXX comments (will file if it's
> > actual bug and will cleanup the comment here) and todos aren't working.
> > Specifially:
> > 
> > - defaultPlaybackRate is undefined in video and audio scenarios
> > - playbackRate is undefined in video and audio scenarios
> > - buffered length for video and audio scenarios right now is 0, but spec
> > says 1
> > - currentTime for the HTMLMediaElement isn't greater than zero after an
> > audio element has played for a second 
> > - The played.length value is 0 for an audio element, but spec says 1
> > - The ready state for an audio element is HAVE_METADATA, but spec says
> > HAVE_ENOUGH_DATA
> > - The MediaStream's currentTime isn't greater than zero for an audio element
> > that has been played for a second
> 
> Those are probably questions for roc, and generally apply to MediaStreams
> sourced from a realtime element.

Roc - Can you provide input here?

> 
> > Other DOM questions:
> > 
> > - Why isn't LocalMediaStream defined? If I try to do an instanceof call on
> > LocalMediaStream, nothing ends up being found.
> 
> See bug 803976
> 
> > - Why do I get a null value back if I try to read mozSrcObject on a
> > HTMLMediaElement with a stream already set? Example:
> > *** var value = video.mozSrcObject;
> 
> That sounds like a bug.  Please file and CC roc and myself.

Looks like I may have screwed up in my code - I tested this with a test case on today's build, and I can get read access via mozSrcObject.
Flags: needinfo?(roc)
(In reply to Jason Smith [:jsmith] from comment #18)
> > > - defaultPlaybackRate is undefined in video and audio scenarios
> > > - playbackRate is undefined in video and audio scenarios

We just don't implement these yet for any kind of media resource.

> > > - buffered length for video and audio scenarios right now is 0, but spec
> > > says 1

I think the spec is wrong. We can't claim to have buffered 1s of media.

> > > - currentTime for the HTMLMediaElement isn't greater than zero after an
> > > audio element has played for a second 

Works for me.

> > > - The played.length value is 0 for an audio element, but spec says 1

Works for me. However, played[0] seems to be undefined... That seems like a bug we should fix.

> > > - The ready state for an audio element is HAVE_METADATA, but spec says
> > > HAVE_ENOUGH_DATA

Once we have some audio/video data in the stream, we'll change to HAVE_ENOUGH_DATA state. It sounds like you're checking the stream contents too early. Wait for canplaythrough to fire.
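
E.g., a minimal sketch ('element' stands in for whatever holds your HTMLMediaElement):

    element.addEventListener('canplaythrough', function onCanPlay() {
      // Detach so a later re-fire doesn't re-run the checks
      element.removeEventListener('canplaythrough', onCanPlay, false);
      info("readyState after canplaythrough: " + element.readyState);
      // run the HAVE_ENOUGH_DATA checks from here
    }, false);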

> > > - The MediaStream's currentTime isn't greater than zero for an audio element
> > > that has been played for a second

Works for me.
Flags: needinfo?(roc)
Attachment #671304 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v4 (obsolete) — Splinter Review
Updated my patch based on everyone's feedback, including doing some code cleanup. Closer, but I have two followup questions from comment 14 that are causing two sets of test failures right now:

- How many times is canplaythrough supposed to fire when play is called with a MediaStream? I'm seeing the event getting fired repeatedly in my updated patch, even though I was initially expecting the event to only fire once.
- With the updated patch, after I wait for canplaythrough to fire and then check the ready state on the HTMLMediaElement, the video-only MediaStream on the HTMLMediaElement gives me HAVE_CURRENT_DATA, whereas the audio-only and video & audio MediaStreams give me HAVE_ENOUGH_DATA. Is that expected? Why is that the case?

Roc - Any ideas?
Flags: needinfo?(roc)
(In reply to Jason Smith [:jsmith] from comment #21)
> Updated my patch based on everyone's feedback, including doing some code
> cleanup. Closer, but I have two followup questions from comment 14 that's
> causing two sets of test failures right now:
> 
> - How many times is canplaythrough supposed to fire when play is called with
> a MediaStream? I'm seeing the event getting fired repeatedly in my updated
> patch, even though I was initially expecting the event to only fire once.

Fix on the wording here:

- How many times is canplaythrough supposed to fire when play is called on an HTMLMediaElement with a MediaStream in the mozSrcObject?
canplaythrough can fire multiple times if the stream keeps stalling due to underruns. Maybe that's what's happening; it shouldn't be, but it is possible, especially if test machines are heavily loaded.

So you should be prepared for that. It also means that when canplaythrough fires you might only see HAVE_CURRENT_DATA, because we might have stalled between queuing and firing canplaythrough.
Flags: needinfo?(roc)
Attachment #675929 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v5 (obsolete) — Splinter Review
Made fixes based on Roc's comments to make this test more reliable. I've run this set of tests a few times and see consistent results (and they pass).

I think it's time for a review of the patch. Asking Randell for review of the gUM bits, Roc for the MediaStream bits.

Note - We'll probably want to do a try run on this patch, although I don't have commit access to do it.
Attachment #678080 - Flags: review?(roc)
Attachment #678080 - Flags: review?(rjesup)
Comment on attachment 678080 [details] [diff] [review]
Basic Automated gUM coverage v5

Once Randell and Roc have given you r+ on a patch for this bug, I would recommend you ask me for review of the test pieces. FYI, this was discussed in last week's automation development meeting and will ensure that we land proper tests and utility methods. Thanks.
Comment on attachment 678080 [details] [diff] [review]
Basic Automated gUM coverage v5

Review of attachment 678080 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/tests/mochitest/mediaStreamPlayback.js
@@ +55,5 @@
> +
> +        is(self.mediaElement.paused, false,
> +          "Media element should be playing");
> +        is(self.mediaElement.ended, false,
> +          "Media element should not have ended");

This is probably true for a getUserMedia stream but not necessarily true for other kinds of media streams.

@@ +149,5 @@
> +    this.startMedia(timeoutLength, function() {
> +      setTimeout(function() {
> +        self.stopMedia();
> +        onSuccess();
> +      }, playLength);

For general kinds of MediaStreams, we can't guarantee >= N milliseconds of stream progress during N milliseconds of real time --- streams can pause or jank.

Without looking ahead to know what you want to do with the onSuccess handler, I think you probably should poll the element's currentTime (via the timeupdate event on the media element).
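
As a sketch of that approach (reusing 'self' and 'onSuccess' from the patch context above; the zero threshold is illustrative):

    self.mediaElement.addEventListener('timeupdate', function onTimeUpdate() {
      // Gate on actual playback progress instead of wall-clock time
      if (self.mediaElement.currentTime > 0) {
        self.mediaElement.removeEventListener('timeupdate', onTimeUpdate, false);
        self.stopMedia();
        onSuccess();
      }
    }, false);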
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #26)
> Comment on attachment 678080 [details] [diff] [review]
> Basic Automated gUM coverage v5
> 
> Review of attachment 678080 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: dom/media/tests/mochitest/mediaStreamPlayback.js
> @@ +55,5 @@
> > +
> > +        is(self.mediaElement.paused, false,
> > +          "Media element should be playing");
> > +        is(self.mediaElement.ended, false,
> > +          "Media element should not have ended");
> 
> This is probably true for a getUserMedia stream but not necessarily true for
> other kinds of media streams.

Hmm... then why is this test passing, indicating the value is true? I thought true would mean playback has stopped, although in this case playback is running.

> 
> @@ +149,5 @@
> > +    this.startMedia(timeoutLength, function() {
> > +      setTimeout(function() {
> > +        self.stopMedia();
> > +        onSuccess();
> > +      }, playLength);
> 
> For general kinds of MediaStreams, we can't guarantee >= N milliseconds of
> stream progress during N milliseconds of real time --- streams can pause or
> jank.
> 
> Without looking ahead to know what you want to do with the onSuccess
> handler, I think you probably should poll the element's currentTime (via the
> timeupdate event on the media element).

Right, that's what I was going for later in the code in the stopMedia function, although I didn't end up going with the timeupdate event (which could be a good idea to consider for more event-handling coverage). My original intention in writing this set of tests wasn't to simulate direct time measurement against the stream's elapsed time, but more to run media for a set amount of time and verify that the HTMLMediaElement and LocalMediaStream involved carried valid values in different states (e.g. nothing happened, playing has begun, X seconds have passed, paused, stream removed).
(In reply to Jason Smith [:jsmith] from comment #27)
> Hmm...then why is this test passing indicating the value is true?

1) You're only testing getUserMedia streams, which won't get into the 'ended' state unless you call stop() on them, which you don't.
2) Even with other streams, it's highly unlikely they'll be in the ended state when canplaythrough fires. For that to happen you'd probably need a very short stream/media clip so that the stream ends almost immediately after canplaythrough is queued (and before the event handler runs).
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #28)
> (In reply to Jason Smith [:jsmith] from comment #27)
> > Hmm...then why is this test passing indicating the value is true?
> 
> 1) You're only testing getUserMedia streams, which won't get into the
> 'ended' state unless you call stop() on them, which you don't.
> 2) Even with other streams, it's highly unlikely they'll be in the ended
> state when canplaythrough fires. For that to happen you'd probably need a
> very short stream/media clip so that the stream ends almost immediately
> after canplaythrough is queued (and before the event handler runs).

Oh, I think I understand what you are getting at. For the general case of media streams the ended check may not be valid; it's only okay for gUM-based MediaStreams.

So I see three possible updates to the patch so far to consider:

1. Evaluate and improve the existing tests to make use of timeupdate as a way to improve test reliability and enable better analysis
2. In stopMedia, I might want to add a stop() call to the stream and verify that ended fires (see the sketch after this list)
3. Parameterize the ended evaluation so that the caller can determine what value ended should be at the point cited in the code
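
A rough sketch of (2), assuming stop() on the gUM stream causes the element to fire 'ended' (that behavior is exactly what the test would verify):

    mediaElement.addEventListener('ended', function onEnded() {
      mediaElement.removeEventListener('ended', onEnded, false);
      is(mediaElement.ended, true,
         "media element should be ended after the stream is stopped");
    }, false);
    stream.stop();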

Does that align with your understanding? Anything else specifically in the patch I need to re-work?
(In reply to Jason Smith [:jsmith] from comment #29)
> 1. Evaluate and improve the existing tests to make use of timeUpdate as a
> way to improve test reliability and better analysis
> 2. In stopMedia, I might want to add a stop() call to the stream and verify
> that ended is fired

Those could work.

> 3. Parameterizing the ended evaluation so that the caller can determine what
> value ended should be at the point cited in the code

I don't think you can do that.
Attachment #678080 - Flags: review?(roc)
Attachment #678080 - Flags: review?(rjesup)
Sounds good. I'll work on an updated patch then to take those two pieces into account.
Attachment #678080 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v6 (obsolete) — Splinter Review
Attachment #684563 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v7 (obsolete) — Splinter Review
Attachment #684565 - Attachment is patch: true
Comment on attachment 684565 [details] [diff] [review]
Basic Automated gUM coverage v7

Updated based on Roc's feedback. The big changes here from the last patch are:

I'm not doing a "play for x time" anymore - I'm following the more reliable path Roc suggested, going through a flow that looks like this (condensed into code after the list):

- Get the fake media stream
- Set it on the media element and start playing
- canplaythrough event fires - some validation based off the spec
- timeupdate fires - some validation based off the spec
- stop the media
- stop the stream
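
Condensed into code, the flow looks roughly like this ('testVideo' and the two handler names are placeholders for what's in the patch):

    navigator.mozGetUserMedia({video: true, fake: true}, function (stream) {
      var element = document.getElementById('testVideo');
      element.mozSrcObject = stream;
      element.addEventListener('canplaythrough', onCanPlayThrough, false);
      element.addEventListener('timeupdate', onTimeUpdate, false);
      element.play();
      // Validation happens in the handlers; teardown pauses the element
      // and then calls stream.stop()
    }, function (err) {
      ok(false, "mozGetUserMedia failed: " + err);
    });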

I ran this a bunch of times and see it running reliably. So I think it's ready for review again.

Sorry for taking a long time to get back to finishing this.
Attachment #684565 - Flags: review?(roc)
Attachment #684565 - Flags: review?(rjesup)
Comment on attachment 684565 [details] [diff] [review]
Basic Automated gUM coverage v7

Review of attachment 684565 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good

::: dom/media/tests/mochitest/mediaStreamPlayback.js
@@ +50,5 @@
> +      if(!canPlayThroughFired) {
> +        // Disable the canplaythrough event listener to prevent multiple calls
> +        canPlayThroughFired = true;
> +        self.mediaElement.removeEventListener('canplaythrough',
> +          canPlayThroughCallback, false);

Why do you need the boolean as well as removing the event listener? The latter should be enough to ensure we're not reentered.

@@ +55,5 @@
> +
> +        is(self.mediaElement.paused, false,
> +          "Media element should be playing");
> +        is(self.mediaElement.ended, false,
> +          "Media element should not have ended");

This is not necessarily true. For a very short clip (or a very slow test) we could get canplaythrough firing after the element has already ended.
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #35)
> Comment on attachment 684565 [details] [diff] [review]
> Basic Automated gUM coverage v7
> 
> Review of attachment 684565 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Looks good
> 
> ::: dom/media/tests/mochitest/mediaStreamPlayback.js
> @@ +50,5 @@
> > +      if(!canPlayThroughFired) {
> > +        // Disable the canplaythrough event listener to prevent multiple calls
> > +        canPlayThroughFired = true;
> > +        self.mediaElement.removeEventListener('canplaythrough',
> > +          canPlayThroughCallback, false);
> 
> Why do you need the boolean as well as removing the event listener? The
> latter should be enough to ensure we're not reentered.

Good point. I think I was originally trying to be sure in two different ways, but removing the event listener as you've stated should be sufficient. I'll go fix that.

> 
> @@ +55,5 @@
> > +
> > +        is(self.mediaElement.paused, false,
> > +          "Media element should be playing");
> > +        is(self.mediaElement.ended, false,
> > +          "Media element should not have ended");
> 
> This is not necessarily true. For a very short clip (or a very slow test) we
> could get canplaythrough firing after the element has already ended.

Okay. Probably the best option then is to remove the check on the ended attribute, since it's not always going to be false. I'll go fix that.
Attachment #684565 - Flags: review?(roc)
Attachment #684565 - Flags: review?(rjesup)
(In reply to Jason Smith [:jsmith] from comment #36)
> (In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #35)
> > Comment on attachment 684565 [details] [diff] [review]
> > Basic Automated gUM coverage v7
> > 
> > Review of attachment 684565 [details] [diff] [review]:
> > -----------------------------------------------------------------
> > 
> > Looks good
> > 
> > ::: dom/media/tests/mochitest/mediaStreamPlayback.js
> > @@ +50,5 @@
> > > +      if(!canPlayThroughFired) {
> > > +        // Disable the canplaythrough event listener to prevent multiple calls
> > > +        canPlayThroughFired = true;
> > > +        self.mediaElement.removeEventListener('canplaythrough',
> > > +          canPlayThroughCallback, false);
> > 
> > Why do you need the boolean as well as removing the event listener? The
> > latter should be enough to ensure we're not reentered.
> 
> Good point. I think I was originally trying to be sure in two different
> ways, but removing the event listener as you've stated should be sufficient.
> I'll go fix that.

Now I remember why I had the boolean - it was originally used primarily for the timeout below that checks that the event eventually fired. The extra check isn't needed in the if here, though, so I'll fix that.
Attachment #684565 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v8 (obsolete) — Splinter Review
Attachment #684578 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Attachment #684579 - Attachment is patch: true
Attachment #684579 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Attachment #684582 - Attachment is patch: true
Attachment #684582 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Attachment #684583 - Attachment is patch: true
Attachment #684583 - Attachment is obsolete: true
Attached file Basic Automated gUM coverage v9 (obsolete) —
Attachment #684584 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Attachment #684586 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Attachment #684587 - Attachment is obsolete: true
Attached patch Basic Automated gUM coverage v9 (obsolete) — Splinter Review
Comment on attachment 684588 [details] [diff] [review]
Basic Automated gUM coverage v9

Updated based on your feedback:

1. Removed the if checks against the boolean in the callback function
2. Removed the ended attribute test

The boolean was kept mainly for the timeout test that verifies that the event eventually fires, for timeupdate and canplaythrough respectively.
Attachment #684588 - Flags: review?(roc)
Keywords: checkin-needed
Requesting checkin on a new test suite without a link to a single Try run? What could possibly go wrong with that? Sorry, I'm not feeling that lucky today.
https://tbpl.mozilla.org/?tree=Try&rev=f7e037dc5360
(In reply to Ryan VanderMeulen from comment #47)
> Requesting checkin on a new test suite without a link to a single Try run?
> What could possibly go wrong with that? Sorry, I'm not feeling that lucky
> today.
> https://tbpl.mozilla.org/?tree=Try&rev=f7e037dc5360

Ah. Good point. Sorry about that.
Looking pretty failtastic on Try. Not sure why this would be causing OSX debug plugin tests to timeout, but it looks real. The other OSX opt failures look pretty self-explanatory.
Keywords: checkin-needed
(In reply to Ryan VanderMeulen from comment #49)
> Looking pretty failtastic on Try. Not sure why this would be causing OSX
> debug plugin tests to timeout, but it looks real. The other OSX opt failures
> look pretty self-explanatory.

Okay. So what I see then is:

1. Lots of leaks, so this is blocked from landing preffed on. Similar to the crashtest issue whimboo is hitting.
2. canplaythrough event failing to fire on OS X 10.7 and 10.8 opt builds

(1) Sadly, we've hit this issue before, so I'll have to file implementation followups to correct the leaks. (2) I'm not sure about; it almost looks like a bug in gUM, as it is consistently happening on OS X 10.7 and 10.8 opt builds.

Neither of those issues however look like the problem is with the actual test.

Maybe the best option here is to land this preffed off and file implementation bugs for followups.
Depends on: 814718
Depends on: 814721
Attachment #684588 - Attachment is obsolete: true
Comment on attachment 684821 [details] [diff] [review]
Basic Automated gUM coverage v10 - prefed off

In an effort to land something so that we can continue coding automation, I've modified the above patch to pref off the tests on mozilla-central by commenting them out and putting a TODO in there to re-enable them when the bug dependencies are fixed.

I'll file a separate bug to track preffing the automation on central.
Attachment #684821 - Flags: review?(roc)
I've filed bug 814807 to track preffing the automation on trunk.
Depends on: 814807
No longer depends on: 814718, 814721
Comment on attachment 684821 [details] [diff] [review]
Basic Automated gUM coverage v10 - prefed off

Review of attachment 684821 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/tests/mochitest/Makefile.in
@@ +16,5 @@
>    $(NULL)
>  
> +# Desktop only tests
> +# TODO: Re-enable this when bug 814718 and bug 814721 are fixed
> +#ifdef MOZ_PHOENIX

What is MOZ_PHOENIX?
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #54)
> Comment on attachment 684821 [details] [diff] [review]
> Basic Automated gUM coverage v10 - prefed off
> 
> Review of attachment 684821 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: dom/media/tests/mochitest/Makefile.in
> @@ +16,5 @@
> >    $(NULL)
> >  
> > +# Desktop only tests
> > +# TODO: Re-enable this when bug 814718 and bug 814721 are fixed
> > +#ifdef MOZ_PHOENIX
> 
> What is MOZ_PHOENIX?

To my understanding, it's a constant that's only set to 1 if the build is desktop Firefox. Digging through the source tree, this looked like the constant used to restrict a certain set of code to desktop Firefox only.
I don't think that's what you want to use here.

How about instead of using this #ifdef, have the tests check the UA string to see if we expect them to pass or not.
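
Something along these lines (a sketch; the substring match and the runTest name are placeholders):

    // Skip the desktop-targeted checks on Android builds
    if (navigator.userAgent.indexOf("Android") !== -1) {
      ok(true, "skipping gUM playback checks on this platform for now");
      SimpleTest.finish();
    } else {
      runTest();
    }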
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #56)
> I don't think that's what you want to use here.

Hmm. This was what I think I was suggested to use originally when I asked Clint about this. Clint - Do you have an opinion here?

> 
> How about instead of using this #ifdef, have the tests check the UA string
> to see if we expect them to pass or not.

I don't understand why that would be beneficial in this case. These tests were written to be desktop-targeted only. The validity of the test results (pass or fail) really only holds if it's desktop, because we know how the test should operate in a desktop environment. I don't know what would happen if this ran on Android, for example. In fact, I'm not even sure which tests would pass or fail at this point, given that gUM video/audio functionality currently targets desktop only.

Anyways - whether or not cleanup is needed here, I'd like to at least get what I have landed so that I unblock other mochitest automation. If using the user-agent strategy ends up being necessary after getting Clint's opinion, then I'd like to take care of that in a followup. Does that work?
Flags: needinfo?(ctalbert)
(In reply to Jason Smith [:jsmith] from comment #57)
> I don't understand why that would be beneficial in this case. These tests
> were written to be desktop targeted only. The validity of the results of the
> test (pass or fail) really only holds true if it's desktop, because we know
> how the test should operate in a desktop environment. I don't know what
> would happen if this ran on Android for example. In fact, I'm not even sure
> which tests would pass or fail at this point, given that gum video/audio
> functionality currently targets desktop only.

Android support is coming soon. There is nothing specifically desktop about these tests; they should run wherever getUserMedia is supported, which should be every device with a camera and microphone.

How about making the tests treat "getUserMedia not supported" as a pass, so they pass on non-desktop?
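
I.e., feature-detect rather than UA-sniff (a sketch; runTest is a placeholder):

    if (!navigator.mozGetUserMedia) {
      ok(true, "getUserMedia not supported here; treating as a pass");
      SimpleTest.finish();
    } else {
      runTest();
    }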
Attachment #684821 - Attachment is obsolete: true
Attachment #684821 - Flags: review?(roc)
Attachment #685031 - Attachment is patch: true
Attachment #685031 - Attachment is obsolete: true
Attachment #685032 - Attachment is patch: true
Comment on attachment 685032 [details] [diff] [review]
Basic Automated gUM coverage v11 - prefed off

Here's an updated patch that does the UA check as you've specified.
Attachment #685032 - Flags: review?(roc)
Flags: needinfo?(ctalbert)
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #58)
> (In reply to Jason Smith [:jsmith] from comment #57)
> > I don't understand why that would be beneficial in this case. These tests
> > were written to be desktop targeted only. The validity of the results of the
> > test (pass or fail) really only holds true if it's desktop, because we know
> > how the test should operate in a desktop environment. I don't know what
> > would happen if this ran on Android for example. In fact, I'm not even sure
> > which tests would pass or fail at this point, given that gum video/audio
> > functionality currently targets desktop only.
> 
> Android support is coming soon. There is nothing specifically desktop about
> these tests; they should run wherever getUserMedia is supported, which
> should be every device with a camera and microphone.

Indeed, my hope is that part of the road to getting getUserMedia working on Android will be changing code in the core (and potentially these tests) to make the tests pass. If there's some reason that's likely to be a bad strategy, I'd love to hear about it sooner rather than later...
Apparently try didn't like the use of # comments within the mochitest files section.

A "missing separator" error is being thrown. I'll get a patch up quickly.
Attachment #685032 - Attachment is obsolete: true
Attachment #685440 - Attachment is patch: true
Comment on attachment 685440 [details] [diff] [review]
Basic Automated gUM coverage v12 - prefed off

Fixes a very minor build error - I've verified it builds now with no new tests running. Carrying over the original r+ and requesting checkin.
Attachment #685440 - Flags: review+
Keywords: checkin-needed
https://hg.mozilla.org/mozilla-central/rev/6b73c8bdc701
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla20
We should get this backported to Aurora when it sticks green.
Whiteboard: [getUserMedia], [blocking-gum+] → [getUserMedia], [blocking-gum+] [qa-]
(In reply to Henrik Skupin (:whimboo) from comment #68)
> We should get this backported to Aurora when it sticks green.

Actually I have to revert that and ask why this has been landed on mozilla-central, given that we agreed to land mochitests only on the alder branch for now.

To avoid the mess with merge conflicts I'm going to back this out from mozilla-central for now. I talked with Randell on IRC earlier today and he agreed. Additionally, I have some follow-up code changes which I will explain in more detail with the upcoming patch. With those we should most likely be ready to get more mochitests written.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
(In reply to Henrik Skupin (:whimboo) from comment #69)
> (In reply to Henrik Skupin (:whimboo) from comment #68)
> > We should get this backported to Aurora when it sticks green.
> 
> Actually I have to revert that and ask why it has been landed on
> mozilla-central given that we have agreed on to get mochitests only landed
> on the alder branch for now.


Because gUM automation needs to get preffed on first for the upcoming Firefox release. Putting it on alder is okay for peerconnection, data channel, and similar tests, but not for gUM. We need it on central by default, given that we're targeting this at the release first.

> 
> To avoid the mess with merge conflicts I'm going to back this out from
> mozilla-central for now. I have talked with Randell over on IRC earlier
> today and he agreed on. Additional I have some follow-up code changes which
> I will explain in more detail with the upcoming patch. With that we should
> most likely be ready to get more mochitests written.

No. That's fine for pc and data channel tests, not gUM.
Status: REOPENED → RESOLVED
Closed: 12 years ago12 years ago
Resolution: --- → FIXED
Not sure why this has been landed on m-c given that all three tests are failing on Android. I can see that we have special-cased Android, but it doesn't seem to have an effect. Given that this landed a while back, I will file a new bug for it. But in the future please do not land tests which are broken. Thanks.

Try server results:
https://tbpl.mozilla.org/?tree=Try&rev=19e9c6b652d5
Blocks: webrtc-tests
Depends on: 826584
