Thursday, December 18, 2008

Unittest Consolidation and You

This morning was the culmination of months of work from RelEng. We have finally turned on unittests as well as a11y tests on the production Buildbot master, which was already doing the nightlies and l10n builds.

Mostly this change is to make our lives easier. Now we have one instance of Buildbot to pay attention to (disk space, waterfall page), with one pool of slaves that can do several types of builds (nightlies, unittest, tracemonkey, l10n). This will make replacing burned-out slaves incredibly simple. The consolidation also led to a great unittest factory (thanks to catlee) that will make deploying unittests on other branches much easier.
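
To give a rough idea of how a shared slave pool and a reusable unittest factory fit together, here is a minimal sketch in the style of a Buildbot master.cfg. The slave names, branch, and build/test commands are made-up placeholders for illustration only; the real RelEng configuration is considerably more involved.

```python
# Hypothetical excerpt from a Buildbot master.cfg -- names and commands
# below are illustrative placeholders, not the actual RelEng config.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

c = BuildmasterConfig = {}

# One shared pool of slaves per platform; any free slave can take a job.
linux_pool = ['linux-slave%02d' % i for i in range(1, 15)]

def make_unittest_factory(branch):
    """Checkout, build, then run each suite -- reusable across branches."""
    f = BuildFactory()
    f.addStep(ShellCommand(command=['hg', 'clone',
                                    'http://hg.mozilla.org/%s' % branch, 'src'],
                           description='clone'))
    f.addStep(ShellCommand(command=['make', '-f', 'client.mk', 'build'],
                           workdir='src', description='build'))
    for suite in ['check', 'mochitest-plain', 'reftest', 'crashtest']:
        f.addStep(ShellCommand(command=['make', suite],
                               workdir='src/objdir', description=suite))
    return f

# Several builders (nightly, unittest, l10n, ...) draw on the same pool,
# so one master and one set of slaves cover all the build types.
c['builders'] = [
    {'name': 'linux-mozilla-central-unittest',
     'slavenames': linux_pool,
     'builddir': 'mozilla-central-unittest',
     'factory': make_unittest_factory('mozilla-central')},
    # ... nightly, l10n, and tracemonkey builders listed the same way ...
]
```

Because every builder lists the whole pool in its slavenames, a burned-out slave can be dropped or swapped in one place without touching the factories.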

What does this mean for you? Well, if you are someone who looks at unittest output regularly, you are probably familiar with the old waterfall layout:

Each unittest build has its own waterfall column, and you can gauge how the unittests are doing by comparing them to each other (changeset, start time, colour).

The new way will be different:

There will be only one column on the waterfall, with no machine name, since the unittest builds will come from a pool of over 14 slaves for that platform. When you look at this column you will see the colour (and thus the result) of the most recent build, but several builds may start within moments of each other.

Looking up individual changesets and start times will take a bit more digging with this new layout.

Hopefully people will adjust to this, and we welcome your feedback about the new setup.

It's a very big step for us, and one that we hope leads to parallelizing the unittest steps for quicker results, and eventually to running unittests on builds independently. Right now every unittest run includes a build step; we would like to get to a place where the unittests can be run against a completed build, so that, for example, the whole suite could be run repeatedly on the same build.
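
As a rough illustration of that "test a finished build" idea, here is a hedged sketch of what a test-only factory could look like. The download URL, archive name, and run_suite.sh wrapper are made-up placeholders; nothing like this exists in production yet.

```python
# Sketch only: fetch an already-completed build and run the suites on it,
# so the same build can be tested repeatedly without rebuilding.
# The URL, archive name, and run_suite.sh harness are placeholders.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

def make_test_only_factory(build_url, suites):
    f = BuildFactory()
    # Download and unpack a finished build instead of compiling one.
    f.addStep(ShellCommand(command=['wget', '-O', 'build.tar.bz2', build_url],
                           description='fetch build'))
    f.addStep(ShellCommand(command=['tar', '-xjf', 'build.tar.bz2'],
                           description='unpack'))
    # Each suite runs against the same unpacked build; re-triggering this
    # builder reruns the tests without a new compile.
    for suite in suites:
        f.addStep(ShellCommand(command=['bash', 'run_suite.sh', suite],
                               description=suite))
    return f
```

Splitting things up this way would also make it natural to run the suites in parallel on different slaves, since each test-only job would only need the URL of the finished build.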

We'll wait a day or so to make sure the consolidation is running smoothly, and then the standalone production unittests will be turned off, the Mac slave machines will be moved into the new pool, and the Linux/Windows VMs will be deleted and recreated for the production pool. Hopefully there won't be too much of a backlog of pending builds while we wait for those new machines to be added.

Monday, December 8, 2008

Dubai and Dashboards

Two things on my mind these days:

Item One:

I will be attending the Education Without Borders conference as a delegate from Seneca College. Five students, from several fields, were chosen to represent Seneca.

My goal was to try to get a spot presenting on Mozilla's partnership with the Open Source curriculum we have at school, taught by Chris Tyler and Dave Humphrey. Sadly, I was not selected to present. About 1000 attendees from all over the world submitted proposals, so I trust there was plenty of competition and that the selected presentations will be mind-blowing. Many of the attendees will be grad students presenting academic papers. It will be great to be there regardless, and I will certainly be pitching Mozilla development and sharing the Seneca teaching model with anyone who will listen.

Item Two:

Q4 is rapidly approaching its end. While I am working on the consolidation of builds and unittests onto the same Buildbot master (when I am not studying for my exams), I am also looking forward to Q1, when I would like to spend some time gathering requirements for a meaningful dashboard of unittest information that would be useful to developers.

I'm not sure how one gets this kind of conversation started. It would be great to hear from people who care about unittest results and have opinions on what they use the information for. Would a questionnaire be useful? Should I start a forum discussion?

I'll continue to think on that and ask around for ideas on how to do this right. Has anyone been successful in creating a dashboard that is used frequently?