DidRocks' blog

apt-get install freedom


Tag - quality


Tuesday, November 4 2014

Just released Ubuntu Developer Tools Center 0.1.1

In a nutshell, this release adapts to the changes introduced by the new Android Studio (0.8.14) download in the beta channel.

The web page switched from an md5 checksum to a sha1 one. We perform that check because we grab the checksum over a secure https connection, while the actual download comes from dl.google.com, which isn't https-enabled.
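For illustration, a minimal sketch of that kind of check could look like the following (this is not the actual UDTC code: the page scraping is omitted, and the URL and checksum are placeholders):

import hashlib
import urllib.request

def sha1_matches(url, expected_sha1):
    """Download url and compare its sha1 with the checksum scraped
    from the (https) download page."""
    sha1 = hashlib.sha1()
    with urllib.request.urlopen(url) as download:
        for chunk in iter(lambda: download.read(8192), b""):
            sha1.update(chunk)
    return sha1.hexdigest() == expected_sha1

# placeholder values, for illustration only
print(sha1_matches("http://dl.google.com/android/studio.zip",
                   "0123456789abcdef0123456789abcdef01234567"))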

Thanks again to Tin Tvrtković, who did the work to add this support and make the necessary changes! :) I just had to do a little cleanup and then release it!

How did we notice it?

Multiple sources of input:

  • The automated tests, which run every couple of hours, were failing reliably in both trunk and the latest released version, on both architectures, only for Android Studio in the large tests (the medium tests, which use fake assets, were fine). This clearly meant that a third party broke us!
  • Some bug reports from the community
  • Some directly addressed emails!

It was a nice way to see that the Ubuntu Developer Tools Center is quite widely used, even if we learned it through an external breakage. Thanks to all those inputs, the fix went in shortly after!

On Android Studio not shipping the sdk anymore.

With this new release, Android Studio no longer ships with the sdk embedded; it needs to be downloaded separately. I think this is a nice move: decoupling the editor code from the sdk tools just makes sense.

However, in my opinion, the initial setup is a little unobvious for people who just want to start Android development (you have to download the sdk tools, then find where in Android Studio to set the path to the sdk, knowing that reopening the same window will only keep your previous path if you have an sdk version installed from those sdk tools).

There also seems to be no way to set that up from the command line or a configuration file without listing installed sdk versions, so it wouldn't be a robust way for the Ubuntu Developer Tools Center to do it for you.

So, for now, we are following the upstream way (I think they have a plan for better integration in the future) of letting the user download the sdk. I will be interested in getting feedback from new users on how that step went for them.

Availability

0.1.1 is now available through the regular channels:

  • in vivid
  • in the ubuntu-developer-tools-center ppa for Ubuntu 14.04 LTS and 14.10

Wednesday, October 29 2014


Eclipse and Android ADT support now in Ubuntu Developer Tools Center

Now that the excellent Ubuntu 14.10 is released, it's time, as part of our Ubuntu Loves Developers effort, to focus on the Ubuntu Developer Tools Center and cut a new release, bringing numerous exciting new features and framework support!

0.1 Release main features

Eclipse support

Eclipse is now part of the Ubuntu Developer Tools Center, thanks to the excellent work of Tin Tvrtković, who implemented the bits needed to bring it to our users! He worked on the test bed as well, to ensure we'll never break unnoticed. That way, we'll always deliver the latest and best Eclipse experience on Ubuntu.

To install it, just run:

$ udtc ide eclipse

and let the system set it up for you!

eclipse.png

Android Developer Tools support (with eclipse)

The first release introduced Android Studio (beta) support, which is the default in UDTC for the Android category. In addition, we now complete the picture by bringing Eclipse ADT support with this release.

eclipse-adt.png

It can be simply installed with:

$ udtc android eclipse-adt

Accept the SDK license, as in the Android Studio case, and you are done! Note that from now on, as suggested by a contributor, with both Android Studio and Eclipse ADT we add the android tools like adb, fastboot and ddms to the user PATH.
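For instance, once the installation is done and a new terminal session is opened, the tools should be directly reachable (adb is just one example):

$ adb version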

Ubuntu is now a truly first-class platform of choice for Android application developers!

Removing installed platform

As per a feature request on the ubuntu developer tools center issue tracker, it's now really easy to remove any installed platform. Just enter the same command as for installing and append --remove. For instance:

$ udtc android eclipse-adt --remove
Removing Eclipse ADT
Suppression done

Enabling local frameworks

As also requested on the issue tracker, users can now provide their own local frameworks, either by setting UDTC_FRAMEWORKS=/path/to/directory and dropping those frameworks there, or by putting them in ~/.udtc/frameworks/.

In glorious detail, the loading order for duplicated categories and frameworks is the following:

  1. UDTC_FRAMEWORKS content
  2. ~/.udtc/frameworks/ content
  3. System ones.

Note that duplicate filenames aren't encouraged, but they are supported. This will also help testing: the large tests can use a basic framework covering the install, reinstall, remove and other cases common to all BaseInstaller frameworks.
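As an illustration (the category and framework names below are placeholders), running udtc against a directory containing your own framework could look like:

$ UDTC_FRAMEWORKS=/path/to/my/frameworks udtc my-category my-framework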

Other enhancements from the community

A lot of typo fixes have been included in this release thanks to the excellent and regular work of Igor Vuk! A big up to him :) I want to highlight as well the great contributions we got in terms of translation support. Thanks to everyone who helped provide or update de, en_AU, en_CA, en_GB, es, eu, fr, hr, it, pl, ru, te, zh_CN and zh_HK support in udtc! We are eager to see which language will enter this list next. Remember that the guide on how to contribute to Ubuntu Developer Tools Center is available here.

Exciting! How can I get it?

The 0.1 release is now tagged and all tests are passing (this release brings 70 new tests). It's available directly on vivid.

For 14.04 LTS and 14.10, use the ubuntu-developer-tools-center ppa where it's already available.

Contributions

As you have seen above, we really listen to our community and implement & debate anything coming through. We are also starting to see great contributions, which we accept and merge in. We are just waiting for yours!

If you want to discuss some ideas or want to give a hand, please refer to this blog post, which explains how to contribute and help influence our Ubuntu Loves Developers story! You can also reach us on IRC in #ubuntu-desktop on freenode. We'll likely have an open hangout during the upcoming Ubuntu Online Summit as well. More news here in the coming days. :)

Wednesday, September 24 2014

Ubuntu Developer Tools Center: how do we run tests?

We are starting to see multiple awesome code contributions and suggestions on our Ubuntu Loves Developers effort, and we are eagerly waiting for yours! As a consequence, the spectrum of supported tools is going to expand quickly, and we need to ensure that all those targeted developers are well supported, on multiple releases, always getting the latest version of those environments, at any time.

A huge task that we can only support thanks to a large suite of tests! Here are some details on what we currently have in place to achieve and ensure this level of quality.

Different kinds of tests

pep8 test

The pep8 test is there to ensure code quality and style consistency. Test results are trivial to interpret.

This test runs on every commit to master, on each release during the package build, as well as every couple of hours on jenkins.
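As a rough sketch of how such a check can be wired into a test suite (assuming the pep8 Python module; the exact test shipped in UDTC may differ):

import unittest
import pep8

class CodeStyleTests(unittest.TestCase):
    def test_pep8(self):
        """Fail if any project file violates pep8."""
        style = pep8.StyleGuide()
        result = style.check_files(["udtc", "tests"])  # paths are illustrative
        self.assertEqual(result.total_errors, 0,
                         "Found {} pep8 errors".format(result.total_errors))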

small tests

Those are basically unit tests. They enable us to quickly see if we've broken anything with a change, or if the distribution itself broke us. In particular, we try to cover multiple corner cases that are easy to test this way.

They run on every commit to master, on each release during the package build, every time a dependency changes in Ubuntu thanks to autopkgtests, and every couple of hours on jenkins.

large tests

Large tests are real user-based testing. We execute udtc, type various scenarios into stdin (installing, reinstalling, removing, installing to a different path, aborting, ensuring the IDE can start…) and check that the resulting behavior is the one we expect.

Those tests let us know if something in the distribution broke us, if a website changed its layout or its download links, or if a newer version of a framework can't be launched on a particular Ubuntu version or configuration. That way we are aware, ideally even before the user most of the time, that something is broken and can act on it.

Those tests run every couple of hours on jenkins, using real virtual machines running an Ubuntu Desktop install.
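To give an idea of what driving udtc through stdin looks like, here is a hedged sketch using pexpect (the command, prompt texts and answers are made up for illustration and differ from the real test helpers):

import pexpect

# spawn udtc and answer its interactive prompts, as a user would
child = pexpect.spawn("udtc android android-studio", encoding="utf-8")
child.expect("installation path")          # prompt text is illustrative
child.sendline("")                         # accept the default path
child.expect("\\[I Accept.*\\]")           # license prompt, illustrative
child.sendline("a")
child.expect(pexpect.EOF, timeout=900)     # wait for the install to finish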

medium tests

Finally, the medium tests inherit from the large tests. Thus they run exactly the same suite of tests, but in a Docker containerized environment, with small mock assets, not relying on the network or any archive. This means that we ship and emulate a webserver delivering web pages to the container, pretending to be, for instance, https://developer.android.com. We then deliver fake requirement packages and mock tarballs to udtc, and run those.

Implementing a medium test is generally really easy; for instance:

class BasicCLIInContainer(ContainerTests, test_basics_cli.BasicCLI):
    """This will test the basic cli command class inside a container"""

is enough. It means "take all the BasicCLI large tests and run them inside a container". All the hard work, wrapping and sshing, is done for you. Simply implement your large tests and they will be able to run inside the container thanks to this inheritance!

We also added more complex use cases, like emulating a corrupted download with an md5 checksum mismatch. We generate this controlled environment and share it using trusted containers from the Docker Hub, which we build from the Ubuntu Developer Tools Center Dockerfile.

Those tests also run every couple of hours on jenkins.

By comparing medium and large tests, as the former run in a completely controlled environment, we can work out whether we or the distribution broke something, or whether a third-party change (a new website layout or new version requirements) impacted us (in that case the failure will only occur in the large tests and not in the medium ones, for instance).

Running all tests, continuously!

As some of the tests can show the impact of external parts, be it the distribution or even websites (as we parse some download links), we need to run all those tests regularly[1]. Note as well that we can get different results on various configurations. That's why we run all those tests every couple of hours, once using the system-installed tests, and then with the tip of master. They run on various virtual machines (like here, 14.04 LTS on i386 and amd64).

By comparing all this data, we know whether a new commit introduced regressions, or whether a third party broke something we need to fix or adapt to. Each test suite has a bunch of artifacts attached so we can inspect the installed dependencies and the exact version of UDTC tested, and ensure we don't corner ourselves with subtleties like "it works in trunk, but is broken once installed".

jenkins test results

You can see on that graph that trunk has more tests (and features… just wait a few days before we tell you more about them ;)) than the latest released version.

As metrics are key, we collect code coverage and line metrics on each configuration to ensure we are not regressing against our target of keeping coverage high. This also tracks various stats like the number of lines of code.

Conclusion

Thanks to all this, we'll probably know even before any of you if anything suddenly breaks, and we can put actions in place to quickly deliver a fix. Each new kind of breakage will be backed by a new suite of tests to ensure we never see the same regression again.

As you can see, we are pretty hardcore on tests and believe it's the only way to keep quality high and the system sustainable. With all that in place, as a developer you should just have to enjoy your productive environment and not have to bother with the operating system itself. We have you covered!

Ubuntu Loves Developers

As always, you can reach me on G+, #ubuntu-desktop (didrocks) on IRC (freenode), or sending any issue or even pull requests against the Ubuntu Developer Tools Center project!

Note

[1] if tests are not running regularly, you can consider them broken anyway

Wednesday, August 14 2013

Release early, release often, release every 4h!

It's been a long time since I last talked about our daily release process on this blog.

For those who are not aware of it, this is what enables us to continuously release most of the components we, as the Ubuntu community, are upstream for, into the baseline.

Sometimes, releases can feel like pushing huge rocks

History

Some quick stats since the system has been in place (in full production since nearly last December):

  • as of now, we release 245 components to the distro every day (meaning that every time a meaningful change lands in any of those 245 components, we will try to release it). Those are split into 22 stacks.
  • this created 309 uploads in raring
  • we are now at nearly 800 uploads in saucy (and the exact number changes hour after hour)

Those numbers don't even count feature branches (temporary forks of a set of branches) and SRUs.

Getting further

However, seeing the number of transitions we have every day, it seemed we needed to be even faster. I was challenged by Rick and Alexander to speed up the process, even if it was always possible to run some parts of it manually[1]. I'm happy to announce that we are now capable of releasing to the distro every 4 hours! The time from a branch being proposed to trunk to it reaching the distro is drastically reduced thanks to this. We still took great care to keep the same safety net, though, with tests running, ABI handling, and not pushing every commit to the distro.

This new process has been in beta since last Thursday, and now that everyone is briefed about it, it's time for it to enter official production!

Modification to the process

For this to succeed, some modifications to the whole process have been put in place:

  • now that we release every 4 hours, we need a production-like team always looking at the results. That was put in place with the ~ubuntu-unity team, and the schedule is publicly available here.
  • the consequence is that there is no more "stack ownership"; everyone on the team is now responsible for the whole set.
  • it also means that upstream now has a window of 4 hours before the "tick" (look at the cross on the schedule) to push things to the different trunks as a coherent piece, rather than once a day before 00 UTC. It's the natural trade-off between speed and safety for big transitions.
  • better communication and tracking were needed (especially as production is looked after by different people throughout the day). We also needed to ensure everything is available for upstreams to know where their code is, what bugs affect them and so on… So now, every time there is an issue, a bug is opened, upstream is pinged about it, and we write about those on that tab. We escalate after 3 days if nothing is fixed by then.
  • we will reduce "manual on-demand rebuilds" as much as possible, since the next run will fix things anyway.

Also, I didn't want this to become a burden for our team, so some technical changes were put in place:

  • we no longer rely on the jenkins lock system, as the workflow is way more complex than what jenkins can handle itself.
  • it is possible to publish the previous run until the new stack starts to build (and still possible even if the stack has started to wait).
  • if stack B is blocked on stack A because A is in manual publishing mode (packaging changes, for instance), forcing the publication of A will try to republish B if it was tested against that version of A (this scales up and cascades as needed). So fewer manual publications and less push-button work are needed \o/
  • an additional "force rebuild mode" can automatically retrigger the rebuild of some components if we know they build against a component which doesn't guarantee any ABI stability (but only rebuild if that component was renewed).
  • we also ensure that we can't deadlock in jenkins (with hundreds of jobs running every 4 hours).
  • the dependencies and ordering between stacks no longer rely on any scheduling; it's all computed and ordered properly now (thanks to Jean-Baptiste for his help on the last two items).

Final words

Since the beginning of the week, in addition to delivering work to the baseline much faster, we have also seen the side benefit that when everybody looks at the issues more regularly, there is less coordination work to do and each tick ends up being less work. Now I'm counting on the ~ubuntu-unity team to keep that up and keep looking at production throughout the day. Thanks guys! :)

I always keep in mind the motto "when something is hard, let's do it more often". I think we apply that one quite well. :)

Note

[1] which some upstreams used to ask us for

Tuesday, February 5 2013

Unity: release early, release often… release daily! (part 5 and conclusion)

This post is part of the Unity daily release process blog post suite.

After a week of letting people ask questions about the daily release process, I guess it's time to catch up and conclude this series with a FAQ and some thoughts for the future.

FAQ

The FAQ is divided into multiple sections depending on your role in the development of Ubuntu, with the hope that you will be able to find what you are looking for more quickly this way. If you have any question that isn't addressed below, please ping didrocks on IRC (freenode, #ubuntu-unity) and I'll complete this page.

Upstream developers

What's needed to be done when proposing a branch or reviewing?

As discussed in this part, when proposing a branch or reviewing a peer's work, multiple rules have been established. The developers need to:

  • Design needs to acknowledge the change if there is any visual change involved
  • Both the developer, the reviewer and the integration team ensure that Ubuntu processes are followed (UI Freeze/Feature Freeze for instance). If exceptions are required, they check, before approving for merging, that they are acknowledged by the different parties. The integration team can help smooth this along, but the requests should come from the developers.
  • Relevant bugs are attached to the merge proposal. This is useful for tracking what changed and for generating the changelog as we'll see in the next part
  • Review of new/modified tests (for existence)
  • They ensure that it builds and that the unit tests pass in an automated way
  • Another important one, especially when you refactor, is to ensure that integration tests are still passing
  • Review of the change itself by another peer contributor
  • If the change seems important enough to have a bug report linked, ensure that the merge request is linked to a bug report (and you will get all the praise in debian/changelog as well!)
  • If packaging changes are needed, ping the integration team so that they acknowledge them and ensure the packaging changes are part of the merge proposal.

When you approve a change, do not forget to check that a commit message is provided in launchpad and to set the global status to "approved".

When do changes need to land in a coherent piece?

Some changes touch various components (even across different stacks). To avoid losing a daily release because only half of a transition landed in some upstream projects (and so tests will fail because they catch those issues, right? ;)), please ensure that all transitions are merged by 00 UTC. Then you can start approving transitions again from 06 UTC. We may try to make this window shorter in the future, but let's first see how this works in practice. For any big transition, please coordinate with the ubuntu-unity team so that they can give a hand.

I have a change involving packaging modifications

As mentioned in a previous section, just ask the ubuntu-unity team to review or assist with those changes.

I want to build the package locally myself

If you have made some packaging changes, or added a C symbol, you may want to ensure that it builds fine. Quite simple: just go into your branch directory and run $ bzr bd (from the bzr-builddeb package). This will build your package on your machine using the same flags and checks as on the builders. No need to autoreconf if you are using autotools; everything is handled for you. You will also get warned if some installed files are missing. ;)

I need to break an API/ABI

The first question is: "are you sure?". Remember that API changes are a PITA for all third parties, in particular if your API is public. Backward compatibility is key. And yes, your dbus interface should be considered part of your API!

If you still want to do that, ensure that you bump your soname if the ABI is broken; multiple packaging changes are then needed. Consequently, make sure to ping your lovely ubuntu-unity team to request assistance. The transition should happen (if no backward compatibility is provided) within the previously mentioned timeframe so that everything lands coherently.

You also need to ensure that for all components build-depending on the API you changed, you bumped the associated build dependency version in debian/control.

To what version should I bump the build dependency version?

Let's say you added an API to package A. B depends on A and you want to start using this awesome new API. What version should be used in debian/control to ensure you're building against the new A?

This is quite easy. The build dependencies in debian/control should look like: A >= 6.0daily13.03.01 (possibly ending in -0ubuntuX). Strip the trailing -0ubuntuX and then append bzr<rev_where_you_added_the_api_in_trunk>. For instance, let's imagine you added the API in rev 42 in trunk; the build-dep will then change from:

A >= 6.0daily13.03.01

to:

A >= 6.0daily13.03.01bzr42

This is to ensure the upstream merger will be able to take it.

If no "daily" string is provided in the dependency you add, please check with the ubuntu-unity team.

I'm exposing a new C symbol in my library; it seems that some packaging changes are needed…

Debian packages have a debian/<packagename>.symbols file which lists all the symbols exposed by your library (we only do that for C libraries, as C++ mangles names differently per architecture). When you try to build a package containing a new symbol, you will see something like:

--- debian/libdee-1.0-4.symbols (libdee-1.0-4_1.0.14-0ubuntu2_amd64)
+++ dpkg-gensymbolsvPEBau 2013-02-05 09:43:28.529715528 +0100
@@ -4,6 +4,7 @@
 dee_analyzer_collate_cmp@Base 0.5.22-1ubuntu1
 dee_analyzer_collate_cmp_func@Base 0.5.22-1ubuntu1
 dee_analyzer_collate_key@Base 0.5.22-1ubuntu1
+ dee_analyzer_get_type@Base 1.0.14-0ubuntu2
 dee_analyzer_new@Base 0.5.22-1ubuntu1
 dee_analyzer_tokenize@Base 0.5.22-1ubuntu1
 dee_client_get_type@Base 1.0.0

The diff shows that debian/libdee-1.0-4.symbols doesn't list the newly exposed dee_analyzer_get_type symbol. You see a version number next to it, corresponding to the current package version. The issue is that you don't know what the package version will be when the next daily release happens (even if you can guess), so what should you put there?

The answer is easier than you might think. As explained in the corresponding section, the daily release bot will know what version is needed, so you just need to give a hint that the new symbol is there for the upstream merger to pass.

For that, just include the symbol with 0replaceme[1] as the version in the branch you will propose as a merge request. You will thus have in your symbols file:

dee_analyzer_collate_cmp@Base 0.5.22-1ubuntu1
dee_analyzer_collate_cmp_func@Base 0.5.22-1ubuntu1
dee_analyzer_collate_key@Base 0.5.22-1ubuntu1
dee_analyzer_get_type@Base 0replaceme
dee_analyzer_new@Base 0.5.22-1ubuntu1
dee_analyzer_tokenize@Base 0.5.22-1ubuntu1
dee_client_get_type@Base 1.0.0

The dee_analyzer_get_type@Base 0replaceme line will be replaced with the exact version we ship in the next daily release.

My name deserves to be in the changelog!

I'm sure you want the world to know what great modifications you have introduced. To get this praise (and blame ;)), we strongly advise you to link your change to a bug. This can be done in multiple ways: link bugs to the merge proposal before it's approved (you link a branch to the bug report in launchpad), use "bzr commit --fixes lp:XXXX", which automatically links it for you when proposing the merge for review, or put something like "this fixes bug…" in a commit message. This will prevent the changelog from ending up with only empty content.

I invite you to read "Why not include all commits in debian/changelog?" in this section to understand why we only include bug report titles in the changelog, not everything.

Ubuntu Maintainers

I want to upload a change to a package under daily release process

Multiple possibilities there.

The change itself can wait next daily release

If it's an upstream change, we strongly advise you to follow the same procedure and rules as the upstream developers, meaning proposing a merge request and so on… bzr lp-propose and bzr branch lp:<source_package_name> are regularly and automatically adjusted as part of the daily release process to point to the right branches, so that you can find them easily. Vcs-Bzr in debian/control should point to the right location as well.

Then, just refer to the upstream merging guidelines above. Once approved, this change will be in the next daily release (if tests pass).

An urgent change is needed

If the change needs to be done right away, but can wait for a full testing round (~2 hours), you can:

  • Propose your merge request
  • Ping someone from the ubuntu-unity team, who will ensure the branch is reviewed with priority and rerun a daily release manually including your change. That way, your change will get the same quality reviews and test checks as anything else entering the distro.
It's really urgent

You can then upload your change right away; the next daily will however be blocked for that component (and only for that component) until your change reaches upstream. So please be a good citizen and avoid more churn by proposing your change back to the upstream repo (including the changelog), preferably pinging the ubuntu-unity team.

I'm making a change, should I provide anything in debian/changelog?

Not really. As mentioned earlier, if you link to a bug or mention a bug number in a commit message in your branch before proposing it for review, this will be taken into account and debian/changelog will be populated automatically with your name as part of the next daily release. You can still provide it manually, and the daily release will ensure there is no duplication if it was already mentioned.

If you provide it manually, just run dch -i and make sure the distribution is set to UNRELEASED for a version not yet uploaded to Ubuntu (no direct upload).

ubuntu-unity team

Note that in all of the following commands, if -r <release> is not provided, "head" is assumed. Also, ensure that your credentials work and that you are connected to the VPN (the QA team is responsible for providing those jenkins credentials).

Where is the jenkins start page?

There it is.

Forcing a stack publication

After reviewing and ensuring that we can release a stack (see the section about manual publication causes, like packaging changes to review or an upstream stack failing/in manual mode), run:

$ cu2d-run -P <stack_name> -r <release>

example for indicators, head release:

$ cu2d-run -P indicators

Remember that the master "head" job is not rerun, so it will stay in its current state (unstable/yellow). The publish job should however go from unstable/yellow to green if everything went well.

Rerun a full stack daily release, rebuilding everything

$ cu2d-run -R <stack_name> -r <release>

Rerun a partial stack daily release, rebuilding only some components

$ cu2d-run -R <stack_name> -r <release> <components>

/!\ this will keep and take into account the state (a failure-to-build state, for instance) and the binary packages of the other components which were part of this daily release but not rebuilt.

For instance, keeping dee and libunity but rebuilding compiz and unity from the unity stack, head release:

$ cu2d-run -R unity compiz unity

The integration tests will still use everything we have in the latest stack (so including dee and libunity), with the new trunk content for compiz and unity at the time we relaunched this command.

Rerun the integration tests, but taking all of the ubuntu-unity ppa's content (e.g. for a transition needing publication of both the indicator and unity stacks):

$ cu2d-run -R <stack_name> -r <release> --check-with-whole-ppa

This command doesn't rebuild anything; it just runs the "check" step, but with the whole ppa content (basically doing a dist-upgrade instead of selecting only the desired components from this stack). This is handy when, for instance, a transition involves the indicator and unity stacks. Say the indicator integration tests failed (we don't have the latest unity stack built yet), while Unity built and its tests pass. To be able to release the indicator stack as well as the unity stack, we need the indicator stack to publish; for that, we relaunch only the indicator tests, but with the whole ppa content, meaning the latest Unity.

In that case, the publication of the indicators will be manual (you will need to force the publication), and you should ensure that you publish the unity stack at the same time, so that everything is copied coherently to the -proposed pocket. More information on stack dependencies in this section.

Adding/removing components to a stack

The stack files live in jenkins/etc/. The yaml file structure should be straightforward. Note that the branch_location part of the component: branch_location tuple is optional (it will be set to lp:component if not provided). So add/change what you need in those files.

Be aware that all running jobs are stopped for this stack.

$ cu2d-update-stack -U <path_to_stack_file>

This will also reset/reconfigure all the branches; add -S if you don't have access to them (but ensure someone still configures them at least once).

Different levels of failure acceptance

We do accept some integration tests failing (no unit tests should fail though, as they run as part of the package build). The different levels for those triggers are defined in a file like jenkins/etc/autopilot.rc. They define different levels of failures, regressions, skipped and removed tests per stack. The real file depends on the stack name and is found in $jenkins/cu2d/<stack_name>.autopilotrc, as described in the -check template.

Monitoring upstream uploads and merges

Remember to regularly look at the -changes mailing list to spot whether a manual upload has been done, and to port the changes back to the upstream branch (if not proposed by the Ubuntu developer making the upload). A component which had a manual upload is ignored for daily release until this backport is done, meaning that no new commits will be published until then (you will see that the prepare job for that component turns unstable).

Also, as you watch all upstream merges to check they follow our acceptance criteria, ensure that the "latestsnapshot" branches automatically generated by the daily release process have been merged successfully by the upstream merger.

Bootstrapping a new component for daily release

A lot of information is available on this wiki page.

Conclusion

Phew! This was a long series of blog posts. I hope you appreciated having a deeper look at what daily release means and how it works, and that the remaining questions have been answered by the above FAQ. I've just ported all this information to https://wiki.ubuntu.com/DailyRelease.

Of course, this is not the end of the road. I have my personal Santa Claus wishlist, like UTAH being more stable (this is being worked on), the daily image provisioning successfully more often than now, refactoring work taking integration tests into account, fewer Xorg crashes when completing those tests… These currently add a little bit of overhead as, of course, any of them will reject the whole test suite and thus the whole stack. We prefer to always be on the safe side and not accept something in an unknown state!

Also, on the upstream stacks and the daily release process itself, there is room for improvement, like having fewer upstream integration tests failing (despite awesome progress already there); I hope to soon turn the accepted failure rate down to at most 3% for unity instead of the current 7%. Dedicated integration tests for indicators would be awesome as well (we are stealing the unity ones for now and only running those involving indicators, but of course not everything is covered). Having some oif, webapps and webcreds integration tests is one of my main priorities, and we are working with the various upstream developers to get this done ASAP so that we have a better safety net than today. On the daily release process itself, maybe freezing all trunks in one go would be a nice improvement; more tests for the system itself are also in progress, and my personal goal is to reach 100% test coverage for the scripts before making any additional changes. Finally, some thoughts for later, but a dashboard would give a higher-level view than using jenkins directly for reporting, to get a quicker picture of all stack statuses.

Of course, I'm sure we'll hit some bumpy roads with this daily release process throughout the cycle, but I'm confident we'll overcome them, and I am really enthusiastic about our first releases and how smoothly they have gone so far (we didn't break the distro… yet :p[2]). This is a huge change for everyone, and I want to again give a big thank you to everyone involved in this effort, from the distro people to upstream and the various contributors on the QA team. Changing to such a model isn't easy; it's not only yet another process, it's really a change of mindset in the way we deliver new features and components to Ubuntu. This might be frightening, but I guess it's the only way we have to ensure systematic delivery and continuous, increasing quality, while also getting a really short feedback loop so we can react promptly to the changes we are introducing to our audience. I'm confident we are already collecting the fruits of this effort and will expand on it more and more in the future.

For all those reasons, thanks again to all parties involved and let's have our daily shot of Unity!

I couldn't finish without a special thanks to Alan Pope for being a patient first reader and tediously fixing the various typos I introduced into the first draft of these blog posts. ;)

Notes

[1] "0replacemepleasepleaseplease" or anything else appended to 0replaceme is accepted as well if you feel adventurous ;)

[2] fingers crossed, touching wood and all voodoo magic required

Friday, January 25 2013

Unity: release early, release often… release daily! (part 4)

This post is part of the Unity daily release process blog post suite.

You have hopefully recovered from the migraine brought on by reading yesterday's blog post on the insides of daily release and are hungry for more. What? That's not it? Well, mostly, but we purposely left out one of the biggest points and consequences of having stacks: they depend on each other!

Illustrating the problem

Let's say that Mr T. has an awesome patch for the indicator stack, but it also needs some changes in Unity and is not backward compatible. OK, the indicator integration tests should fail, right? At least as long as we don't have the latest version of Unity rebuilt in the ppa… So the indicator stack is not published (because its integration tests fail), but Unity's integration tests will pass, as they run against the latest indicator stack built in the same ppa. Consequently, we may end up publishing only half of what's needed (the Unity stack, without the indicator requirement).

Another possible case: let's imagine that yesterday's daily build failed, so we have a new unity staged in the ppa (containing the indicator-compatible code). The next day, the indicator integration tests will pass (as they install Unity from the ppa), but we will only publish the indicator components, not the Unity ones. Bad bad bad :-)

We clearly see we need to declare dependencies between stacks and handle them smartly.

What's the stack dependency as of today?

Well, a diagram is worth 1 000 words:

Stack dependencies

We can clearly see here that stacks depend on each other. This is specified in the stack configuration files, each stack declaring which others it depends on.

This has multiple consequences.

Waiting period

First, a stack will start building only once every stack it depends on has finished building. This ensures we are in a predictable state, and not in an in-between where half of the components build against the old upstream stack and the other half against the new one because it finished building in the same ppa.

The previous diagram then needs to be modified for the unity stack with this new optional "wait" step. This step is only deployed by cu2d-update-stack if we declared some stack dependencies.

Daily Release, jenkins jobs with wait optional step

The wait script achieving that task is really simple. However, waiting is not enough: as we've seen above, we need to take the test results into consideration, which we'll do later on during the publisher step.

Running integration tests in isolation

OK, that answers part of the consistency issue. However, imagine now the second case we mentioned, where yesterday's daily build failed: if we take the whole ppa content, we'll end up testing the new indicator integration with a version of Unity which is neither the distro one nor the one we'll publish later on. We need to ensure we only test what makes sense.

The integration jobs therefore don't take the whole ppa content, but only the list of components generated by the current stack. It means that we explicitly install only components from this stack and nothing else. Then we log the result of the install command and filter against it. If, for instance, a soname change (like libbamf going from 3.0 to 3.1) ending up in a package rename would pull in a new Unity in order to run, we bail out.

Here is a real example where we can find the list of packages we expect to install, and here is what happened when we asked to install only those packages. The filtering, which happens as the first test, will then make the job exit in failure, meaning we can't publish that stack.
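To illustrate the idea, here is a simplified sketch (not the real script) assuming we parse the output of a simulated apt-get install run:

def unexpected_packages(apt_simulation_output, expected_packages):
    """Return packages that a simulated 'apt-get -s install' would pull in
    but that are not part of the current stack's expected component list."""
    pulled_in = set()
    for line in apt_simulation_output.splitlines():
        if line.startswith("Inst "):
            pulled_in.add(line.split()[1])
    return pulled_in - set(expected_packages)

# illustrative usage: fail the job if anything unexpected would be installed
# if unexpected_packages(output, stack_packages):
#     raise SystemExit(1)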

This way, we know the backward compatibility status. If backward compatibility is achieved, the tests pass right away, and so the stack is published regardless of the other stacks, as we tested against the distro's version of Unity. This ensures we are not regressing anything. In the other case, when backward compatibility seems to fail, as in the previous example, we would have to publish both stacks at the same time, which is something we can do.

Ensuring that the leaf stack is not published

In a case like the previous one, the Unity integration tests will run against the latest indicator stack and pass (having the latest of everything), but wait! We shouldn't publish without the indicator components (and those failed: we are not even sure the fixes in Unity will make them pass, and we haven't tested against the whole ppa).

Here comes the second "manual publishing" case from the publisher script I teased about yesterday. If a stack we depend on didn't finish successfully, the current stack is put into manual publication. The xml artefact giving the reason is quite clear:

indicators-head failed to build. Possible causes are:
 * the stack really didn't build/can be prepared at all.
 * the stack have integration tests not working with this previous stack.
What's need to be done:
 * The integration tests for indicators-head may be rerolled with current dependent stack. If they works, both stacks should be published at the same time.
 * If we only want to publish this stack, ensure as the integration tests were maybe run from a build against indicators-head, that we can publish the current stack only safely.

From there, we can either be sure that the upstream stack has no relationship with the current downstream one, in which case people with credentials can force a manual publication with cu2d-run as indicated in the previous blog post. Or, if there is a transition impacting both stacks, the cleverest way is to relaunch the upstream stack tests with the whole ppa to validate that hypothesis.

For that, cu2d-run has another mode, called "check with whole ppa". This option reruns a daily release, but won't prepare/rebuild any component. It will still take their build status into account for the global status, though. The only difference is that this time the integration tests will take the whole ppa into account (so no more filtering on whether what we install is part of the current stack or not), doing a classic dist-upgrade. This way, we validate that the tests pass with "all latest".

Note that even if the tests pass, as we'll probably need to publish at the same time as the dependent stack, we'll automatically switch to manual publishing mode. Then, once everything is resolved, we can publish all the stacks we want or need with cu2d-run.

Manual publish cascades to manual publish

Another case where a dependent stack can fall back to manual publishing mode doesn't necessarily imply failing tests. Remember the other case where we fall back to manual publishing mode? Exactly: packaging changes! As the dependent stack is built against the latest upstream stack, and as that's what we tested, we probably need to ensure both are published (or force a manual publication of only one if we are confident that there are no interleaved dependencies in the current change).

Here is the info the artefacts will provide in that case:

indicators-head is in manually publish mode. Possible causes are:
 * Some part of the stack has packaging changes
 * This stack is depending on another stack not being published
What's need to be done:
 * The other stack can be published and we want to publish both stacks at the same time.
 * If we only want to publish this stack, ensure as the integration tests were run from a build against indicators-head, that we can publish the current stack only safely.

So here, the same solutions as in the previous case: with cu2d-run, we either publish both stacks at the same time (because the packaging changes are safe and everything is according to the rules) or just the leaf one, if we are confident there is no impact.

Note that we also handle the case where something unexpected happened to a stack and we can't know its status.

In a nutshell

Stacks don't live in their own silos; there are strong dependencies between them, which are handled by the fact that they live and are built against each other in the same ppa. We also check backward compatibility, which is what most daily releases will rely on when only publishing some stacks. We handle transitions impacting multiple stacks that way as well, and can publish everything in one shot.

That's it for the core of the daily release process (phew, right?). Of course this is largely simplified and there are a lot of corner cases I didn't discuss, but this should give a good enough overview of how the general flow works. I'll wrap it up in a week with some kind of FAQ for the most common questions that may still be pending (do not hesitate to comment here), taking this time to collect feedback. I will then publish the FAQ and draw a conclusion from it at the same time. See you in a week. ;)

Thursday, January 24 2013

Unity: release early, release often… release daily! (part 3)

This post is part of the Unity daily release process blog post suite.

Now that we know how branches fly to trunk and how we ensure that the packaging metadata is in sync with our delivery, let's move to the heart of the daily release process!

Preliminary notes on daily release

This workflow heavily uses other components that we rely on. In addition to our own tool for daily release, which is available here, we use jenkins for scheduling, controlling and monitoring the different parts of the process. This would not have been possible without the help of Jean-Baptiste and the precious, countless hours of hangouts spent setting up jenkins, debugging, looking at issues and investigating autopilot/utah failures. In addition to that, thanks to the #bzr channel, as this process required us to investigate some bzr internal behavior and even patch it (for lp-propose). Not forgetting pbuilder-classic and other tools that we needed to benchmark to ensure we are doing the right thing when creating the source packages. Finally, we needed to fix (and automate the fix of) a lot of configuration on the launchpad side itself for our projects. Now it's clean and we have tools to automatically keep it that way! Thanks to everyone involved in those various efforts.

General flow of daily release

Daily release, general flow

The global idea of the different phases for achieving a stack delivery is the following:

  • For every component of the stack (new word spotted, it will be defined in the next paragraph), prepare a source package
  • Upload them to ppa and build the packages
  • Once built, run the integration tests associated to the stack
  • If everything is ok, publish those to Ubuntu

Of course, this is the big picture; let's go into more detail now. :)

What is a stack?

Daily release works per stack. Jenkins shows all of them here. We validate a stack as a whole, or reject all the components that are part of it. A stack is a set of components having close interactions and relationships between themselves. They are generally worked on by the same team. For instance, these are the different stacks we have right now:

  • the indicator stack for indicators :)
  • the open input framework stack
  • the misc stack contains all testing tools, as well as misc components like theming, wallpapers and design assets. These are generally components that don't have integration tests (and so we don't run any integration test step)
  • the web application stack (in progress to have integration tests so that we can add more components)
  • the web credentials stack (in progress to have integration tests so that we can add more components)
  • the Unity stack, containing the core components of what defines the Unity experience.

The "cu2d" group that you can see in the above links is everything that is related to the daily release.

As we work per stack, let's focus here on one particular stack and follow the life of its components.

Different phases of one stack

General diagram

Let's have a look at the general diagram in a different way to see what is running concurrently:

Daily Release, jenkins jobs

You should recognize here the different jenkins jobs that we can list per stack, like this one for instance.

The prepare phase

The prepare phase runs one sub-prepare job per component. This is all controlled by the prepare-package script. For each project, it will:

  • Branch latest trunk for the component
  • Collect the latest package version available in this branch and compare it to the distro one; we can get multiple cases here:
    • the version in the distro is the same as the one we have in the branch: we then download the source code of the latest available package from Ubuntu to check its content against the content of the branch (this is a double security check).
    • If everything matches, we are confident that we won't overwrite anything in the distribution (by a past direct upload) that is not in the upstream trunk and can go on.
    • However, maybe a previous release was attempted as part of the daily release, so we need to take into account the version in the ppa we use for daily release and possibly bump the versioning (as if the previous version in the ppa never existed)
    • if the version is less than the one in the distro, it means that there has been a direct upload to the distro which was not backported to the upstream trunk. The prepare job for this component exits as unstable, but we don't block the rest of the process. We simply ignore that component and try to validate all the other changes on the other projects without anything new from that one (if this component was really needed, the integration tests will fail later on). The integration team will then work to get those additional distro changes merged back into trunk for the next daily release.
  • We then check that we have useful new commits to publish, meaning that a change which isn't restricted to debian/changelog or a po file update has been made. If no useful revision is detected, we won't release that component today, but we still go on with the other projects.
  • We then create the new package versioning. See the paragraph about versioning below.
  • We then update the symbols file. Indeed, the symbols file needs to record in which exact version a new symbol was introduced. This can't be done as part of the merge process, as we can't be sure that the release will be successful the next day. So upstream uses the magic string "0replaceme" instead of providing a version in the merge adding a symbol to a library. As part of making that package a candidate for daily release, we then automatically replace all occurrences of "0replaceme" with the exact version we computed just before. We add an entry to the changelog to document it if we had to do that.
  • Then, we prepare the changelog content.
    • We first scan the changelog itself to grab all bugs that were manually provided by the developers as part of upstream merges since the last package upload.
    • Then we scan the whole bzr history back to the latest daily release (we know the revision where the previous daily release snapshot was taken by extracting the string "Automatic snapshot from revision <revision>"). Walking this history in detail (including merged branch content), we look for any trace of attached bugs or commit messages talking about "bug #…" (there is a quite flexible regexp for extracting that; see the sketch below this list).
    • We then look at the committer for each bug fix and grab the bug title from launchpad. We finally populate debian/changelog (if the bug wasn't already mentioned in our initial parsing of debian/changelog, to avoid duplication). This means that it's directly the person fixing the issue who gets the praise (and blame ;)) for the fix. You can then see a result like that. As we only deduplicate since the last package upload, the same bug number can be mentioned again if the previous fix in the last upload wasn't enough. Also, all the fixes are grouped and ordered by name, making the changelog quite clear (see here).
    • We finally append the revision from where the snapshot was taken to the changelog to make clear what's in this upload and what's not.
  • Then we generate a diff if there are any meaningful packaging changes since the last upload to Ubuntu (ignoring the changelog and automatically replaced symbols). If we detect some, we also include all the meta-build information that changed since the last release (like configure.ac, cmake files, automake files), to be able to have a quick look at why something changed. This information will be used during the publisher step.
  • We then sync all bugs that were previously detected, opening downstream bugs in launchpad so that they get closed by the package upload.
  • We then commit the branch and keep it there for now.
  • The next step is building the source package inside a chroot, to get all the build dependencies resolved. This uses pbuilder-classic and a custom setup to do exactly the job we want (we have our own .pbuilderrc and triggering tool, using cowbuilder to avoid having to extract a chroot). This step also creates the upstream tarball that we are going to use.
  • Finally, we upload the source package we just got to the ubuntu-unity ppa, and save various configuration bits.

All those steps happen in parallel for each component of the stack.
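Speaking of the bug-extraction regexp mentioned above, the idea boils down to something along these lines (an illustrative sketch only, not the exact pattern or code used by the scripts):

import re

# matches strings like "bug #123456" or "Bug: 123456" (illustrative)
BUG_PATTERN = re.compile(r"bug[s]?\s*(?:report)?[:#]*\s*#?(\d+)", re.IGNORECASE)

def extract_bug_numbers(commit_message):
    """Return all launchpad bug numbers mentioned in a commit message."""
    return {int(number) for number in BUG_PATTERN.findall(commit_message)}

print(extract_bug_numbers("this fixes bug #1234567 and bug 7654321"))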

For those interested, here are the logs when there are some new commits to publish for a component. Here is another example, when there is nothing relevant. Finally, an upload not being in trunk is signaled like that (the component is ignored, but the others go on).

Monitoring the ppa build

This is mostly the job of the watch-ppa script. It monitors the ubuntu-unity daily ppa build status for the components we just uploaded and ensures that they are published and built successfully on all architectures. As we run unit tests as part of the package build, this already ensures that the unit tests pass with the latest Ubuntu. In addition, the script generates some meta-information when packages are published. It will make the jenkins job fail if any architecture failed to build, or if a package that was supposed to start building never had its source published in the ppa (after a timeout).

Running the integration tests

In parallel with monitoring the build, we run the integration tests (if any). For that, we start monitoring using the previous script, but only on i386. Once all new components for this arch are built and published, we start a slave jenkins job running the integration tests. This will, using UTAH, provision a machine, install the latest Ubuntu iso for each configuration (intel, nvidia, ati), then add the packages we need from this stack using the ppa and run the tests corresponding to the current stack. Getting all the Unity autopilot tests stabilized for that was a huge task. Thanks in particular to Łukasz Zemczak, and with some help from the upstream Unity team, we finally got the number of failing autopilot tests under control (from 130 down to 12/15 out of the 450 tests). We use them to validate Unity nowadays. However, the indicator and oif stacks had no integration tests at all. To not block the process and keep moving, we "stole" the relevant Unity autopilot tests (hud + indicators for the indicator stack) and only run those. As oif has no integration test at all, we just ensure that Unity still starts and that the ABI is not broken.

We are waiting for the web credentials and web apps stacks to grow some integration tests (which are coming in the next couple of weeks with our help) to be able to add more of their components while ensuring the quality level we are targeting.

Then the check job gets the results of those tests back and, thanks to a per-stack configuration mechanism, we have triggers to decide whether the current state is acceptable to us. For instance, out of the 450 tests running for Unity (x3, as we run on intel, nvidia and ati), we accept 5% of failures. We have different levels for regressions and skipped tests as well. If the collected values are below those triggers, we consider that the tests pass.
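To make that decision concrete, here is a simplified sketch of the kind of comparison the check job performs (threshold names and values are illustrative; the real ones are defined per stack in the configuration mentioned earlier):

def tests_acceptable(total, failures, regressions, skipped,
                     max_failure_rate=0.05, max_regressions=0,
                     max_skipped_rate=0.10):
    """Decide whether a test run is acceptable for publication.
    Thresholds here are illustrative, not the real per-stack values."""
    if total == 0:
        return False
    if regressions > max_regressions:
        return False
    if failures / total > max_failure_rate:
        return False
    return skipped / total <= max_skipped_rate

# e.g. 1350 Unity tests (450 x 3 configurations) with a 5% failure budget
print(tests_acceptable(total=1350, failures=60, regressions=0, skipped=20))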

  • Logs of tests passing. You can get a more detailed view of which tests passed and failed by looking at the child job, which is the one really running the tests: https://jenkins.qa.ubuntu.com/job/ps-unity-autopilot-release-testing/ (unstable/yellow means that some tests failed). The parent one only launches it when ready and collects the results.
  • Logs of more tests failing than the acceptable threshold.
  • Logs of tests failing because UTAH failed to provision the machine or the daily build iso wasn't installable.

And finally, publishing

If we reach this step, it means that:

  • we have at least one component with something meaningful to publish
  • the build was successful on all architectures.
  • the tests are within the acceptable pass rate

So, let's go on and publish! The script still has some steps to go through. In addition to having a "to other ppa" or "to distro" mode as a publishing destination, it:

  • will check whether we have manual packaging changes (remember the diff we generated for each component in the prepare stage?). If we actually have some, we put the publication into manual mode. This means that we won't publish this stack automatically; only people with upload rights have the special credentials to force the publication.
  • will then propose and auto-approve, for each component, an upstream merge of the changes we made to the packaging (edition of debian/changelog, symbols files…). Then, the upstream merger will grab those branches and merge them upstream. We only push those changes back upstream now (and not during the prepare steps) because it's only at this point that we are finally sure we are releasing them.

Most of the time, we don't have packaging changes, so the story is quite easy. When we do, the job shows it in the log and is marked as unstable. The packaging diff, plus the additional build system context, are attached as artefacts for easy review. If the integration team agrees with those changes, they have special credentials to run cu2d-run in a special "manual" mode, ignoring the packaging changes and doing the publish if nothing else is blocking. This way, we address the concern that "only people with upload rights can ack/review packaging changes". This is our secondary safety net (the first one being the integration team looking at upstream merges).

The reality is not quite that easy, and that's not the only case where the publish can be set to manual, but I'll discuss that later on.

A small note on the publisher: you see that we merge our changes back upstream, but upstream didn't sleep while we took our snapshot, built everything and ran those integration tests, so we usually end up with some commits landing in trunk between the latest snapshot and the time we push back that branch. That's the reason why we have this "Automatic snapshot from revision <revision>" entry in the changelog: it clearly states where we did the cut, so we don't rely on the commit of this changelog modification being merged back.

Copy to distro

In fact, publishing is not the end of the story. Colin Watson raised the concern that we need to separate powers and not have too many launchpad bots with upload rights to Ubuntu. It's true that all the previous operations use a bot for committing and pushing to the ppa, with its own gpg and ssh keys. Having those credentials spread over multiple jenkins machines would be scary if they gave upload privileges to Ubuntu. So instead of the previous step being directly piloted by a bot with upload rights to the distro, we only generate a sync file with various infos, like this one.

Then, on the archive admin machines, we have a cron job using this copy script which:

  • collects all available rsync files over the wire
  • checks that the info in each file is valid, like ensuring that the components to be published are part of the whitelist of projects that can be synced (which is the same list and metainfo used to generate the jenkins jobs and lives there)
  • does a final version check against the distro to see if new versions were uploaded since the prepare phase (indeed, maybe an upload happened on one of those components while we were busy building/testing/publishing)
  • if everything is fine, copies the sources and binaries from the ppa to the proposed repository
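Here is a rough sketch of the shape of those checks; the whitelist, data layout and function are assumptions for illustration, not the actual copy script:

# Rough sketch of the archive-admin side checks; names and data layout
# are assumptions, not the actual copy script.
import apt_pkg

apt_pkg.init_system()

WHITELIST = {'unity', 'nux', 'bamf'}    # hypothetical list of syncable projects

def can_copy(sync_info, distro_versions):
    """sync_info: source package -> version prepared in the ppa.
    distro_versions: source package -> current version in the distro."""
    for source, ppa_version in sync_info.items():
        if source not in WHITELIST:
            print('%s is not allowed to be synced, rejecting' % source)
            return False
        in_distro = distro_versions.get(source, '0')
        # refuse the copy if someone uploaded a newer version to the distro
        # while we were busy building/testing/publishing
        if apt_pkg.version_compare(ppa_version, in_distro) <= 0:
            print('%s: %s in the distro is not older than %s, rejecting'
                  % (source, in_distro, ppa_version))
            return False
    return True

# only if can_copy() returns True are the sources and binaries
# binary-copied from the ppa to the proposed repository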


Executing binary copies from the ppa (meaning that the .debs in the ppa are exactly the ones copied to Ubuntu) gives us confidence that what we tested is exactly what is delivered to our users, built in the same order. The second obvious advantage is that we don't rebuild what's already built.

Rebuild only one or some components

We have an additional facility. Let's say that the integration tests failed because of one component (like a build dependency not being bumped): we have the possibility to rerun the whole workflow (still using the same cu2d-run script, by people having the credentials) for only the component or components given on the command line. The other components keep exactly the same state, meaning they are not rebuilt, but they are still part of what we test in the integration suite and of what we are going to publish. Some resources are spared this way and we get to our result faster.

However, for the build results, it will still take all the components (even those we don't rebuild) into account. We won't let a FTBFS slip by that easily! :)

Why not including all commits in debian/changelog?

We made the choice to only mention changes associated with bugs. This also means that if upstream never links bugs to merge proposals, never uses "bzr commit --fixes lp:XXXX" and never puts in a commit message something like "this fixes bug…", the upload will have an empty changelog (just "latest snapshot from rev …") and won't detail the upload content.

This can be seen as suboptimal, but I think that we should really encourage our upstreams to link fixes to bugs when they are important enough to be broadcast. The noise of every commit message to trunk isn't suitable for people reading debian/changelog, which is visible in update-manager. If people want to track upstream closely, they can look at the bzr branch for that. You can see debian/changelog as a NEWS file, containing only the important information from that upload. Of course, we need more and more people upstream to realize that and to understand that linking to a bug report is important.

If a piece of information is important to them but they don't want to open a bug for it when there isn't one already, there is still the possibility, as part of the upstream merge, of directly feeding debian/changelog with a sensible description of the change.
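As an illustration of the kind of extraction involved, here is a small sketch of how bug numbers could be collected from commit messages to feed debian/changelog; the pattern is an approximation of mine, and the real tooling also relies on the bugs linked to merge proposals in launchpad:

# Sketch of collecting bug numbers from commit messages since the last
# snapshot; the pattern is an approximation, not the real tool.
import re

BUG_PATTERN = re.compile(
    r'(?:bug|lp:\s*#?|launchpad\.net/bugs/)\s*#?(\d+)', re.IGNORECASE)

def bugs_from_commits(commit_messages):
    """Return the set of bug numbers referenced by the commit messages."""
    bugs = set()
    for message in commit_messages:
        bugs.update(int(number) for number in BUG_PATTERN.findall(message))
    return bugs

commits = [
    "Fix the launcher icon size (LP: #1234567)",
    "Refactor the test suite",            # no bug linked: not in the changelog
    "this fixes bug 7654321 in the dash",
]
print(sorted(bugs_from_commits(commits)))  # [1234567, 7654321]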

Versioning scheme

We needed to come up with a sane versioning scheme. The current one is:

  1. based on current upstream version, like 6.5
  2. appending "daily" to it
  3. adding the current date in yy.mm.dd format

So we end up with, for instance: 6.5daily13.01.24. Then we add -0ubuntu1, as the packaging is kept separate in the diff.gz by using split mode.

If we need to rerun a release for the same component on the same day, we detect that a version was already released in Ubuntu or possibly in the ppa, and the version becomes 6.5daily13.01.24.1, then 6.5daily13.01.24.2… I think you get it :)

So the general pattern for the package version is: <upstream_version>daily<yy.mm.dd(.minor)>-0ubuntu1. The next goal is to keep upstream_version as small (one or two digits?) as possible.
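Here is a minimal sketch of that version computation; the helper is mine, but the scheme is the one described above:

# Sketch of the daily versioning scheme described above.
import datetime
import re

def daily_version(upstream_version, previous_version=None, today=None):
    """Build <upstream_version>daily<yy.mm.dd(.minor)>, bumping the minor
    number if a release with the same date already exists (in Ubuntu or
    in the ppa)."""
    today = today or datetime.date.today()
    base = '%sdaily%s' % (upstream_version, today.strftime('%y.%m.%d'))
    if previous_version and previous_version.startswith(base):
        match = re.match(r'%s\.(\d+)$' % re.escape(base), previous_version)
        minor = int(match.group(1)) + 1 if match else 1
        return '%s.%d' % (base, minor)
    return base

day = datetime.date(2013, 1, 24)
print(daily_version('6.5', today=day))                       # 6.5daily13.01.24
print(daily_version('6.5', '6.5daily13.01.24', today=day))   # 6.5daily13.01.24.1
print(daily_version('6.5', '6.5daily13.01.24.1', today=day)) # 6.5daily13.01.24.2
# the source package version then gets -0ubuntu1 appended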

Configuring all those jobs

As we have one job per project per series + some metajobs per stack, we currently have about 80 jobs per supported version. This is a lot; the only way to avoid needing bazillions of people just to maintain those jenkins jobs is to automate, automate, automate.

So every stack defines some metadata like the series, the projects it covers, when it needs to run, an optional extra check step, the ppa used as source and destination, optional branches involved… Then, running cu2d-update-stack -U updates all the jenkins jobs to the latest configuration using the templates we defined to standardize those jobs. In addition, it also resets some bzr configuration in launchpad for those branches to ensure that various operations like lp-propose or branching the desired target will indeed do the right thing. As mentioned previously, since this list is also used for filtering when copying to distro, there is very little chance of getting out of sync when everything is automated!
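To give a feel for what such a definition could contain, here is a hypothetical example written as a Python dict; the real configuration has its own format and field names, so treat this purely as an illustration:

# Hypothetical stack definition, for illustration only; the real cu2d
# configuration has its own format and field names.
indicators_stack = {
    'series': 'raring',
    'ppa': 'ubuntu-unity/daily-build',
    'schedule': '00 4 * * 1-5',            # when the daily run is triggered
    'extracheck': 'autopilot-indicators',  # optional integration test job
    'dependencies': ['oif'],               # stacks we need to stay consistent with
    'projects': [
        'libindicator',
        'indicator-datetime',
        'indicator-sound',
        # adding a component to the stack is just one more line here
    ],
}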

So adding a component to a stack is basically a one-line change + the script to run! Hard to make it any easier. :)

And by the way, if you were sharp enough on the last set of links, you will have seen a "dependencies" stanza in the stack definition. Indeed, we described the simple case here and completely glossed over the fact that stacks depend on each other. How do we resolve that? How do we avoid publishing an inconsistent state? That's a story for tomorrow! :)

mercredi, janvier 23 2013

Unity: release early, release often… release daily! (part 2)

This post is part of the Unity daily release process blog post suite.

As part of the new Unity release procedure, let's first have a look at the start of the story of a branch: how does it reach trunk?

The merge procedure

Starting with the 12.04 development cycle, we needed upstream to be able to reliably and easily get their changes into trunk. Ensuring that every commit in trunk passes some basic unit tests and doesn't break the build obviously means some automation has to take place. Here comes the merger bot.

Merge upstream branch workflow

Proposing my branch for being merged, general workflow

We require peer review of every merge request on any project where we are upstream. No direct commits to trunk. It means that any code change will be validated by a human first. In addition to this, once the branch is approved, it will be:

  • built on most architectures (i386, amd64, armhf) in a clean environment (a chroot with only the minimal dependencies)
  • checked by running its unit tests (as part of the packaging setup) on those archs

Only if all this passes will the branch be merged into trunk. This way, we know that trunk is already at a high standard.
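Here is a very condensed sketch of that gate logic; the commands, architectures and chroot setup are placeholders, not the actual merger bot:

# Condensed sketch of the merger gate: merge the proposed branch into a
# trunk working tree, build it and run the unit tests in a clean chroot
# for each architecture, and only commit if everything passes.
# Commands and the way chroots are driven here are placeholders.
import subprocess

ARCHS = ('i386', 'amd64', 'armhf')

def builds_and_passes_tests(tree_dir):
    for arch in ARCHS:
        # sbuild builds in a minimal chroot; the unit tests run as part
        # of the packaging build
        if subprocess.call(['sbuild', '--arch', arch, '-d', 'raring'],
                           cwd=tree_dir) != 0:
            print('build or unit tests failed on %s' % arch)
            return False
    return True

def land(branch_url, trunk_dir, commit_message):
    subprocess.check_call(['bzr', 'merge', branch_url], cwd=trunk_dir)
    if not builds_and_passes_tests(trunk_dir):
        subprocess.check_call(['bzr', 'revert'], cwd=trunk_dir)
        return False
    subprocess.check_call(['bzr', 'commit', '-m', commit_message],
                          cwd=trunk_dir)
    return True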

You will notice in this example of a merge request that, ahead of time (thanks to the work of Martin Mrazik and Francis Ginther), a continuous integration job kicks in to give some early feedback to both the developer and the reviewer. This can indicate whether the branch is good for merging even before it is approved. The job kicks in again if an additional commit is proposed. This rapid feedback loop gives extra insight into the branch quality, plus a direct link to a public jenkins instance to see any issues during the build.

Once the global status of a merge request is set to "approved", the merger validates the branch, takes the commit message (falling back, on some projects, to the description if no commit message is set), picks up any attached bug reports (attached manually by the developer to the merge proposal or directly in a commit with "bzr commit --fixes lp:<bugnumber>") and merges it all into the mainline, as you can see here.

How to handle dependencies, new files shipped and similar items

We said in the previous section that the builds are done in a clean chroot environment. But we can have dependencies that are not released into the distribution yet. So how do we handle those dependencies, detect them and take the latest stack available?

For that, we use our debian packages. As this is how the final "product" will be delivered to our users, using packages here, and the same kind of tools we use for the Ubuntu distribution itself, is a great help. This means that we have a local repository with the latest "trunk build" packages (appending "bzr<revision>" to the current Ubuntu package version) so that when Unity is built, it will grab the latest (possibly locally built) Nux and Compiz.

Ok, we are using packages, but how do we ensure that when I have a new requirement/dependency, or when I'm shipping a new file, the packaging is in sync with the merge request? Historically, the packaging branch was separate from upstream. This was mainly for 3 reasons:

  • we don't really want to require that our upstreams learn how to package and all the small details around it
  • we don't want our stack to be seen as distro-specific and want to behave like any other upstream
  • we, the integration team, want to keep control over the packaging

This mostly worked, in the sense that people had to ping the integration team just before setting a merge to "approve" and ensure no other merge was in progress meanwhile (so as not to pick up the wrong packaging metadata from other branches). However, it's quite clear this can't scale at all. We did have some rejections because ensuring that we stayed in sync was difficult.

So, we decided this cycle to have the packaging inlined in the upstream branch. This doesn't change anything for other distributions, as "make dist" is used to create a tarball (or they can grab any tarball from launchpad from the daily release) and those don't contain the packaging info. So we are not hurting them here. However, it ensures that what we will deliver the next day to Ubuntu and what upstream puts into their code stay in sync. This work started at the very beginning of the cycle and, thanks to the excellent work of Michael Terry, Mathieu Trudel-Lapierre, Ken VanDine and Robert Bruce Park, we got it in quickly. Though, there were some exceptions where achieving this was really difficult, because some unit tests were not really in shape to work in isolation (meaning in a chroot, with mock objects like Xorg, dbus…). We are still working on getting the last elements bootstrapped into this process and having those tests running smoothly. I know this idea of using the packaging to build the upstream trunk and having it inlined is a drastic change, but from what we have seen since October, we have pretty good results and it seems to have worked out quite well! I would like to thank again the whole awesome product strategy (the Canonical upstream) team for letting that idea go through and facilitating this process as much as possible. Thanks as well to the jenkins masters (the 2 already announced previously, plus Allan LeSage and Victor R. Ruiz) for completing all the jenkins/merger machinery changes that were needed on each project for that.

We can't expect every upstream to know everything about packaging; consequently, the integration team is here and available to give any help that is needed. I think that, in the long term, basic packaging changes will be done directly by upstream (we are already seeing some people bumping build-dependency requirements themselves, adding a new file to install, declaring a new symbol as part of the library…). However, we have processes inside the distribution, and only people with upload rights in Ubuntu are supposed to make or review those changes. How does that work with this process? Also, we have feature freeze and other Ubuntu processes; how will we ensure that upstreams are not breaking those rules?

Merge guidelines

As you can see in the first diagram, some requirements are set during a merge request, controlled both by the acceptance criteria and by the new conditions from inline packaging:

  • Design needs to acknowledge the change if there is any visual change involved
  • The developer, the reviewer and the integration team all ensure that Ubuntu processes are followed (UI Freeze/Feature Freeze for instance). If exceptions are required, they check before approving the merge that they are acknowledged by the relevant parties. The integration team can help smooth this along, but the requests should come from the developers.
  • Relevant bugs are attached to the merge proposal. This is useful for tracking what changed and for generating the changelog as we'll see in the next part
  • Review of new/modified tests (for existence)
  • They ensure that it builds and that unit tests pass (this part is automated)
  • Another important one, especially when you refactor, is to ensure that integration tests are still passing
  • Review of the change itself by another peer contributor
  • If the change seems important enough to have a bug report linked, ensure that the merge request is linked to a bug report (and you will get all the praise in debian/changelog as well!)
  • If packaging changes are needed, ping the integration team so that they acknowledge them and ensure the packaging changes are part of the merge proposal.

The integration team, in addition to being ready to help any of our upstream developers on request, has an active monitoring role over everything that is merged upstream. Everyone has some part of the whole stack under their responsibility and will spot issues and start a discussion as needed if we are under the impression that some of those criteria are not met. If something not following those guidelines is confirmed to have slipped in, anyone can revert it by simply proposing another merge.

This mainly answers the other 2 fears about inline packaging giving upstream control over the packaging. But this is only the first safety net; we have a second one, involved as soon as there is a packaging change since the last daily release, which we'll discuss in the next part.

Consequences for other Ubuntu maintainers

Even if we each have our own area of expertise, Ubuntu maintainers can touch any part of what constitutes the distribution (MOTUs on universe/multiverse and core developers on anything). For that reason, we didn't want daily release and inline packaging to change anything for them.

We added a warning in debian/control, pointing Vcs-Bzr to the upstream branch with a comment above it; this should highlight that any packaging change (if not urgent) needs to be a merge proposal against the upstream branch, just as if we were going to change any of the upstream code. This is how the integration team handles transitions as well, like any other developer.

However, it sometimes happens that an upload needs to be done in a short period of time and we can't wait for the next daily. If we can't even wait to manually trigger a daily release, the other option is to upload directly to Ubuntu, as we normally do for other pieces where we are not upstream, and still propose a branch including those changes for merging into the upstream trunk. If the "merge back" is not done, the next daily release for that component will be paused, as it detects that there are newer changes in the distro, and the integration team will take care of backporting the change to the upstream trunk (as we also monitor uploads to the distribution).

Of course, the best thing is always to consult someone from the integration team first in case of any doubt. :)

Side note on other positive effects of inline packaging

Inlining all the packaging was long and hard work; however, in addition to the benefits highlighted previously, it enabled us to standardize all ~60 packages from our internal upstream around best practices. They should now all look familiar once you have touched any of them. Indeed, they all use debhelper 9, --fail-missing to ensure we ship all installed files, symbols files for C libraries with -c4 to force us to keep them updated, autoreconf, debian/copyright using the latest standard, split packages… In addition to being easier for us, it's also easier for upstream, as they are in a familiar environment if they need to make any changes themselves, and they can just use "bzr bd" to build any of them.

Also, we were able to remove more of the distro patches we carried on those components, and align further with upstream, by pushing them upstream.

Conclusion on an upstream branch flow

So, you should now know everything about how upstream and packaging changes are integrated into the upstream branch with this new process, why we did it this way and what benefits we immediately get from it. I think having the same workflow for packaging and upstream changes is a net benefit: we can ensure that what we deliver on both sides is coherent, of a higher quality standard, and under control. This is what I would keep in mind if I had only one thing to remember from all this :). Finally, we take into account the case of other maintainers needing to make changes to those components in Ubuntu and try to be flexible in our process for them.

The next part will discuss what happens to validate one particular stack and upload it to the distribution on a daily basis. Stay tuned!

mardi, janvier 22 2013

Unity: release early, release often… release daily! (part 1)

This post is part of the Unity daily release process blog post suite. This is part one, you can find:

For almost 2 weeks now (and some months for other parts of the stacks), we have had automated daily releases of most of the Unity components delivered directly to Ubuntu raring. And you can see those different kinds of automated uploads published to the ubuntu archive.

I'm really thrilled about this achievement, which we discussed and set as a goal for this cycle at the last UDS.

Why?

The whole Unity stack has grown tremendously in the past 3 years. Back then, we were able to release all components, plus packaging and uploading to ubuntu, in less than an hour! Keeping the one-week release cadence at the time was quite easy, even though it was a lot of work. The benefit was a fluid process to push what we developed upstream to our users.

As of today, the teams have grown considerably, and if we count everything that we develop for Unity nowadays, we have more than 60 components. This ranges from indicators to the theme engine, from the open input framework to all our test infrastructure like autopilot, from webapps to web credentials, from lenses to libunity, and finally from compiz to Unity itself, without forgetting nux, bamf… and the family is still growing rapidly with a bunch of new scopes coming down the pipe through the 100 scopes project, our own SDK for the Ubuntu phone, the example applications for this platform that we are about to upload to Ubuntu as well… Well, you get it, the story is far from over!

So, it's clear that the number of components we develop and support will only keep growing. The integration team has already scaled up their work hours by a large extent and rushed to get everything delivered in time to our user base[1]. However, there is no question that we won't be able to do that forever. We also don't want to introduce artificial delays for our own upstreams in delivering their work to our users. We needed to solve that issue without paying any price on quality or compromising the experience we deliver to our users. We want to keep high standards, and even, why not, combine this need with an even better, more reliable evaluation of what we eventually upload to Ubuntu from our upstreams before releasing. Having our cake and eating it too! :)

Trying to change for the better

What was done in the last couple of cycles was to split the delivery of those packages to users between 2 groups. There was an upstream integration team, which would hand over theoretically ready-to-upload packages to the Ubuntu platform team, which would then do the reviews, help them, fix some issues and finally sponsor their work into Ubuntu. However, this didn't really work for various reasons, and we quickly realized that it just ended up complicating the process instead of easing it. You can see a diagram of where we ended up when looking back at the situation:

end of 12.04 and 12.10 release process

Seems easy, doesn't it? ;) Due to those inner loops and gotchas, in addition to the whole new set of components, we went from a weekly release cadence to only 4 or 5 solid releases during a whole cycle.

Discussing it with Rick Spencer, he gave me carte blanche to think about how we could make this more fluid for our users and developers. Indeed, with all the work piling up, it wasn't possible to immediately release the good work that upstream had done in the past days, which can also lead to some frustration. I clearly remember that Rick used the phrase "think about the platform[2] as a service". This immediately echoed in me, and I thought, "why not try that… why not release all our code every day and deliver a service enabling us to do that?"

Daily release, really simplified diagram

Even if it wasn't planned from the beginning (sorry, no dark, hidden, evil plan here), thinking about it, this makes sense as a kind of logical progression from where we started when Unity came into existence:

  • Releasing every week, trying manually to get the release in a reasonable shape before uploading to Ubuntu
  • Raise the quality bar, put some processes in place for merge reviewing.
  • Adding Acceptance Criteria thanks to Jason Warner, ensuring that we get more and more good tests and formal conditions for doing a release
  • Automate those merges through a bot, ensuring that every commit in trunk builds fine and that unit tests pass
  • Raise the quality bar again, adding more and more integration tests

Looking at it, being able to release daily, and ensuring that we stay able to, really is the next logical step! But it wouldn't have been possible without all those past achievements.

Advantages of daily releases

It's quite immediate to see a bunch of positive aspects of doing daily releases:

  • We can spot regressions way faster. If a new issue arises, it's easier to bisect through packages and find the day when the regression or incorrect behavior started to happen, then look at the few commits to trunk (3-5?) that were done that day and pinpoint what introduced it.
  • This enables us to deliver everything in a rapid, reliable, predictable and fluid process. We won't have the crazy rushes we had in past cycles around goal dates like feature freeze to get everything into the hands of users by then. Everything will be delivered automatically to everyone the day after it lands.
  • I also see this as a strong motivation for the whole community contributing to those projects. No more waiting for a random "planned release date" hidden in a wiki to see the hard work you put into your code propagated to users' machines. You can immediately see the effect of your hacking on the broader community. If it's reviewed and approved for merging (and tests pass, I'll come back to that later), it will be in Ubuntu tomorrow, less than 24 hours after your work reached trunk! How awesome is that?
  • This also means that developers only need to build the components they are working on. No need, for instance, to rebuild compiz or nux to write a Unity patch because the API changed and you need "latest everything" to build. This lowers the barrier to entry for contributing, and the chances of unwanted files staying around in /usr/local and conflicting with the system install.

Challenges of daily release

Sure, this comes with various risks that we had to take into account when designing this new process:

  • The main one is "yeah, it's automated, how can you be sure you don't break Ubuntu pushing blindly upstream code to it?". It's a reasonable objection and we didn't ignore it at all (having the history of years of personal pain on what it takes to get a release out in an acceptable shape to push to Ubuntu) :)
  • How to interact properly with the Ubuntu processes? Only core developpers, motus, and per-package uploaders have upload rights to Ubuntu. Will this new process in some way give our internal upstream the keys to the archives, without having them proper upload rights?
  • How ensuring packaging changes and upstream changes are in sync, when preparing a daily release?
  • How making useful information in the changelog so that someone not following closely upstream merges but only looking at the published packages in raring-changes mailing list or update-manager can see what mainly changed in an upload?
  • How to deal with ABI breaks and such transitions, especially when they are cross-stacks (like a bamf API change impacting both indicators and Unity)? How to ensure what we deliver is consistent across the whole stacks of components?
  • How do we ensure that we are not bindly releasing useless changes (or even an upload with no change at all) and so, using more bandwith, build time, and so on, for nothing?

How then?

I'll go into most of those questions and how we try to address those challenges in the upcoming blog posts. For now, to sum up: we have a good automated test suite, and stacks are only uploaded to Ubuntu if:

  1. their internal and integration tests pass within a certain threshold of accepted failures.
  2. they don't regress other stacks

Stabilizing those tests to get reliable results was a tremendous cross-team effort, and I'm really glad that we are confident enough in them to finally enable those daily releases.

On the other hand, an additional control ensures that packaging changes are not published without someone with upload rights being in the loop to ack the final change before the copy to the distribution happens.

Of course, the whole machinery is not limited to this and is in fact way more complicated; I'll have the pleasure of writing about it in separate blog posts over the following days. Stay tuned!

Notes

[1] particularly around the feature freeze

[2] the Ubuntu platform team here, not Ubuntu itself ;)