DidRocks' blog

apt-get install freedom


Wednesday, September 24, 2014

Ubuntu Developer Tools Center: how do we run tests?

We are starting to see multiple awesome code contributions and suggestions on our Ubuntu Loves Developers effort, and we are eagerly waiting for yours! As a consequence, the spectrum of supported tools is going to expand quickly, and we need to ensure that all those different targeted developers are well supported, on multiple releases, always getting the latest version of those environments, at any time.

That's a huge task, one we can only manage thanks to a large suite of tests! Here are some details on what we currently have in place to achieve and ensure this level of quality.

Different kinds of tests

pep8 test

The pep8 test ensures code quality and style consistency. Test results are trivial to interpret.

This test runs on every commit to master, on each release during package build, as well as every couple of hours on jenkins.
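To illustrate what such a check can look like (this is a sketch, not the project's actual test code; the checked directories "udtc" and "tests" are assumptions), the pep8 module can be driven from a regular unittest case so the style check runs with the rest of the suite:

import pep8
import unittest


class CodeStyleTests(unittest.TestCase):

    def test_pep8_conformance(self):
        """Fail if any file in the (assumed) source directories violates pep8."""
        style = pep8.StyleGuide()
        report = style.check_files(["udtc", "tests"])
        self.assertEqual(report.total_errors, 0,
                         "Found {} pep8 violations".format(report.total_errors))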

small tests

These are basically unit tests. They let us quickly see if we've broken anything with a change, or if the distribution itself broke us. In particular, we try to cover the many corner cases that are easy to test this way.

They run on every commit to master, on each release during package build, every time a dependency changes in Ubuntu (thanks to autopkgtests), and every couple of hours on jenkins.
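As a rough illustration of the kind of corner case a small test covers (the helper below is made up for the example and is not part of the real udtc API):

import os
import unittest


def expand_install_path(user_input, default="~/tools/android/android-studio"):
    """Return the chosen installation path, falling back to a default (illustration only)."""
    return os.path.expanduser(user_input.strip() or default)


class TestInstallPath(unittest.TestCase):

    def test_empty_answer_falls_back_to_default(self):
        self.assertTrue(expand_install_path("  ").endswith("android/android-studio"))

    def test_tilde_is_expanded(self):
        self.assertFalse(expand_install_path("~/ide").startswith("~"))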

large tests

Large tests are real user-level testing. We execute udtc and feed various scenarios through stdin (installing, reinstalling, removing, installing to a different path, aborting, ensuring the IDE can start…) and check that the resulting behavior is the one we expect.

These tests tell us whether something in the distribution broke us, whether a website changed its layout or its download links, or whether a newer version of a framework can't be launched on a particular Ubuntu version or configuration. That way we are aware, ideally even before the user, that something is broken and can act on it.

These tests run every couple of hours on jenkins, on real virtual machines running an Ubuntu Desktop install.
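For the curious, here is a rough sketch of what driving such a scenario can look like; the prompts, the answers sent on stdin and the default installation path are assumptions for illustration, not the exact udtc dialogue:

import os
import subprocess
import unittest


class AndroidStudioInstallScenario(unittest.TestCase):

    def test_default_install(self):
        # Run the real udtc binary and answer its prompts through stdin
        # (assumed answers: empty line for the default path, "a" to accept the license).
        process = subprocess.Popen(["udtc", "android"],
                                   stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE,
                                   universal_newlines=True)
        output, _ = process.communicate(input="\na\n")
        self.assertEqual(process.returncode, 0, output)
        # Assumed default location; the real tests also check that the IDE starts.
        self.assertTrue(os.path.isdir(
            os.path.expanduser("~/tools/android/android-studio")))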

medium tests

Finally, the medium tests inherit from the large tests. They run exactly the same suite, but in a Docker containerized environment, with small mock assets and without relying on the network or any archives. This means we ship and emulate a webserver delivering pages to the container, pretending to be, for instance, https://developer.android.com. We then deliver fake requirement packages and mock tarballs to udtc and run the same scenarios against them.

Implementing a medium test is generally really easy; for instance:

class BasicCLIInContainer(ContainerTests, test_basics_cli.BasicCLI):

"""This will test the basic cli command class inside a container"""

is enough. It means "take all the BasicCLI large tests and run them inside a container". All the hard work (wrapping, sshing, running the tests) is done for you. Simply implement your large tests and, thanks to this inheritance, they will run inside the container as well!

We also added more complex use cases, like emulating a corrupted download with an md5 checksum mismatch. We generate this controlled environment and share it using trusted containers from Docker Hub, built from the Ubuntu Developer Tools Center Dockerfile.
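As an idea of how such a corrupted-download case can be staged for the container, here is a small sketch; the file names, layout and the assumption that a checksum file sits next to the tarball are all illustrative, not the actual test fixtures:

import hashlib
import os
import tarfile


def make_mock_asset(directory, corrupt_checksum=False):
    """Create a tiny mock tarball and its md5 file for the fake webserver to serve."""
    os.makedirs(directory, exist_ok=True)
    tarball = os.path.join(directory, "android-studio-mock.tgz")
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(__file__, arcname="android-studio/bin/studio.sh")
    with open(tarball, "rb") as f:
        checksum = hashlib.md5(f.read()).hexdigest()
    if corrupt_checksum:
        checksum = "0" * 32  # deliberate mismatch: the download must be refused
    with open(tarball + ".md5", "w") as f:
        f.write("{}  {}\n".format(checksum, os.path.basename(tarball)))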

These tests also run every couple of hours on jenkins.

By comparing medium and large tests (the former being in a completely controlled environment), we can tell whether we or the distribution broke something, or whether a third party impacted us by changing their website or requiring newer versions (in which case the failure will only occur in the large tests and not in the medium ones).

Running all tests, continuously!

As some of the tests can reveal the impact of external parts, be it the distribution or even websites (as we parse some download links), we need to run all those tests regularly[1]. Note as well that results can differ between configurations. That's why we run all those tests every couple of hours, once with the system-installed version and then with the tip of master, on various virtual machines (like here, 14.04 LTS on i386 and amd64).

By comparing all this data, we know whether a new commit introduced regressions, or whether a third party broke and we need to fix or adapt to it. Each test suite has a bunch of artifacts attached so that we can inspect the dependencies installed and the exact version of UDTC tested, and ensure we don't corner ourselves with subtleties like "it works in trunk, but is broken once installed".

jenkins test results

You can see on that graph that trunk has more tests (and features… just wait a few days before we tell you more about them ;)) than the latest released version.

As metrics are key, we collect code coverage and line metrics for each configuration to ensure we don't regress on our target of keeping coverage high. This also tracks various stats like the number of lines of code.

Conclusion

Thanks to all this, we'll probably know even before any of you if anything suddenly breaks, and we can take action to quickly deliver a fix. We plan to back each new kind of breakage with a new suite of tests to ensure we never see the same regression again.

As you can see, we are pretty hardcore about testing and believe it's the only way to maintain quality and a sustainable system. With all that in place, as a developer you should just enjoy your productive environment and not have to bother with the operating system itself. We have you covered!

Ubuntu Loves Developers

As always, you can reach me on G+, as didrocks on #ubuntu-desktop on IRC (freenode), or by sending issues or even pull requests against the Ubuntu Developer Tools Center project!

Note

[1] If tests are not run regularly, you can consider them broken anyway.

Wednesday, September 10, 2014

How to help on Ubuntu Developer Tools Center

Last week, we announced our "Ubuntu Loves Developers" effort! We got some great feedback and coverage. Multiple questions arose about how to help and be part of this effort, so here is the post answering them. :)

Our philosophy

First, let's define the core principles around the Ubuntu Developer Tools Center and what we are trying to achieve with it:

  1. UDTC will always download, test and support the latest available upstream developer stack. No version set in stone for 5 years: we get the latest and best release that upstream delivers to all of us. We are conscious that being able to develop on a freshly updated environment is one of the core values of the developer audience, and that's why we want to deliver that experience.
  2. We know that developers want overall stability and don't want to upgrade or spend time maintaining their machine every 6 months. We agree they shouldn't have to, and that the platform should "get out of my way, I've got work to do". That's the reason why we focus heavily on the latest LTS release of Ubuntu. All tools will always be backported and supported on the latest Long Term Support release. Tests run multiple times a day on this platform. In addition, we support, of course, the latest available Ubuntu release for developers who like to live on the edge!
  3. We want to ensure that the supported developer environment is always functional. Indeed, by always downloading the latest version from upstream, the software stack can change its requirements, require newer or extra libraries, and thus break. That's why we run a whole suite of functional tests multiple times a day, on both the version you can find in the distro and the latest trunk. That way we know if:
  • we broke ourselves in trunk and need to fix it before releasing.
  • the platform broke one of the developer stacks, so we can promptly fix it.
  • a third-party application or a website changed and broke the integration, so we can fix this really early on.

All those tests ensure the best experience we can deliver, while always fetching the latest released version from upstream, and all this on a very stable platform!

Sounds cool, how can I help?

Report bugs and propose enhancements

The most direct way of reporting a bug or making suggestions is through the upstream bug tracker. Of course, you can also reach out to us on social networks like G+, through the comments section of this blog, or on IRC: #ubuntu-desktop on freenode. We are also starting to look at the #ubuntulovesdevs hashtag.

The tool is really there to help developers, so do not hesitate to help us steer the Ubuntu Developer Tools Center in the direction that is best for you. :)

Help translating

We already had some good translation contributions through launchpad! Thanks to all our translators, we have Basque, Chinese (Hong Kong), Chinese (Simplified), French, Italian and Spanish! There are only a few strings up for translation in udtc, and it should take less than half an hour in total to add a new language. It's a very good and useful way to contribute for people speaking languages other than English! We do look at the translations and merge them into mainline automatically.

Contribute on the code itself

Some people have started to offer code contributions, and that's very good and motivating news. Do not hesitate to fork us on the upstream github repo. We'll make sure to keep up with all code contributions and pull requests. If you have any questions, or for better coordination, open a bug to start the discussion around your awesome idea. We'll be around to guide you on how to add support for any framework! You will not be alone!

Write some documentation

We have some basic user documentation. If you feel there are any gaps or anything missing, feel free to edit the wiki page! You can also merge in some of the documentation from the README.md file or propose enhancements to it!

To give an easy start to any developer who wants to hack on udtc itself, we try to keep the README.md file readable and in line with the current code. However, it can deviate a little bit; if you think any part is missing or needs more explanation, you can propose modifications to it to help future hackers get an easier start. :)

Spread the word!

Finally, spread the word that Ubuntu Loves Developers and that we mean it! Talk about it on social networks, tagging it with #ubuntulovesdevs, in blog posts, or just chat with your local community! We deeply care about our developer audience on the Ubuntu Desktop and Server, and we want this to be known!

uld.png

For more information and hopefully more goodness, we'll have an Ubuntu On Air session soon! We'll keep you posted on this blog when we have the final date details.

If you feel that I forgot to mention anything, do not hesitate to point it out as well; that is another form of very welcome contribution! ;)

I'll discuss next week how we maintain and run tests to ensure your developer tools are always working and supported!

Tuesday, September 2, 2014

Ubuntu loves Developers

Ubuntu is one of the best Linux platforms, with an awesome desktop for regular users (and soon phones, tablets and more!) and great servers for system administrators and devops. A number of developers choose Ubuntu as their primary development system, even if they develop for platforms other than Ubuntu itself, doing Android development, web development and so on.

uld.png

However, even if we fill the basic needs of this audience, we decided a few months ago to start a development and integration effort to make those users feel completely at home. Ubuntu loves developers, and we are going to show it by making Ubuntu the best available developer platform!

Sounds great! What's up then?

We decided to start by concentrating on Android developers. We'll ramp up afterwards on other use cases like Go developers, web developers, Dart… but we want to ensure we deliver a stunning experience for each targeted audience before moving on to the next topic.

After analyzing how to set up an Android development machine on Ubuntu, we realized that, depending on the system, it can take up to 9 different steps to get proper IDE integration and all the dependencies installed. The whole goal was to reduce that to one single command!

Concretely speaking, we created the Ubuntu Developer Tools Center, a command line tool which allows you to download the latest version of Android Studio (beta), alongside the latest Android SDK and all the required dependencies (it will only ask for sudo access if you don't already have all the required dependencies installed), enable multi-arch on your system if you are on a 64-bit machine, integrate it with the Unity launcher…

launcher-integration.png

As mentioned, we focused on Android Studio (itself based on IntelliJ IDEA) for now, as it seems that's where Google has been focusing its Android tools development effort for over a year. However, the system is not restrictive, and it will be relatively trivial in the near future to add ADT support (Android Development Tools using Eclipse)[1].

android-studio.png

Indeed, the Ubuntu Developer Tools Center aims to be a real platform for all developer users on Ubuntu. We carefully implemented the base platform with a strong technical foundation so that it's easily extensible, and some features, like the advanced bash shell completion, will make even more sense once we add support for other development tools.

Availability

We will always target the latest Ubuntu LTS version first, alongside the latest version in development. Yes! It means that people who want to benefit from the extensively tested and strong base experience that a Long Term Support version offers will always be up to date on their favorite developer tools and be first-class citizens. We strongly believe that developers always want the latest available tools on a strong, solid and stable base, and this is one of the core principles we are focusing on.

For now, the LTS support is through our official Ubuntu Developer Tools Center ppa, but we plan to move that to the backports archive with all the new or updated libraries. For Utopic, it's already available in the 14.10 Ubuntu archive.

Initial available version

Be aware that the Ubuntu Developer Tools Center is currently in alpha. This tool will evolve depending on your feedback, so it's up to you to suggest the direction you want it to go! A blog post on how to contribute will follow in the next days. This initial version is available in English, French and Chinese!

Another blog post will expand on how we test this tool. For now, just be aware that the extensive test suite runs daily and ensures that we don't break on any supported platform, that the Ubuntu platform itself doesn't break us, and that any third party we rely on (like website links and so on) doesn't change without us spotting it. This will ensure that our tool is always working, with limited downtime.

Example: how to install the Ubuntu Developer Tools Center and then Android Studio

Ubuntu Developer Tools Center

If you are on Ubuntu 14.04 LTS, first, add the UDTC ppa:

$ sudo add-apt-repository ppa:didrocks/ubuntu-developer-tools-center
$ sudo apt-get update

Then, install UDTC:

$ sudo apt-get install ubuntu-developer-tools-center

How to install android-studio

Simply execute[2]:

$ udtc android

Then, accept the installation path and the Google license. It will download and install all requirements alongside Android Studio and the latest Android SDK itself, then configure it and fit it into the system, for instance by adding a Unity launcher icon…

And that's it! Happy Android application hacking on Ubuntu. You will find the familiar experience with the Android emulator and SDK manager, plus the auto-updater to always be on the latest version. ;)

android-sdk-tools.png

Feedback

We welcome any ideas and feedback, as well as contributions, as we'll discuss further in the next post. Meanwhile, do not hesitate to reach me on IRC (didrocks on freenode, #ubuntu-desktop being the primary channel), or on Google+. You can also open bugs on the launchpad project or the github one.

I'm excited about this opportunity to work on the developer desktop. Ubuntu loves Developers, and it's all on us to create a strong developer community so that we can really make Ubuntu the developer-friendly platform of choice!

Notes

[1] For the technical audience, the Android Studio specific part is less than 60 lines

[2] android-studio is the default for the Android development platform; you can choose it explicitly by executing "$ udtc android android-studio". Refer to --help or use the bash completion for more help and hints.

Wednesday, August 14, 2013

Release early, release often, release every 4h!

It's been a long time since I talked about our daily release process on this blog.

For those of you who are not aware of it, this is what enables us to continuously release most of the components we, as the ubuntu community, are upstream for, into the baseline.

Sometimes, releases can feel like pushing huge rocks

History

Some quick stats since the system has been in place (in full production since nearly last December):

  • we currently release 245 components to the distro every day (meaning that every time a meaningful change lands in any of those 245 components, we will try to release it). Those are split into 22 stacks.
  • this created 309 uploads in raring
  • we are now at nearly 800 uploads in saucy (and the exact number changes hour after hour)

Those numbers don't even count feature branches (temporary forks of a set of branches) and SRUs.

Getting further

However, seeing the number of transitions we have every day, it seemed we needed to be even faster. I was challenged by Rick and Alexander to speed up the process, even if it was always possible to run some parts of it manually[1]. I'm happy to announce that we are now capable of releasing to the distro every 4 hours! The time from a branch being proposed to trunk to it reaching the distro is drastically reduced thanks to this. But we still took great care to keep the same safety net, with tests running, ABI handling, and not pushing every commit to the distro.

This new process has been in beta since last Thursday, and now that everyone is briefed about it, it's time for it to enter official production!

Modification to the process

For this to succeed, some modifications to the whole process have been put in place:

  • now that we release every 4 hours, we need a production-like team always looking at the results. That was put in place with the ~ubuntu-unity team, and the schedule is publicly available here.
  • the consequence is that there is no more "stack ownership"; everyone in the team is now responsible for the whole set.
  • it also means that upstream now has a window of 4 hours before the "tick" (look at the cross on the schedule) to push changes into the different trunks as a coherent piece, rather than once a day before 00 UTC. It's the natural trade-off between speed and safety for big transitions.
  • better communication and tracking were needed (especially as production is looked after by different people throughout the day). We also needed to ensure everything is available for upstreams to know where their code is, which bugs affect them and so on… So now, every time there is an issue, a bug is opened, upstream is pinged about it, and we track those on that tab. We escalate after 3 days if nothing is fixed by then.
  • we will reduce "manual on-demand rebuilds" as much as possible, as the next run will fix things anyway.

Also, I didn't want this to become a burden for our team, so some technical changes have been put in place:

  • we no longer rely on the jenkins lock system, as the workflow is way more complex than what jenkins can handle by itself.
  • it's possible to publish the previous run until the new stack starts to build (and it remains possible while the new stack is still waiting).
  • if a stack B is blocked on stack A because A is in manual publishing mode (packaging changes, for instance), forcing the publication of A will try to republish B if it was tested against this version of A (and this scales up and cascades as needed). So less manual publication is needed and more of the work is push-button \o/
  • an additional "force rebuild" mode can automatically retrigger the rebuild of components that build against a component which doesn't guarantee any ABI stability (but only if that component was actually renewed).
  • we also ensure that we can't deadlock in jenkins (with hundreds of jobs running every 4 hours).
  • the dependencies and ordering between stacks no longer rely on any scheduling; it's all computed and ordered properly now (thanks to Jean-Baptiste for helping on the last two items).

Final words

Since the beginning of the week, in addition to seeing work delivered to the baseline much faster, we have also seen a side benefit: when everybody looks at the issues more regularly, there is less coordination work to do and each tick is less work in the end. Now, I'm counting on the ~ubuntu-unity team to keep that up and to watch production throughout the day. Thanks guys! :)

I always keep in mind the motto "when something is hard, let's do it more often". I think we apply that one quite well. :)

Note

[1] which some upstreams used to ask us for

Tuesday, June 25, 2013

Versioning schema change in daily release

Just a quick note on Daily Releases (the process of uploading to the ubuntu distribution the more than 200 components Canonical is upstream for, in various ubuntu series).

crazy numbers

After some discussions on #ubuntu-release today, we decided to evolve the daily release versioning schema:

  • we previously had <upstream_version>daily<yy.mm.dd>-0ubuntu1 as a daily release schema in the regular case
  • multiple releases a day for the same component would give: <upstream_version>daily<yy.mm.dd>.minor-0ubuntu1, where minor is an incremental digit
  • for maintenance branches, we previously had <upstream_version>daily<yy.mm.dd>(.minor)~<series-version>-0ubuntu1, where series-version is 13.04, 13.10…
  • feature branches append the ppa name to it (after transformation): <upstream_version>daily<yy.mm.dd>.<ppa_name_dotted>


Mainly to handle SRUs with respect to feature branches, multiple releases per day, the transition from the next ppa to the distro, and maintenance branches released with the same upstream version but potentially different content than trunk, the version schema has been (slightly) simplified:

  • now, the daily release version will be: <upstream_version>+<series-version>.<yyyymmdd>-0ubuntu1, like: 0.42+13.10.20130625-0ubuntu1 instead of 0.42daily13.06.25-0ubuntu1
  • for maintenance branches, we thus now have a similar schema: 0.42+13.04.20130625-0ubuntu1 for a daily release happening the same day (instead of 0.42daily13.06.25~13.04-0ubuntu1)
  • the rules for minor versions (multiple releases a day) and feature branches remain the same (a small sketch of the composition follows below)
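To make the composition concrete, here is a tiny sketch (not the production code; how the minor digit is appended is an assumption based on the rules above):

from datetime import date


def daily_version(upstream_version, series_version, day=None, minor=None):
    """Build <upstream_version>+<series-version>.<yyyymmdd>[.minor]-0ubuntu1."""
    day = day or date.today()
    version = "{}+{}.{:%Y%m%d}".format(upstream_version, series_version, day)
    if minor:
        version += ".{}".format(minor)
    return version + "-0ubuntu1"

# daily_version("0.42", "13.10", date(2013, 6, 25)) -> "0.42+13.10.20130625-0ubuntu1"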

This still covers the cases above, keeping all the concurrent deliveries consistent between what we deliver to the distro and to the various ppas. The upstream merger should still work and remain compatible as well. :)

The new code is now deployed in production after a configuration change. In addition to updating the test cases to the new versioning schema, you will notice there are also some additional tests to ensure the transition from the old world to the new one happens seamlessly. :)

All the documentation and processes have been updated to the latest as well.

Happy daily releases!

Thursday, March 7, 2013

Followup UDS session on application lifecycle

I hope everyone enjoyed the new virtual UDS format as much as we did. :)

However, don't despair, it's not really *over* yet! The discussion around the application lifecycle[1] covered too large a spectrum and we didn't get time to properly finish it. As we didn't want to run over other sessions, we decided to reschedule it for just after UDS.

The new follow-up session "Application model: lifecycle" will happen this Friday, March 8, 16:00 – 16:25 UTC. It will be a hangout on air, and people can follow up with questions on #ubuntu-devel on IRC (freenode). We'll try to set up an Ubuntu On Air session with the community so that links are easier to find (this blog post will be updated with exact information near the event time).

Happy post vUDS session!

Note

[1] You can rewatch the video session here as it was live, thanks to the hangout on air!

Tuesday, February 5, 2013

Unity: release early, release often… release daily! (part 5 and conclusion)

This post is part of the Unity daily release process blog post suite.

After a week of letting people ask questions about the daily release process, I guess it's time to catch up and conclude this series with a FAQ and some thoughts for the future.

FAQ

The FAQ is divided into multiple sections depending on your role in the development of ubuntu, in the hope that you will find what you are looking for more quickly this way. If you have any question that isn't addressed below, please ping didrocks on IRC (freenode, #ubuntu-unity) and I'll complete this page.

Upstream developers

What's needed to be done when proposing a branch or reviewing?

As discussed in this part, when proposing a branch or reviewing a peer's work, multiple rules have been established. The developers need to ensure the following:

  • Design needs to acknowledge the change if there is any visual change involved.
  • The developer, the reviewer and the integration team all ensure that ubuntu processes are followed (UI Freeze/Feature Freeze for instance). If exceptions are required, they check before approving the merge that those have been acknowledged by the different parties. The integration team can help smooth this along, but the requests should come from the developers.
  • Relevant bugs are attached to the merge proposal. This is useful for tracking what changed and for generating the changelog, as we'll see in the next part.
  • New/modified tests are reviewed (for existence).
  • They ensure that it builds and that the unit tests pass automatically.
  • Another important one, especially when refactoring, is to ensure that integration tests still pass.
  • The change itself is reviewed by another peer contributor.
  • If the change seems important enough to have a bug report linked, ensure that the merge request is linked to a bug report (and you will get all the praise in debian/changelog as well!).
  • If packaging changes are needed, ping the integration team so that they acknowledge them, and ensure the packaging changes are part of the merge proposal.

When you approve a change, do not forget to check that a commit message is provided in launchpad and to set the global status to "approved".

When do changes need to land in a coherent piece?

Some changes can touch various components (even across different stacks). To avoid losing a daily release because only half of a transition landed in some upstream projects (and so tests will fail because they catch such issues, right? ;)), please ensure that all transitions are merged by 00 UTC. You can then start approving transitions again from 06 UTC. We may try to make this window shorter in the future, but let's first see how this works in practice. For any big transition, please coordinate with the ubuntu-unity team so that they can give a hand.

I have a change involving packaging modifications

As mentioned in a previous section, just ask the ubuntu-unity team to review or assist in making those changes.

I want to build the package locally myself

If you have made some packaging changes, or added a C symbol, you may want to ensure that it still builds fine. Quite simple: just go into your branch directory and run $ bzr bd (from the bzr-builddeb package). This will build your package on your machine using the same flags and checks as on the builders. No need to autoreconf if you are using autotools; everything is handled for you. You will also get warned if some installed files are missing. ;)

I need to break an API/ABI

The first question is: "are you sure?". Remember that API changes are a PITA for all third parties, in particular if your API is public. Backward compatibility is key. And yes, your dbus interface should be considered part of your API!

If you still want to do that, ensure that you bump your soname if the ABI is broken; multiple packaging changes are then needed. Consequently, make sure to ping your lovely ubuntu-unity team to request assistance. The transition should happen (if no backward compatibility is provided) within the previously mentioned timeframe so that everything lands coherently.

You also need to ensure that for all components build-depending on the API you changed, you bumped the associated build dependency version in debian/control.

To what version should I bump the build dependency version?

Let's say you added an API to package A. B depends on A and you want to start using this new awesome API. What version should be used in debian/control to ensure you're building against the new A?

This is quite easy. The build dependency in debian/control should look like: A >= 6.0daily13.03.01 (possibly followed by -0ubuntuX). Strip the trailing -0ubuntuX and then append bzr<rev_where_you_added_the_api_in_trunk>. For instance, let's imagine you added the API in rev 42 of trunk; the build dependency will then change from:

A >= 6.0daily13.03.01

to:

A >= 6.0daily13.03.01bzr42

This is to ensure the upstream merger will be able to take it.

If no "daily" string is provided in the dependency you add, please check with the ubuntu-unity team.

I'm exposing a new C symbol in my library; it seems that some packaging changes are needed…

Debian packages have a debian/<packagename>.symbols file which lists all the symbols exposed by your library (we only do that for C libraries, as C++ mangles names differently per architecture). When you try to build the package containing a new symbol, you will see something like:

--- debian/libdee-1.0-4.symbols (libdee-1.0-4_1.0.14-0ubuntu2_amd64)
+++ dpkg-gensymbolsvPEBau 2013-02-05 09:43:28.529715528 +0100
@@ -4,6 +4,7 @@
 dee_analyzer_collate_cmp@Base 0.5.22-1ubuntu1
 dee_analyzer_collate_cmp_func@Base 0.5.22-1ubuntu1
 dee_analyzer_collate_key@Base 0.5.22-1ubuntu1
+ dee_analyzer_get_type@Base 1.0.14-0ubuntu2
 dee_analyzer_new@Base 0.5.22-1ubuntu1
 dee_analyzer_tokenize@Base 0.5.22-1ubuntu1
 dee_client_get_type@Base 1.0.0
The diff shows that debian/libdee-1.0-4.symbols doesn't list the newly exposed dee_analyzer_get_type symbol. You see a version number next to it, corresponding to the current package version. The issue is that you don't know what the package version will be when the next daily release happens (even if you can guess), so what should you put there?

The answer is easier than you might think. As explained in the corresponding section, the daily release bot will know what version is needed, so you just need to hint that the new symbol is there so the upstream merger lets it pass.

For that, just include the symbol with 0replaceme[1] as the version in the branch you will propose as a merge request. You will thus have in your symbols file:

dee_analyzer_collate_cmp@Base 0.5.22-1ubuntu1
dee_analyzer_collate_cmp_func@Base 0.5.22-1ubuntu1
dee_analyzer_collate_key@Base 0.5.22-1ubuntu1
dee_analyzer_get_type@Base 0replaceme
dee_analyzer_new@Base 0.5.22-1ubuntu1
dee_analyzer_tokenize@Base 0.5.22-1ubuntu1
dee_client_get_type@Base 1.0.0

The dee_analyzer_get_type@Base 0replaceme line will be replaced with the exact version of the next daily release.
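As a hypothetical sketch of the substitution the daily release machinery performs at that point (the real implementation may differ), anything starting with 0replaceme is swapped for the computed version, in line with the footnote below:

import re


def replace_symbols_placeholder(symbols_path, new_version):
    """Replace every 0replaceme marker in a symbols file with the released version."""
    with open(symbols_path) as f:
        content = f.read()
    content = re.sub(r"0replaceme\S*", new_version, content)
    with open(symbols_path, "w") as f:
        f.write(content)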

My name deserves to be in the changelog!

I'm sure you want the world to know what great modifications you have introduced. To get this praise (and blame ;)), we strongly advise you to link your change to a bug. This can be done in multiple ways: link bugs to a merge proposal before it's approved (by linking a branch to the bug report in launchpad), use "bzr commit --fixes lp:XXXX" so it is automatically linked for you when proposing the merge for review, or put something like "this fixes bug…" in a commit message. This will avoid the changelog ending up with only empty content.

I invite you to read "Why not including all commits in debian/changelog?" in this section to understand why we only include bug report titles in the changelog rather than everything.

Ubuntu Maintainers

I want to upload a change to a package under daily release process

Multiple possibilities there.

The change itself can wait for the next daily release

If it's an upstream change, we strongly advise you to follow the same procedure and rules as the upstream developers, meaning proposing a merge request and so on… bzr lp-propose and bzr branch lp:<source_package_name> are regularly and automatically adjusted as part of the daily release process to point to the right branches, so that you can find them easily. Vcs-Bzr in debian/control should point to the right location as well.

Then, just refer to the upstream merging guidelines above. Once approved, this change will be in the next daily release (if tests pass).

An urgent change is needed

If the change needs to be done right away, but can wait for a full testing round (~2 hours), you can:

  • Propose your merge request
  • Ping someone from the ubuntu-unity team, who will ensure the branch is reviewed with priority and will manually rerun a daily release including your change. That way, your change gets the same quality reviews and test checks as anything else entering the distro.

It's really urgent

You can then upload your change right away; the next daily will be blocked for that component (and only for that component) until your change reaches the upstream repository. So please be a good citizen and avoid extra churn by proposing your change back to the upstream repo (including the changelog), preferably pinging the ubuntu-unity team.

I'm making a change, should I provide anything in debian/changelog?

Not really. As mentioned earlier, if you link to a bug or mention a bug number in a commit message in your branch before proposing it for review, this will be taken into account and debian/changelog will be populated automatically with your name as part of the next daily release. You can still provide an entry manually, and the daily release will ensure there is no duplication if it was already mentioned.

If you provide it manually, just run dch -i and make sure the distribution is set to UNRELEASED for a version not directly uploaded to ubuntu.

ubuntu-unity team

Note that in all the following commands, if -r <release> is not provided, "head" is assumed. Also, ensure your credentials are working and that you are connected to the VPN (the QA team is responsible for providing those jenkins credentials).

Where is the jenkins start page?

There it is.

Forcing a stack publication

After reviewing and ensuring that we can release a stack (see the section about manual publication causes, like packaging changes to review or an upstream stack failing/in manual mode), run:

$ cu2d-run -P <stack_name> -r <release>

example for indicators, head release:

$ cu2d-run -P indicators

Remember that the master "head" job is not rerun, so it will stay in its current state (unstable/yellow). The publish job, though, should go from unstable/yellow to green if everything went well.

Rerun a full stack daily release, rebuilding everything

$ cu2d-run -R <stack_name> -r <release>

Rerun a partial stack daily release, rebuilding only some components

$ cu2d-run -R <stack_name> -r <release> <components>

/!\ This will keep and take into account the state (failure-to-build state, for instance) and the binary packages of the other components which were part of this daily release but are not rebuilt.

For instance, keeping dee and libunity but rebuilding compiz and unity from the unity stack, head release:

$ cu2d-run -R unity compiz unity

The integration tests will still use everything we have in the latest stack (so including dee and libunity), plus the new trunk content for compiz and unity at the time we relaunched this command.

Rerun integration tests, but taking the whole ubuntu-unity ppa content (for instance, for a transition needing publication of both the indicator and unity stacks):

$ cu2d-run -R <stack_name> -r <release> --check-with-whole-ppa

This command doesn't rebuild anything; it just runs the "check" step, but with the whole ppa content (basically doing a dist-upgrade instead of selecting only the desired components from this stack). This is handy when, for instance, a transition involves the indicator and unity stacks: the indicator integration tests failed (we didn't have the latest unity stack built yet), while unity built and its tests passed. To be able to release the indicator stack as well as the unity stack, we need the indicator stack to publish; for that, we relaunch only the indicator tests, but with the whole ppa content, meaning the latest Unity.

In that case, the publication for the indicators will be manual (you will need to force the publication), and you should ensure that you publish the unity stack at the same time so that everything is copied coherently to the -proposed pocket. More information on stack dependencies in this section.

Adding/removing components to a stack

The stack files live in jenkins/etc/. The yaml file structure should be straightforward. Note that the branch_location part of the component: branch_location pair is optional (it will be set to lp:component if not provided). So add/change what you need in those files.

Be aware that all running jobs for this stack will be stopped.

$ cu2d-update-stack -U <path_to_stack_file>

This will also reset/reconfigure all the branches; add -S if you don't have access to them (but ensure someone still configures them at least once).

Different levels of failure acceptance

We do accept some integration tests failing (no unit test should fail though, as they run as part of the package build). The different levels for those triggers are defined in a file like jenkins/etc/autopilot.rc. These are different thresholds for failures, regressions, skipped and removed tests per stack. The real file depends on the stack name and is found in $jenkins/cu2d/<stack_name>.autopilotrc, as described in the -check template.

Monitoring upstream uploads and merges

Remember to regularly look at the -changes mailing list to spot whether a manual upload has been done, and to port the changes back to the upstream branch (if not proposed by the ubuntu developer making the upload). A component which had a manual upload is ignored by the daily release until this backport is done, meaning that no new commits will be published until then (you can see that the prepare job for that component will turn unstable).

Also, as you watch all upstream merges for compliance with our acceptance criteria, ensure that the "latestsnapshot" branches generated automatically by the daily release process have been merged successfully by the upstream merger.

Bootstrapping a new component for daily release

A lot of information is available on this wiki page.

Conclusion

Phew! This was a long series of blog posts. I hope you appreciated having a deeper look at what daily release means and how it works, and that the remaining questions have been answered by the above FAQ. I've just ported all this information to https://wiki.ubuntu.com/DailyRelease.

Of course, this is not the end of the road. I have my personal Santa Claus wishlist, like UTAH being more stable (this is being worked on), the daily image provisioning succeeding more often than it does now, refactoring work taking integration tests into account, fewer Xorg crashes when completing those tests… These currently add a little bit of overhead as, of course, any of them will reject the whole test suite and thus the whole stack. We prefer to always be on the safe side and not accept something in an unknown state!

Also, on the upstream stacks and the daily release process itself, there is room for improvement, like having fewer upstream integration tests failing (despite awesome progress already there); I wish I could soon bring that down to at most 3% accepted failures for unity instead of the current 7%. Dedicated integration tests for indicators would be awesome as well (we steal the unity ones for now and only run those involving indicators, but of course not everything is covered). Having some oif, webapps and webcreds integration tests as well is one of my main priorities, and we are working with the various upstream developers to get this done ASAP so that we have a better safety net than today. On the daily release process itself, maybe freezing all trunks in one go would be a nice improvement; more tests for the system itself are also being worked on, and my personal goal is to reach 100% test coverage for the scripts before making any additional changes. Finally, some thoughts for later: a dashboard would give a higher-level view than using jenkins directly for reporting, to get a quicker view of all stack statuses.

Of course, I'm sure we'll experience some bumpy roads with this daily release process throughout the cycle, but I'm confident we'll overcome them, and I am really enthusiastic about our first releases and how smoothly they have gone so far (we didn't break the distro… yet :p)[2]. I'm sure this is a huge change for everyone and I want to again give a big thank you to everyone involved in this effort, from the distro people to upstream and the various contributors on the QA team. Changing to such a model isn't easy; it's not only yet another process, but really a change of mindset in the way we deliver new features and components to Ubuntu. This might be frightening, but I guess it's the only way we have to ensure systematic delivery and continuous, increasing quality, while keeping a really short feedback loop to be able to react promptly to the changes we are introducing to our audience. I'm confident we are already collecting the fruits of this effort and will expand it more and more in the future.

For all those reasons, thanks again to all parties involved and let's have our daily shot of Unity!

I couldn't finish without a special thanks to Alan Pope for being a patient first reader and for tediously fixing the various typos I introduced into the first draft of these blog posts. ;)

Notes

[1] "0replacemepleasepleaseplease" or anything else appended to 0replaceme is accepted as well if you feel adventurous ;)

[2] fingers crossed, touching wood and all voodoo magic required

Friday, January 25, 2013

Unity: release early, release often… release daily! (part 4)

This post is part of the Unity daily release process blog post suite.

You have hopefully recovered from the migraine of reading yesterday's blog post on the inner workings of daily release and are hungry for more. What? That wasn't it? Well, mostly, but we purposely left out one of the biggest points and consequences of having stacks: they depend on each other!

Illustrating the problem

Let's say Mr T. has an awesome patch for the indicator stack, but this one also needs some changes in Unity and is not backward compatible. OK, the indicator integration tests should fail, right? At least as long as we don't have the latest version of Unity rebuilt in the ppa… But then the indicator stack is not published (because of the failing integration tests), while Unity, with its integration tests, will pass, as they are run against the latest indicator stack built in the same ppa. Consequently, we may end up publishing only half of what's needed (the Unity stack, without the indicator requirement).

Another possible case: let's imagine that yesterday's daily build failed, so we have a new unity staged in the ppa (containing the indicator-compatible code). The next day, the indicator integration tests will pass (as they install Unity from the ppa), but will only publish the indicator components, not the Unity ones. Bad bad bad :-)

We clearly see we need to declare dependencies between stacks and handle them smartly.

What's the stack dependency as of today?

Well, a diagram is worth 1 000 words:

Stack dependencies

We can clearly see here that stacks depend on each other. This is specified in the stack configuration files, each stack declaring which others it depends on.

This has multiple consequences.

Waiting period

First, a stack will start building only once every stack it depends on has finished building. This ensures we are in a predictable state and not in an in-between, with half of the components built against the old upstream stack and the other half against the new one because it finished building in the same ppa.

The previous diagram then needs to be modified for the unity stack with this optional new "wait" step. This step is only deployed by cu2d-update-stack if we declared some stack dependencies.

Daily Release, jenkins jobs with wait optional step

The wait script achieving that task is really simple. However, waiting is not enough: as we've seen above, we need to take the test results into consideration, which we'll do later on during the publisher step.
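As a minimal sketch of the idea (the status values are assumptions, and the way a stack's state is queried is passed in as a callable rather than the real jenkins/cu2d plumbing):

import time


def wait_for_stacks(dependent_stacks, get_status, poll_interval=60):
    """Block until every stack we depend on has finished; return the set of failed ones."""
    pending = set(dependent_stacks)
    failed = set()
    while pending:
        for stack in list(pending):
            status = get_status(stack)  # e.g. "running", "succeeded" or "failed"
            if status in ("succeeded", "failed"):
                pending.discard(stack)
                if status == "failed":
                    failed.add(stack)
        if pending:
            time.sleep(poll_interval)
    return failed  # a non-empty result later forces manual publication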

Running integration tests in isolation

OK, that answers part of the consistency issue. However, now imagine the second case we mentioned, where yesterday's daily build failed: if we take the whole ppa content, we'll end up testing the new indicator integration with a version of Unity which is neither the distro one nor the one we'll publish later on. We need to ensure we only test what makes sense.

The integration jobs therefore don't take the whole ppa content, but only the list of components generated by the current stack. This means we explicitly install only components from this stack and nothing else. Then, we log the result of the install command and filter against it. If, for instance, a soname change (like libbamf going from 3.0 to 3.1) ends up renaming a package and pulls in a new Unity to be able to run, we bail out.
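A toy version of that filtering idea (not the real cu2d code): compare what apt actually installed against the packages the current stack is expected to provide, and fail if anything extra was dragged in from the ppa.

def check_install_matches_stack(expected_packages, installed_from_ppa):
    """Return the set of unexpected packages; an empty set means the check passes."""
    unexpected = set(installed_from_ppa) - set(expected_packages)
    for package in sorted(unexpected):
        print("Unexpected package pulled from the ppa: {}".format(package))
    return unexpected

# A package renamed by a soname bump (say, a new libbamf) would show up here,
# and a non-empty result makes the check job exit in failure.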

Here is a real example where we can find the list of packages we expect to install, and here is what happened when we asked to install only those packages. The filtering, which runs as the first test, will then exit the job in failure, meaning we can't publish that stack.

This way, we know the backward-compatibility status. If backward compatibility is achieved, the tests pass right away, and the stack is published regardless of other stacks, as we tested against the distro's version of Unity. This ensures we are not regressing anything. In the other case, when backward compatibility seems to fail, as in the previous example, it means we will likely have to publish both stacks at the same time, which is something we can do.

Ensuring that the leaf stack is not published

In a case like the previous one, the Unity integration tests will run against the latest indicator stack and pass (having the latest everything), but wait! We shouldn't publish without the indicator components (those failed, and we are not even sure the fixes in Unity will make them pass, as we haven't tested against the whole ppa).

Here comes the second "manual publishing" case of the publisher script I teased about yesterday. If a stack we depend on didn't finish successfully, the current stack is put into manual publication. The xml artifacts giving the reason are quite clear:

indicators-head failed to build. Possible causes are:

* the stack really didn't build/can be prepared at all.

* the stack have integration tests not working with this previous stack.

What's need to be done:

* The integration tests for indicators-head may be rerolled with current dependent stack. If they works, both stacks should be published at the same time.

* If we only want to publish this stack, ensure as the integration tests were maybe run from a build against indicators-head, that we can publish the current stack only safely.

From there, we can either be sure that the upstream stack has no relationship with the current downstream one, in which case people with credentials can force a manual publication with cu2d-run as indicated in the previous blog post. Or, if there is a transition impacting both stacks, the cleverest way is to relaunch the upstream stack tests with the whole ppa to validate that hypothesis.

For that, cu2d-run has another mode, called "check with whole ppa". This option will rerun a daily release, but won't prepare/rebuild any component. It will, though, still take their build status into account for the global status. The only difference is that the integration tests will this time take the whole ppa into account (so no more filtering of what we install based on whether it's part of the current stack or not), doing a classic dist-upgrade. This way, we validate that the tests pass with "all latest".

Note that even if the tests pass, as we'll probably need to publish at the same time as the dependent stack, we automatically switch to manual publishing mode. Then, once everything is resolved, we can publish all the stacks we want or need with cu2d-run.

Manual publishing cascades to manual publishing

Another case where a dependent stack can fall back to manual publishing mode doesn't necessarily involve failing tests. Remember the other case where we fall back into manual publishing mode? Exactly: packaging changes! As the dependent stack is built against the latest upstream stack, and that is what we tested, we probably need to ensure both are published (or force a manual publication of only one if we are confident there are no interleaved dependencies in the current change).

Here is the information the artifacts provide in that case:

indicators-head is in manually publish mode. Possible causes are:

* Some part of the stack has packaging changes

* This stack is depending on another stack not being published

What's need to be done:

* The other stack can be published and we want to publish both stacks at the same time.

* If we only want to publish this stack, ensure as the integration tests were run from a build against indicators-head, that we can publish the current stack only safely.

So here, the same solutions as in the previous case apply: with cu2d-run, we either publish both stacks at the same time (because the packaging changes are safe and everything is according to the rules) or just the leaf one, if we are confident there is no impact.

Note that we also handle the case where something unexpected happened on a stack and we can't know its status.

In a nutshell

Stacks don't live in their own silos; there are strong dependencies between them, which are handled by the fact that they live and are built against each other in the same ppa. We also handle the backward-compatible mode, which is what most daily releases will be, where only some stacks are published. And we handle transitions impacting multiple stacks the same way, and can publish everything in one shot.

That's it for the core of the daily release process (phew, right?). Of course this is largely simplified and there are a lot of corner cases I didn't discuss, but it should give good enough coverage of how the general flow works. I'll wrap it up in a week with some kind of FAQ answering the most common questions that may still be pending (do not hesitate to comment here), and take this time to collect feedback. I will then publish this FAQ and draw a conclusion from there. See you in a week. ;)

Thursday, January 24, 2013

Unity: release early, release often… release daily! (part 3)

This post is part of the Unity daily release process blog post suite.

Now that we know how branches fly to trunk and how we ensure that the packaging metadata is in sync with our delivery, let's dive into the heart of the daily release process!

Preliminary notes on daily release

This workflow heavily relies on other components. In addition to our own daily release tool, which is available here, we needed to use jenkins for scheduling, controlling and monitoring the different parts of the process. This would not have been possible without the help of Jean-Baptiste and the precious, countless hours of hangouts spent setting up jenkins, debugging, looking at issues and investigating autopilot/utah failures. In addition to that, thanks to the #bzr channel, as this process required us to investigate some bzr internals and even patch it (for lp-propose). Not forgetting pbuilder-classic and other tools that we needed to benchmark to ensure we were doing the right thing when creating the source packages. Finally, we needed to fix (and automate the fix of) a lot of configuration on the launchpad side itself for our projects. Now it's clean and we have tools to automatically keep it that way! Thanks to everyone involved in those various efforts.

General flow of daily release

Daily release, general flow

The global idea of the different phases for achieving a stack delivery is the following:

  • For every component of the stack (new word spotted, it will be defined in the next paragraph), prepare a source package
  • Upload them to the ppa and build the packages
  • Once built, run the integration tests associated with the stack
  • If everything is ok, publish those to Ubuntu

Of course, this is the big picture; let's go into more detail now. :)

What is a stack?

Daily release works per stack. Jenkins shows all of them here. We validate a stack as a whole or reject all the components that are part of it. A stack is a set of components with close interactions and relationships between themselves, generally worked on by the same team. For instance, these are the different stacks we have right now:

  • the indicator stack for indicators :)
  • the open input framework stack
  • the misc stack, which contains all the testing tools, as well as misc components like theming, wallpapers and design assets. These are generally components that don't have integration tests (and so we don't run any integration test step)
  • the web application stack (in progress to have integration tests so that we can add more components)
  • the web credentials stack (in progress to have integration tests so that we can add more components)
  • the Unity stack, containing the core components of what defines the Unity experience.

The "cu2d" group that you can see in the above links is everything that is related to the daily release.

As we work per stack, let's focus here on one particular stack and follow the life of its components.

Different phases of one stack

General diagram

Let's have a look at the general diagram in a different way to see what is running concurrently:

Daily Release, jenkins jobs

You should recognize here the different jenkins jobs that we can list per stack, like this one for instance.

The prepare phase

The prepare phase runs one sub-prepare job per component. This is all controlled by the prepare-package script. For each project:

  • Branch latest trunk for the component
  • Collect the latest package version available in this branch and compare it to the distro one; we can get multiple cases here:
    • the version in the distro is the same as the one we have in the branch: we then download from Ubuntu the source code of the latest available package to check its content against the content of the branch (this is a double safety check).
    • If everything matches, we are confident that we won't overwrite anything in the distribution (from a past direct upload) that is not in the upstream trunk, and we can go on.
    • However, maybe a previous release was attempted as part of the daily release, so we need to take into account the version in the ppa we use for daily release and possibly bump the versioning (as if the previous version in the ppa never existed).
    • if the version is lower than the one in the distro, it means there has been a direct upload to the distro that wasn't backported to the upstream trunk. The prepare job for this component then exits as unstable, but we don't block the rest of the process. We simply ignore that component and try to validate all the other changes on the other projects without anything new from that one (if this component was really needed, the integration tests will fail later on). The integration team will then work to get those additional changes from the distro merged back into trunk for the next daily release.
  • We then check that we have new useful commits to publish, meaning a change that isn't restricted to debian/changelog or po file updates. If no useful revision is detected, we won't release that component today, but we still go on with the other projects.
  • We then create the new package versioning. See the paragraph about versioning below.
  • We then update the symbols file. Indeed, the symbols file needs to record the exact version in which a new symbol was introduced. This can't be done as part of the merge process, as we can't be sure the release will be successful the next day. So upstream uses the magic string "0replaceme" instead of providing a version as part of the merge adding a symbol to a library. When making that package a candidate for daily release, we automatically replace all occurrences of "0replaceme" with the exact version we computed just before (see the sketch after this list), and add an entry to the changelog to document it when we had to do so.
  • Then, we prepare the changelog content.
    • We first scan the changelog itself to grab all the bugs that were manually provided by developers as part of upstream merges since the last package upload.
    • Then, we scan the whole bzr history up to the latest daily release (we know the revision of the previous daily release where the snapshot was taken by extracting the string "Automatic snapshot from revision <revision>"). Walking through this history in detail (including merged branch content), we look for any bugs attached to merges or commit messages mentioning "bug #…" (there is a quite flexible regexp for extracting that; see the sketch after this list).
    • We then look at the committer for each bug fix and grab the bug title from launchpad. We finally populate debian/changelog (unless the bug was already mentioned in our initial parsing of debian/changelog, to avoid duplication). This means that it's directly the person fixing the issue who gets the praise (and blame ;)) for the fix. You can then see a result like that. As we only deduplicate since the last package upload, the same bug number can be mentioned again if the previous fix in the last upload wasn't enough. Also, all the fixes are grouped and ordered by name, making the changelog quite clear (see here).
    • We finally append the revision the snapshot was taken from to the changelog, to make clear what's in this upload and what's not.
  • Then, we generate a diff if there are any meaningful packaging changes since the last upload to Ubuntu (ignoring the changelog and the automatically replaced symbols). If we detect some, we also include all the meta-build information (like configure.ac, cmake files, automake files) that changed since the last release, to be able to have a quick look at why something changed. This information will be used during the publisher step.
  • We then sync all the bugs that were previously detected, opening downstream bugs in launchpad so that they get closed by the package upload.
  • We then commit the branch and keep it there for now.
  • The next step is building the source package inside a chroot, to make sure all build-dependencies are resolved. This uses pbuilder-classic and a custom setup to do exactly the job we want (we have our own .pbuilderrc and triggering tool, using cowbuilder to avoid having to extract a chroot). This step also creates the upstream tarball that we are going to use.
  • Finally, we upload the source package we just built to the ubuntu-unity ppa, and save various pieces of configuration.
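To make the symbols handling and the bug extraction above a bit more concrete, here is a minimal, hypothetical sketch of those two steps. The real prepare-package script is more involved; the regexp, file handling and version below are assumptions for illustration, not the actual implementation.

import re
from pathlib import Path

# Hypothetical new package version computed for today's daily release.
NEW_VERSION = "6.5daily13.01.24-0ubuntu1"

def replace_0replaceme(symbols_path, new_version):
    """Replace the '0replaceme' placeholder with the real version in a symbols file."""
    path = Path(symbols_path)
    content = path.read_text()
    if "0replaceme" in content:
        path.write_text(content.replace("0replaceme", new_version))
        return True  # the caller can then add a debian/changelog entry documenting it
    return False

# A deliberately loose regexp catching "bug #123456", "LP: #123456", "fixes: 123456"...
BUG_RE = re.compile(r"(?:bugs?|lp|fix(?:es)?)[:#\s]*#?(\d{5,})", re.IGNORECASE)

def bugs_from_commit_messages(messages):
    """Collect launchpad bug numbers mentioned in a list of commit messages."""
    bugs = set()
    for message in messages:
        bugs.update(int(number) for number in BUG_RE.findall(message))
    return bugs

if __name__ == "__main__":
    print(bugs_from_commit_messages(["merge branch, fixes bug #1084747",
                                     "typo fix (LP: #1085432)"]))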

All those steps are happening in parallel for any component of the stack.

For those interested, here are the logs when there are new commits to publish for a component. Here is another example, when there is nothing relevant. Finally, an upload that isn't in trunk is signaled like that (the component is ignored, but the others go on).

Monitoring the ppa build

This is mostly the job of this watch-ppa script. It monitors the ubuntu-unity daily-ppa build status for the components we just uploaded and ensures that they are published and built successfully on all architectures. As we run unit tests as part of the package build, this already ensures that unit tests pass with the latest Ubuntu. In addition, the script generates some meta-information when packages are published. It will make the jenkins job fail if any architecture failed to build, or if a package that was supposed to start building never had its source published in the ppa (after a timeout).

Running the integration tests

In parallel to monitoring the build, we run the integration tests (if any). For that, we start monitoring using the previous script, but only on i386. Once all the new components for this architecture are built and published, we start a slave jenkins job running the integration tests. This will, using UTAH, provision a machine, installing the latest Ubuntu iso for each configuration (intel, nvidia, ati); then we add the packages we need from this stack using the ppa and run the tests corresponding to the current stack. Getting all Unity autopilot tests stabilized for that was a huge task. Thanks in particular to Łukasz Zemczak, and with some help from the upstream Unity team, we finally got the number of failing autopilot tests under control (from 130 down to 12-15 out of the 450 tests). We use them to validate Unity nowadays. However, the indicator and oif stacks had no integration tests at all. To not block the process and keep moving, we "stole" the relevant Unity autopilot tests (hud + indicators for the indicator stack) and only run those. As oif has no integration tests at all, we just ensure that Unity still starts and that the ABI is not broken.

We are waiting for the web credentials and web apps stacks to grow some integration tests (which are coming in the next couple of weeks with our help) to be able to add more of their components while ensuring the quality level we are targeting.

Then, the check job collects the results of those tests and, thanks to a per-stack configuration mechanism, we have thresholds to decide whether the current state is acceptable to us. For instance, over the 450 tests running for Unity (x3, as we run on intel, nvidia and ati), we accept 5% of failures. We have different levels for regressions and skipped tests as well. If the collected values are below those thresholds, we consider that the tests pass (a rough sketch of this check follows the examples below).

  • Logs of tests passing. You can get a more detailed view of which tests passed and failed by looking at the child job, which is the one actually running the tests: https://jenkins.qa.ubuntu.com/job/ps-unity-autopilot-release-testing/ (unstable, yellow, means that some tests failed). The parent one only launches it when ready and collects the results.
  • Logs of more tests than the acceptable threshold failing.
  • Logs of tests failing because UTAH failed to provision the machine or the daily build iso wasn't installable.
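As an illustration of this per-stack acceptance check, here is a minimal sketch; the threshold values and names are made up for the example and don't reflect the real check job configuration.

# Hypothetical per-stack thresholds (percentages), loosely modeled on the
# "5% of failures accepted for Unity" example above.
THRESHOLDS = {"failures": 5.0, "regressions": 0.0, "skipped": 10.0}

def stack_is_acceptable(total, failures, regressions, skipped, thresholds=THRESHOLDS):
    """Return True if the collected test results stay below every threshold."""
    if total == 0:
        return False  # no results at all is treated as a failure
    rates = {
        "failures": 100.0 * failures / total,
        "regressions": 100.0 * regressions / total,
        "skipped": 100.0 * skipped / total,
    }
    return all(rates[name] <= limit for name, limit in thresholds.items())

# Example: 1350 runs (450 tests x 3 video configurations), 40 failures -> ~3%, acceptable.
print(stack_is_acceptable(total=1350, failures=40, regressions=0, skipped=30))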

And finally, publishing

If we reach this step, it means that:

  • we have at least one component with something meaningful to publish
  • the build was successful on all architectures.
  • the tests are within the acceptable pass rate

So, let's go on and publish! The script still has a few steps to go through. In addition to having a "to other ppa" or "to distro" mode as a publishing destination, it:

  • will check if we have manual packaging changes (remember the diff we generated for each component in the prepare stage?). If we actually do have some, we put the publishing into a manual mode. This means that we won't publish this stack automatically; only people with upload rights have special credentials to force the publication.
  • will then propose and auto-approve, for each component, the changes we made to the packaging (edits to debian/changelog, symbols files…). Then, the upstream merger will grab those branches and merge them upstream. We only push those changes back upstream now (and not during the prepare step) because it's only at this point that we are finally sure we are releasing them.

Most of the time, we don't have packaging changes, so the story is quite easy. When we do, the job shows it in the log and is marked as unstable. The packaging diff, plus the additional build system context, is attached as artefacts for an easy review. If the integration team agrees with those changes, they have special credentials to run cu2d-run in a special "manual" mode, ignoring the packaging changes and doing the publication if nothing else is blocking. This way, we address the concern that "only people with upload rights can ack/review packaging changes". This is our secondary safety net (the first one being the integration team looking at upstream merges).

The reality is not that simple, and that's not the only case where the publication can be set to manual, but I'll discuss that later on.

A small note on the publisher: you can see that we merge our changes back upstream, but upstream didn't sleep while we took our snapshot, built everything and ran those integration tests, so we usually end up with some commits landing in trunk between the latest snapshot and the time we push that branch back. That's the reason why we have this "Automatic snapshot from revision <revision>" line in the changelog: it clearly states where we made the cut, so we don't rely on the commit of this changelog modification being merged back.

Copy to distro

In fact, publishing is not the end of the story. Colin Watson raised the concern that we need to separate powers and not have too many launchpad bots with upload rights to Ubuntu. It's true that all the previous operations use a bot for committing and pushing to the ppa, with its own gpg and ssh keys. Having those credentials spread over multiple jenkins machines would be scary if they gave upload privileges to Ubuntu. So instead of the previous step being piloted directly by a bot with upload rights to the distro, we only generate a sync file with various pieces of information, like this one.

Then, on the archive admin machines, we have a cron job using this copy script which:

  • collects all available rsync files over the wire
  • then checks that the info in each file is valid, like ensuring that the components to be published are part of the whitelist of projects that can be synced (which is the same list and meta-info used to generate the jenkins jobs, and lives there).
  • does a final version check against the distro, in case new versions were uploaded since the prepare phase (indeed, an upload may have happened on one of those components while we were busy building/testing/publishing).
  • if everything is fine, the sources and binaries are copied from the ppa to the proposed repository.

Copy to distro

Executing binary copies from the ppa (meaning that the .debs in the ppa are exactly the ones that are copied to Ubuntu) gives us confidence that what we tested is exactly what is delivered to our users, built in the same order. The second obvious advantage is that we don't rebuild what's already built.
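As a rough illustration of the checks above, here is a hypothetical sketch of the sync-file validation; the whitelist content and field names are assumptions, and the actual binary copy to -proposed is left as a placeholder since it runs with archive-admin credentials.

import apt_pkg

apt_pkg.init_system()  # needed before using version_compare

# Hypothetical whitelist: the same project list used to generate the jenkins jobs.
WHITELIST = {"unity", "nux", "bamf", "compiz"}

def validate_sync_entry(source, version, distro_version, whitelist=WHITELIST):
    """Check one entry of a sync file before copying it from the ppa to -proposed."""
    if source not in whitelist:
        return False, "%s is not in the list of projects that can be synced" % source
    # Refuse the copy if a newer version reached the distro since the prepare phase.
    if distro_version and apt_pkg.version_compare(distro_version, version) >= 0:
        return False, "distro already has %s %s" % (source, distro_version)
    return True, "ok"

ok, reason = validate_sync_entry("unity", "6.5daily13.01.24-0ubuntu1",
                                 distro_version="6.5daily13.01.23-0ubuntu1")
if ok:
    pass  # here the real script performs a binary copy from the ppa to the proposed pocket
else:
    print("skipping:", reason)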

Rebuild only one or some components

We have an additional facility. Let's say that the integration tests failed because of one component (like a build dependency not being bumped): we have the possibility to rerun the whole workflow for only the component(s) given on the command line (still using the same cu2d-run script, by people with the credentials). The other components keep exactly the same state, meaning they are not rebuilt, but they are still part of what we test in the integration suite and of what we are going to publish. Some resources are spared this way and we get to our result faster.

However, the build results will still take all the components (even those we don't rebuild) into account. We won't let a FTBFS slip by that easily! :)

Why not including all commits in debian/changelog?

We made the choice to only mention changes associated with bugs. This also means that if upstream never links bugs to merge proposals, never uses "bzr commit --fixes lp:XXXX" and never puts something like "this fixes bug…" in a commit message, the upload will have an empty changelog (just "latest snapshot from rev …") and won't detail the upload content.

This can be seen as suboptimal, but I think we should really encourage our upstreams to link fixes to bugs when they are important enough to broadcast. The noise of every commit message to trunk isn't suitable for people reading debian/changelog, which is visible in update-manager. If people want to track upstream closely, they can look at the bzr branch for that. You can see debian/changelog as a NEWS file, containing only the important information from that upload. Of course, we need more and more people upstream to realize that and to learn that linking to a bug report is important.

If a piece of information is important to them but they don't want to open a bug for it when there isn't one, there is still the possibility, as part of the upstream merge, to directly feed debian/changelog with a sensible description of the change.

Versioning scheme

We needed to come up with a sane versioning scheme. The current one is:

  1. based on current upstream version, like 6.5
  2. appending "daily" to it
  3. adding the current date in yy.mm.dd format

So we end up with, for instance: 6.5daily13.01.24. Then, we add -0ubuntu1, as the packaging is kept separate in the diff.gz by using split mode.

If we need to rerun a release for the same component on the same day, we detect that a version was already released in Ubuntu or in the ppa, and this becomes 6.5daily13.01.24.1, then 6.5daily13.01.24.2… I think you get it. :)

So the general pattern for the package version is: <upstream_version>daily<yy.mm.dd(.minor)>-0ubuntu1. The next goal is to keep upstream_version as small (one or two digits?) as possible.
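For illustration, here is a minimal sketch of how such a version string could be computed; the bumping rule is simplified (the real tooling also has to look at both the distro and the ppa to decide the minor number):

import datetime
import re

def daily_version(upstream_version, existing_versions, today=None):
    """Build <upstream>daily<yy.mm.dd(.minor)> and bump .minor if the base is taken."""
    today = today or datetime.date.today()
    base = "%sdaily%s" % (upstream_version, today.strftime("%y.%m.%d"))
    taken = {re.sub(r"-0ubuntu\d+$", "", v) for v in existing_versions}
    candidate, minor = base, 0
    while candidate in taken:
        minor += 1
        candidate = "%s.%d" % (base, minor)
    return candidate + "-0ubuntu1"

# First release of the day, then a second run on the same day:
print(daily_version("6.5", [], datetime.date(2013, 1, 24)))
print(daily_version("6.5", ["6.5daily13.01.24-0ubuntu1"], datetime.date(2013, 1, 24)))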

Configuring all those jobs

As we have one job per project per series, plus some meta-jobs per stack, we currently have about 80 jobs per supported version. This is a lot; the only way to avoid needing bazillions of people just to maintain those jenkins jobs is to automate, automate, automate.

So every stack defines some metadata like a series, the projects it covers, when it needs to run, an optional extra check step, the ppas used as source and destination, the optional branches involved… Then, running cu2d-update-stack -U updates all the jenkins jobs to the latest configuration, using the templates we defined to standardize those jobs. In addition, it also resets some bzr configuration in launchpad for those branches, to ensure that various operations like lp-propose, or branching the desired target, will indeed do the right thing. As said previously, since this list is also used for filtering when copying to the distro, we have very little chance of getting out of sync by automating everything!
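For illustration, a stack definition boils down to something like the following (expressed here as a Python dict; the real configuration format and key names may differ, this only sketches the kind of metadata involved):

# Hypothetical stack description for the indicator stack.
indicators_stack = {
    "name": "indicators",
    "series": "raring",
    "source_ppa": "ubuntu-unity/daily-build",
    "schedule": "03:00",                    # when the daily run should start
    "extracheck": "autopilot-indicators",   # optional integration test job (made-up name)
    "dependencies": [],                     # stacks this one depends on (see the next post)
    "projects": [
        "libindicator",
        "indicator-datetime",
        "indicator-sound",
    ],
}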

So, just adding a component to a stack is basically a one line change + the script to run! Hard to do something easier. :)

And by the way, if you were sharp-eyed on the last set of links, you will have seen a "dependencies" stanza in the stack definition. Indeed, we described the simple case here and completely glossed over the fact that stacks depend on each other. How do we resolve that? How do we avoid publishing an inconsistent state? That's a story for tomorrow! :)

mercredi, janvier 23 2013

Unity: release early, release often… release daily! (part 2)

This post is part of the Unity daily release process blog post suite.

As part of the new Unity release procedure, let's first have a look at the start of the story of a branch: how does it reach trunk?

The merge procedure

Starting with the 12.04 development cycle, we needed upstream to be able to reliably and easily get their changes into trunk. To ensure that every commit in trunk passes some basic unit tests and doesn't break the build, some automation obviously had to take place. Here comes the merger bot.

Merge upstream branch workflow

Proposing my branch for being merged, general workflow

We require peer review of every merge request on any project where we are upstream. No direct commits to trunk. This means that any code change will be validated by a human first. In addition to this, once the branch is approved, the current branch will be:

  • built on most architectures (i386, amd64, armhf) in a clean environment (chroot with only the minimal dependencies)
  • unit tests will be run (as part of the packaging setup) on those archs

Only if all of this passes will the branch be merged into trunk. This way, we know that trunk is already at a high standard.

You will notice in this example of a merge request that, ahead of time (thanks to the work of Martin Mrazik and Francis Ginther), a continuous integration job kicks in to give early feedback to both the developer and the reviewer. This can indicate whether the branch is good for merging even before it is approved. This job also kicks in again if an additional commit is proposed. This rapid feedback loop provides extra insight into the branch quality and a direct link to a public jenkins instance to see any issues during the build.

Once the global status of a merge request is set to "approved", the merger validates the branch, takes the commit message (falling back to the description as a commit message on some projects if nothing is set), picks up any attached bug reports (which the developer attached manually to the merge proposal, or directly in a commit with "bzr commit --fixes lp:<bugnumber>") and merges it all to the mainline, as you can see here.

How to handle dependencies, new files shipped and similar items

We said in the previous section that the builds are done in a clean chroot environment. But we can have dependencies that are not released into the distribution yet. So how do we handle those dependencies, detect them and take the latest stack available?

For that, we are using our debian packages. As this is how the final "product" will be delivered to our users, using packages here, with the same kind of tools we use for the Ubuntu distribution itself, is a great help. This means that we have a local repository with the latest "trunk build" packages (appending "bzr<revision>" to the current Ubuntu package version), so that when Unity is being built, it grabs the latest (possibly locally built) Nux and Compiz.
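For example, a trunk build at bzr revision 3050 on top of a hypothetical Ubuntu package version could be numbered along these lines (a simplified guess at the scheme, not the exact code used by the builders):

def trunk_build_version(ubuntu_version, bzr_revision):
    """Append bzr<revision> to the current Ubuntu package version for the local repository."""
    return "%sbzr%d" % (ubuntu_version, bzr_revision)

print(trunk_build_version("3.14.0-0ubuntu1", 3050))  # 3.14.0-0ubuntu1bzr3050

Since a version carrying an extra bzr suffix sorts higher than the plain Ubuntu one, apt inside the chroot naturally picks the locally built dependency.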

OK, we are using packages, but how do we ensure that when I have a new requirement/dependency, or when I'm shipping a new file, the packaging is in sync with this merge request? Historically, the packaging branch was kept separate from upstream. This was mainly for 3 reasons:

  • we don't really want to require that our upstream learns how to package and all the small details around this
  • we don't want to be seen as distro-specific for our stack and be as any other upstream
  • we, the integration team, want to keep control over the packaging

This mostly worked, in the sense that people had to ping the integration team just before setting a merge to "approve", and ensure that no other merge was in progress meanwhile (to not pick up the wrong packaging metadata from other branches). However, it's quite clear this can't scale at all. We did have some rejections because keeping things in sync was difficult.

So, we decided this cycle to have the packaging inlined in the upstream branch. This doesn't change anything for other distributions, as "make dist" is used to create a tarball (or they can grab any tarball from launchpad from the daily release) and those don't contain the packaging info. So we are not hurting them here. However, this ensures that what we will deliver to Ubuntu the next day and what upstream is putting into their code stay in sync. This work started at the very beginning of the cycle and, thanks to the excellent work of Michael Terry, Mathieu Trudel-Lapierre, Ken VanDine and Robert Bruce Park, we got it in quickly. There were some exceptions where achieving this was really difficult, though, because some unit tests were not really in shape to run in isolation (meaning in a chroot, with mock objects for things like Xorg, dbus…). We are still working on getting the last elements bootstrapped into this process and having those tests run smoothly. I know this idea of using the packaging to build the upstream trunk, and having it inlined, is a drastic change, but from what we have seen since October, we have pretty good results with it and it seems to have worked out quite well! I would like to thank again the whole awesome product strategy (the canonical upstream) team for letting that idea go through and for facilitating the process as much as possible. Thanks as well to the jenkins masters (the 2 already mentioned previously, plus Allan LeSage and Victor R. Ruiz) for completing all the jenkins/merger machinery changes that were needed on each project for that.

We can't expect every upstream to know everything about packaging; consequently, the integration team is here and available to give any help that is needed. I think that in the long term, basic packaging changes will be made directly by upstream (we already see some people bumping a build-dependency requirement themselves, adding a new file to install, declaring a new symbol as part of the library…). However, we have processes inside the distribution, and only people with upload rights in Ubuntu are supposed to make or review those changes. How does that work with this process? Also, we have feature freeze and other Ubuntu processes; how will we ensure that upstreams don't break those rules?

Merge guidelines

As you can see in the first diagram, some requirements are set during a merge request, controlled both by the acceptance criteria and by the new conditions coming from inline packaging:

  • Design needs to acknowledge the change if there is any visual change involved
  • The developer, the reviewer and the integration team all ensure that Ubuntu processes are followed (UI Freeze/Feature Freeze for instance). If exceptions are required, they check before approving the merge that the exceptions have been acknowledged by the different parties. The integration team can help smooth this out, but the requests should come from the developers.
  • Relevant bugs are attached to the merge proposal. This is useful for tracking what changed and for generating the changelog as we'll see in the next part
  • Review of new/modified tests (for existence)
  • Ensuring that it builds and that unit tests pass (automated)
  • Another important one, especially when you refactor, is to ensure that integration tests are still passing
  • Review of the change itself by another peer contributor
  • If the change seems important enough to have a bug report linked, ensure that the merge request is linked to a bug report (and you will get all the praise in debian/changelog as well!)
  • If packaging changes are needed, ping the integration team so that they acknowledge them and ensure the packaging changes are part of the merge proposal.

The integration team, in addition to being ready to help any of our upstream developers on request, has an active monitoring role over everything that is merged upstream. Everyone has a part of the whole stack under their responsibility and will spot issues and start a discussion as needed if we are under the impression that some of those criteria are not met. If something not following those guidelines is confirmed to have slipped in, anyone can revert it by simply proposing another merge.

This mainly addresses the other two fears about inline packaging giving upstream control over the packaging. But this is only the first safety net; we have a second one, kicking in as soon as there is a packaging change since the last daily release, which we'll discuss in the next part.

Consequences for other Ubuntu maintainers

Even if we each have our own area of expertise, Ubuntu maintainers can touch any part of what constitutes the distribution (MOTUs on universe/multiverse and core developers on anything). For that reason, we didn't want daily release and inline packaging to change anything for them.

We added a warning in debian/control, pointing Vcs-Bzr to the upstream branch with a comment above it; this should highlight that any packaging change (if not urgent) needs to be a merge proposal against the upstream branch, just as if we were going to change any of the upstream code. This is how the integration team handles transitions as well, like any other developer.

However, sometimes an upload needs to be done quickly and we can't wait for the next daily. If we can't even wait to manually trigger a daily release, the other option is to upload directly to Ubuntu, as we normally do for the pieces we are not upstream for, and still propose a branch including those changes for merging into the upstream trunk. If the "merge back" is not done, the next daily release for that component will be paused, as it detects that there are newer changes in the distro, and the integration team will take care of backporting the change to the upstream trunk (as we monitor uploads to the distribution as well).

Of course, the best approach is always to consult someone from the integration team first in case of any doubt. :)

Side note on another positive effect of inline packaging

Inlining all the packaging was long, hard work; however, in addition to the benefits highlighted previously, it enabled us to standardize all ~60 packages from our internal upstream around best practices. They should now all look familiar once you have touched any of them. Indeed, they all use debhelper 9, --fail-missing to ensure we ship all installed files, symbols files for C libraries with -c4 to force us to keep them updated, autoreconf, debian/copyright using the latest standard, split packages… In addition to being easier for us, it's also easier for upstream, as they are in a familiar environment if they need to make any changes themselves, and they can just use "bzr bd" to build any of them.

Also, we were able to drop more of the distro patches we carried on those components, or push them upstream.

Conclusion on an upstream branch flow

So, you should now know everything about how upstream and packaging changes are integrated into the upstream branch with this new process, why we did it this way and what benefits we immediately get from it. I think having the same workflow for packaging and upstream changes is a net benefit: we can ensure that what we deliver on both sides is coherent, of a higher quality standard, and under control. This is what I would keep in mind if I had only one thing to remember from all this. :) Finally, we take into account the case of other maintainers needing to make changes to those components in Ubuntu, and we try to be flexible in our process for them.

The next part will discuss what it takes to validate one particular stack and upload it to the distribution on a daily basis. Stay tuned!

mardi, janvier 22 2013

Unity: release early, release often… release daily! (part 1)

This post is part of the Unity daily release process blog post suite. This is part one, you can find:

For almost 2 weeks now (and some months for other parts of the stack), we have had automated daily releases of most of the Unity components, delivered directly to Ubuntu raring. And you can see those different kinds of automated uploads published to the ubuntu archive.

I'm really thrilled about this achievement, which we discussed and set up as a goal for this release at the past UDS.

Why?

The whole Unity stack has grown tremendously in the past 3 years. Back then, we were able to release all the components, plus packaging/uploading to ubuntu, in less than an hour! Keeping a one-week release cadence was quite easy at the time, even though it was a lot of work. The benefit was that it gave us a fluid process to push what we developed upstream to our users.

As of today, the teams have grown quite a bit, and if we count everything we develop for Unity nowadays, we have more than 60 components. This ranges from indicators to the theme engine, from the open input framework to all our test infrastructure like autopilot, from webapps to web credentials, from lenses to libunity, and finally from compiz to Unity itself, without forgetting nux, bamf… and the family is still growing rapidly, with a bunch of new scopes coming down the pipe through the 100 scopes project, our own SDK for the Ubuntu phone, and the example applications for this platform that we are about to upload to Ubuntu as well… Well, you get it, the story is far from over!

So, it's clear that the number of components we develop and support will only go higher and higher. The integration team already scaled up their work hours by a large extent and rushed to get everything delivered on time to our user base[1]. However, there is no question that we won't be able to do that forever. Nor do we want to introduce artificial delays for our own upstreams on when we deliver their work to our users. We needed to solve that issue without paying any price in quality, or compromising the experience we deliver to our users. We want to keep high standards, and even, why not, combine this need with an even better, more reliable evaluation of what we eventually upload to Ubuntu from our upstreams before releasing it. Having our cake and eating it too! :)

Trying to change for the better

What was done in the last couple of cycles was to split the delivery of those packages to users between 2 groups. There was an upstream integration team, which would hand theoretically ready-to-upload packages over to the Ubuntu platform team, which did the reviews, helped them, fixed some issues and finally sponsored their work into Ubuntu. However, this didn't really work for various reasons, and we quickly realized that it ended up just complicating the process instead of easing it. You can see a diagram of where we ended up when looking back at the situation:

end of 12.04 and 12.10 release process

Seems easy, doesn't it? ;) Due to those inner loops and gotchas, in addition to the whole new set of components, we went from a weekly release cadence to doing only 4 to 5 solid releases during a whole cycle.

Discussing this with Rick Spencer, he gave me carte blanche to think about how we could make this more fluid for our users and developers. Indeed, with all the work piling up, it wasn't possible to immediately release the good work that upstream had done in the previous days, which can lead to some frustration as well. I clearly remember that Rick used the phrase "think about platform[2] as a service". This immediately echoed in me, and I thought, "why not try that… why not release all our code every day and deliver a service enabling us to do that?"

Daily release, really simplified diagram

Even if it wasn't planned from the beginning (sorry, no dark, hidden, evil plan here), thinking about it, this makes sense as part of a kind of logical progression from where we started when Unity came into existence:

  • Releasing every week, trying manually to get the release in a reasonable shape before uploading to Ubuntu
  • Raise the quality bar, put some processes in place for merge reviewing.
  • Adding Acceptance Criteria thanks to Jason Warner, ensuring that we get more and more good tests and formal conditions for doing a release
  • Automate those merges through a bot, ensuring that every commit in trunk builds fine and that unit tests pass
  • Raise again the quality bar, adding more and more integration tests

Looking at it, being able to release daily, and ensuring we can keep doing so, really is the next logical step! But it wouldn't have been possible without all those past achievements.

Advantages of daily releases

It's quite immediate to see a bunch of positive aspects of doing daily releases:

  • We can spot regressions way faster. If a new issue arises, it's easier to bisect through packages and find the day when the regression or incorrect behavior started to happen, then look at the few commits to trunk (3-5?) that were made that day and pinpoint what introduced it.
  • This enables us to deliver everything in a rapid, reliable, predictable and fluid process. We won't have the crazy rushes we had in past cycles around goal dates like feature freeze to get everything into the hands of users by then. Everything is delivered automatically to everyone the day after it lands.
  • I also see this as a strong motivation for the whole community contributing to those projects. No more waiting for a random "planned release date" hidden in a wiki to see the hard work you put into your code propagated to users' machines. You can immediately see the effect of your hacking on the broader community. If it's reviewed and approved for merging (and tests pass, I'll come back to that later), it will be in Ubuntu tomorrow, less than 24 hours after your work reached trunk! How awesome is that?
  • This also means that developers only need to build the components they are working on. No need, for instance, to rebuild compiz or nux to write a Unity patch because the API changed and you need "latest everything" to build. This lowers the barrier to contributing and the chance of having unwanted files in /usr/local lying around and conflicting with the system install.

Challenges of daily release

Sure, this comes with various risks that we had to take into account when designing this new process:

  • The main one is "yeah, it's automated, but how can you be sure you won't break Ubuntu by blindly pushing upstream code to it?". It's a reasonable objection and we didn't ignore it at all (having years of personal pain behind us on what it takes to get a release out in an acceptable shape to push to Ubuntu) :)
  • How do we interact properly with the Ubuntu processes? Only core developers, MOTUs, and per-package uploaders have upload rights to Ubuntu. Will this new process in some way give our internal upstreams the keys to the archive without them having proper upload rights?
  • How do we ensure packaging changes and upstream changes are in sync when preparing a daily release?
  • How do we put useful information in the changelog, so that someone not following upstream merges closely, but only looking at the published packages on the raring-changes mailing list or in update-manager, can see what mainly changed in an upload?
  • How do we deal with ABI breaks and similar transitions, especially when they are cross-stack (like a bamf API change impacting both indicators and Unity)? How do we ensure that what we deliver is consistent across the whole set of stacks?
  • How do we ensure that we are not blindly releasing useless changes (or even an upload with no change at all), and thus using more bandwidth, build time and so on for nothing?

How then?

I'll detail most of those questions and how we try to address those challenges in the subsequent blog posts. For now, to sum up: we have a good automated test suite, and stacks are only uploaded to Ubuntu if:

  1. their internal and integration tests pass within a certain threshold of accepted failures.
  2. they don't regress other stacks

Stabilizing those tests to get reliable results was a tremendous cross-team effort, and I'm really glad that we are confident enough in them to finally enable those daily releases.

On the other hand, an additional control ensures that packaging changes are not delivered without someone with upload rights being in the loop to ack the final change before the copy to the distribution happens.

Of course, the whole machinery is not limited to this and is in fact way more complicated; I'll have the pleasure of writing about it in separate blog posts over the following days. Stay tuned!

Notes

[1] particularly around the feature freeze

[2] the Ubuntu platform team here, not Ubuntu itself ;)

lundi, décembre 31 2012

Getting sound working during a hangout in raring

Since approximately the beginning of the raring development cycle, I have had an issue on my thinkpad x220 with google hangouts, forcing me to use a tablet or a phone to handle them. What happened is that once I entered a hangout, after 40-60s my sound was muted and there was no way to get the microphone back on; same for the output from the other participants. Video was still working pretty well.

Worse, even after exiting the hangout, I could see that the webcam was still on and the plugin couldn't reconnect or create a new hangout; I had to kill the background google talk process to get it working again (for less than a minute, of course ;)).

Finally, I took a look this morning and googled the issue. I saw multiple reports of people tweaking the configuration file to disable volume auto-adjustment. So, I gave it a try:

  1. Edit ~/.config/google-googletalkplugin/options
  2. Replace the audio-flags value of 3 with 1 (which seems to disable this volume auto-adjustment feature)

Then, kill any remaining google talk processes if you have some and try a hangout. I tested for 5 minutes and the sound kept working! After quitting the hangout, the video camera is turned off as expected. I can then start another hangout, so the google talk process doesn't seem to stall anymore.
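If you prefer to script that tweak, here is a small sketch; it only assumes that the options file contains a line mentioning audio-flags followed by the value, as I haven't checked the exact file format beyond that.

import re
from pathlib import Path

options = Path.home() / ".config/google-googletalkplugin/options"
text = options.read_text()
# Turn "audio-flags<whatever separator>3" into the same line with 1.
new_text = re.sub(r"(audio-flags\D*)3", r"\g<1>1", text, count=1)
if new_text != text:
    options.write_text(new_text)
    print("audio-flags switched to 1; kill the google talk processes and retry")
else:
    print("no audio-flags value of 3 found, nothing changed")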

Now, it's hard to know what regressed, as everything was working perfectly well on ubuntu 12.10 with this hardware. I tried chromium, chrome canary and firefox with the same result. The sound driver doesn't seem to be guilty, as I tried booting a 12.10 kernel. The next obvious candidate is pulseaudio, but it's at the same version as in 12.10 as well. So it seems to be in the google talk plugin, and I opened a ticket on the google tracker.

At least, with that workaround, you will be able to start next year with lasting sound during your hangouts if you have this problem on your machine. ;)

mardi, décembre 11 2012

Few days remaining for FOSDEM 2013 Crossdesktop devroom talks proposal

The call for talks for the Crossdesktop devroom at FOSDEM 2013 ends this very Friday! This year, we'll have some Unity-related talks. If you are interested in giving one, there's no time to lose: submit your talk today!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

vendredi, août 17 2012

Quickly reboot: Q&A session wrap up!

Last Wednesday, we had our Quickly reboot on-air hangout, welcoming the community to ask questions about Quickly and propose enhancements.

Here is the recording of the session:

As with the previous sessions, we had a lot of valuable feedback and participation. Thanks everyone for your help! Your input is tremendously valuable in shaping the future of Quickly.

We are taking a small pause in the Quickly Reboot hangouts, but we will be back soon! Meanwhile, you can catch up on the session notes and provide feedback/ideas on this wiki page! Follow us on google+ to make sure you don't miss any announcement.

mardi, août 14 2012

Quickly reboot: Q&A sessions!

The previous Quickly reboot session about templates was really instructive! It started a lot of really interesting and open discussions, particularly on the Quickly talks mailing list, where activity is getting higher and higher. Do not hesitate to join the fun here. :)

As usual, if you missed the on-air session, it's available here:

I've also summarized the session notes on the Quickly Reboot wiki page.

Next session: Q&A!

But we don't stop there: the next session will be held this Wednesday! If you read through the previous links, you will see a lot of pending questions we still have to discuss; these will be used as the basis of the session's conversation. However, in addition to those topics, all of your questions will be taken into account as well! You can jump into #quickly on freenode during the session, while watching the show on ubuntuonair. You can also prepare your questions and new ideas for Quickly and post them to the google moderator page. There are plenty of ways to participate and help shape the future of Quickly. Don't miss this session: subscribe to it right now.

Also, ensure you won't miss anything about Quickly and its reboot by subscribing to our google+ page.

lundi, août 6 2012

Quickly reboot: developer feedback wrap up and templates content

Previous sessions

The first two hangouts on the Quickly reboot, about developer feedback, were a real blast! I'm really pleased by how many good ideas and questions emerged from them.

If you missed them, the hangouts on air are available now on youtube. Go and watch them if you are interested:

I also took some notes during the sessions; here is what I think was important and came out of them: hangout notes. It's a wiki, so if you have any feedback/questions/other subjects you want discussed, don't be shy and edit it! Quickly is a fully community-driven project and we love getting constructive feedback/new ideas from everyone. ;)

I've also added to it some nice discussion topics for future sessions, and speaking of sessions…

Next step: default templates

The next session is a really important one: what should the default templates in Quickly be? From the previous discussions, it seems that python + gtk, an html5 one and the already existing unity-lens ones are the good candidates. If so, what should be in each of those? What should the default applications look like? Which framework (if any) should we use in the html5 case? Should we make opinionated choices or just provide a default set? What should we provide as tools for them, and so on…

Join the conversation, I'm sure it will be a lot of fun! We plan to have the hangout at 4PM UTC on Wednesday. Make sure to follow it, either by jumping into the hangout itself or by following the on-air session. Mark it down in your calendar so you don't miss it!

Do not hesitate to follow the Quickly google+ page to not miss any future events and enhancements to Quickly.

lundi, juillet 30 2012

Time for a Quickly reboot?

Quickly is the recommended tool for opportunistic developers on ubuntu.

When we created it 3 years ago, we made some opinionated choices, which is the essence of the project. Back then we got a lot of good press coverage and feedback (Linux Weekly News, arstechnica, Zdnet, Maximum PC review, Shot of jak and some more I can't find off the top of my head…)

Some choices were good, some were wrong, and we adapted to the needs that emerged along the road. Consequently, the project evolved, the needs did too, and we tried to make them match as much as possible. Overall, my opinion is that the project evolved in a healthy way, and it has strong arguments in its favor given the number of projects using it successfully for the ubuntu developer contest. Indeed, of the ~150 submitted projects, most were created with Quickly, and given how little support we needed to provide, it means that most things are working quite well! Also, the comments on the developer contest seem to be really positive about Quickly itself.

So? Time to go to the beach and enjoy? Oh no… Despite this great success, can we do better? I like that sometimes we can step back, look around, and restate the project goals to build an even brighter future; I think now is that time, and that's why I want to announce a Quickly reboot project!

quickly_reboot.png

Why a reboot?

We will need to port Quickly itself to python3; we have no hard deadline on it right now, but it's something (especially with unicode versus string) that I want to do soon. As this will require a lot of refactoring, I think it's time to evaluate the current needs, drop some code, maybe take a completely different direction to attract more 3rd-party integrators (for instance, do people want to integrate Quickly with their favorite IDE?), encourage template developers, make everything even more testable than today to avoid regressions… A lot of things are possible! So a reboot, meaning "let's put everything aside, list what we have and what we want to do", is the appropriate term. :)

Do you have a detailed plan right now of what is coming?

No… and that's exactly what I wanted! Well, of course, I have some ideas on paper and know globally where I want the project to evolve, but the whole plan is to get the community contributing ideas and experience before the first line of the reboot is even written.

I'm sold! How to participate?

I will run some google hangouts from the Quickly Google+ page. Those will be hangouts on air (so you can just watch them live, or after the fact, without participating), with questions/suggestions asked on #ubuntu-community-onair on freenode (IRC) and answered live, or you can jump in and ask directly during the show. :)

The current hangout will be also available (with an IRC chat box) on this page.

Those hangouts will be thematic to focus the discussion, and will be announced on the google+ Quickly page, this blog, the Quickly mailing list and so on…

First step… feedbacks, 2 chances to join!

The first step in building this Quickly next gen[1] is to gather some developer feedback. So, if you are a developer who used Quickly (or not!) and makes software for ubuntu, in the context of the app showdown (or not!), we would love to hear from you! The goal of the session is not to point at any particular bug, but rather to share how the experience was for you: what template did you use, what template would you have used if it existed, what went well or badly with the workflow, when did you need to ask a question about a particular subject? Of course, we are not limited to those questions.

You can jump directly into the hangout to ask your question live, or just watch the hangout on air and add a comment to the live event to get it read during the show.

The first two sessions will be live at different hours next Thursday and Friday. The events are on the Quickly Google+ page: subscribe to those hangouts on air to be sure not to miss them!

Next sessions?

Next topics in the following weeks will be:

  • listing the existing requirements
  • from the feedback from the first session, what needs to be added/dropped?
  • what needs to change in Quickly to meet them? Technical considerations (use of pkgme, templating system…)

All of this will be published on this blog and on the google+ page soon. I hope you are as excited as we are and will join us massively.

Note

[1] A term which seems to be used by the cool kids

jeudi, juillet 26 2012

Quickly: a path forward?

Seeing the amount of interest around Quickly over the past few years was really awesome!

Now that we have a more detailed view of how people are using the tool, it's time to collect and think about that data to see how we can improve Quickly. With all the new tools available, like hangouts on air, it may also be time to experiment with how we can use them, and to use this opportunity to have a very open collaboration process as well as to try to attract more people to contribute to it.

More on that next week. To ensure to not miss anything, please follow the Quickly Google+ page. :)


mardi, juillet 24 2012

Unity Radios lens for quantal

After having worked on the local radio scope for the music lens, I spent some time with pure python3 code, which made me run into a bug in Dee with pygi and python3 (now all fixed in both precise and quantal).

In addition to that, it was good timing to experiment more seriously with a mocking tool for testing the online part of the lens, so I played with python3-mock, which is a really awesome library dedicated to that purpose[1].
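As a generic illustration of that patch decorator (not the actual lens test suite, whose module layout I'm not reproducing here), a test faking an online call could look like this:

import json
import unittest
from urllib.request import urlopen

from mock import patch  # python3-mock; on recent Python, unittest.mock works too


def top_radios(country):
    """Tiny stand-in for an online lookup: fetch and decode a JSON radio list."""
    response = urlopen("http://example.com/radios?country=" + country)
    return json.loads(response.read().decode("utf-8"))["radios"]


class TopRadiosTestCase(unittest.TestCase):

    @patch("__main__.urlopen")  # no network access during the test
    def test_top_radios_decodes_results(self, urlopen_mock):
        urlopen_mock.return_value.read.return_value = b'{"radios": ["Radio A", "Radio B"]}'
        self.assertEqual(top_radios("fr"), ["Radio A", "Radio B"])


if __name__ == "__main__":
    unittest.main()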

So here was my playground: a dedicated Unity radio lens! You can search through thousands of available online radios, ordered by category, with some contextual information based on your current language and your location (Recommended, Top, Near you).

Unity lens Radios full view

As with most lenses, you can refine any search results with filters. The supported ones are countries, decades and genres. The current track and radio logos are displayed if supported, and double-clicking any entry should start playing the radio in rhythmbox.

Unity lens Radios search and filter

It was a fun experiment and it's now available in quantal for your pleasure ;)

Oh btw:

didrocks@tidus:~/work/unity/unity-lens-radios/ubuntu$ nosetests3
..................................................

Ran 50 tests in 0.164s

OK

Note

[1] loving the patch decorator!

lundi, juillet 16 2012

Announcing session-migration now in ubuntu

Hot off the press: session-migration is now available in quantal.

Session icon

This small tool tries to solve a problem we have encountered for a long time as a distributor, but had to postpone way too long because of other priorities. :) It basically enables packagers and maintainers to migrate user session data. Indeed, when you upgrade a package, the packaging tools run with root permissions, and only hackish solutions were used in the past to enable us to change some parts of your user configuration[1], like adding the FUSA applet or adding new compiz plugins on the fly… There are tons of examples where a distribution needs to migrate some user data (whether the user is logged in or not when the upgrade happens) without heavily patching the upstream project to add migration support there.

Fusa in 2008

This tool is executed at session startup, synchronously. It includes caching and tries to execute the minimal chunk[2], based on the design of the excellent gconf->gsettings migration tool. It also comes with a dh-migrations package, with a debhelper hook (--with migrations calling dh_migrations), so that client desktop packages just have to ship a debian/<package>.migrations file pointing to their migration scripts, which will then be shipped in the right directory. We can even imagine that in the near future, when you install such a package, you end up with a notification that a session restart is necessary (and not a full reboot). Note as well that the migration happens per user and per session, so it's really important that the scripts are idempotent.
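As an example of the kind of script this enables, here is a hypothetical, idempotent migration adding a setting to a per-user configuration file; the package's debian/<package>.migrations file would simply list it (all names below are invented for the illustration):

#!/usr/bin/python3
# Hypothetical session migration script: shipped by a package and listed in
# debian/<package>.migrations, it runs once per user session and must be
# idempotent, since it can be run again in other sessions.
import os

CONFIG = os.path.join(os.path.expanduser("~"), ".config", "my-desktop", "settings.ini")
NEW_LINE = "extra-wallpaper-dir=/usr/share/backgrounds/extra\n"

def main():
    os.makedirs(os.path.dirname(CONFIG), exist_ok=True)
    try:
        with open(CONFIG) as f:
            lines = f.readlines()
    except FileNotFoundError:
        lines = []
    if NEW_LINE in lines:       # already migrated: do nothing, stay idempotent
        return 0
    with open(CONFIG, "a") as f:
        f.write(NEW_LINE)
    return 0

if __name__ == "__main__":
    raise SystemExit(main())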

It was 3 days of fun coding this. ;) All of the C and perl code is covered by a short but complete test suite run during the package build, so hopefully no breakage. ;) The associated man pages are session-migration and dh_migrations.

You can find more details on the launchpad specification as well as the recorded streamed discussion from UDS (part 1, part 2).

The illustrated session icon is under a GPL licence, found on iconfinder.

Notes

[1] Knowing that we try to respect the golden rule as much as possible and always try to only change defaults, which is sometimes not possible

[2] but it supports multiple layers based on XDG_DATA_DIRS
