DidRocks' blog

apt-get install freedom


Unity: release early, release often… release daily! (part 3)

This post is part of the Unity daily release process blog post suite.

Now that we know how branches make their way to trunk and how we ensure that the packaging metadata stay in sync with what we deliver, let's move on to the heart of the daily release process!

Preliminary notes on daily release

This workflow relies heavily on other components. In addition to our own daily release tool, which is available here, we use jenkins for scheduling, controlling and monitoring the different parts of the process. This would not have been possible without the help of Jean-Baptiste and the precious, countless hours of hangouts spent setting up jenkins, debugging, looking at issues and investigating autopilot/utah failures. Thanks also to the #bzr channel, as this process required us to dig into some bzr internal behavior, and even to patch it (for lp-propose). Not forgetting pbuilder-classic and the other tools that we needed to benchmark to make sure we are doing the right thing when creating the source packages. Finally, we needed to fix (and automate the fix of) a lot of configuration on the launchpad side itself for our projects. Now it's clean and we have tools to automatically keep it that way! Thanks to everyone involved in those various efforts.

General flow of daily release

Daily release, general flow

The global idea of the different phases for achieving a stack delivery is the following:

  • For every component of the stack (new word spotted, it will be defined in the next paragraph), prepare a source package
  • Upload them to the ppa and build the packages
  • Once built, run the integration tests associated with the stack
  • If everything is ok, publish those to Ubuntu

Of course, this is the big picture, let's enter into more details now. :)

What is a stack?

Daily release works per stack. Jenkins is showing all of them here. We validate a stack as a whole, or reject all components that are part of it. A stack is a set of components with close interactions and relationships between themselves. They are generally worked on by the same team. For instance, these are the different stacks we have right now:

  • the indicator stack for indicators :)
  • the open input framework stack
  • the misc stack contains all testing tools, as well as misc components like theming, wallpapers and design assets. This is generally all components that don't have integration tests (and so we don't run any integration test step)
  • the web application stack (in progress to have integration tests so that we can add more components)
  • the web credentials stack (in progress to have integration tests so that we can add more components)
  • the Unity stack, containing the core components of what defines the Unity experience.

The "cu2d" group that you can see in the above links is everything that is related to the daily release.

As we are working per stack, let's focus here on one particular stack and see the life of its components.

Different phases of one stack

General diagram

Let's have a look at the general diagram in a different way to see what is running concurrently:

Daily Release, jenkins jobs

You should recognize here the different jenkins jobs that we can list per stack, like this one for instance.

The prepare phase

The prepare phase runs one sub-prepare job per component. This is all controlled by the prepare-package script. For each project, we:

  • Branch the latest trunk for the component
  • Collect the latest package version available in this branch and compare it to the one in the distro; there are multiple cases here:
    • if the version in the distro is the same as the one we have in the branch, we download the source code of the latest available package from Ubuntu and check its content against the content of the branch (this is a double safety check).
    • If everything matches, we are confident that we won't overwrite anything in the distribution (from a past direct upload) that is not in the upstream trunk, and we can go on.
    • However, maybe a previous release was attempted as part of the daily release, so we need to take into account the version in the ppa we are using for daily release and eventually bump the versioning (it would be as if the previous version in the ppa never existed)
    • if the version is less than the one in the distro, it means that there has been a direct upload to the distro not backported to the upstream trunk. The prepare job for this component exits as unstable, but we don't block the rest of the process. We'll simply ignore that component and try to validate all the other changes on the other projects without anything new from that one (if this component was really needed, the integration tests will fail later on). The integration team will then work to get those additional changes in the distro merged back into trunk for the next daily release.
  • We then check that we have new useful commits to publish. That means a change which isn't restricted to debian/changelog or a po files update. If no useful revision is detected, we won't release that component today, but we still go on with the other projects.
  • We then create the new package versioning. See the paragraph about versioning below.
  • We then update the symbols file. Indeed, the symbols file needs to know in which exact version a new symbol was introduced. This can't be done as part of the merge process as we can't be sure that the next day's release will be successful. So upstream uses the magic string "0replaceme" instead of providing a version as part of the merge adding a symbol to a library. As part of making that package a candidate for daily release, we then automatically replace all the occurrences of "0replaceme" with the exact version we computed just before (see the first sketch after this list). We add an entry to the changelog to document it when we had to do that.
  • Then, we prepare the changelog content.
    • We first scan the changelog itself to grab all bugs that were manually provided as part of upstream merges by the developers since the last package upload.
    • Then, we scan the whole bzr history until the latest daily release (we know the revision of the previous daily release where the snapshot was taken by extracting the string "Automatic snapshot from revision <revision>"). Going through this history in detail (including merged branch content), we look for any trace of attached bugs or commit messages talking about "bug #…" (there is a quite flexible regexp for extracting that; see the second sketch after this list).
    • We then look at the committer of each bug fix and grab the bug title from launchpad. We finally populate debian/changelog (if the bug wasn't already mentioned in our initial parsing of debian/changelog, to avoid duplication). This means that it's directly the person fixing the issue who gets the praise (and blame ;)) for this fix. You can then see a result like that. As we only deduplicate since the last package upload, we can mention the same bug number again if the previous fix in the last upload wasn't enough. Also, all the fixes are grouped and ordered by name, making the changelog quite clear (see here).
    • We finally append the revision from where the snapshot was taken to the changelog to make clear what's in this upload and what's not.
  • Then, we generate a diff if we have any meaningful packaging changes since last upload to Ubuntu (so ignoring changelog and symbols automatically replaced). In case we detect some, we include all meta-build information (like configure.ac, cmake files, automake files) that changed as well since last release to be able to have a quick look at why something changed. This information will be used during the publisher step.
  • We then sync all bugs that were previously detected, opening downstream bugs in launchpad so that they are getting closed by the package upload.
  • We then commit the branch and keep it there for now.
  • The next step is building the source package inside a chroot, to make sure all build-dependencies are resolved. This uses pbuilder-classic and a custom setup to do exactly the job we want (we have our own .pbuilderrc and triggering tool, using cowbuilder to avoid having to extract a chroot). This step also creates the upstream tarball that we are going to use.
  • Finally, we upload the source package we just built to the ubuntu-unity ppa, and save various pieces of configuration.
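To illustrate the symbols handling mentioned above, here is a minimal sketch of the "0replaceme" substitution (the helper name and file layout are hypothetical, not the actual cu2d code):

import glob

def replace_symbols_placeholder(packaging_dir, new_version, placeholder="0replaceme"):
    """Replace the placeholder in every debian/*.symbols file.

    Returns True if at least one file was modified, so the caller knows
    a debian/changelog entry documenting the change is needed.
    """
    modified = False
    for path in glob.glob(packaging_dir + "/debian/*.symbols"):
        with open(path) as f:
            content = f.read()
        if placeholder in content:
            with open(path, "w") as f:
                f.write(content.replace(placeholder, new_version))
            modified = True
    return modified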
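And here is a rough sketch of the kind of bug-number extraction the changelog population relies on (the regexp below illustrates the idea; it is not the exact one used by the scripts):

import re

# Accept forms like "bug #123456", "lp: #123456" or "fixes bug 123456".
BUG_RE = re.compile(r"(?:bug|lp)[:\s#]*(\d{5,7})", re.IGNORECASE)

def extract_bug_numbers(commit_messages):
    """Return the set of launchpad bug numbers mentioned in commit messages."""
    bugs = set()
    for message in commit_messages:
        for match in BUG_RE.finditer(message):
            bugs.add(int(match.group(1)))
    return bugs

# Example with messages collected from the bzr history since the last
# "Automatic snapshot from revision <revision>" marker:
print(extract_bug_numbers(["Fix dash crash (LP: #1098765)",
                           "this fixes bug 1054321 for real"]))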

All those steps are happening in parallel for any component of the stack.

For those interested, here are the logs when there are new commits to publish for that component. Here is another example, when there is nothing relevant. Finally, an upload not being on trunk is signaled like that (the component is ignored, but the others go on).

Monitoring the ppa build

This is mostly the job of this watch-ppa script. It monitors the ubuntu-unity daily-ppa build status for the components we just uploaded and ensures that they are published and built successfully on all architectures. As we run unit tests as part of the packaging build, this already guarantees that the unit tests pass with the latest Ubuntu. In addition, the script generates some meta-information when packages are published. It will make the jenkins job fail if any architecture failed to build, or if a package that was supposed to start building never had its source published in the ppa (after a timeout).
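To give an idea of what such monitoring looks like, here is a simplified sketch using launchpadlib (this is not the actual watch-ppa code, and the ppa name and component below are just examples):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously("ppa-watcher", "production")
ppa = lp.people["ubuntu-unity"].getPPAByName(name="daily-build")

# Check the build records of the latest published source for one component.
sources = ppa.getPublishedSources(source_name="unity", status="Published")
for source in sources[:1]:
    for build in source.getBuilds():
        print(build.arch_tag, build.buildstate)
        if build.buildstate == "Failed to build":
            raise SystemExit("FTBFS on " + build.arch_tag)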

Running the integration tests

In parallel to monitoring the build, we run the integration tests (if any). For that, we start monitoring using the previous script, but only on i386. Once all new components for this arch are built and published, we start a slave jenkins job running the integration tests. Those will, using UTAH, provision a machine, installing the latest Ubuntu iso for each configuration (intel, nvidia, ati); we then add the packages we need from this stack using the ppa and run the tests corresponding to the current stack. Getting all Unity autopilot tests stabilized for that was a huge task. Thanks in particular to Łukasz Zemczak, and with some help from the upstream Unity team, we finally got the number of failing autopilot tests under control (from 130 down to 12-15 out of the 450 tests). We are using them to validate Unity nowadays. However, the indicator and oif stacks had no integration tests at all. To not block the process and keep moving on, we "stole" the relevant Unity autopilot tests (hud + indicators for the indicator stack) and only run those. As oif has no integration test at all, we just ensure that Unity still starts and that the ABI is not broken.

We are waiting for the web credentials and web apps stacks to grow some integration tests (which are coming in the next couple of weeks with our help) to be able to add more components from them while ensuring the quality level we are targeting.

Then, the check job gets back the results of those tests and, thanks to a per-stack configuration mechanism, we have triggers to decide if the current state is acceptable to us or not. For instance, over the 450 tests running for Unity (x3, as we run on intel, nvidia and ati), we accept 5% of failures. We have different levels for regressions and skipped tests as well. If the collected values are below those triggers, we consider that the tests are passing.

  • Logs of tests passing. You can get a more detailed view of which tests passed and failed by looking at the child job, which is the one really running the tests: https://jenkins.qa.ubuntu.com/job/ps-unity-autopilot-release-testing/ (unstable, yellow, means that some tests failed). The parent one only launches it when ready and does the collecting.
  • Logs of more tests than the acceptable threshold failing.
  • Logs of tests failing because UTAH failed to provision the machine or the daily build iso wasn't installable.
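A minimal sketch of the per-stack acceptance check described above (the threshold values and result layout are just examples, not the real configuration format):

def tests_acceptable(results, max_failure_rate=0.05, max_skip_rate=0.10,
                     max_regressions=0):
    """Decide whether a stack's integration test results are acceptable."""
    total = results["total"]
    if total == 0:
        return False
    failure_rate = results["failures"] / float(total)
    skip_rate = results["skipped"] / float(total)
    return (failure_rate <= max_failure_rate
            and skip_rate <= max_skip_rate
            and results["regressions"] <= max_regressions)

# 450 tests on 3 configurations, 40 failures, 12 skipped, no regression:
print(tests_acceptable({"total": 1350, "failures": 40,
                        "skipped": 12, "regressions": 0}))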

And finally, publishing

If we reach this step, it means that:

  • we have at least one component with something meaningful to publish
  • the build was successful on all architectures.
  • the tests are within the acceptable pass rate

So, let's go on and publish! The script still has a few steps to go through. In addition to having a "to other ppa" or "to distro" mode as publishing destination, it:

  • will check if we have manual packaging changes (remember this diff we did for each component in the prepare stage?). If we actually do have some, we put the publishing in a manual mode. This means that we won't publish this stack automatically; only people with upload rights have the special credentials to force the publish.
  • will then propose and autoapprove upstream, for each component, the changes that we made to the package (edition of debian/changelog, symbols files…). Then, the upstream merger will grab those branches and merge them upstream. We only push those changes back upstream now (and not during the prepare steps) as it's only now that we are finally sure we are releasing them.

Most of the time, we don't have packaging changes, so the story is quite easy. When we do, the job shows it in the log and is marked as unstable. The packaging diff, plus the additional build system context, is attached as artefacts for an easy review. If the integration team agrees with those changes, they have special credentials to run cu2d-run in a special "manual" mode, ignoring the packaging changes and doing the publish if nothing else is blocking. This way, we address the concern that "only people with upload rights can ack/review packaging changes". This is our secondary safety net (the first one being the integration team looking at upstream merges).

The reality is not that easy and that's not the only case when the publish can be set as manual, but I'll discuss that later on.

Small note on the publisher: you see that we are merging our changes back upstream, but upstream didn't sleep while we took our snapshot, built everything and ran those integration tests, so we usually end up with some commits having landed in trunk between the latest snapshot and the time we push back that branch. That's the reason why we have this "Automatic snapshot from revision <revision>" in the changelog: to clearly state where we did the cut, and not rely on the commit of this changelog modification being merged back.

Copy to distro

In fact, publishing is not the end of the story. Colin Watson had the concern that we need to separate the powers and not have too many launchpad bots with upload rights to Ubuntu. It's true that all previous operations use a bot for committing and pushing to the ppa, with its own gpg and ssh keys. Having those credentials spread over multiple jenkins machines can be scary if they give upload privileges to Ubuntu. So instead of the previous step being directly piloted by a bot with upload rights to the distro, we only generate a sync file with various info, like this one.

Then, on the archive admin machines, we have a cron using this copy script which:

  • collects all available rsync files over the wire
  • then, checks that the info in this file is valid, like ensuring that the components to be published are part of this whitelist of projects that can be synced (which is the same list and metainfo used to generate the jenkins jobs and lives there).
  • does a last check of the version against the distro, in case a new version was uploaded on one of those components since the prepare phase (indeed, maybe an upload happened while we were busy building/testing/publishing).
  • if everything is fine, the sources and binaries are copied from the ppa to the proposed repository (see the sketch below).
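A hedged sketch of what the final copy looks like through the Launchpad API (the ppa name, version and series here are illustrative; the real script also performs the whitelist and version checks described above):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with("daily-release-copy", "production")
primary = lp.distributions["ubuntu"].main_archive
ppa = lp.people["ubuntu-unity"].getPPAByName(name="daily-build")

# Copy sources and binaries as-is, so the .debs we tested are the .debs shipped.
primary.copyPackage(source_name="unity", version="6.5daily13.01.24-0ubuntu1",
                    from_archive=ppa, to_series="raring", to_pocket="Proposed",
                    include_binaries=True)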

Copy to distro

Executing binary copies from the ppa (meaning that the .debs in the ppa are exactly the ones copied to Ubuntu) gives us confidence that what we tested is exactly what is delivered to our users, built in the same order. The second obvious advantage is that we don't rebuild what's already built.

Rebuild only one or some components

We have an additional facility. Let's say that the integration tests failed because of one component (like one build dependency not being bumped): we have the possibility to rerun the whole workflow only for the components given on the command line (still using the same cu2d-run script, by people having the credentials). We keep exactly the same state for the other components, meaning we don't rebuild them, but they are still part of what we test in the integration suite and of what we are going to publish. Some resources are spared this way and we get to our result faster.

However, for the build results, it will still take all the components (even those we don't rebuild) into account. We won't let an FTBFS slip by that easily! :)

Why not including all commits in debian/changelog?

We made the choice to only mention changes associated with bugs. This also means that if upstream never links bugs to merge proposals, never uses "bzr commit --fixes lp:XXXX" and never puts something like "this fixes bug…" in a commit message, the upload will have an empty changelog (just "latest snapshot from rev …") and won't detail the upload content.

This can be seen as suboptimal, but I think that we should really encourage our upstreams to link fixes to bugs when they are important enough to be broadcast. The noise of every commit message to trunk isn't suitable for people reading debian/changelog, which is visible in update-manager. If people want to track upstream closely, they can look at the bzr branch for that. You can see debian/changelog as a NEWS file, containing only the important information from that upload. We of course need more and more people upstream to realize that and learn that linking to a bug report is important.

If a piece of information is important to them but they don't want to open a bug when there isn't one, there is still the possibility, as part of the upstream merge, to directly feed debian/changelog with a sensible description of the change.

Versioning scheme

We needed to come up with a sane versioning scheme. The current one is:

  1. based on current upstream version, like 6.5
  2. appending "daily" to it
  3. adding the current date in yy.mm.dd format

So we end up with, for instance: 6.5daily13.01.24. Then, we add -0ubuntu1 as the packaging is separated in the diff.gz by using split mode.

If we need to rerun the same day a release for the same component, as we detect that a version was already released in Ubuntu or eventually in the ppa, this will become: 6.5daily13.01.24.1, and then 6.5daily13.01.24.2… I think you got it :)

We can use this general pattern: <upstream_version>daily<yy.mm.dd(.minor)>-0ubuntu1 for the package. The next goal is to have upstream_version as small (one, two digits?) as possible.
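A small sketch of how such a version string can be computed (simplified; the real tool also has to look at what is in the ppa and in the distro, as described in the prepare phase):

import datetime
import re

def daily_version(upstream_version, previously_released, today=None):
    """Compute <upstream_version>daily<yy.mm.dd(.minor)> for today's release.

    previously_released is the list of versions already in Ubuntu or in the
    daily ppa (without the -0ubuntu1 suffix).
    """
    today = today or datetime.date.today()
    base = "{0}daily{1}".format(upstream_version, today.strftime("%y.%m.%d"))
    minors = []
    for version in previously_released:
        match = re.match(re.escape(base) + r"\.(\d+)$", version)
        if match:
            minors.append(int(match.group(1)))
    if base not in previously_released and not minors:
        return base
    return "{0}.{1}".format(base, max(minors) + 1 if minors else 1)

day = datetime.date(2013, 1, 24)
print(daily_version("6.5", [], day))                     # 6.5daily13.01.24
print(daily_version("6.5", ["6.5daily13.01.24"], day))   # 6.5daily13.01.24.1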

Configuring all those jobs

As we have one job per project per series, plus some metajobs per stack, we have about 80 jobs right now per supported version. This is a lot; the only way to avoid needing bazillions of people just to maintain those jenkins jobs is to automate, automate, automate.

So every stack defines some metadata like its series, the projects it covers, when it needs to run, an eventual extracheck step, the ppas used as source and destination, optional branches involved… Then, running cu2d-update-stack -U updates all the jenkins jobs to the latest configuration, using the templates we defined to standardize those jobs. In addition, it also resets some bzr configuration in launchpad for those branches to ensure that various components like lp-propose, or branching the desired target, will indeed do the right thing. As told previously, since this list is also used for filtering when copying to the distro, there is very little chance of getting out of sync when everything is automated!

So, adding a component to a stack is basically a one-line change plus running the script! Hard to make it easier. :)
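To give an idea of what such a stack definition boils down to, here is a hypothetical example expressed as Python data (the field names are illustrative, not the exact schema of the real configuration files):

# A made-up stack definition, for illustration only.
unity_stack = {
    "name": "unity",
    "series": "raring",
    "ppa": "ubuntu-unity/daily-build",        # source ppa (assumed name)
    "destination": "ubuntu",
    "extracheck": "ps-unity-autopilot-release-testing",
    "dependencies": ["indicators", "oif"],    # other stacks this one builds upon
    "projects": [
        "nux",
        "unity",
        # adding a component is just one more line here
    ],
}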

And by the way, if you were quite sharp on the last set of links, you will have seen a "dependencies" stanza in the stack definition. Indeed, we described here the simple case and completely left out the fact that stacks depend on each other. How do we resolve that? How do we avoid publishing an inconsistent state? That's a story for tomorrow! :)

Unity: release early, release often… release daily! (part 2)

This post is part of the Unity daily release process blog post suite.

As part of the new Unity release procedure, let's first have a look at the start of the story of a branch: how does it reach trunk?

The merge procedure

Starting with the 12.04 development cycle, we needed upstream to be able to reliably and easily get their changes into trunk. To ensure that every commit in trunk passes some basic unit tests and doesn't break the build, some automation obviously had to take place. Here comes the merger bot.

Merge upstream branch workflow

Proposing my branch for being merged, general workflow

We require peer-review of every merge request on any project where we are upstream. No direct commit to trunk. It means that any code change will be validated by a human first. In addition to this, once the branch is approved, the current branch will be:

  • built on most architectures (i386, amd64, armhf) in a clean environment (chroot with only the minimal dependencies)
  • unit tests will be run (as part of the packaging setup) on those archs

Only if all this passes will the branch be merged into trunk. This way, we know that trunk is already at a high standard.

You will notice in this example of a merge request that, ahead of the approval (thanks to the work of Martin Mrazik and Francis Ginther), a continuous integration job kicks in to give some early feedback to both the developer and the reviewer. This can indicate if the branch is good for merging even before approving it. This job kicks in again if an additional commit is proposed as well. This rapid feedback loop helps give additional advice on the branch quality and a direct link to a public jenkins instance to see any issues during the build.

Once the global status of a merge request is set to "approved", the merger will validate the branch, take the commit message (falling back on some projects to the description as a commit message if nothing is set), pick up any attached bug reports (that the developer attached manually to the merge proposal or directly in a commit with "bzr commit --fixes lp:<bugnumber>") and merge that to the mainline, as you can see here.

How to handle dependencies, new files shipped and similar items

We said in the previous section that the builds are done in a chroot, a clean environment. But we can have dependencies that are not released into the distribution yet. So how do we handle those dependencies, detect them and take the latest stack available?

For that, we are using our debian packages. As this is how the final "product" will be delivered to our users, using packages here, with the same tools we use for the Ubuntu distribution itself, is a great help. This means that we have a local repository with the latest "trunk build" packages (appending "bzr<revision>" to the current Ubuntu package version) so that when building Unity, it grabs the latest (possibly locally built) Nux and Compiz.

Ok, we are using packages, but how do we ensure that when I have a new requirement/dependency, or when I'm shipping a new file, the packaging will be in sync with this merge request? Previously and historically, the packaging branch was separate from upstream. This was mainly for 3 reasons:

  • we don't really want to require that our upstream learns how to package and all the small details around this
  • we don't want to be seen as distro-specific for our stack and be as any other upstream
  • we, the integration team, want to keep control over the packaging

This mostly worked, in the sense that people had to ping the integration team just before setting a merge to "approve", and ensure no other merge was in progress meanwhile (so as not to take the wrong packaging metadata with other branches). However, it's quite clear this can't scale at all. We did have some rejections because ensuring that we stayed in sync was difficult.

So, we decided this cycle to have the packaging inlined with the upstream branch. This doesn't change anything for other distributions as "make dist" is used to create a tarball (or they can grab any tarball from launchpad from the daily release) and those don't contain the packaging info. So we are not hurting them here. However, this ensures that we are in sync between what we will deliver the next day to Ubuntu and what upstream is setting into their code. This work started at the very beginning of the cycle and, thanks to the excellent work of Michael Terry, Mathieu Trudel-Lapierre, Ken VanDine and Robert Bruce Park, we got that in quickly. Though, there were some exceptions where achieving this was really difficult, because the unit tests were not really in shape to work in isolation (meaning in a chroot, with mock objects like Xorg, dbus…). We are still working on getting the last elements bootstrapped into this process and having those tests running smoothly. I clearly know this idea of using the packaging to build the upstream trunk and having it inlined is a drastic change, but from what we can see since October, we have pretty good results with this and it seems to have worked out quite well! I would like to thank again the whole awesome product strategy (the canonical upstream) team for letting that idea go through and facilitating this process as much as possible. Thanks as well to the jenkins masters (the two already announced previously, plus Allan LeSage and Victor R. Ruiz) for having completed all the jenkins/merger machinery changes that were needed on each project for that.

We can't expect every upstream to know everything about packaging; consequently, the integration team is here and available to give any help that is needed. I think in the long term that basic packaging changes will be directly done by upstream (we are already seeing some people bumping a build-dependency requirement themselves, adding a new file to install, declaring a new symbol as part of the library…). However, we have processes inside the distribution, and only people with upload rights in Ubuntu are supposed to do or review the changes. How does this work with this process? Also, we have feature freeze and other Ubuntu processes; how will we ensure that upstream doesn't break those rules?

Merge guidelines

As you can see in the first diagram, some requirements during a merge request, controlled both by the acceptance criteria and by the new conditions from inline packaging, are set:

  • Design needs to acknowledge the change if there is any visual change involved
  • The developer, the reviewer and the integration team all ensure that ubuntu processes are followed (UI Freeze/Feature Freeze for instance). If exceptions are required, they check before approving the merge that they are acknowledged by the different parties. The integration team can help smooth the way to make this happen, but the requests should come from the developers.
  • Relevant bugs are attached to the merge proposal. This is useful for tracking what changed and for generating the changelog as we'll see in the next part
  • Review of new/modified tests (for existence)
  • They ensure that it builds and that unit tests pass (automated)
  • Another important one, especially when you refactor, is to ensure that integration tests are still passing
  • Review of the change itself by another peer contributor
  • If the change seems important enough to have a bug report linked, ensure that the merge request is linked to a bug report (and you will get all the praise in debian/changelog as well!)
  • If packaging changes are needed, ping the integration team so that they acknowledge them and ensure the packaging changes are part of the merge proposal.

The integration team, in addition to being ready to help any developer of our upstreams on request, has an active monitoring role over everything that is merged upstream. Everyone has some part of the whole stack under their responsibility and will spot issues and start a discussion as needed if we are under the impression that some of those criteria are not met. If something not following those guidelines is confirmed to have been spotted, anyone can do a revert by simply proposing another merge.

This mainly answers the two other fears about what inline packaging might affect by giving upstream control over the packaging. But this is only the first safety net; we have a second one, kicking in as soon as there is a packaging change since the last daily release, that we'll discuss in the next part.

Consequences for other Ubuntu maintainers

Even if we each have our own area of expertise, Ubuntu maintainers can touch any part of what constitutes the distribution (MOTUs on universe/multiverse and core developers on anything). For that reason, we didn't want daily release and inline packaging to change anything for them.

We added a warning to debian/control, pointing the Vcs-Bzr field to the upstream branch with a comment above it. This should highlight that any packaging change (if not urgent) needs to be a merge proposal against the upstream branch, just as if we were going to change any of the upstream code. This is how the integration team handles transitions as well, like any other developer.

However, it happens that sometimes an upload needs to be done in a short period of time, and we can't wait for the next daily. If we can't even wait to manually trigger a daily release, the other option is to upload directly to Ubuntu, as we normally do for other pieces where we are not upstream, and still propose a branch including those changes for merging into the upstream trunk. If the "merge back" is not done, the next daily release for that component will be paused, as it detects that there are newer changes in the distro, and the integration team will take care of backporting the change to the upstream trunk (as we monitor uploads to the distribution as well).

Of course, the best is always to consult any person of the integration team first in case of any doubt. :)

Side note on another positive effect of inline packaging

Inlining the packaging of all components was hard and long work; however, in addition to the benefits highlighted previously, it enabled us to standardize all ~60 packages from our internal upstreams around best practices. They should now all look familiar once you have touched any of them. Indeed, they all use debhelper 9, --fail-missing to ensure we ship all installed files, symbol files for C libraries using -c4 to ensure we force updating them, running autoreconf, debian/copyright using the latest standard, split packages… In addition to being easier for us, it's also easier for upstream as they are in a familiar environment if they need to make any changes themselves, and they can just use "bzr bd" to build any of them.

Also, we were able to remove some of the distro patches we had on those components, and align the remaining ones so that they could be pushed upstream.

Conclusion on an upstream branch flow

So, you should now know everything about how upstream and packaging changes are integrated into the upstream branch with this new process, why we did it this way and what benefits we immediately get from it. I think having the same workflow for packaging and upstream changes is a net benefit: we can ensure that what we deliver for both is coherent, of a higher quality standard, and under control. This is what I would keep in mind if I had only one thing to remember from all this :). Finally, we take into account the case of other maintainers needing to make changes to those components in Ubuntu and try to be flexible in our process for them.

The next part will discuss what happens to validate one particular stack and upload it to the distribution on a daily basis. Stay tuned!

Unity: release early, release often… release daily! (part 1)

This post is part of the Unity daily release process blog post suite. This is part one, you can find:

For almost 2 weeks now (and some months for other parts of the stack), we have had automated daily releases of most of the Unity components delivered directly to Ubuntu raring. And you can see those different kinds of automated uploads published to the ubuntu archive.

I'm really thrilled about this achievement, which we discussed and set up as a goal for the next release at the past UDS.

Why?

The whole Unity stack has grown tremendously in the past 3 years. At the time, we were able to release all components, plus packaging/uploading to ubuntu, in less than an hour! Keeping the one-week release cadence back then was quite easy, even though it was a lot of work. The benefit was that it ensured we had a fluid process to push what we developed upstream to our users.

As of today, the teams have grown quite a bit, and if we count everything that we develop for Unity nowadays, we have more than 60 components. This covers everything from indicators to the theme engine, from the open input framework to all our test infrastructure like autopilot, from webapps to web credentials, from lenses to libunity, and finally from compiz to Unity itself, without forgetting nux, bamf… and the family is still growing rapidly with a bunch of new scopes coming down the pipe through the 100 scopes project, our own SDK for the Ubuntu phone, the example applications for this platform we are about to upload to Ubuntu as well… Well, you got it, the story is far from over!

So, it's clear that the number of components we develop and support will only go higher and higher. The integration team already scaled up their work hours by a large extent and rushed to get everything delivered to our user base in a timely fashion[1]. However, there is no question that we won't be able to do that forever. We also don't want to introduce artificial delays for our own upstreams on when we deliver their work to our users. We needed to solve that issue without paying any price on quality or compromising the experience we deliver to our users. We want to keep high standards, and even, why not, combine this need with an even better and more reliable evaluation, before releasing, of what we eventually upload to Ubuntu from our upstreams. Having our cake and eating it too! :)

Trying to change for the better

What was done in the last couple of cycles was to split the delivery of those packages to the users between 2 groups. There was an upstream integration team, which would hand over theoretically ready-to-upload packages to the Ubuntu platform team, which did the reviews, helped them, fixed some issues and finally sponsored their work into Ubuntu. However, this didn't really work for various reasons and we quickly realized that it ended up just complicating the process instead of easing it. You can see a diagram of where we ended up when looking back at the situation:

end of 12.04 and 12.10 release process

Seems easy, doesn't it? ;) Due to those inner loops and gotchas, in addition to the whole new set of components, we went from a weekly release cadence to doing 4 to 5 solid releases during a whole cycle.

Discussing it with Rick Spencer, he gave me carte blanche to think about how we can make this more fluid for our users and developers. Indeed, with all the work piling up, it wasn't possible to immediately release the good work that upstream had done in the past days, which can lead to some frustration as well. I clearly remember that Rick used the phrase "think about the platform[2] as a service". This immediately echoed in me, and I thought, "why not try that… why not release all our code every day and deliver a service enabling us to do that?"

Daily release, really simplified diagram

Even if it wasn't planned from the beginning (sorry, no dark, hidden, evil plan here), thinking about it, this makes sense as some kind of logical progression from where we started when Unity came into existence:

  • Releasing every week, trying manually to get the release in a reasonable shape before uploading to Ubuntu
  • Raise the quality bar, put some processes for merge reviewing.
  • Adding Acceptance Criteria thanks to Jason Warner, ensuring that we get more and more good tests and formal conditions for doing a release
  • Automate those merges through a bot, ensuring that every commit in trunk builds fine and that unit tests pass
  • Raise again the quality bar, adding more and more integration tests

Being able to release daily, and ensuring we are able to, really looks like the next logical step! But it wouldn't have been possible without all those past achievements.

Advantages of daily releases

It's quite immediate to see a bunch of positive aspects of doing daily releases:

  • We can spot regressions much faster. If a new issue arises, it's easier to bisect through packages and find the day when the regression or incorrect behavior started to happen, then look at the few commits to trunk (3-5?) that were done that day and pinpoint what introduced it.
  • This enables us to deliver everything in a rapid, reliable, predictable and fluid process. We won't have crazy rushes as we had in the past cycles around goal dates like feature freezes to get everything in the hand of the user by then. This will be delivered automatically the day after to everyone.
  • I also see this as a strong motivation for the whole community who contributes to those projects. No more waiting for a random date hidden in a wiki as the "planned release date" to see the hard work you put into your code propagated to users' machines. You can immediately see the effect of your hacking on the broader community. If it's reviewed and approved for merging (and the tests pass, I'll come back to that later), it will be in Ubuntu tomorrow, less than 24 hours after your work reached trunk! How awesome is that?
  • This also means that developers will only need to build the components they are working on. No need, for instance, to rebuild compiz or nux to do a Unity patch because the API changed and you need "latest everything" to build. This lowers the barrier to entry for contributing, and the chances that you have unwanted files in /usr/local staying around and conflicting with the system install.

Challenges of daily release

Sure, this comes with various risks that we had to take into account when designing this new process:

  • The main one is "yeah, it's automated, how can you be sure you don't break Ubuntu pushing blindly upstream code to it?". It's a reasonable objection and we didn't ignore it at all (having the history of years of personal pain on what it takes to get a release out in an acceptable shape to push to Ubuntu) :)
  • How do we interact properly with the Ubuntu processes? Only core developers, MOTUs, and per-package uploaders have upload rights to Ubuntu. Will this new process in some way give our internal upstreams the keys to the archive, without them having proper upload rights?
  • How do we ensure packaging changes and upstream changes are in sync when preparing a daily release?
  • How do we put useful information in the changelog so that someone not closely following upstream merges, but only looking at the published packages on the raring-changes mailing list or in update-manager, can see what mainly changed in an upload?
  • How to deal with ABI breaks and such transitions, especially when they are cross-stacks (like a bamf API change impacting both indicators and Unity)? How to ensure what we deliver is consistent across the whole stacks of components?
  • How do we ensure that we are not blindly releasing useless changes (or even an upload with no change at all) and so using more bandwidth, build time, and so on, for nothing?

How then?

I'll detail much of those questions and how we try to address those challenges in the subsequent suite of blog posts. Just for now, to sum up, we have a good automated test suite, and stacks are only uploaded to Ubuntu if:

  1. their internal and integration tests pass above a certain threshold of accepted failures.
  2. they don't regress other stacks

Stabilizing those tests to get reliable results was a tremendous cross-team effort, and I'm really glad that we are confident enough in them to finally enable those daily releases.

On the other hand, an additional control ensures that packaging changes are not released without someone with upload rights being in the loop to ack the final change before the copy to the distribution happens.

Of course, the whole machinery is not limited to this and is in fact way more complicated. I'll have the pleasure of writing about all that in separate blog posts over the following days. Stay tuned!

Notes

[1] particularly around the feature freeze

[2] the Ubuntu platform team here, not Ubuntu itself ;)

Getting sound working during a hangout in raring

Since approximately the beginning of the raring development cycle, I had an issue on my thinkpad x220 with google hangouts, forcing me to use a tablet or a phone to handle them. What happened is that once I entered a hangout, after 40-60s, my sound was muted and there was no way to get the microphone back on; same for the output from other participants. Video was still working pretty well.

Worse, even after exiting the hangout, I could see the webcam was still on and the plugin couldn't reconnect or create a new hangout. I had to kill the background google talk process to get it working again (for less than a minute of course ;)).

Finally, I took a look this morning, googling about that issue. I saw multiple reports of people tweaking the configuration file to disable volume auto-adjustment. So, I gave it a try:

  1. Edit ~/.config/google-googletalkplugin/options
  2. Replace the audio-flags value of 3 with 1 (which seems to disable this volume auto-adjustment feature)

Then, kill any remaining google talk processes if you have some and try a hangout. I tested for 5 minutes and I was still able to get the sound working! After quitting the hangout, the video camera is turned off as expected. I can then start another hangout, so the google talk process doesn't seem to be stalled anymore.
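If you prefer to script the tweak, here is a small sketch of the idea (I'm assuming the flag appears in the file as a plain "audio-flags" entry; adapt the pattern to whatever your options file actually contains):

import os
import re

options_path = os.path.expanduser("~/.config/google-googletalkplugin/options")

with open(options_path) as f:
    content = f.read()

# Change the volume auto-adjustment flag value from 3 to 1.
content = re.sub(r"(audio-flags\s*[:=]\s*)3", r"\g<1>1", content)

with open(options_path, "w") as f:
    f.write(content)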

Now, it's hard to know what regressed, as everything was working perfectly well on ubuntu 12.10 with that hardware. I tried chromium, chrome canary and firefox with the same result. The sound driver doesn't seem to be guilty, as I tried booting a 12.10 kernel. The next obvious candidate is then pulseaudio, but it's at the same version as in 12.10 as well. So the issue seems to be in the google talk plugin, and I opened a ticket on the google tracker.

At least, with that workaround, you will be able to start next year with lasting sound during your hangouts if you got this problem on your machine. ;)

Few days remaining for FOSDEM 2013 Crossdesktop devroom talks proposal

The call for talks for the Crossdesktop devroom at FOSDEM 2013 closes this very Friday! This year, we'll have some Unity related talks. If you are interested in giving one, there's no time to lose: submit your talk today!

Proposals should be sent to the crossdesktop devroom mailing list (you don't have to subscribe).

Quickly reboot: Q&A session wrap up!

Last Wednesday, we had our Quickly reboot on-air hangout, welcoming the community to ask questions about Quickly and propose enhancements.

Here is the recording of the session:

As with the previous sessions, we got a lot of valuable feedback and participation. Thanks everyone for your help! Your input is tremendously helpful in shaping the future of Quickly.

We are taking a small pause in the Quickly Reboot hangouts, but we will be back soon! Meanwhile, you can catch up on the session notes, and provide feedback/ideas on this wiki page! Follow us on google+ to make sure you don't miss any announcement.

Quickly reboot: Q&A sessions!

The previous Quickly reboot session about templates was really instructive! It started a lot of really interesting and open discussions, particularly on the Quickly talks mailing list where activity is getting higher and higher. Do not hesitate to join the fun here. :)

As usual, if you missed the on-air session, it's available here:

I've also summarized the session notes on the Quickly Reboot wiki page.

Next session: Q&A!

But we don't stop here: the next session will be held this Wednesday! If you read through the previous links, you will see a lot of pending questions we still have to discuss; these will be used as the basis of the session's conversation. However, in addition to those topics, all of your questions will be taken into account as well! You can jump in during the session on #quickly on freenode, while watching the show on ubuntuonair. You can also prepare your questions and new ideas for Quickly, and post them to the google moderator page. There are plenty of ways to participate and help shape the future of Quickly. Do not miss this session and subscribe to it right now.

Also, ensure you won't miss anything about Quickly and its reboot by subscribing to our google+ page.

Quickly reboot: developer feedback wrap up and templates content

Previous sessions

The first two hangouts on the Quickly reboot, about developer feedback, were really a blast! I'm really pleased about how many good ideas and questions emerged from them.

If you missed them, the hangouts on air are available now on youtube. Go and watch them if you are interested:

I've also taken some notes during the sessions; here is what I think was important and came out of them: hangouts notes. It's a wiki, so if you have any feedback/questions/other subjects you want discussed, don't be shy and edit it! Quickly is a fully community-driven project and we love getting constructive feedback/new ideas from everyone. ;)

I've also added to it some nice discussion topics for future sessions, and speaking of sessions…

Next step: default templates

The next session is a really important one: what should be the default templates in Quickly? From the previous discussions, it seems that python + gtk, an html5 one and the already existing unity-lens one are the good candidates. If so, what should be in each of those? What should the default applications look like? Which framework (if any) should we use in the case of html5? Should we make opinionated choices or just provide a default set? What should we provide as tools for them, and so on…

Join the conversation, I'm sure it will be a lot of fun! We plan to have the hangout at 4PM UTC on Wednesday. Make sure to follow it either by jumping into the hangout itself or by following the on-air session. Mark it down in your calendar so you don't miss it!

Do not hesitate to follow the Quickly google+ page to not miss any future events and enhancements to Quickly.

Time for a Quickly reboot?

Quickly is the recommended tool for opportunistic developers on ubuntu.

When we created it 3 years ago, we made some opinionated choices, which is the essence of the project. We had back then a lot of good press coverage and feedback (Linux Weekly News, arstechnica, Zdnet, Maximum PC review, Shot of jak and some more I can't find off the top of my head…)

Some choices were good, some were wrong, and we adapted to the emerging needs that appeared along the road. Consequently, the project evolved, the needs as well, and we tried to make them match as much as possible. Overall, my opinion is that the project evolved in a healthy way, and it has strong arguments considering the number of projects using it successfully for the ubuntu developer contest. Indeed, of the ~150 submitted projects, most were created with Quickly, and seeing how little support we needed to provide, it means that most things are working quite well! Also, the comments on the developer contest seem to be really positive about Quickly itself.

So? Time to go to the beach and enjoy? Oh no… Despite this great success, can we do better? I like that sometimes we can step back, look around, and restate the project goals to build an even brighter future, and I think now is that time. That's why I want to announce a Quickly reboot project!


Why a reboot?

We will need to port Quickly itself to python3. We have no hard deadline right now, but it's something (especially with unicode versus string) that I want to do soon. As this will require a lot of refactoring, I guess it's time to evaluate the current needs, drop some code, maybe take a completely different direction to attract more 3rd party integrators (for example, do people want to integrate Quickly with their favorite IDE?), encourage template developers, make everything even more testable than today to avoid regressions… A lot of things are possible! So a reboot, meaning "let's put everything aside, list what we have and what we want to do", is the appropriate term. :)

Do you have a detailed plan right now of what is coming?

No… and that's exactly what I wanted! Well, of course, I have some ideas on paper and know globally where I want the project to evolve, but the whole plan is to get the community contributing ideas and experience before the first line of the reboot is even written.

I'm sold! How to participate?

I will run some google hangouts from the Quickly Google+ page. Those will be hangouts on air (so you can just view them live or afterwards without participating); you can ask questions and make suggestions on #ubuntu-community-onair on freenode (IRC) to have them answered live, or you can jump in and ask directly during the show. :)

The current hangout will be also available (with an IRC chat box) on this page.

Those hangouts will be thematic to focus the discussion, and will be announced on the google+ Quickly page, this blog, the Quickly mailing list and so on…

First step… feedback, 2 chances to join!

The first step to build this Quickly next gen[1] is to gather some developer feedback. So, if you are a developer who used Quickly (or not!) and make software for ubuntu, in the context of the app showdown (or not!), we would love to hear from you! The goal of the session is not to point at any particular bug, but rather to share how the experience was for you: what template did you use, what template would you have used if it existed, what went well or badly with the workflow, when did you need to ask a question about a particular subject? Of course, we are not limited to those questions.

You can directly jump into the hangout to ask your question live, or just watch the hangout on air and add a comment to the live event to get it read during the show.

The first two sessions will be live at different hours next Thursday and Friday. See those events on the Quickly Google+ page and subscribe to those hangouts on air to be sure not to miss them!

Next sessions?

Next topics in the following weeks will be:

  • listing the existing requirements
  • from the feedback from the first session, what needs to be added/dropped?
  • what needs to be changed in Quickly to meet them? Technical considerations (use of pkgme, templating system…)

All of this will be published on this blog and on the google+ page soon. I hope you are as excited as we are and will join us massively.

Note

[1] A term which seems to be used by the cool kids

Quickly: a path forward?

Seeing the amount of interest around Quickly over the past years was really awesome!

Now that we have a more detailed view of how people are using the tool, it's time to collect and think about that data to see how we can improve Quickly. With all the new tools available like hangouts on air, it's also now time to experiment with how we can use them, and to use this opportunity to have a very open collaboration process as well as to try to attract more people to contribute to it.

More on that next week. To make sure you don't miss anything, please follow the Quickly Google+ page. :)


Unity Radios lens for quantal

After having worked on the local radio scope for the music lens, I spent some time with pure python3 code, which made me experience a bug in Dee with pygi and python3 (now all fixed in both precise and quantal).

In addition to that, it was good timing to experiment more seriously with a mocking tool for testing the online part of the lens, and so I played with python3-mock, which is a really awesome library dedicated to that purpose[1].

So here was my playground: a dedicated Unity radio lens! You can search through thousands of available online radios, ordered by categories, with some contextual information based on your current language and your location (Recommended, Top, Near you).

Unity lens Radios full view

As with most lenses, you can refine any search results with filters. The supported ones are countries, decades and genres. The current track and radio logos are displayed if supported, and double-clicking any entry should start playing the radio in rhythmbox.

Unity lens Radios search and filter

It was a fun experiment and it's now available in quantal for your pleasure ;)

Oh btw:

didrocks@tidus:~/work/unity/unity-lens-radios/ubuntu$ nosetests3
..................................................

Ran 50 tests in 0.164s

OK

Note

[1] loving the patch decorator!

Announcing session-migration now in ubuntu

Fresh, hot off the press: session-migration is now available in quantal.

Session icon

This small tool tries to solve a problem we have encountered for a long time as a distributor, but had to postpone way too long because of other priorities. :) It basically enables packagers and maintainers to migrate user session data. Indeed, when you upgrade a package, the packaging tools run with root permissions, and only hackish solutions were used in the past to enable us to change some parts of your user configuration[1], like adding the FUSA applet or adding new compiz plugins on the fly… There are tons of examples where a distribution needs to migrate some user data (whether the user is logged in or not while the upgrade is proceeding) without heavily patching the upstream project to add migration support there.

Fusa in 2008

This tool is executed at session startup, in a synchronous fashion. It has caching and tries to execute the minimal chunk[2], based on the design of the excellent gconf->gsettings migration tool. It also comes with a dh-migrations package, providing a debhelper hook (--with migrations calling dh_migrations), so that client desktop packages just have to ship a debian/<package>.migrations file linking to their migration scripts, which will be shipped in the right directory. We can even imagine in the near future that when you install such a package, you end up with a notification that a session restart is necessary (and not a full reboot). Note as well that the migration happens per user and per session, so it's really important that the scripts are idempotent.
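As an illustration of what "idempotent" means here, a migration script could look like the following sketch (a made-up example, not one of the shipped scripts; the schema path and plugin name are invented): run it twice and the second run changes nothing.

#!/usr/bin/env python
# Hypothetical migration: enable a newly shipped compiz plugin for existing
# user sessions, only if it isn't already in the user's active plugin list.
import ast
import subprocess

SCHEMA = "org.compiz.core:/org/compiz/profiles/unity/plugins/core/"
KEY = "active-plugins"
PLUGIN = "newplugin"   # made-up plugin name, for illustration only

current = subprocess.check_output(["gsettings", "get", SCHEMA, KEY]).decode()
plugins = ast.literal_eval(current.strip())
if PLUGIN not in plugins:
    plugins.append(PLUGIN)
    subprocess.check_call(["gsettings", "set", SCHEMA, KEY, str(plugins)])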

It was 3 days of fun coding this. ;) All of the C and perl code is covered by a short but complete testsuite run during the package build, so hopefully no breakage. ;) The associated man pages are session-migration and dh_migrations.

You can find more details on the launchpad specification as well as the recorded streamed discussion from UDS (part 1, part 2).

The illustrated session icon is under a GPL licence, found on iconfinder.

Notes

[1] Knowing that we try to respect the golden rule as much as possible and always try to only change defaults, which is sometimes not possible

[2] but it supports multiple layers based on XDG_DATA_DIRS

Added rhythmbox radio support to unity music lens

Just got it merged (and will be freshly available in Unity 6.0 coming soon in quantal)!

I spent a few hours last week adding rhythmbox radios (and writing unit tests) for the music lens. It's been a long time since I wrote some serious vala (I guess the last time was for unity's Alt+F2). I confirm it's still not my favorite language ;)

Coming back to the radios: they will now show up as the last row (after tracks, albums and eventually the online purchase ones), and they respect (if the metadata is provided in rhythmbox) all the usual music lens filters. The obligatory screenshot:

Rhythmbox radios in music lens

Coming soon, some more news about searching online radios directly in unity (a patch in dee for python3 support just needs to be released first) ;)

Android ICS on wetab (exopc slate)!

I spent a few hours installing Android ICS (Cyanogenmod built for x86) on my exopc tablet and playing with it.

I had never been impressed by the Meego developer preview installed on it, performance- and feature-wise. Yesterday evening I found these instructions and links about a corvusmod rebuild of CyanogenMod and gave it a try! Of course, as this is not an arm device but an x86 one, not a lot of applications work out of the box, but overall the UI is totally functional and browsing the web is a delightful experience.

Galaxy Nexus

After migrating to a google nexus phone (my previous phone had android 2.2), I'm now fully on ICS at home and I love it! Well done, Android team, for taking hard decisions and focusing on the user experience itself[1]. This tremendous shift towards a coherent and well-designed UI gives a lot of credit to the whole OS. Everything is now smooth on my phone AND my tablet, and I love the whole ecosystem integration. :)

I hope we will be able to deliver the same kind of experience on ubuntu: a coherent and centralized ecosystem where going from one device to another is almost seamless. I also noted that the base OS image is only ~170 MB. I wonder if we could achieve something similar, considering for how many releases we struggled to find enough space within the old 700 MB limitation… I guess we will have to take drastic decisions, but this time stick to them, as we already tried 2 years ago with the netbook edition flavor :)

Note

[1] And I'm really happy that at Canonical, we are currently doing the same on Ubuntu

Online Python course on Ubuntu for programming beginners

Hello everyone,

I'm passing along the message that Rick Spencer (an American lost in France for almost a year now, in charge of ubuntu engineering at Canonical) wants to run a few interactive sessions[1] in French to teach the basics of programming, in python. Python is a very accessible language and an excellent one to start with in this field. It lets you write small scripts as well as real, full-blown applications (most of the ubuntu-specific applications such as software-center, update-manager, jockey and ubuntu one are written in this language!).

In short, don't hesitate to leave a comment on his blog to help find the best time slots if you want to take part in the adventure!

Announcement on Rick's blog.

Note

[1] most likely via google hangout

Precise Pangolin now has its final Unity release (5.10)

Most of the time, after a Unity release, I look like this:

Jogging

Well, ok, not really; in fact, this is me after doing some exercise following a Unity release, so it's exactly the same, isn't it? :)

This particular release didn't follow this rule though. Even if last Friday and Monday were a little bit rushed, we kept merging the latest patches on a regular basis to polish the Unity experience on 3D as well as on 2D for Ubuntu 12.04 LTS. Nothing scary emerged at the last minute. Also, thanks to the community testing (85 results for unity 3D and 7 for unity 2D, you rock guys!), we were able to analyze and fix the most important issues that were raised before releasing, taking our time to upload Unity before the Final Freeze. The long stability trail we had in Unity for the whole precise development cycle clearly shows that the new processes in place and the maturity of Unity are paying off.

I'll even tell you a secret: since we started the Unity journey, I've never been as proud and positive about any unity release as I am about this 5.10. Consequently, this final version of Unity on this quality-focused ubuntu 12.04 LTS sounds like perfect timing for a Precise release for the Pangolin. :)

In addition to that, this latest release gives you a nice extra cookie: the ability to configure the HUD key in the launcher category of the shortcuts in gnome-control-center, for both Unity 2D and 3D.

Of course, it's not the complete end of the unity story in precise: we may still be able to fix some crashes that wider testing will potentially reveal before the final release (and we have two small issues in mind that we also want to fix before precise is released, or in a first SRU), but we are very confident that Unity in 12.04 LTS will be the greatest Unity experience ever! Good job team, and thanks to everyone who contributed through bug reports, code, translations, testing… We wouldn't be where we are now without you!

Unity 5.8 out, ready for beta2!

As I write this, Unity 5.8 is building on our official builders and will reach ubuntu precise soon. For this release, a big part of the stack was uploaded (14 components), including unity, unity-2d and nux of course, but also compiz 0.9.7.2 and the lenses (with some ABI breaks in the middle).

What's new in this unity release? Well, you can see the 3d milestone and 2d bug fixes (look also at this one for unity 2d, which landed last monday). Keep in mind as well that some 3d bug fixes benefit 2d too! So, a lot of unity bug fixes, but that's not all! We also got some UI refinements, some very visible, some more hidden, but it's all those little touches that make the whole experience better!

Don't forget as well that the music lens is taking back an important role, as it now supports rhythmbox in addition to banshee!

Gnome Control Center Display Unity panel

Also, we gained some new multi-monitor capabilities in both 2d and 3d. Basically, you can now decide whether you want sticky edges between your monitors or not[1], and whether you want one launcher (which is then set on your primary monitor) or a launcher on each monitor. Gnome Control Center was patched to add those default options (which only appear in a unity session) back to the main experience. You will now also notice a more "unityish" preview, with a panel and a launcher displayed. Dragging the launcher also lets you change where you want to place it, in addition to the combobox (remember to click on apply for the choice to be taken into account!).

We encountered some hiccups, but we either fixed them or worked around other issues to land the new stack in beta2. We still have an annoying issue which seems to appear rarely (only on some configurations). This is logged as critical in this bug. If you encounter it, you should choose the unity-2d session for now on your login screen. Well done to everyone involved in this release!

We are also trying a new approach to avoid having the current development version of ubuntu uninstallable for a while (because of the long dependency chain across all the related unity components) while everything is building. Basically, we are experimentally using the -proposed pocket, and once everything is built, it will be copied to the main archive. Thanks to the release team for pushing the right buttons to enable this; let's hope this experiment will be successful and that we can generalize it in Q for every unity release and other components (and not only when we are frozen)!

Hope you will enjoy this release and precise beta2, which is just around the corner!

New, shiny, Unity 5.6 released!

Phew! It's been a long road to release the next unity, but I'm more than happy to finally announce the release of 5.6. Unity components (dee, libunity, bamf, lenses, nux) and unity itself, plus some compiz snapshots (post 0.9.7.0), are part of this release. The packages are currently building on the official builders and should soon be available to you.

No particular new features are part of it, apart from better ibus support, plus a ton of bug fixes and some miscellaneous improvements:

  • Daniel van Vugt landed a patch in compiz that improves its performance by more than 51%! When you test it, I can assure you that you feel a real, noticeable difference (in particular on older machines, like mine).
  • The alt-tap false positive revealing the HUD is now a thing of the past. We know this one was annoying people; I can only tell you it's been technically challenging ;). This has been a rocking combined effort on the compiz and unity sides.
  • The file lens can now find files that were never opened before.

I mentioned a challenge… Yeah, this release was challenging both process-wise and technically. Once we had frozen the code, we noticed quite a fair number of regressions. Tracking and fixing them took some time, regressed other things to track, required a new round of testing, regressed another part… That was quite a funny race between issues, as you can infer. But we kept it straight: we didn't release before everything was ready. Precise is about quality, and we will keep to this strict path until 12.04 LTS is here (and continue after that!). All branches that enter trunk now have sensible tests associated at release time. Tests are improved every day and are becoming easier to write thanks to Thomi's restless work. We are more and more confident in everyone's work thanks to that.

Of course, not everything is perfect: the freeze process is debated and we will discuss how things can be improved, particularly when we have a long freeze time because of multiple regressions like this time. We are building upon this feedback and hope to make the process even better as soon as unity 5.8, our next target.

Well done, product strategy team. Keep it up!

Unity 5.2 is now released!

Phew! It's been a crazy ride to release Unity 5.2 once ubuntu precise had released its alpha 2, but we finally got there!

Thanks a lot for all the community participation: we actually got 27 testers answering Nick's call for testing. Those were high quality contributions and they brought us closer to the unity release.

So, what's new since 5.0? Well, a lot! :) More precisely, we got multi-monitor support with screen edge detection, the "push to reveal" launcher behavior to avoid false positives when hitting the back button of firefox, a per-workspace alt-tab switcher, the new home dash, automaximize only on netbooks, and a lot of small details that matter.

Test results

Here is some feedback, after the 5 hours I spent collecting and analyzing the test results and all the (numerous) comments attached to them:

  • Testers confirmed that some of the issues spotted on 5.0 are now fixed, which is great news! Not all of them are, and of course, we have some minor regressions. I added those issues to the list of "distro priorities". You can look at them there. This list doesn't show all the defects we have, of course, but gives a good overview of the big ones we track to ensure they are fixed as soon as possible.
  • Some tests have been updated due to new upstream behavior (like the per-workspace scale option and the new home dash, which now retains its search status). Thanks to the people testing it for spotting that we missed those changes when updating the tests! We also rephrased some of them based on the given suggestions.
  • Some people seem to have difficulties opening application and indicator menus when clicking on them in the panel (only Alt or F10 seems to work). I strongly invite them to open a bug report with a video attached and more info, as I couldn't reproduce it here.
  • There is a bamf bug which only shows up in some particular circumstances (8 failures, and last time we also got some failures on this test) when testing launcher/quicklist-pin. I personally couldn't reproduce it here. Then I asked seb128 to give it a shot and he could get the issue. I tried again and this time I got it! However, it doesn't seem to be 100% reliably reproducible. We opened a bug and put it on the priority list. Well spotted, everyone! :)
  • Also, some testers made some interesting design requests; here is a reminder of this link on how to join the relevant mailing list to participate in unity design (the introduction text mentioned it, though ;)).
  • We also got some comments about "key above tab" and why we used this terminology rather than directly saying, let's say, "`". Please remember that this is a keyboard-layout dependent configuration! The usa keyboard normally uses `, my azerty keyboard uses ², and it seems that for some other layouts it's < or ~. So yeah, we have to keep the test cases as generic as possible; bear with us, please! :)
  • We also added some new test cases due to a very particular way of triggering some bugs, like for instance bug #877778. Thanks to the person who added a comment to explain how to trigger it!
  • Despite our strong efforts to make an easy way to restart unity with a simple click from the tests (and to improve it), it seems that the glibmm/compiz bug preventing it from restarting reliably on demand is still an issue. It's not a very important bug for everyday use; however, it would be nice to get it sorted out for the tests in particular. Fortunately, checkbox enables you to continue the tests where you stopped, even if you had to restart your session.

A story of boot time

Finally, and probably the most important feedback from the whole list: people started to feel that "it was longer to start/boot". Jumping on this, we made some bootcharts on our machines to get real and precise values, and you know what… the comments were right! The multi-monitor support had badly regressed the boot time. Consequently, we decided to delay the release until today to get that fixed, rather than pushing a version with this performance impact on intel cards. We finally got the fix, pushed it to trunk, and now this is all old history! :) Thanks to the whole community for spotting this one; it's better to notice it earlier rather than later, and this participation really had a visible impact (or rather avoided a very visible impact ;)) for a bunch of users. Well done!

The importance of testing defaults

Some testers remarked that in the system settings test, we never said to add gnome control center to the launcher. However, in the introduction text, we clearly stated that we expect testers to have the default settings (you have the guest session for that, use it, love it!), and the system settings icon is pinned in the launcher by default. :) For instance, intellihide is the default behavior and we didn't say anything to ensure that intellihide is enabled. If we did, there would be a long list of prerequisites at the top of each test that I'm sure testers don't want to see ;) We strongly recommend that people use the guest session to ensure all settings and the environment are correct for the tests!

To sum up

Unity 5.2 is now building in the official repositories and should soon be available to all precise users. Thanks again to everyone participating in this project, and see you soon for… 5.4 (or maybe a little earlier, for an incoming compiz release I heard about)! :)

Some unity configuration in gnome-control-center.

Just finished some hacking to implement some unity configuration options blessed by the design team, as shown in this official specification.

It also contains some other ui tweaks. You can notice in particular the "Restore defaults" options, which work on each tab and restore the page's defaults.

Gnome Control Center Unity tab 1

Those options impact both unity and unity-2d. This posed particular challenges, as their features don't align (for instance, we don't show the "set launcher icon size" option for 2d) and they don't have the same kind of "launcher hide mode". Also, some configuration options have more choices in ccsm than those shown in the ui (like revealing the launcher on the bottom-left corner, or using the "dodge active window" mode). We tried to be clever on the ui side and not reset any different setting you may have set in ccsm just by launching the ui.

We also had to make some choices, like which settings to take by default (on first ui launch) when you have different settings between unity 2d and unity 3d. As there is more ui exposure for tweaking 3d than 2d, I decided to take the settings from 3d at startup (and from then on, the settings will align).
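
To illustrate the "read on launch, only write on explicit user change" idea, here is a minimal sketch; this is not the actual gnome-control-center code (which is written in C), and the schema and key names below are placeholders rather than the real unity ones:

# Sketch: populate the ui from the current value without touching it,
# and only write back when the user actually changes the control.
from gi.repository import Gio

settings = Gio.Settings.new("org.example.launcher")  # hypothetical schema

def load_ui_state():
    # Read whatever is currently set (possibly a ccsm-only value) to fill
    # the ui; nothing is written back at this point.
    return settings.get_int("hide-mode")              # hypothetical key

def on_user_changed(new_value):
    # Only write when the user explicitly picked a different value, so a
    # setting tweaked in ccsm isn't reset just by opening the panel.
    if settings.get_int("hide-mode") != new_value:
        settings.set_int("hide-mode", new_value)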

Gnome Control Center Unity tab 2

Note that the "reveal spot" doesn't work for the Top Left corner right now; this is a compiz/unity bug on the 3d side and a not-yet (but soon to be!) implemented feature in 2d.

Finally, if you are using a non-unity session like the gnome-panel or gnome-shell one, you won't be impacted by those new settings. You will still gain a new "Restore defaults" option though. :)

The package is currently building and will be available soon in Precise, enjoy!
