Collaborative Maintenance

Stefano Zacchiroli blogs his thoughts about collaborative maintenance. He identifies two arguments/religions about it:

  1. it is good because duties are shared among co-maintainers
  2. it is bad because no one feels responsible for the co-maintained packages

And he says he stands for the first one.

I’m not sure that those two positions really contradict each other.

Sure, collaborative maintenance is a good thing, because it allows co-maintainers to share the load, which often results in bugs being fixed faster. But collaborative maintenance creates the problem of the dilution of responsibilities, and the dilution of knowledge. In many cases, a single maintainer will have a better knowledge of a package than the sum of the knowledge of all co-maintainers. Also, sometimes, teams don’t work very well, and everybody starts thinking that a specific bug is another co-maintainer’s problem.

In the pkg-ruby-extras team, we have been trying different organizations over the past two years. We have now settled with the following:

  • each package has a “main responsible person”, listed in the Maintainer field
  • the team address is listed in the Uploaders field, along with all the team members willing to help with that package (people add themselves to Uploaders manually; another variant, used by pkg-kde and pkg-gnome, is to generate the Uploaders field automatically from the last X entries of debian/changelog).
    Interestingly, we discovered that for several packages, nobody was really willing to help, so I’m wondering how other teams with (number of packages) >> (number of active team members) work.

Stefano also raises the point of infrastructure issues caused by the switch to the team model. I ran into one recently. My “Automated mails to maintainers of packages with serious problems” are currently only sent to the maintainers of the packages with problems, unless someone has problems in both maintained and co-maintained packages (in that case, all problems are mentioned). I thought for a while about sending mails to co-maintainers as well, but that would mean sending more mails… I might try that and see if I get flamed :-)

How do archive rebuilds and piuparts tests on Grid’5000 work?

With the development of rebuildd and the fact that several people are interested in re-using my scripts, I feel the need to explain how this stuff works.

Grid’5000

First, Grid’5000 is a research platform used to study computer grids. It’s not really a grid (it doesn’t use all the classic grid middleware such as Globus). Grid’5000 is composed of 9 sites, each hosting from 1 to 3 clusters. Inside clusters, nodes are usually connected using gigabit ethernet, and sometimes another high speed network (Myrinet, Infiniband, etc). Clusters are connected using a dedicated 10G ethernet network. Grid’5000 is in a big private network (you access it through special gateways), and one can access each node from any other node directly (no need for complex tunnelling).

Using Grid’5000 nodes

When you want to use some nodes on Grid’5000, you have to use a resource manager to say “I’d like to use 50 nodes for 10 hours”. Then your job starts. At that point, you can use a tool called KaDeploy to install your own system on all the nodes (think of it as “FAI for large clusters”). When KaDeploy finishes, the nodes are rebooted in your environment, and you can connect as root. At that point, you can basically break the nodes the way you want, since they will be restored at the end of your job.

Running Debian QA tasks on Grid’5000

None of that was Debian-specific. I will now try to explain how QA tasks are run on Grid’5000. The scripts mentioned below are in the debcluster directory of the collab-qa SVN repository.

When the nodes are ready, the first node is chosen to play a special role (it’s called the master node from now on). A script is run on the master node to prepare it. This consists in mounting a shared NFS directory, and running another script located on this shared NFS directory that installs a few packages, configures some stuff, and starts a script (masternode.rb) that will schedule the tasks on all the other nodes.

masternode.rb is also responsible for preparing the other nodes, which consists in mounting the same shared NFS directory and executing a script (preparenode.rb) that installs a few packages and configures some stuff. After the nodes have been prepared, they are ready to execute tasks.
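
To make this more concrete, here is a rough sketch of what a node-preparation script can look like. The NFS export, mount point and package list are assumptions made up for the example; this is not the contents of the real preparenode.rb.

    #!/usr/bin/ruby
    # Sketch of a node-preparation script: mount the shared NFS directory and
    # install the tools needed by the task wrappers.
    # (Paths, NFS export and package list are illustrative assumptions.)
    SHARED_DIR = '/nfs/debcluster'
    NFS_EXPORT = 'nfsserver:/export/debcluster'   # hypothetical NFS export

    def run(*cmd)
      system(*cmd) or raise "command failed: #{cmd.join(' ')}"
    end

    run('mkdir', '-p', SHARED_DIR)
    run('mount', '-t', 'nfs', NFS_EXPORT, SHARED_DIR)

    ENV['DEBIAN_FRONTEND'] = 'noninteractive'
    run('apt-get', 'update')
    run('apt-get', 'install', '-y', 'sbuild', 'schroot', 'piuparts')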

To execute a task, masternode.rb connects to the node using ssh and executes a script in the shared directory. Those scripts are basically wrappers around lower-level tools; examples are buildpackage.rb and piuparts.rb.
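
To give an idea of how simple the dispatching can be, here is a minimal sketch of a master script driving worker nodes over ssh. The node names, task list and shared directory are made up for the example; this is not the actual masternode.rb.

    #!/usr/bin/ruby
    # Minimal sketch of a master node dispatching tasks to workers over ssh.
    # Node names, tasks and paths are illustrative assumptions.
    require 'thread'

    SHARED_DIR = '/nfs/debcluster'          # assumed mount point of the shared directory

    tasks = Queue.new
    # A task is just a command line: a wrapper script plus its arguments.
    tasks << "#{SHARED_DIR}/buildpackage.rb openoffice.org"
    tasks << "#{SHARED_DIR}/piuparts.rb zsh"

    nodes = %w[node-1 node-2]               # worker nodes allocated for the job

    threads = nodes.map do |node|
      Thread.new do
        loop do
          begin
            cmd = tasks.pop(true)           # non-blocking pop, raises when empty
          rescue ThreadError
            break                           # nothing left to do for this worker
          end
          # Run the wrapper on the node; logs and results end up on the NFS share.
          system('ssh', "root@#{node}", cmd)
        end
      end
    end
    threads.each { |t| t.join }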

Now, some specific details:

  • masternode.rb schedules tasks, not builds. Tasks are commands. So it is possible, in a single Grid’5000 job, to mix a piuparts test on Ubuntu and an archive rebuild on Debian. When another QA task is created, I just have to write another wrapper.
  • Tasks are scheduled using “longest job first” (see the sketch after this list). This doesn’t matter much for piuparts tests (which are usually quite short) but is important for archive rebuilds: some packages take a very long time to build. If I want to rebuild all packages in about 10 hours, openoffice.org has to be the first build to start, since building openoffice.org takes about 10 hours itself… So one node will only build openoffice.org, and the other nodes will build the other packages.
  • I use sbuild to build packages, not pbuilder. pbuilder’s algorithm to resolve build-dependencies is a bit broken (#141888, #215065). sbuild’s is broken as well (#395271, #422879, #272955, #403246), but at least it’s broken in the same way as the buildds’, so something that doesn’t build on sbuild won’t build on the buildds, and you can file bugs.
  • I use schroot with “file” chroots. The tarballs are stored on the NFS directory, which looks inefficient, but actually works very well and is very flexible. A tarball of a build environment is not that big, and this guarantees that my build environment is always clean. If I want to build with a different dpkg-dev, I just have to:
    • cp sid32.tgz sid32-new.tgz
    • add the chroot to schroot.conf
    • tell buildpackage.rb to use sid32-new instead of sid32
  • Logs and (if needed) resulting packages are written to the NFS directory.
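
Here is a minimal sketch of the “longest job first” ordering mentioned above. The packages and duration estimates are made-up values; in practice the estimates would come from previous runs, and this is not the actual masternode.rb scheduler.

    #!/usr/bin/ruby
    # Sketch of "longest job first": sort tasks by estimated duration, descending.
    # The estimates below are illustrative values, not real build times.
    estimates = {
      'openoffice.org' => 36000,   # seconds
      'glibc'          =>  7200,
      'zsh'            =>   300,
    }

    packages = ['zsh', 'openoffice.org', 'glibc', 'hello']

    # Longest builds first; packages without an estimate go last.
    ordered = packages.sort_by { |p| -(estimates[p] || 0) }

    puts ordered.join(', ')   # => openoffice.org, glibc, zsh, hello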

Comments and questions welcomed :)

How old are our packages?

Debian’s binary packages are only built when they are uploaded (or when a binNMU is requested, but that doesn’t happen frequently). They are never rebuilt later. This means that some binary packages in etch weren’t built in an up-to-date etch environment, but possibly in a much older environment.

Is this a problem? It depends. Old packages don’t benefit from the improvements introduced in Debian after they were built, like new compiler optimizations or other changes in the toolchain that could induce changes in binary packages.

For example, some files used to be installed in one place, but are now put in another. Also, some parts of maintainer scripts are automatically generated, and would be different if the package was rebuilt today.

But is it really a problem? Are our packages _that_ old? Also, when a big change is made in the way we generate our binary packages (like Raphael Hertzog’s new dpkg-shlibdeps), when can we expect that the change will be effective in all packages?

I went through all binary packages in unstable (as of 24/06/2007) on i386, in main, contrib and non-free. Using dpkg-deb --contents, I extracted the most recent date found in each package (which can reasonably be taken as the date of the package’s creation).
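
The extraction step itself is simple; here is a rough sketch of the idea (the parsing and the file name are illustrative, not the exact script I used):

    #!/usr/bin/ruby
    # Rough sketch: take the most recent mtime listed by dpkg-deb --contents
    # as an approximation of the date the package was built.
    require 'date'

    def build_date(deb)
      dates = `dpkg-deb --contents #{deb}`.split("\n").map do |line|
        # Each line looks like ls -l output:
        # -rw-r--r-- root/root  1234 2007-06-24 12:34 ./usr/share/doc/...
        line.split[3]
      end
      dates.compact.map { |d| Date.parse(d) }.max
    end

    puts build_date('somepackage_1.0-1_i386.deb')   # hypothetical .deb file

And here are the results.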

Most packages are actually quite recent. 9008 packages (43%) were built after the release of etch. And 19857 packages (94%) were built after the release of sarge. But that still leaves us with 1265 packages that were built before sarge was released, and even one package (exim-doc-html) that was built before the release of woody! (the removal of this package has been requested, so we will soon be woody-clean :-)

Now, what could we do with this data? We could:

  • review the older packages, to determine whether they would benefit from a rebuild (by comparing them with the result of a fresh build) <= I'm planning to work on that
  • integrate that data with other sources of data (popcon, for example), to see if the package should be removed. Such old packages are probably good candidates for removal.

Here is the full sorted list of packages.

debconf7

DebConf7 finished today. This was my first DebConf, and probably not my last one :-) I had a really interesting time in Edinburgh, despite the weather (but at least we are prepared for DebConf8 now!). The organization team really did a fantastic job, the venue was gorgeous, etc, etc, etc.

Again, I was positively surprised by how nice everybody was at DebConf. It’s weird to think that the same people take part in the regular flames on the mailing lists, which proves that most flames are just caused by people being passionate about Debian and wanting the best for the project, and not by people trying to be annoying, as one could think :)

I gave a talk about the usual QA stuff I’m doing: archive-wide rebuilds, piuparts runs, and how to make all this more efficient by working inside the collab-qa project. So far, not a lot of people have been joining the project, which is kind of a failure. But at least everybody seems to think that the idea is good.

I also gave a lightning talk about Debian Package of the Day, which I didn’t originally plan to give (I don’t think I have ever prepared slides in such a short time). I think it was well received, and we even got a new submission half an hour after the talk.

I even managed to clear some tasks on my TODO list during DebConf, the most visible one being the “automated monthly mails to maintainers of packages with serious problems”. I was quite afraid of being flamed, but I sent 250 mails, and only one person replied aggressively. I also got a lot of positive feedback, so the next batch will probably use slightly less strict criteria.

And of course, the most important result of DebConf was the numerous discussions, which resulted in a lot of good and interesting ideas.

maintainer wanted for next generation tetrinet server!

You probably all know tetrinet – if you don’t, you should really try that game. There are several servers for that game out there, and one of them is tetrinetx. It is packaged in Debian, but it has several problems:

  • It doesn’t support tetrifast mode. So you have to deal with that stupid 1-second delay before the next block appears.
  • The code is truly horrible. From main.c:
    #include "dns.c"
    #include "utils.c"
    #include "net.c"
    #include "crack.c"
    #include "game.c"

    That gives you an idea, right?

A few years ago, I worked with a few friends on improving tetrinetx. This resulted in:

  • Massive code cleanup
  • Tetrifast support
  • Stats stored in a MySQL DB, so you can compute nice stuff (time wasted per day, etc)
  • More stats: blocks drop rate, etc.

This is hosted on sourceforge under the name tetrinetx-ng, but it hasn’t seen any activity for the last two years.

It would really be great if a tetrinet fan could take over the project and bring it back to life. Then tetrinet.debian.net could support tetrifast ;)

Also, I could try to set up a server with tetrinetx-ng. Does someone want to host tetrifast.debian.net for me? Needed: MySQL DB + Apache (for the stats) + not being afraid of insecure code (it doesn’t need to run as root).

Distributed SCM and branching a sub-directory?

I am considering switching from SVN to a distributed SCM for my personal stuff. I had a look at git and mercurial, but neither really supports branching a sub-directory:

Often, I am working on a big private project, and, while working on a sub-project (stored inside the project’s repository), I’d like to share that sub-project with others. So there are actually two problems:

  • being able to checkout/branch/clone a sub-directory
  • possibility to control access on a per-directory basis

SVN only partially meets my needs here (it’s possible to check out a sub-directory directly, for example with svn co svn://svn.debian.org/svn/pkg-ruby-extras/tools/ruby-pkg-tools). I think that it’s possible to do fine-grained access control using libapache2-svn, but I haven’t tried yet.

It seems that mercurial can do that, using the forest extension. But you have to convert the specific directory into a repository, with a complex step to keep the history.

Amongst the distributed SCMs, is there one that supports that (at least the sub-directory branching part)?

Bash is weird.

Consider the following command:
echo <(cat /etc/{motd,passwd})

(you can replace "echo" with any command that takes one file as an argument and cannot take it on stdin)

It's obvious that the goal is to expand this to:
echo <(cat /etc/motd /etc/passwd)

Then to:
echo /dev/fd/63

However, that's not what happens:

$ echo <(cat /etc/{motd,passwd})
++ cat /etc/motd
++ cat /etc/passwd
+ echo /dev/fd/63 /dev/fd/62
/dev/fd/63 /dev/fd/62

The reason is that brace expansion is done first, and it operates on the words of the command line, not inside the process substitution. The word being expanded here is <(cat /etc/{motd,passwd}): the preamble is <(cat /etc/ and the postscript is ), so it expands to <(cat /etc/motd) <(cat /etc/passwd), i.e. two process substitutions.

After reporting a bug about that, Chet Ramey gave me the correct way to reach the initial goal:
cat <(eval cat /etc/\{passwd,motd})

Most broken spam protection ever

Just received that, in reply to a mail I sent:

This email is from X.

My email address (X@Y.com) is protected against spam and viruses by MailInBlack.

Please click on the following link in order to identify yourself to me and to allow your message to reach me.
http://192.168.0.252/v/?C8BEF10E72C&tmstp=20070316084314&tk=message_confirm&tkid=7951&lang=2

This needs to be done only once, for this email and all future email correspondence.

Thank you for your understanding.

X

MailInBlack seems to be a French company. No wonder they guarantee that 100% of spam is stopped. (To be fair, I am not sure yet who fucked up; it might be the admin.)

Jabber clients and OS usage stats (3)

Using XMPP4R, I did some stats about Jabber client usage on the Apinc Jabber server, which hosts im.apinc.org, jabber.fr, and many more using virtual hosting. I already did similar stats in March 2006 and September 2005.

The poll was done by sending jabber:iq:version to online users, around 1:00 PM (French local time; most of the users are French). 1343 clients were pinged, and 1315 answered, which is better than last year (1145 answers). Of the 1145 JIDs that answered last year, 368 were also part of the poll this year (I don’t know if this is good, or not enough).
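
For the curious, here is a minimal sketch of what a single jabber:iq:version query can look like with XMPP4R. The account, password and target JID are placeholders, and the real script of course iterated over the list of online users obtained from the server:

    #!/usr/bin/ruby
    # Minimal sketch of a jabber:iq:version query using XMPP4R.
    # Account, password and target JID are placeholders.
    require 'xmpp4r'
    require 'rexml/document'

    client = Jabber::Client.new(Jabber::JID.new('stats@example.org/poll'))
    client.connect
    client.auth('secret')

    iq = Jabber::Iq.new(:get, Jabber::JID.new('someuser@example.org/Home'))
    query = iq.add_element('query')
    query.add_namespace('jabber:iq:version')

    client.send_with_id(iq) do |reply|
      fields = {}
      reply.each_element do |q|                  # the <query/> child of the answer
        q.each_element { |f| fields[f.name] = f.text }
      end
      puts "client: #{fields['name']} #{fields['version']}, os: #{fields['os']}"
      true                                       # tell XMPP4R the answer was handled
    end

    client.close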

Systems:

  • GNU/Linux: 43% (2006: 38% ; 2005: 34%)
  • Windows: 35% (2006: 37% ; 2005: 34%)
  • Mac OS: 16% (2006: 18% ; 2005: 23%)
  • Unknown: 4% (2006: 5% ; 2005: 6%)
  • Others: 0% (12 clients ; 2006: 0% ; 2005: 1%)

Clients:

  • Psi: 24% (2006: 28% ; 2005: 28%)
  • gaim: 22% (2006: 25% ; 2005: 25%)
  • Gajim: 12% (2006: 5% ; 2005: 3%)
  • Kopete: 11% (2006: 7% ; 2005: 7%)
  • iChatAgent: 9% (2006: 13% ; 2005: 18%)
  • libgaim (Adium): 5% (2006: 4% ; 2005: 3%)
  • Pandion: 4% (2006: 4% ; 2005: 2%)
  • Miranda: 2% (2006: 2% ; 2005: 1%)
  • BitlBee: 2%
  • neos: 1%
  • Unknown client: 0%
  • Exodus: 0%
  • Imendio Gossip: 0%
  • Jabbin: 0%
  • Spark IM Client: 0%
  • Trillian: 0%
  • JBother: 0%
  • jabber.el: 0%
  • JETI: 0%
  • JAJC: 0%
  • Class.Jabber.PHP: 0%
  • Jabberwocky: 0%
  • Tkabber: 0%
  • Gush: 0%

(all clients with at least one reply are listed here)

I also did some stats on the Linux distros. With 566 Linux users, one can consider that statistically significant.

  • Client answers with the kernel version, not including distribution information: 62%
  • Debian: 14%
  • Ubuntu: 12%
  • Gentoo: 2%
  • Unknown distros (not provided by the client): 2%
  • Fedora Core: 1%
  • Arch Linux: 1%
  • Mandriva: 1% (despite the fact that most users are French!)
  • Slackware: 0%

(Other distros got fewer than 5 replies)

The Debian/Ubuntu situation is interesting. Debian is not dying, even on the desktop! The server is hosting jabber.ubuntu-fr.org, so the results are biased towards Ubuntu.

Update: it seems that Psi reports “Debian GNU/Linux (testing/unstable)” even when running on Ubuntu. This is the case for ~5% of Linux users.

To server admins: if you are running a large Jabber server with jabberd 1.6, it’s easy to give me the right to get the list of online users, so contact me if you are interested in me doing stats for your users. It would be interesting to see if users from different servers/countries have different behaviours. I’m not sure if it’s possible with ejabberd and other servers.

Slides for my FOSDEM talks about Debian QA

As promised, the slides from my FOSDEM talks about Automated Testing of Debian Packages and Use of Grid Computing for Debian Quality Assurance are available.

Don’t hesitate to ask questions or post comments. The videos for both talks should be available so you can laugh at my Frenglish, but I haven’t heard of an ETA yet.