Selling Debian tasks

In many talks and blog posts (like Sam’s talk at RMLL, or Raphaël’s blog posts – both in French), people have discussed what one could do inside Debian, and how it would help Debian.

That doesn’t sound like the best approach to me. When describing tasks with the goal of getting potential contributors to pick them up, we should try to make them sexy: tell people what is exciting about them, what they will learn doing them, and where the satisfaction will come from. We really need to sell them better.

Of course, some Debian tasks are mainly grunt work. And for some of them, people just do them because someone has to do them. But I believe that most tasks inside Debian are actually more interesting than outsiders would expect. For example, I would be very interested in reading why an i18n expert (hint: Christian!) finds i18n sexy … and I should probably try to write about QA myself.

(As you might have noticed now, the subject of this blog post was misleading on purpose — chosen so that a lot of people would read the post :P)

ZFS as LVM killer … really?

From the ZFS FAQ:

Can devices be removed from a ZFS pool?

You can remove a device from a mirrored ZFS configuration by using the zpool detach command. Removal of a top-level vdev, such as an entire RAID-Z group or a disk in an unmirrored configuration, is not currently supported. This feature is planned for a future release.

buzz, buzz, buzz…

Compiz interest

From time to time, I try Compiz to see how it has evolved. The last time was yesterday (I also switched to the xserver-xorg-driver-ati from experimental).

But as usual, after using it for a few minutes, I can’t help switching back to metacity. I don’t think Compiz’s visual effects bring anything from a usability point of view, and I just find them annoying after the initial “WOW”. Of course, it’s nice for showing off, but for doing actual work? Is anyone really using it all the time?

creating a “Distributions Developers Forum”: follow-up

After that blog post, I decided to write a mail asking a first set of questions. I sent it to the developers’ mailing lists of Fedora, Gentoo, Mandriva, openSUSE, and of course Debian and Ubuntu. I got really interesting answers from everyone… except Debian and Ubuntu.

  • I want to wait a bit longer before publishing the answers. If you are a Debian or Ubuntu developer and were interested in this initiative, please answer my mails sent to debian-devel@ and ubuntu-devel-discuss@ (respectively) as soon as possible. Not having answers from Debian and Ubuntu would really be a shame, since everyone else I contacted was really helpful and interested.
  • I plan to use a mailing list archiving software to publish the mails. Can you recommend a good mbox->html converter, that would work well in a “run once” use case, and that doesn’t take ages to set up?
  • Can you think of another distro I should have contacted? For now, I don’t want to include simple derivatives of the “big distros”. I also chose to limit myself to Linux distros, so I didn’t contact the BSD or Nexenta folks. Both of these could change: my current plan is to set up a mailing list + wiki, so everybody can join.

Better Debian RC bugs graphs

If you like to monitor the number of RC bugs, you are probably annoyed by the graph on http://bugs.debian.org/release-critical/. The graph starts in 2003, making it impossible to read short-term changes. There’s a bug about that: #431299: RC bug status graph timescale is too long. I provided a patch a few months ago, but it hasn’t been included yet.

So, in the meantime, you can use my private copy (generated daily):

Also, if you like graphs, Yves-Alexis Perez (aka Corsac) generates cool graphs about Debian:

And if you want to have all the interesting graphs on the same page, you can use this page.

Idea: creating a “Distributions Developers Forum” ?

Scientific papers always have a “related work” section, where the authors describe how the work they are presenting compares with what others did. In the Free Software world, this is nearly non-existent: in a way, it seems that many of us think of our projects as competing products fighting for market share. On a project web page, I would love to read something like:

This project is particularly well suited if you want XX. But if YY is more important to you, you might want to have a look at ZZ.

Or simply links to similar projects for other environments, etc. All in all, I think the goal should be to improve overall satisfaction, not to win a few users who won’t be totally happy because the project doesn’t really suit their needs.

While some projects cooperate and share ideas, like I think desktop environments do inside freedesktop.org, most just ignore each other. I am both a Debian and an Ubuntu developer, and I’m sometimes amazed that Ubuntu discusses technical choices that were discussed (and solved) a few weeks earlier in Debian. And it’s even worse with the other big distros out there.

Couldn’t we try to improve this? We could simply create a mailing list where developers from the various distributions present the way they do things. This would allow us to discuss future developments (“We are planning to improve this, what are you doing about that?”) or simply to improve everyone’s knowledge of the various distributions.

Of course, this could easily turn into flamefests, but there are technical ways to avoid that, like moderating posts from trolls…

Does something like this already exist? Do you think it would be interesting? Would you like to contribute to such a forum?

Some examples of things that could be discussed:

  • How many packages do you have, and how do you support them? Do you have several “classes” of packages?
  • How do you manage your releases? Goal-based? Time-based? Bug-count-based?
  • What kind of quality assurance do you do?
  • How many contributors do you have? Are they split into different “classes”? Who has “commit rights”? Can you give out “commit rights” restricted to subsets of your packages? Do you have an organized sponsorship system for people who don’t have commit rights?
  • etc, etc, etc.

Re: Bash: the hash -r command

This is a reply to a post by Alban about bash’s hash -r command. Read his post first to understand my reply!

bash and tcsh handle $PATH in different ways.

For every command it executes, bash walks the directories in $PATH looking for the command, and keeps a cache of already-found commands so that it doesn’t repeat the search every time.

tcsh walks the directories at startup and builds a cache of all available commands (you can see this with strace: a series of getdents() calls at startup).

In bash, hash -r flushes the cache.

In tcsh, rehash re-walks the directories in $PATH.

What happens when a new command appears in a $PATH directory? With bash, nothing special: the directories will be searched the first time you run that command. With tcsh, the child process calls execve on /usr/local/bin/toto, /usr/bin/toto, /bin/toto, /usr/bin/X11/toto, etc., until one of the execve calls succeeds (note that you need to run strace with -f to see this). But since it is the shell’s child process doing this, the cache (which lives in the parent) is not updated! So every time you execute a command that wasn’t there when the shell started, tcsh re-tests all the directories.

What happens when you add a directory to $PATH? With bash, nothing: the directory will simply be searched along with the others. With tcsh, however, all the directories in $PATH are re-walked to update the cache.
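bash’s caching behavior is easy to observe in a small experiment (a sketch: the throwaway directories and the hello script are made up for the demonstration):

```shell
#!/bin/bash
# Two throwaway directories; $bin2 has priority over $bin1 in $PATH.
bin1=$(mktemp -d)
bin2=$(mktemp -d)
export PATH="$bin2:$bin1:$PATH"

printf '#!/bin/sh\necho one\n' > "$bin1/hello"
chmod +x "$bin1/hello"
hello       # bash searches $PATH, runs $bin1/hello, and caches that path

# A higher-priority "hello" appears later:
printf '#!/bin/sh\necho two\n' > "$bin2/hello"
chmod +x "$bin2/hello"
hello       # still prints "one": bash reuses the cached path
hash -r     # flush the cache
hello       # prints "two": the $PATH search is done again, $bin2 wins
```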

It seems to me that tcsh’s behavior has changed. When I used it on FreeBSD 6 or 7 years ago, tcsh did not do this search through all the $PATH directories when an unknown command was executed. As a result, rehash was used very frequently, for example after installing an application.

Which distribution on Thinkpads: really the good question to ask?

After Dell, Lenovo decided to ask users which Linux distribution they should put on Thinkpads. Seriously, who cares? If I bought a laptop that came with Linux pre-installed, my first step would be to reinstall it from scratch, exactly as with a laptop that comes with Windows pre-installed: the choices that were made wouldn’t match mine (think of partitioning, etc.), or I simply wouldn’t totally trust the hardware manufacturer.

So, what would make me happier about a laptop?

  • That installing any recent enough mainstream Linux distribution works without requiring tricks
  • That it’s possible to buy it without an operating system, with no additional charge (and no, I don’t buy the “we need the OS installed to do some quality tests before we ship” argument. USB keys and CDROMs have been bootable for years.)

I couldn’t care less about which distribution comes preinstalled. If Lenovo wants to make me happy, there are two ways:

  • Talk to free software developers: kernel developers, etc. Not distribution developers. And get the required changes merged in, so they will land in my favorite distribution after some time.
  • If they prefer to play on their own, they could create an open “Linux on Lenovo laptops” task force, where they would provide the needed drivers in a way that makes it dead easy to integrate them in Linux distros and to report problems.

It’s not _that_ hard: some manufacturers got it right, at least for some of their products. There are many manufacturers contributing code directly to the Linux kernel, for network drivers for example.

But maybe this is just about marketing and communication, not about results? After all, Dell and Lenovo will look nice to the random user, while manufacturers that play by the rules remain hidden deep in the Linux changelog.

Opening mutt’s HTML attachments with epiphany?

I just ran into a rather funny problem. I’d like to use epiphany to open text/html attachments or mail parts.

Mutt uses mailcap, so I added an entry in ~/.mailcap:
text/html; epiphany-browser '%s'; description=HTML Text; nametemplate=%s.html

This should work fine, but mutt creates a temporary file (/tmp/mutt.html) and removes it as soon as the command terminates, which is nearly immediately with epiphany. And epiphany has a really nice feature: it monitors local files for changes and reloads them when they change. The result: you see the page briefly, then a “File /tmp/mutt.html not found.” error page.
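One workaround that should sidestep this (a sketch I haven’t verified against epiphany’s file monitoring; the script name is made up) is a small wrapper that copies mutt’s short-lived temporary file to a stable location and opens the copy, so that mutt deleting the original no longer matters:

```shell
#!/bin/sh
# mutt-epiphany: hypothetical wrapper for the mailcap entry.
# $1 is the temporary file mutt passes in via %s.
copy=$(mktemp /tmp/mutt-view-XXXXXX.html) || exit 1
cp "$1" "$copy"
# mutt may now delete "$1"; epiphany monitors "$copy", which survives.
epiphany-browser "$copy"
# Copies accumulate in /tmp; clean them up from time to time.
```

The mailcap entry would then call the wrapper (placed somewhere in $PATH) instead of epiphany-browser directly: text/html; mutt-epiphany '%s'; description=HTML Text; nametemplate=%s.html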

Anyone using mutt and epiphany together successfully?

Collaborative Maintenance

Stefano Zacchiroli blogs his thoughts about collaborative maintenance. He identifies two arguments/religions about it:

  1. it is good because duties are shared among co-maintainers
  2. it is bad because no one feels responsible for the co-maintained packages

And he says he stands for the first one.

I’m not sure that those two positions really contradict.

Sure, collaborative maintenance is a good thing, because it allows sharing the load between co-maintainers, which often results in bugs being fixed faster. But it also creates the problem of diluted responsibilities and diluted knowledge. In many cases, a single maintainer will have better knowledge of a package than the sum of all the co-maintainers’ knowledge. Also, teams sometimes don’t work very well, and everybody starts thinking that a specific bug is another co-maintainer’s problem.

In the pkg-ruby-extras team, we have been trying different organizations over the past two years. We have now settled with the following:

  • each package has a “main responsible person”, listed in the Maintainer field
  • the team address is listed in the Uploaders field, as well as all the team members who are willing to help with that package (people add themselves to Uploaders manually; another variant is pkg-kde’s and pkg-gnome’s automatic generation of the Uploaders field based on the last X entries of debian/changelog: pkg-kde variant, pkg-gnome variant).
    Interestingly, we discovered that for several packages, nobody was really willing to help, so I’m wondering how other teams with (nb of packages) >> (nb of active team members) work.
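Concretely, a debian/control stanza following that scheme would look something like this (the package name, people, and addresses are made up for illustration):

```
Source: libfoo-ruby
Maintainer: Jane Doe <jane@example.org>
Uploaders: Debian Ruby Extras Team <pkg-ruby-extras@lists.example.org>,
 John Helper <john@example.org>
```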

Stefano also raises the point of infrastructure issues caused by the switch to the team model. I ran into one recently. My “Automated mails to maintainers of packages with serious problems” are currently only sent to the maintainers of the packages with problems, unless someone has problems in both maintained and co-maintained packages (in that case, all problems are mentioned). I thought for a while about sending mails to co-maintainers as well, but that would mean sending more mails… I might try that and see if I get flamed :-)