Debian’s binary packages are only built when they are uploaded (or when a binNMU is requested, but that doesn’t happen frequently). They are never rebuilt later. This means that some binary packages in etch weren’t built in an up-to-date etch environment, but possibly in a much older one.
Is this a problem? It depends. Old packages don’t benefit from the improvements introduced in Debian after they were built, like new compiler optimizations, or other toolchain changes that could affect the resulting binary packages. For example, some files that used to be installed in one place are now installed elsewhere. Also, some parts of maintainer scripts are automatically generated, and would be different if the package were rebuilt today.
But is it really a problem? Are our packages _that_ old? Also, when a big change is made in the way we generate our binary packages (like Raphael Hertzog’s new dpkg-shlibdeps), how long will it take before the change is effective in all packages?
I went through all binary packages in unstable (as of 2007-06-24), in main, contrib and non-free, on i386. Using dpkg-deb --contents, I extracted the most recent file date in each package (which can reasonably be taken as the date of the package’s creation). And here are the results.
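The extraction step can be sketched roughly like this: parse the tar-style listing that dpkg-deb --contents prints and keep the newest timestamp. The listing below is a made-up example, and the field positions are an assumption about the listing format, not guaranteed for every dpkg version:

```python
from datetime import datetime

def newest_timestamp(contents):
    """Return the most recent timestamp found in a `dpkg-deb --contents`
    listing, taken as an approximation of the package's build date."""
    newest = None
    for line in contents.splitlines():
        fields = line.split()
        if len(fields) < 6:
            continue
        # Assumed tar-style layout: fields 3 and 4 are the date and time,
        # e.g.: -rw-r--r-- root/root 1234 2007-06-24 12:34 ./usr/bin/foo
        try:
            stamp = datetime.strptime(fields[3] + " " + fields[4],
                                      "%Y-%m-%d %H:%M")
        except ValueError:
            continue
        if newest is None or stamp > newest:
            newest = stamp
    return newest

# Hypothetical listing for illustration.
listing = """\
drwxr-xr-x root/root         0 2004-11-02 10:00 ./usr/
-rw-r--r-- root/root      4321 2005-03-17 09:12 ./usr/bin/foo
-rw-r--r-- root/root      1234 2004-11-02 10:00 ./usr/share/doc/foo/copyright
"""
print(newest_timestamp(listing))  # → 2005-03-17 09:12:00
```
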
Most packages are actually quite recent: 9008 packages (43%) were built after the release of etch, and 19857 packages (94%) were built after the release of sarge. But that still leaves 1265 packages that were built before sarge was released, and even one package (exim-doc-html) that was built before the release of woody! (The removal of that package has been requested, so we will soon be woody-clean. :-)
Now, what could we do with this data? For the older packages, we could:
- compare them with the result of a fresh build, to determine whether they would benefit from a rebuild (I'm planning to work on that)
- combine that data with other sources (popcon, for example), to see whether they should be removed. Such old packages are probably good candidates for removal.
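A first, crude way to compare an old binary package with a fresh rebuild is to diff their file lists (in practice one would also want to compare maintainer scripts, dependencies, etc.). A minimal sketch, with made-up package contents; a file that moved shows up as one removal plus one addition:

```python
def diff_file_lists(old, new):
    """Compare the file lists (sets of paths) of an old binary package
    and a fresh rebuild of the same source."""
    return {
        "only_in_old": old - new,
        "only_in_new": new - old,
    }

# Hypothetical example: a file moved from /usr/doc to /usr/share/doc.
old_pkg = {"/usr/bin/foo", "/usr/doc/foo/copyright"}
new_pkg = {"/usr/bin/foo", "/usr/share/doc/foo/copyright"}
changes = diff_file_lists(old_pkg, new_pkg)
print(changes["only_in_old"])  # → {'/usr/doc/foo/copyright'}
```
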
Here is the full sorted list of packages.
DebConf7 finished today. This was my first DebConf, and probably not my last one :-) I had a really interesting time in Edinburgh, despite the weather (but at least we are prepared for DebConf8 now!). The organization team really did fantastic work, the venue was gorgeous, etc, etc, etc.
Again, I was positively surprised by how nice everybody was at DebConf. It’s weird to think that the same people take part in the regular flames on the mailing lists. Which shows that most flames are just caused by people being passionate about Debian and wanting the best for the project, not by people trying to be annoying, as one might think :)
I gave a talk about the usual QA stuff I’m doing: archive-wide rebuilds, piuparts runs, and how to make all this more efficient by working inside the collab-qa project. So far, it’s not like a lot of people have been joining the project, which is kind of a failure. But at least everybody seems to think that the idea is good.
I also gave a lightning talk about Debian Package of the Day, which I didn’t originally plan to give (I don’t think I have ever prepared slides in such a short time). I think it was well received, and we even got a new submission half an hour after the talk.
I even managed to clear some tasks on my TODO list during DebConf, the most visible one being the “automated monthly mails to maintainers of packages with serious problems”. I was quite afraid of being flamed, but I sent 250 mails, and only one person replied aggressively. I also got a lot of positive feedback, so the next batch will probably use slightly less strict criteria.
And of course, the most important result of DebConf was the numerous discussions, which resulted in a lot of good and interesting ideas.
You probably all know tetrinet – if you don’t, you should really try that game. There are several servers for that game out there, and one of them is tetrinetx. It is packaged in Debian, but it has several problems:
- It doesn’t support tetrifast mode. So you have to deal with that stupid 1-second delay before the next block appears.
- The code is truly horrible. From main.c:
That gives you an idea, right?
A few years ago, I worked with a few friends on improving tetrinetx. This resulted in:
- Massive code cleanup
- Tetrifast support
- Stats stored in a MySQL DB, so you can compute nice stuff (time wasted per day, etc)
- More stats: blocks drop rate, etc.
This is hosted on sourceforge under the name tetrinetx-ng, but it hasn’t seen any activity for the last two years.
It would really be great if a tetrinet fan could take over the project and start making it alive again. Then tetrinet.debian.net could support tetrifast ;)
Also, I could try to set up a server with tetrinetx-ng. Does someone want to host tetrifast.debian.net for me? Needed: a MySQL DB + Apache (for the stats), and not being afraid of insecure code (it doesn’t need to run as root).
I am considering switching from SVN to a distributed SCM for my personal stuff. I had a look at git and mercurial, but neither really supports branching a sub-directory:
Often, I am working on a big private project, and, while working on a sub-project (stored inside the project’s repository), I’d like to share that sub-project with others. So there are actually two problems:
- being able to checkout/branch/clone a sub-directory
- possibility to control access on a per-directory basis
SVN only partially meets my needs here: it’s possible to check out a sub-directory directly, for example with
svn co svn://svn.debian.org/svn/pkg-ruby-extras/tools/ruby-pkg-tools. I think that fine-grained access control is possible
using libapache2-svn, but I haven’t tried it yet.
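For the record, per-directory access control with mod_authz_svn (shipped in libapache2-svn) would look roughly like the fragment below, referenced from the Apache vhost via AuthzSVNAccessFile; the repository name, path, and user names here are made up for illustration:

```
# /etc/apache2/dav_svn.authz (path itself is arbitrary)
[projects:/]
lucas = rw

# Grant an outside collaborator read access to one sub-directory only.
[projects:/tools/ruby-pkg-tools]
lucas = rw
guest = r
```

Note that this only restricts access through Apache/WebDAV; it doesn’t help for svn:// or local access.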
It seems that mercurial can do this with the forest extension, but you have to convert the specific directory into its own repository, with a complex extra step to keep the history.
Among the distributed SCMs, is there one that supports this (at least the sub-directory branching part)?