Debian’s Freeze

Debian’s freeze sounds like a technical hack to address a social problem, and that disturbs me a bit.

The social problem is: At some point, we need everybody in Debian to make only non-disruptive changes, so everything can converge very fast into a releasable state.

The “solution” we are using is to block all packages from migrating to testing and to require manual review by someone on the release team. The consequences are:
– many people feel that you need to be very convincing to fix a small, non-RC bug, even if fixing that bug definitely increases your package’s quality.
– the release team is completely overwhelmed by unblock requests during the freeze
– many people just stop trying to fix things during the freeze (which definitely doesn’t improve Debian’s quality), both because they think it’s hard to get a fix in, and because they don’t want to bother the release team

I wonder if we really need such a strict policy. Are there other Free Software projects that use such a technical measure to prevent software from disrupting stable releases? I am under the impression that most other projects rely on social pressure instead of technical measures for that, except maybe during the last few hours before the release.

Couldn’t we act on the social level? We could default to allowing everyone’s packages to migrate to testing, and, when someone fucks up and uploads something that should not have been uploaded, block all his packages (switching to manual review mode) until the release. Of course, that requires the release team to make decisions about _people_, which is harder than making decisions about _packages_. But if the rules are clearly stated, couldn’t this work?

47 thoughts on “Debian’s Freeze”

  1. Just make it easy to revert when someone makes a mistake. If people are scared of doing things because mistakes are costly, they won’t want to risk uploading a fix.

  2. How does this compare with how FreeBSD does it? They seem to get stable releases out the door, sometimes even on a schedule that they can stick to. :-)

  3. I don’t think it’s a social problem per se. In a project the size of Debian, not everyone can foresee the effects of every change he makes to a package, and mistakes are always made. If you don’t freeze, there are bound to be uploads that disrupt the archive regularly, be it by someone who is making a mistake, is hasty, careless or has for some reason lived in a cage and didn’t know a release was coming up. It is good to have a period of extra carefulness with new changes, and to require that each such change is reviewed by an independent developer.

    Another problem is that many packages have interactions with each other. With the release of etch, I experienced that one of my packages broke because the PHP maintainer decided to include a last-minute bugfix for the way a function behaved, while my application relied on the (arguably broken) behaviour. It would have been nice if we could have fixed both the application and PHP before the release, but if we don’t freeze we will never have a moment where the archive is in a relatively stable state and we (or users) have a good opportunity to evaluate exactly how the system performs with these specific versions.

    In the security team we have a ‘permanent freeze’ since we’re working with stable, and even though developers are well aware of the care that needs to be taken with updates to stable, the extra approval step of the team members catches a significant number of problems (unfortunately not all).

    In software projects it’s very normal to freeze the repository before a release and only let in commits that are approved by a core team member. The extra set of eyes, and those eyes knowing that a release is near, provide the extra required scrutiny.

    I do agree that we may need to allow more bugfixes in. But there’s a tradeoff there that needs to be defined. Every change has a risk of breaking other things. The benefit of allowing the change in has to be weighed against that risk.

    So I believe that we may be able to tweak that tradeoff, but that the basic system is there because it’s otherwise just not manageable in a project this big and uncontrolled.

    One tweak to the tradeoff could be that we would allow fixes for minor/normal bugs too as long as they’re in leaf packages. The risk in those packages is a lot smaller than changing libraries or other things that packages external to yours depend on.

  4. How about, pre-freeze, asking maintainers to request exemptions for specific packages (given proper motivation, of course). This will ensure that such packages go through to testing without manual intervention, up to a certain point of course. This is the sort of thing that would benefit the Mozilla-related set of packages at the moment, saving Hommey and crew from having to beg for freeze exceptions in the near future.

  5. A while ago I had a strange idea in mind: distribute the work of the RMs! How?

    For any non-RC change the maintainer would need to request an “approval ticket” from some new “approval” system. Then, this “approval system” would send a mail like “your help is needed to review this change and approve/veto package foo, click approve/veto” to THREE RANDOM fellow developers.

    And only when 2 or 3 devs responded, and all said OK, would the package be considered automatically for the next release.
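    [Editor’s note: the approval-ticket scheme above can be sketched in a few lines. All names here (pick_reviewers, decide) are hypothetical, a real system would live in Debian’s infrastructure and mail the chosen developers; this is only a sketch of the quorum/veto logic being proposed.]

```python
import random

def pick_reviewers(developers, n=3, seed=None):
    """Pick n random fellow developers to review a non-RC change."""
    rng = random.Random(seed)
    return rng.sample(sorted(developers), n)

def decide(responses, quorum=2):
    """Approve the change only if at least `quorum` reviewers responded
    and every response is an approval (a single veto blocks it)."""
    if len(responses) < quorum:
        return "pending"      # not enough reviewers answered yet
    if all(r == "approve" for r in responses):
        return "approved"     # package considered for the next release
    return "vetoed"

reviewers = pick_reviewers({"alice", "bob", "carol", "dave"}, seed=42)
print(decide(["approve", "approve"]))          # approved
print(decide(["approve"]))                     # pending
print(decide(["approve", "veto", "approve"]))  # vetoed
```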

  6. @Tshepang: problem is, you don’t know beforehand if your small & unimportant package will need a fix for a (still unreported) minor bug.

    @Zomb and David: that won’t work. RMs and RAs are doing awesome work reviewing all the changes, but that’s really grunt work. Don’t expect a lot more people to volunteer to do that work. And you can’t just assign work to random DDs. They are volunteers, might be busy with something else, etc.

    Something that *might* work would be a Signed-Off: system (like Linux’s) where it’s the responsibility of the uploader to seek Signed-Offs from other DDs. That way, the RT could spend less time reviewing changes that were Signed-Off by other DDs. Actually, inside most teams, it would be easy to ensure that uploads during “freeze” are peer-reviewed.

  7. I blogged about that a while ago.

    In the old days, we’d stabilize unstable, then freeze for a few days and then release, and during the freeze no uploads to unstable were possible.

    This worked, as there was consensus that releases are important, and during the preparation and freeze times the entire project worked on that.

    With testing, the release happens sort of “in the background”, and very few people in the project actually run the software.

    Adding more red tape like a sign-off system will not solve the fundamental problem that the release is seen by many as something that they do not have to care about because britney does everything automatically as long as RC bugs are being fixed.

    I’d like to see what happens if, for the release after lenny, we go back to the old way of releasing (i.e. with unstable frozen during that time).

  8. Lucas, it’s an interesting thought you have. Personally I’d prefer if more people would participate in a broader discussion about this topic, because the current practice has its advantages, but it also has a lot of disadvantages, like Debian being outdated all over *before* the release even happens. Or nasty little non-RC but still disturbing bugs that never get fixed during the lifetime of a release (like a mostly broken rxvt-unicode, a buggy manpage for a core tool which makes it basically unreadable, and surely a lot of other problems as well). These are reasons for people to move away from Debian or to work around those problems, e.g. by excessive use of backports and the like.
    Well, after all I think the social approach won’t work this way. There are just too many people in Debian, and we all know that even now there are problems with people not doing things right. So possibly the freeze should be reduced to a subset of core packages and possibly additional libraries. This approach, combined with your idea for the rest of the packages, could work, imho. What do you think about this?

  9. +1 with Patrick Schoenfeld.

    The -stable release as it is today is a great thing, OK, but do you think it’s *really* useful?

  10. @Patrick: your suggestion to restrict the current freeze policy to a subset of packages makes sense (so only the important packages would have to go through a review on -release, the other ones wouldn’t be moderated, but we would put a lot of social pressure on maintainers so that they concentrate on bug fixing).

    I wonder if someone already made some stats about the “end of the branches” in the Debian dependency tree. I’d like answers to questions such as:
    – how many binary packages are leaves?
    – how many source packages are leaves (a source package is a leaf if none of its binary packages has reverse dependencies) ?
    this could be extended to questions such as:
    how many packages are there that only break one other package if they are removed? two other packages? three other packages?
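    [Editor’s note: a rough sketch of how one could answer the leaf-package questions above, assuming `depends` maps each binary package to the set of binary packages it depends on. In practice you would build that mapping by parsing a Packages file, e.g. with python-apt, which is not shown here.]

```python
def reverse_dependencies(depends):
    """Invert a package -> dependencies mapping into package -> reverse deps."""
    rdeps = {pkg: set() for pkg in depends}
    for pkg, deps in depends.items():
        for dep in deps:
            rdeps.setdefault(dep, set()).add(pkg)
    return rdeps

def leaf_packages(depends):
    """A binary package is a leaf if nothing depends on it."""
    rdeps = reverse_dependencies(depends)
    return {pkg for pkg, r in rdeps.items() if not r}

depends = {
    "libfoo": set(),
    "foo-game": {"libfoo"},
    "foo-editor": {"libfoo"},
}
print(sorted(leaf_packages(depends)))  # ['foo-editor', 'foo-game']
```

    The source-package variant would be the same computation after grouping each source package’s binary packages together.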

  11. Lucas, I like the Signed-Off thing, but that could be combined with the “distributed workload” idea; why not create a “Support Team” for RMs, which DDs can join, and choose the “random DDs” amongst its members?

    I think that some DDs are willing to help during release time — and that time does not really occur so often… and, moreover, I can’t see any issue here, as the “Support Team” would be opt-in.

    Just my 2€c,
    David

  12. I know that I’m restarting ~ 7 year old discussions…

    When you say “the release team is completely overwhelmed by unblock requests during the freeze” I’m still wondering whether using testing for releases was a good change at all:
    – it required big infrastructure changes like e.g. version tracking in the BTS
    – it requires constant work by you RMs to get library transitions into testing
    – tons of other work for updating packages in testing

    If anyone has ever shown that the release process with testing is a net win please point me to it.

    The former release process was:
    – virtually no work for the release manager outside of the freeze time
    – freeze unstable
    – uploads to unstable during the freeze were approved by an RM (not via mailing-list requests: you uploaded, and the RM accepted or rejected the package)

    And with uploads to unstable restricted to fixes for the release, maintainers also tended to be more interested in getting the release out, since until then uploading new stuff to unstable was not possible.

  13. @ ciol

    I think it depends. On servers I’m mostly satisfied with Debian. Usually I don’t need backports, and the core packages are usually in a good state (exceptions prove the rule). On the desktop, well, my laptop is a sid system, my desktop too; my desktop at work is an etch system, simply because I cannot regularly fix broken things and because it’s bad for building software. But it’s cluttered with backports – openoffice, mutt, rxvt-unicode and some others. Backports are fine, but they increase the administrative overhead and they are a pain in the ass from a security POV. I could go with another distribution, but that’s not an option for me. After all, Debian wants to be a universal operating system, and that claim is hard to achieve with the current way of releasing.

    @ Lucas
    I gave the social-pressure idea some more thought. Possibly it would make sense to soften the policy a bit to make it easier for the release team: instead of blocking all packages of a particular maintainer, it would be possible to block only the problematic packages.
    Additionally, britney could be enhanced to block packages that have a high number of non-RC but also non-wishlist bugs. That’s a technical solution again, I know, but it would probably help enhance the overall quality by pushing maintainers to fix more than just the most critical bugs.

  14. @Patrick:

    The fact that (too) much of Debian is outdated is not primarily due to stuff being outdated before the release happens.

    E.g. check Etch:
    Original freeze date: October 2006
    Actual Freeze: December 2006
    Release: April 2007

    Release of Lenny: September 2008 (or later)

    Time since the original Etch freeze date when Lenny gets released:
    6 + 17 (+ X) months = ~ 2 years

    Even though the etch freeze was delayed and took a long time the main part of the current age of the software in Debian stable comes from the fact that there are 17 (+ X) months between the release of Etch and the release of Lenny.

    If Debian released twice a year like the other big distributions, all software would always be < 1 year old.

  15. I know it’s probably not a 100% good comparison, but in GNOME, when we enter a freeze, developers do respect it. Of course, some forget from time to time, but generally the release team doesn’t have to do tons of work.

    We definitely trust the developers to do the right thing, and they do it.

    (there might be various factors — that I don’t know — that could make such a thing not possible in Debian, though)

  16. Some other idea:

    Define a set of ‘core packages’ (the tasksel list or something like it) and only do the current RM work on them. Only these packages are guaranteed to be in the next release (upgrade path, etc.). Then *automatically* kick every other package out of testing if there is an RC bug in its dependency chain (a purely technical rule, e.g. any RC bug older than one week, no exceptions possible). Release when there are no RC bugs in testing. Allow the latter set of packages to go back into testing at any time before the release.

  17. I’m a Debian user. On my laptop. Just returned after using Ubuntu 8.04 for two months. Hardy is quite good, but as before, I ran into problems, and with Ubuntu you are trapped: the next version is too immature.

    Debian releases are quite rare events. Make them stable above all else. There are two things I really like about Debian. Technical discussions and decisions tend to be first class, and there is a commitment to quality and stability. So please keep the bias for stability and quality in the stable release. How this might change release policy is a judgment, but if it’s a choice between “newness” and “stability”, then for stable, please choose stability.

  18. A technical question:

    Simon said that in the old days, unstable was frozen and then, within a few days, there was a release. Why was that so? Now it seems to take months to end the freeze. Lenny is clearly not outdated. Packages in testing should have far fewer bugs than the unstable packages of “the old days”™. There is nothing wrong with releasing once every 1–1.5 years, but a new release should not be outdated. Is it only a lack of “social pressure”? Or is there some extra work because of testing? As I said, I would expect finishing testing to be faster than finishing unstable. What do I miss?

  19. @macs

    Great page! Every single possible idea has already been proposed, it seems. Yet the last change seems to have been more than half a year ago.

    @Lucas
    Maybe this blog brings the topic (and this wiki page) back on the table.

  20. As Vincent said, in GNOME we don’t have any technical “lock”. Developers respect the freeze.
    And indeed, big distros like Debian carrying old versions of software tend to be a problem in the long run: in 6 or 8 months from now we will be ignoring reports on GNOME 2.22 (2.24 is out before October).

    As a Debian user I always run into these problems:
    – I can’t hack GNOME in Debian because it means building everything from source, while in Ubuntu or Fedora I get the development releases packaged. I could build everything, but that’s a lot of time I could surely use for actual hacking.
    – A Lenny user won’t have more fixes for 2.22 in 8 months when we release 2.26; it will be up to maintainers to try to backport fixes, which leads to another thing:
    – It’s a huge duplication of work to have maintainers constantly backport stuff to unsupported releases; that work could instead go into packaging newer versions and releasing a new Debian.

    That’s my rant :)

  21. What makes the freeze take so long is fixing RC bugs – look at the RC bug graph. So would getting people to fix minor bugs mean they’re more likely to fix RC bugs? There are quite a few bugs with no visible activity for months now.

    Getting back to your question, maybe the policy during the main freeze (not for the core stuff which is already frozen) could be loosened to allow bugfix uploads to go through automatically, with social measures taken if someone ignores this. Combined with a small script that looks for “new upstream version” in the changelog and stops packages with it from migrating.
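    [Editor’s note: the “small script” mentioned above could be as simple as the sketch below. The heuristic and the function name are illustrative only, not an actual britney feature; a real check would parse the changelog with python-debian rather than grep for a phrase.]

```python
def looks_like_new_upstream(changelog_entry):
    """Crude heuristic: flag entries that announce a new upstream version."""
    return "new upstream" in changelog_entry.lower()

entry_fix = "foo (1.2-3) unstable; urgency=low\n\n  * Fix crash on startup. (Closes: #123456)"
entry_new = "foo (1.3-1) unstable; urgency=low\n\n  * New upstream release."

print(looks_like_new_upstream(entry_fix))  # False
print(looks_like_new_upstream(entry_new))  # True
```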

    GNOME’s freeze is on features only, bugfixes are allowed up until 3 days before the release, and can be made after via the point releases.

  22. @ Adrian

    I consider the fact that a release gets outdated with increasing age less bad than the fact that a release is already outdated *before* it happens. Indeed it might be desirable to release more often, but then again we have users who don’t want to upgrade their systems every 6 months (and I can perfectly understand that, as I have systems where I’d like to avoid that, too), but with our manpower we cannot guarantee (security) support for 2+ releases. So after all it wouldn’t be too bad to have a release cycle of, let’s say, every 18 months. But currently even *this* is not possible.

  23. @ Tim who said “So please keep the bias for stability and quality in the stable release.”

    I agree with you Tim, quality is important, but do you think freezing *all* packages is necessary?
    Debian is an OS; I don’t understand why e.g. a game is treated like the kernel.

  24. @Patrick:

    Every distribution (and also other big open source projects) has some kind of freezing. If you want to do QA on what you are releasing, then for a distribution a 2–3 month freeze is unavoidable. Some distributions might, for a few selected packages, trade stability for recent versions of software they can announce on their boxes, but other than that, distributions always ship software that is ~3 months old when the distribution releases.

    And the “with our manpower” point is kinda strange for a project with ~1000 developers…

  25. Adrian, there is nothing wrong with a freeze per se. I’ve never stated that. It’s just questionable whether it’s really worth freezing every single package in a distribution consisting of some thousand very different packages of differing importance. In this context I’d like to refer to what ciol said: he does not understand why the kernel is handled the same way as, for example, a game. That is indeed a valid question, and a point my suggestion addresses.

    About the 1000 developers: uh, d’oh. Your calculation does not work. Not every developer is able to do everything in the project. Not every developer wants to do everything in the project. You might take into account that Debian consists of software in totally different languages (C, Perl, PHP, Python, Haskell, C++, Java, to name just a few), totally different components (the debian-installer vs. e.g. a web application), and that not everybody is even able to test every piece of software from a _user_ point of view (e.g. scientific applications). Tracking security issues is another point in itself. Not everybody is able to work in the security team. How big is our security team, which basically has to provide the support for the released distributions? Did you consider that? There is also unfinished work in our basic tools, because the teams doing the work are overloaded. You might want to look at the “Request for Help” bugs in our bug tracker; they are indicators of missing manpower in critical positions. Apart from that, the number you state includes inactive developers, whether temporarily or more or less permanently inactive.
    So no, IMHO the manpower argument is in no way strange.

  26. It’s an issue of motivation rather than a lack of manpower.
    The aim of Debian’s releases is not clear enough for everyone to get deeply involved in fixing RC bugs.
    Debian needs to be transcendent in order to have more contributors.

  27. Most of the projects I’m aware of with big developer communities have some sort of freeze or branching procedure for release – without it you tend to end up with a lot of people sitting twiddling their thumbs since there’s nothing going on in areas they find relevant during freezes.

  28. About the 1000 developers: this is a huge number. Even if only a small fraction of devs is reachable, there is lots of manpower available. And we are a community, or at least a kind of one. And I expect a fellow DD to help with quality analysis a little bit; thus I suggested the “distributed RM workload” mechanism. Randomization and a quorum would help overcome the problem of inactive DDs.

    @Patrick: of course you cannot offer competence in every area. But this is not a problem: if one does not feel competent to answer the approve/veto question, then she or he could choose to delegate to another (random) DD.

    BTW, I am still a fan of the old release process. The last years with “testing” have IMHO made it obvious that it does not reduce RM workload or avoid RM burnout, etc.

    Testing is bad. It does not improve software quality; it only adds a time delay between the package upload and the start of the real tests. In my experience, most ugly/embarrassing bugs are not discovered in unstable; they settle down into testing, and then you cannot fix them quickly (without using critical severity, ugh).

    And I think that a complete freeze is a good thing, from the psychological side. As others pointed out: out of sight, out of mind. ATM the “release stuff” is considered an SEP by many developers. Playing in one’s own sandbox (i.e. unstable packages) is sooo much easier and sooo much fun.
    Without a little pressure people won’t care much about the release process; it happens “somewhere else”. But with a complete freeze, one has to either wait and swear, or look around and help others speed up the release.

    And if you don’t feel competent to help then go and f learn how to do things. That’s also a question of pride. We are developers and not some lame swanks.

  29. @Patrick:

    Regarding freezing:

    The basic idea is to have all software frozen and to shake out as many bugs as possible during the freeze. When you talked about problems “like a mostly broken rxvt-unicode, a buggy manpage for a core tool which makes it basically unreadable”, then these are problems that should actually have been found and fixed across the whole distribution during a freeze.

    Why doesn’t this work (anymore)?
    – Especially since the invention of testing, everyone focuses on the RC bug metric. But many bugs that are annoying for users are not RC.
    – Debian maintainership involves a lot of ownership and little responsibility. There are perfectly maintained packages, but also packages where the maintainer at most uploads new versions and fixes RC bugs (if at all) but doesn’t handle normal bugs properly. The many NMUs paper over this problem; everyone doing an NMU should ask himself “Is this maintainer maintaining his packages properly?”, and in many cases the answer would be “No”, and the packages should go to someone else.

    Regarding how much additional work more frequent releases would bring for the security team:

    Things like the “tracking security issues” you mentioned stay the same amount of work no matter how many releases are maintained at the same time.

    Having to maintain more releases takes more time, but the hardest part of the work depends on how far back fixes have to be backported.

    Worst case for security support as of Etch:
    2 stable releases plus testing maintained at the same time, the oldest one consisting of 3-year-old software.

    A suggestion modelled after what other distributions do, also taking your “don’t want to upgrade their systems every 6 months” into account:
    – a release every 6 months
    – releases are supported for 1 year
    – every third release is a long-term release supported for 2 years

    Worst case for security support that happens here:
    3 stable releases maintained at the same time, the oldest one consisting of software that is < 2.5 years old.

    So in the worst case there’s the same number of releases, but the oldest software is more than half a year younger.
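    [Editor’s note: the worst-case claim in the cadence proposed above can be checked with a small simulation; the function name and month-based bookkeeping are illustrative. With a release every 6 months, 12 months of support, and every third release an LTS supported for 24 months, at most 3 releases are ever in support at once.]

```python
def supported_releases(month):
    """Return the indices of releases still supported at a given month.

    Release i ships at month i*6; every third release (i % 3 == 0) is an
    LTS supported for 24 months, the others are supported for 12 months.
    """
    alive = []
    for i in range(month // 6 + 1):
        released = i * 6
        support = 24 if i % 3 == 0 else 12
        if released <= month < released + support:
            alive.append(i)
    return alive

# Scan ten years of the schedule for the worst-case overlap.
print(max(len(supported_releases(m)) for m in range(120)))  # 3
```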

  30. Zomb,

    I was aiming at the support of distributions _after_ a release when I originally brought up the manpower argument. I said that some Debian users want Debian to be a not-so-fast-moving target, and that they don’t want to have to dist-upgrade their critical systems every 12 months, so a we-release-every-6-months goal is not realistic, because it would result in about 3 releases that would need to be supported simultaneously. That’s all I’ve said.

    And I disagree with you that missing knowledge is no problem. Obviously, the only option you have is to _ask_ someone to step up and help you. You cannot delegate, because after all this is a project of volunteers.

    I’m not sure whether I can agree with your opinion that testing is bad. If I understand you correctly, you think that testing gets no testing, but that’s actually not true, because every package in testing has had some testing period in unstable. And I believe that quite a lot of people do use unstable (I’m in no way representative, but for example I do). Does this improve the quality of software ending up in testing? Yes, in the sense that totally broken software does not end up in testing. On the other hand, there are reasons why testing might feel more broken than unstable, e.g. because soft dependencies (Recommends + Suggests) are held back from migration to testing, and therefore some functionality is missing for testing users.

    Btw. this sentence:
    “And if you don’t feel competent to help then go and f learn how to do things. That’s also a question of pride. We are developers and not some lame swanks.”

    is not needed in such a discussion. It’s not necessary to upset other people by insulting them the way this sentence does. You really need to understand that developers are not gods. After all, they are people with a dedication, with an ambition to bring an open source project forward, and who _have fun_ with it. You cannot force people to do things they are not interested in, and not everybody is strong at every task. It’s absolutely okay if there are developers who care about some divisions of the project (e.g. translations!) instead of others (e.g. coding, security, etc.).

  31. @ Adrian
    “The basic idea is to have all software frozen and to shake out as many bugs as possible during the freeze. When you talked about problems (…), then these are problems that should actually have been found and fixed across the whole distribution during a freeze.”

    I understand and support the basic idea behind freezes. But actually the freeze as it is has discouraging effects. Some people feel that they have to beg to get enhancements into the distribution. I hope that this does not stop anyone from fixing RC bugs in other packages, but I could imagine that it does. Additionally, there is a difference between a freeze in a single project (e.g. GNOME or the kernel) and in a distribution, deriving from the fact that there are several hundred totally different things to maintain, enhance and support, as opposed to the single project where it’s just the project and possibly its components. So my opinion is that the freeze should honor this fact.

    “- Especially after the invention of testing everyone focusses on the RC bugs metric. But many bugs that are annoying for users are not RC.”

    I can’t tell how it was before testing was invented, but yes, I think that this impression (“focus on RC”) is true. And actually that’s related to what I’ve written above: you get the feeling that you need to beg to get enhancements (not RC fixes) into testing. That’s not because the RMs are bad people, but because they are overloaded with a lot of unblock requests that derive from the fact that developers actually try to increase the release quality of Debian with uploads.

    “- Debian maintainership has much of ownership and few of responsibility. There are perfectly maintained packages, but also packages where the maintainer at most uploads new versions and fixes RC bugs (if at all) but doesn’t handle normal bugs properly.”

    *sigh* Yeah, I agree with this. Personally I try to handle my packages like pets, i.e. with good care and a lot of love, but I know that there are quite a lot of cases where it isn’t handled this way.
    But I have no real idea how this could be improved, because you cannot easily decide from the outside whether the maintainer lacks responsibility or is just overloaded. Taking packages away from people could make them angry enough to give up all the work they dedicate to Debian. Certainly this is something to avoid.

    “Things like the “Tracking security issues is another point for itself.” you mentioned stay the same work no matter how many releases are maintained at the same time.”

    I put it badly. By “tracking” I didn’t mean just “get knowledge of a security issue, find out which versions it affects, submit it to a database and check whether it gets fixed by a maintainer upload”, but also fixing it; let’s say the work from the detection to the fix. The last part is often the effort multiplied by the number of releases.

    “So in the worst case there’s same number of releases, but the oldest software is more than half a year younger.”

    That’s not exactly true. You forget that testing and unstable are actually where the development takes place; therefore they are moving targets. While it’s likely that the migrations from unstable introduce new security issues, they also close security issues that the maintainer or even upstream takes care of (i.e. not the security team). So after all I tend to believe that the effort needed from a security team is much smaller there than for supporting a stable release.

    Additionally: I don’t see the point in “the software is more than half a year younger”. What does that say? A distribution contains software that doesn’t even change in a year, and also the opposite type of software, which changes fast (e.g. a release at least every 6 months). Also, new releases usually come out of the same codebase, so the effort to patch major release 1.x or major release 2.x is similar, isn’t it?

  32. @Patrick:

    Regarding your ‘Additional: I don’t see the point in “the software is more than half a year younger”. What does that say?’:

    As I already said:
    “the hardest part of the work depends on how far back fixes have to be backported”

    In both cases there are 3 releases to support in the worst case, and usually the challenging part is to fix the oldest one:

    When you have a fix for a latest upstream release of some software and have to fix the version in your distribution, this task becomes harder and harder the older your distribution becomes, since the upstream codebase diverges from the code you ship.

    And I am speaking as someone who maintains a kernel tree and who provided repositories containing maintained backports of Debian packages years before backports.org existed.

  33. @Patrick: well, I disagree. First, if the described decision process were established, then delegating a decision task would follow the same rules.

    Second: the testing concept feels like an afterthought because it IS an afterthought. IMHO it did not fit into the release strategy used by Debian and it still doesn’t; the only reason for its existence is the “demands” of the I-am-too-afraid-or-too-lame-to-deal-with-unstable kind of users.

    And I also dislike this “I am a volunteer so I have only rights and no duties, HA, HA, HA” mentality. That's ruthless maximum-level-of-fun thinking, but in release times we need some discipline. If you don't know what this word means, let me explain: if you joined the project, you represent it to the public, so please care about its prosperity.
    Or maybe this way: a project is not something where you can just appear when you are bored, do some funny stuff, then disappear and let it rot forever.

    And I am pretty aware of my argumentation style with stereotypes, but this whole topic became a stereotype.

  34. Adrian,

    I concede that you might have more experience with backporting fixes than I have. As a maintainer of just 9 packages (my motto is to keep my commitment no higher than the time I have to spend), I have only gathered experience with one PHP project, which tends to have security problems. Anyway, I had to backport some patches to the oldstable version of this project. The diff between the sarge and the unstable version at this time (the package has no version in etch) was rather large, because this package changed a lot. So to come to the point: even with the large difference between these two versions (1 year of development and *a lot* of changes), and even though backporting the changes to the oldstable version was no fun at all (mostly because I had to get an appropriate test environment running with apache, php etc., which is no fun at all in sarge), I'd say that backporting the patches to the old codebase wasn't much harder than implementing them in the unstable version.
    (okay, but I must say the patch only existed for the current development version, so I actually had to backport the change to the Debian unstable version, which was based on the latest stable release of the upstream author)

    However, to bring my admission from above back to mind: this is limited experience, and my impression could therefore be wrong.

    Zomb,

    there is not much to comment on in “I disagree” and your Testing-has-been-made-for-Ubuntu-lamers flame. No arguments to argue about.

    About your comments about rights vs. duties: you get me wrong if you think that I find this kind of thinking in any way desirable. Everybody should be required to do what he committed to do. But people have committed to dedicate their time up to a given level (committed to do something), and you ask them to exceed this level (to do something they did not commit to). It is as if you required people who joined the project to translate it into various languages to learn C, because you think it is their duty to help with dpkg development, which is currently in need of help [1] (or any other job that needs some love). People who are simply not interested in development exist, and they may well do a valuable job anyway. But if you say “we have 1000 so-called developers and they have to help everywhere, and if they can't, they need to learn how”, then you disregard that fact. And that is unacceptable, no matter how rude a tone you use to underline your point.

    [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=282283

  35. Adrian,

    so what I read from your argumentation is that you think that releasing often would indeed make less work, because the additional effort for backporting packages compensates for the support effort, and because with the current situation we also have 3 releases to support. (Although I still find this last point arguable, because I neither think that Testing receives the same support love as a stable release, nor that it needs this love, as discussed above.)

    Obviously, when you say “But the few where it's not easy are the problematic ones.”, you are right. But it's questionable whether the first group or the second makes up the majority, isn't it?

  36. @Patrick:

    “releasing often would indeed make less work” might not be a good general statement. My point was that even if Debian released every 6 months, that would not necessarily result in significantly more work for the security team than the current status quo.

    Can we agree that my usage of “significantly” here covers differences in software age and the fact that security support for testing might be easier than for other releases?

    Regarding how many packages are problematic:

    With 200 DSAs per year, even a low percentage of problematic packages can mean quite a lot of hard work.

    Even more if packages with many DSAs are problematic ones.

    I just counted, for Mozilla in its different incarnations (mozilla*/ice*/xulrunner), 20 DSAs in 2007. No matter how many packages are not problematic – 10% of all DSAs in 2007 were for Mozilla.

  37. Adrian,

    I think we agree on various points, and the one you asked about is just one of them. Indeed, I would find a move to such a release policy (a long-term stable release and a snapshot release every half a year) a good idea. I'm just not totally convinced that we can reach this goal easily, and I think that the support efforts are one of many possible blockers. Please read ‘are’ as in ‘could potentially be’.

    The number of DSAs issued for Mozilla is indeed a good and interesting point.

  38. I believe the Mozilla project had a sort of karma assignment system related to nightly autobuilds, though I haven't looked lately. If a committer checked in something that broke a build, they were assigned a certain amount of “blame”, and once blame reached a certain level, commit privileges were presumably curtailed. This sounds like a very effective form of social pressure tightly coupled with a technical solution that can automatically put on the brakes before things get too out of hand. I don't know how well it works in practice.
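    The blame scheme described above is easy to sketch. The following is a minimal illustration only, with invented class names, thresholds, and a decay rule; it is not Mozilla's actual tinderbox policy:

    ```python
    # Hypothetical sketch of a "blame" karma system: breaking a build
    # accrues blame, a clean build slowly earns trust back, and commit
    # access is curtailed once accumulated blame crosses a threshold.
    # All names and numbers here are invented for illustration.

    class BlameTracker:
        BLAME_PER_BREAKAGE = 1   # blame gained per broken build
        SUSPEND_THRESHOLD = 3    # blame level at which commits are blocked

        def __init__(self):
            self.blame = {}      # committer -> accumulated blame

        def record_build(self, committer, build_ok):
            """Update a committer's blame after a nightly build."""
            current = self.blame.get(committer, 0)
            if build_ok:
                # A clean build earns back one point of trust.
                self.blame[committer] = max(0, current - 1)
            else:
                self.blame[committer] = current + self.BLAME_PER_BREAKAGE

        def can_commit(self, committer):
            """Privileges are curtailed once blame reaches the threshold."""
            return self.blame.get(committer, 0) < self.SUSPEND_THRESHOLD


    tracker = BlameTracker()
    for build_ok in (False, False, False):   # three broken nightly builds
        tracker.record_build("alice", build_ok)
    print(tracker.can_commit("alice"))       # blame hit the threshold
    tracker.record_build("alice", True)      # one clean build decays blame
    print(tracker.can_commit("alice"))
    ```

    The point of the sketch is the coupling the comment describes: the social signal (accumulated “blame”) automatically drives the technical brake (the commit-permission check), with no release-team decision needed per incident.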

  39. In my opinion, it was just wrong to freeze Lenny that early. With ~400 RC bugs still open and only two months left, it was crystal clear that the September release deadline hadn't the slightest chance of being met. Etch was frozen when the RC bug count was well below 200, and it still took four months to release. The right thing to do would have been to delay the freeze. Instead, the release team chose to ignore reality and pretend that the release process was running on schedule, when everybody knew it wasn't.

  40. Pingback: Codeine.

Comments are closed.