State of hardware support in Linux

I’m getting increasingly annoyed by the state of some aspects of hardware support in Linux.

My laptop (an old Dell Latitude D610, Intel-based with ATI graphics) used to suspend/resume correctly with Linux 2.6.24 (i.e., a >95% success rate). Later changes made the success rate drop significantly, and added a problem with kacpid taking 100% CPU because of an interrupt storm. And now, with 2.6.28-rc6, it’s completely broken (partially documented in bug 11563, but I admit I gave up on this bug, because I’m going to change my laptop soon).

My desktop used to wake-on-LAN correctly with 2.6.24 (though it required some hacks, because it wouldn’t wake up if the NIC was DOWNed before shutdown), but changes in the r8169 driver broke it (documented in bug 9512).
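
For reference, the hack in question usually amounts to re-arming WoL from a shutdown script. This is only a sketch: the interface name and the `g` (magic packet) flag below are assumptions for illustration, not taken from the bug report.

```shell
# Re-enable magic-packet wake just before poweroff ("eth0" and the
# "g" flag are illustrative; check `ethtool eth0` output for what
# the NIC actually supports).
ethtool -s eth0 wol g

# Some chips will not wake if the interface is DOWN at shutdown,
# hence the extra hack of bringing the link back up first:
ip link set eth0 up
```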

As a result, I’m forced to run old kernel versions on the two systems I have at home. I can understand that those issues aren’t considered high priority (not everybody uses WoL), but the fact that, in both cases, they are regressions worries me a bit.

How many things are routinely broken during each kernel release cycle? Hardware support is difficult, of course, but are we really doing everything we could to make it suck less? Some things I really would like to see:

  • Distro packages for development kernel versions. For Debian, there’s this repository, but it doesn’t always contain the latest kernel versions. Maybe that’s something that should be moved to an umbrella project and generalized to all distributions, to provide beta-testers with easy-to-install packages. git bisect isn’t that user-friendly.
  • Funding for driver developers to buy specific hardware. Many drivers cover a wide range of chips/cards, and developers often only have access to a small subset of them, making it difficult to debug issues specific to one chip.
  • Better/updated bugzilla. Many bug logs are totally confusing and cover several different issues. They could maybe benefit from new or specifically developed bugzilla features (and more bug triagers, of course).

14 thoughts on “State of hardware support in Linux”

  1. About the Dell D610: My roommate has a D610 with a broken screen hinge that he could probably be convinced to donate to someone who would do Linux testing on it.

    Ubuntu (and presumably other distros) have documented hardware testing teams on their wiki. Perhaps we all (the Linux kernel user community) can find people who have more spare time than I do personally and send our hardware to those friendly people, and they can excitedly do a bunch of testing? (-: (I’m picturing people who were like me in high school.)

    Or alternately, if this testing were easier, I could probably spend an hour a week on it. That’s probably a more scalable strategy.

  2. Here’s what we really need: QA complainers. Seriously. We need people who will install each preview release and then bitch and moan about every single regression until it gets fixed. These would be mostly low-hanging fruit: minor annoyances. These are the sort of things that drive people away from Linux, but experienced users might have grown to simply accept them.

    These QA complainers could also serve as managers for groups of hardware testers, which would enable them to complain appropriately about hardware regressions on a wide variety of hardware. As part of this, the distros could produce hardware QA spins, which would be specifically designed to put the hardware through its paces. These spins would have to run off USB sticks and be easy to update daily.

  3. I’ve come to realize that if I don’t try pre-releases myself and file bugs, I’ll be the one that gets screwed by them. ‘course, I get screwed by them anyways.. but I expect updates within the next week that make 8.10 really usable for me so I don’t have to reboot into 8.04.1 all the time..

  4. At least for Ubuntu one can download a .deb (easy-to-install as usual) of 2.6.28-rc6 right now:
    It might work on Debian also.

    What would be nice is an automatic (as far as possible) build of the newest kernel rc dropped onto a live CD. The CD could be like the Ubuntu CD, but without monsters like oo.o and Evolution, which are not useful for hardware-support testing.

  5. I agree. I don’t know much about the development side of things, but as an end user the regressions are an annoyance and a turn-off for Ubuntu’s target audience, that is, ‘regular people’ as opposed to ‘Linux geeks’.

    I’ve been running Ubuntu on and off since Breezy; I used Dapper and Feisty full time for a while but always wound up going back to Windows. Whether it’s agonizingly slow Firefox due to IPv6 being enabled by default, or the system beep beeping for stupid reasons in Intrepid that it never did in Hardy, something is always off. Compiz is another thing that everyone raves about but just doesn’t work right for me; from time to time it blacks out Firefox for chits and giggles, and lots of times a window’s title bar is messed up for seemingly no reason. Firefox still connects to servers and loads pages noticeably slower than on Windows. Why? I don’t know, but it’s annoying. When people spend lots of time in their browser, which the average user does, it should be as fast and usable as possible.

    Anyway, your post wasn’t about Compiz or Firefox, but my point is that with every release there are some nice, although small, aesthetic changes from the end user’s perspective, but often other things wind up breaking for seemingly no reason at all. Linux is the only OS I know that breaks a considerable number of important things from upgrade to upgrade and really doesn’t worry about it.

  6. Actually, the USB stick install sounds like a great idea: for instance /dev/sdc1,2,3 (root, swap, home) on, let’s say, an 8 GB stick, with a standard UNetbootin configured for making a stick like that. Then I could test each one of my machines, refreshing the stick between them, for regression testing.

    We could all make the USB stick with a standard layout: a FAT32 boot image and the rest ext3 or something, and it would not touch my hard drive. Or make the live ISO expect the USB stick for root, swap and home, and install there if found.

    I will not play with my production machine. I cannot take the risk.
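
    The layout described could look roughly like this. A sketch only: the sizes are arbitrary and /dev/sdX is a placeholder for the actual stick — these commands are destructive and must never be pointed at a hard drive.

    ```shell
    # Partition an ~8 GB stick: FAT32 boot, swap, ext3 root, ext3 home.
    # /dev/sdX is a placeholder -- double-check the device name first!
    parted -s /dev/sdX mklabel msdos \
        mkpart primary fat32 1MiB 513MiB \
        mkpart primary linux-swap 513MiB 1537MiB \
        mkpart primary ext2 1537MiB 5GiB \
        mkpart primary ext2 5GiB 100%
    mkfs.vfat /dev/sdX1     # boot image goes here
    mkswap    /dev/sdX2
    mkfs.ext3 /dev/sdX3     # root
    mkfs.ext3 /dev/sdX4     # home
    ```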

  7. Would it be possible to write regression tests, similar to what you find in KOffice’s ODT system? I know we are dealing with hardware, but a VM (or some sort of test farm) would probably do the trick. Also, it’s possible that the hardware was actually broken by the common subsystems and not by the driver itself.

  8. Mandriva provides unmodified builds of the latest upstream kernel releases (including each rc as they come out) in the ‘kernel-linus’ package in Cooker. In stable releases, this package is updated with the latest stable release kernel series in the /contrib/backports repository.

  9. Hmm… Part of the problem is that certain hardware used to work because of a hack. When a kernel dev tries to refactor the code into something maintainable and sane, things break. Also, in some cases (not to point fingers), the recent Ubuntu kernels aren’t compiled with all the necessary flags to keep hardware that used to work working. I’m not sure if that is because not enough devs have the same hardware, or proper code overriding hacks that allowed certain hardware to work, or a simple omission/error on the part of the Ubuntu kernel maintainers. (Again, not to say the Ubuntu kernel maintainers aren’t doing a fantastic job. But to err is human.)

  10. I like the USB Image idea – I don’t/can’t risk my machine and want to test the latest build. I also have no desire to waste CDs (and the updatability of USB drives is ideal)

    The test suite is a great idea — a simple next step might even be to have a checklist of things to validate. (Ubuntu’s hardware test is incomplete: no 3D, no sleep/hibernate/etc., no printing validation, no left/right-channel audio test, no screen-resolution changes, no external monitors, etc.)

    More information on what hardware is out there and being used. The hardware database (for Ubuntu) doesn’t work (incomplete? in development? who knows). Intel is focused on current hardware, but my older hardware is feature-incomplete (it was working, then got dropped in the latest driver). I suspect it is still commonly used, so there would be value in fixing it.

    Infrastructure to pay for fixes. I like free as in freedom but dammit I want my hardware to work :-) , so why can’t I pay for it? Why can’t I rally all the other hardware users so we can pool our funds to pay for a fix? (a dollar each from 8000 users, surely that would get me a few solid weeks of developer time to fix this feature so I could finally drop Windows)

  11. I have to agree with this post. To take just the most recent example, I was visiting my grandparents last week and had some trouble connecting to their wireless network: about 50% of the time my card would just fail to talk to it for no obvious reason (it was as if the DHCP packets were just disappearing). But if it connected it worked, and it seemed like maybe the problem was related to attempting to hibernate the computer (always dangerous on Linux), so I just quit hibernating when I was down there. Then, in the middle of the week, a point release of the Linux kernel and my wireless drivers showed up in Debian. Against my better judgement, I decided to install it and see if it made matters better — and poof! my wireless was totally broken instead of just broken half the time. Now I’m back home and it still works fine with my home wireless router.

    When something like this happens and I file a bug, the usual result is that it’s ignored for three years, then I get a message saying that the kernel team is closing out old bugs and they want to know if I still have this problem. Well, after three years I’m often not using the hardware in question, so I can’t provide any useful feedback and they close the bug untested. Closing the bug at that point makes sense, but I don’t think I’ve ever had someone follow up on one of my kernel bugs while I could have actually helped them.
