Mon Jun 22 21:37:55 CEST 2009
A small challenge
So here's one of those small things that go beyond the limits of my patience, but might be very easy for others:
Bug 267267, the VirtualBox stabilization, is on hold for now. The trigger is a pre-stripped .so. VBox uses kBuild, which is severely underdocumented.
I haven't been able to find a way to sanely disable that behaviour. If you have the skills - show the world how awesome you are by writing a patch for any VBox version between 2.1.4 and 2.2.4, attach it to the bug and enjoy that smug feeling of superiority.
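For the curious, here's a minimal sketch (Python, assuming file(1) is installed; the kBuild output path is an assumption - point it at your actual build tree) of how one might spot the pre-stripped objects that trip the QA check:

    #!/usr/bin/env python3
    # Minimal sketch: list shared objects that are already stripped.
    # "out/linux.amd64/release" is an assumed kBuild output path -
    # adjust it to wherever your VBox build actually lands.
    import os
    import subprocess

    def is_stripped(path):
        # file(1) reports ELF objects as "..., stripped" / "..., not stripped"
        out = subprocess.check_output(["file", path]).decode()
        return "ELF" in out and "not stripped" not in out

    for root, dirs, files in os.walk("out/linux.amd64/release"):
        for name in files:
            if name.endswith(".so") and is_stripped(os.path.join(root, name)):
                print("pre-stripped:", os.path.join(root, name))

That won't fix kBuild's stripping, but it makes it easy to verify whether a patch actually worked.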
If you don't have the skills, don't worry. You'll get there one day if you keep on working on it. Find a simpler bug and try to fix it. Try to fix one bug every day. After 5 years you'll have fixed over a thousand bugs (hey, some days off in between are acceptable) and learned a lot!
Mon Jun 22 20:58:41 CEST 2009
Working on stuff
Over the last few weeks I've picked up a few packages, mostly because someone pointed me at them and there were lots of open bugs.
I've "inherited" virtualbox like that (although Jokey and X-Drum are still doing an awesome job whenever they have the time),
and I've mostly taken over xen for now (even though I don't use it at the moment and test it in *ahem* VirtualBox).
Samba is another one of those packages that many people use but few devs maintain. It's a lot harder to test, though, so I mostly leave it alone for now.
So I'm beginning to wonder - what packages are "orphaned", where have users provided and tested patches but no dev is around to commit them?
How can we improve our response time to users so that they are happy and keep helping us, and how do we notice that a package doesn't get the love it deserves?
My current mechanism for that is quite crude and biased - if I notice enough people complaining I have a look, and if the build system doesn't make me want to get drunk I start playing around with it until a few bugs are closed.
Maybe we need a "Package Fix" team of users and devs?
The users could cooperate on collecting bugs for a "topic" like samba and test them, and the devs could give assistance and then commit the fixes. It would most likely be quite fun to cooperate like this, and it might be a theme for a bugday - but who has the time for that?
If you have an open bug with a fix that hasn't been committed, feel free to mail me or drop by in #gentoo-bugs on irc.freenode.net. I can't promise much - my time is limited and sometimes things just don't work out as expected - but I still try. And maybe it gets a few old bugs killed - that would be worth it :)
Thu Jun 18 20:33:03 CEST 2009
A Manifesto
Following everyone else writing a manifesto for the current Council elections, I've finally found the time to write one too.
You can find it here: http://dev.gentoo.org/~patrick/Manifesto.html
I hope that motivates more people to vote. As long as you vote it's good ... come on, it takes all of 5 minutes to vote (and if you really don't care leave the candidates in the random order given by the votify script).
Just to repeat myself, http://dev.gentoo.org/~patrick/Manifesto.html
Mon Jun 1 13:08:18 CEST 2009
The problem of providing source code archives
Here's something that has been bugging me quite a bit lately:
Many projects are unable to simply provide a sane stable download URL for their releases.
In theory it's quite easy. For every release you make a tarball (your version control system can do that for you) and put it in a directory (say, /download) on your webserver.
When I want to download a release I just look at the download link on the homepage and either get a direct link to the latest or a directory view of all available items.
Sounds easy, eh?
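For illustration, a sketch of that "easy" step with git (the tag, project name and output path are made-up examples; assumes a git new enough to know --format=tar.gz):

    #!/usr/bin/env python3
    # Minimal sketch: cut a release tarball straight from a git tag.
    # "project", "v1.0" and "download/" are hypothetical examples.
    import subprocess

    subprocess.check_call([
        "git", "archive",
        "--format=tar.gz",          # compressed tarball in one step
        "--prefix=project-1.0/",    # unpack into a versioned directory
        "-o", "download/project-1.0.tar.gz",
        "v1.0",                     # the release tag
    ])

One command, one stable file, done.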
The problem with that naive approach is that somehow people do not understand HTTP anymore. Instead of providing a simple list, it's a web 2.0 bouncy rotating dynamic swirly thing that provides me with unclickable links (thank you, GitHub!).
Or it's a service like SourceForge providing mirroring. So the "simple" URI gets redirected to the "best" server - a request for the canonical URI gets redirected (HTTP 302), then redirected again to the mirror (HTTP 301), then finally gets an OK (HTTP 200) ... for an error page. So if there's any change in the mirror layout or file availability, there is NO automated way to detect it, because the return code is WRONG.
Which means that I have to manually check every file that gets downloaded automatically to see if it is an HTML error page.
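Here's a sketch of that manual check, automated (the URL is a made-up example; urlopen follows the 301/302 hops silently, which is exactly the problem):

    #!/usr/bin/env python3
    # Minimal sketch: detect "200 OK" responses that are really HTML
    # error pages. The download URL is a hypothetical example.
    import urllib.request

    url = "http://downloads.example.org/project/project-1.0.tar.gz"
    with urllib.request.urlopen(url) as resp:
        head = resp.read(512)
        ctype = resp.headers.get("Content-Type", "")
        # A real tarball is neither served as text/html nor starts with a tag
        if "text/html" in ctype or head.lstrip().startswith(b"<"):
            print("HTTP", resp.status, "but the body looks like an error page")
        else:
            print("HTTP", resp.status, "-", ctype, "- looks like an archive")

A correct 404 would at least raise an error here; it's the lying 200 that forces this kind of content sniffing.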
Then there are funny things like BerliOS adding a random byte at the end of every archive so that checksums fail (but they stopped that quite a while ago).
And then of course there is the upstream that refuses to provide tarballs because you can do them yourself - which means that lots of time gets wasted trying to package things in a sane way. (Added bonus: their svn checkout has all libraries bundled, so you have to find a way of ripping them out. Yay!)
Is it really so hard to configure your webserver to return correct HTTP status codes and just provide simple downloadable archives with a stable URL? Or is that a black art that takes years to learn?