On Fri, Nov 12, 1999 at 11:02:25PM -0500, Brandon Mitchell wrote:
> On Sat, 13 Nov 1999, Chris Leishman wrote:
> > On Fri, Nov 12, 1999 at 06:40:18PM -0500, Adam Di Carlo wrote:

<snip>

> > Let me prefix this by saying I would have said this had Adam not
> > done so.
>
> > As for the package pool - the problem I see here is that it puts a
> > lot of strain onto the release team - they have to work out which
> > packages they can pull down (and which versions, to maintain
> > stability).
>
> I'd propose two things: first, the developer says they want their
> package considered for the current proposed dist, and second, the
> transfer from the pool to proposed is done using the bug tracking
> system and a delay. The delay is to give time for old bugs to be
> closed and new bugs to be opened - say a week. Given no open bugs and
> a developer desiring to move the package to proposed, then do it
> automatically.

Ok...this sounds a little better - but I still think it has some flaws.

1) We are relying a lot on developers to know when is a good time to
move their packages to the "proposed" dist - which is a lot to ask,
since most of us won't be running the proposed dist ourselves to test
on. I would think that the majority of developers have only one machine
to develop on, which naturally runs unstable. To build and test packages
on that platform and then hope that they work on the "proposed"
distribution would be error prone.

From my observation, most developers run the distribution they are
working on - they keep their machine running unstable until it freezes,
then keep running frozen until it stabilises. At that point everyone
does the final update, says "wow - it's a new stable distribution", and
then does apt-get dist-upgrade to the new unstable. I admit this is an
unproven generalisation, but I don't think it's too far from the mark.

2) Bug reports.
A delay is all very well to ensure there are no open bugs, but as I have
just stated, this in no way ensures that the package won't have bugs in
the "proposed" distribution.

3) This idea makes it extremely hard to do "base" upgrades (say a new
libc). At what point do you move the base system to "proposed", since it
will probably break everything in there? And we can't just leave it till
later on, as no new packages will be able to go into "proposed" while
the new base they need isn't there.

4) You still have to freeze proposed at some point. Debian really does
need a static, tested "stable" distribution (not only for CDs, but so
people know exactly what they will get when they install a stable
distribution). In fact, most of the servers I look after reference
Debian distributions by their codename (hamm, slink, potato) - so that I
know exactly which code base I'll be pulling from, no matter what
happens to their stability state.

5) Are there going to be enough people testing "proposed"?

<snip>

> People still download what they want; actually there would then be 3
> levels instead of 2: stable, proposed, and pool. Any bugs in proposed
> are quickly backed out of it. The pool is the developer and tester
> area. Proposed is another tester and update-happy user area. That
> leaves stable where it always was - boring and proven, the way it
> should be.

Hmm...I still don't like the idea of a perpetual unstable "pool". At the
moment developers basically work to stabilise the distribution _they are
running_ (see the earlier point). We find a bug and we fix it because we
know that will make the system more stable. But I really don't want to
live with the thought that at any point in time the system could break
horribly under me (see the next point before telling me that unstable is
meant to be like that). As it is, I'm not a developer who updates to the
new unstable immediately after frozen goes stable.
I don't have the time or resources to deal with the major changes that
usually go on at the start of the unstable development cycle, so I wait
until it has settled down a bit first - then I upgrade. This fits into
my staged development model, which gives people a definite idea of when
things will be at certain points of the cycle. I am happy to move to a
system after the base has "frozen" but while the libraries, etc. are
still under active development.

I think one thing people are missing is that a lot of hard-to-resolve RC
bugs arise not because an individual package has internal problems, but
because of the way packages fit in with the rest of the distribution.
The model I propose aims to alleviate this by giving each stage of the
development cycle an unchanging base upon which to build - reducing the
chance of inter-operation problems.

<snip>

> Goal setting results in longer freezes and periods between freezes.
> Playing catch-up and the desire to see something cool motivates most
> people. Goals can still occur, they just can't hold everything up.
> Across-the-board changes can still occur, they just move to a
> different proposed area.

I really disagree with this. Goal setting is _not_ going to result in
longer freezes. It is the lack, or unattainability, of a goal that
causes problems. This _is the point_ of my model - to make the goals
more obvious and attainable. It's a fairly classic point from motivation
studies: setting yourself one huge goal is far less likely to succeed
than achieving several smaller goals which have the same end result.

<snip>

> I'm not discouraging your ideas, just putting more out so that the
> best one wins. I put this one to rest around the time that Richard
> came on, in hopes that he could fix things, but - not to blame him
> (it's the system, man) - things seem to be the same.

Fair enough :)

Regards,

Chris

PS. Sorry for the long email!
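PPS. To make the quoted promotion rule concrete, here is a rough sketch
of how I read it. Everything here is my own illustration - the function,
its parameters, and the bug-count input are hypothetical, not any real
archive or BTS tooling:

```python
# Hypothetical sketch of Brandon's pool -> proposed promotion rule:
# a package moves only if (a) the developer has opted in, (b) at least
# a week has passed since the upload (the bug-report window), and
# (c) the bug tracking system shows no open bugs against it.
from datetime import datetime, timedelta

PROMOTION_DELAY = timedelta(days=7)

def ready_for_proposed(requested_by_developer, upload_time,
                       open_bug_count, now=None):
    """Return True if the package may be moved from the pool to proposed."""
    now = now or datetime.utcnow()
    if not requested_by_developer:
        return False                      # developer must opt in first
    if now - upload_time < PROMOTION_DELAY:
        return False                      # still inside the one-week window
    return open_bug_count == 0            # no open bugs in the BTS

# Uploaded 8 days ago, developer opted in, no open bugs -> promoted.
print(ready_for_proposed(True, datetime.utcnow() - timedelta(days=8), 0))
```

Note that this sketch is exactly where my objections 1) and 2) bite: the
"no open bugs" input only reflects bugs people have actually filed
against unstable, not how the package behaves in proposed.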
--
----------------------------------------------------------------------
          Linux, because I'd like to *get there* today
----------------------------------------------------------------------
Reply with subject 'request key' for GPG public key.  KeyID 0xB4E24219