
Re: musings on "the release thing"



On Fri, May 29, 1998 at 02:35:30AM -0600, Bdale Garbee wrote:
> In article <19980529083228.A7451@molec3.dfis.ull.es> you wrote:
> 
> : IMHO, that method would be easier to follow if we divide "main" in "core"
> : and "non-core".
> 
> So, how does that differ from the current system?  Isn't "core" exactly
> the same as the set of required+important priority packages?  One could
> argue that the core also includes standard packages, but I see that as a
> second tier, with all of optional and extra as clearly not part of the
> core.

That's roughly what I thought too.  Though I can see other problems besides
this one, it's still the big one: the larger Debian dist would only be
updated as often as it has been (not often enough), and the core wouldn't
satisfy the need for new toys.

That's why I suggested having individual packages be considered stable or
unstable rather than an entire dist.  Individual packages are far more
likely to become stable frequently than entire groups of 2000-odd packages
are.


> But even this is just a testing optimization thing.  To do a 'stable
> release', the "release team" would still want/need to determine which
> packages from the set of all available packages should be included in the
> release, and which version of each package to be released should be
> included.  If it helps the testing team to start with packages in the
> highest priority band and work out from there, that's fine, but I don't
> think it should change the release process.

The problem seems to be that there is just too much to test all at once.



> I've been thinking about this a long time.  Here's a wild idea I've been
> kicking around for a long time.  If you don't have time to be distracted,
> quit reading now!
> 
> A long time ago, when I instigated and for a while managed
> master.debian.org, I was on the verge of proposing a radical change in the
> way we handle versioning...  then I got massively busy at work, went
> through a messy internal audit that forced me to be less generous to
> Debian with our network bandwidth, and so forth...  so I never followed up
> on the idea.  I have alluded to it a couple of times, and I still think
> there's a kernel of a good idea in here somewhere.
> 
> The general notion is that all package uploads go into a common pool. 
> Scripts on the server routinely build a symlink tree that points to the
> most recent version of each package in the pool, which is the equivalent
> of the current 'unstable' tree.  In fact, there could even be different
> flavors of instability, with different criteria used by the link tree
> builder to determine which versions to link to for each tree.  Trees of
> symlinks are cheap.

Up to here this is EXACTLY what I was thinking...
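
Roughly what I imagine the link tree builder doing, as a sketch in Python
(the paths, the flat pool layout and the package_version_arch.deb filenames
are just assumptions on my part, and a real script would use dpkg's version
ordering rather than plain string comparison):

    import os

    POOL = "/debian/pool"             # hypothetical flat pool of .deb files
    TREE = "/debian/dists/unstable"   # symlink tree to (re)build

    def newest_per_package(pool):
        """Map package name -> (version, filename) of its newest upload."""
        newest = {}
        for fname in os.listdir(pool):
            if not fname.endswith(".deb"):
                continue
            parts = fname[:-4].split("_")     # package_version_arch.deb
            if len(parts) < 2:
                continue
            pkg, version = parts[0], parts[1]
            # naive string comparison; stands in for dpkg --compare-versions
            if pkg not in newest or version > newest[pkg][0]:
                newest[pkg] = (version, fname)
        return newest

    def rebuild_tree(pool, tree):
        if not os.path.isdir(tree):
            os.makedirs(tree)
        for pkg, (version, fname) in newest_per_package(pool).items():
            link = os.path.join(tree, fname)
            if not os.path.lexists(link):
                os.symlink(os.path.join(pool, fname), link)
        # a real script would also drop links to versions that are no
        # longer the newest

    if __name__ == "__main__":
        rebuild_tree(POOL, TREE)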


> When the calendar, or the release team, or God decrees that it's time to
> work on a new "stable release", a new versioned symlink tree is
> semi-manually built pointing into the package pool, identifying the
> versions of each package that are proposed to be included in the release. 
> Testing ensues, and the versioned symlink tree is updated appropriately
> based on the results. At release time, the versioned symlink tree is
> frozen.  The size of the pool is managed using standard resource
> allocation techniques... any version of a package currently being pointed
> to is retained, prior versions of packages are retained as allowed by
> available disk space, with older bits being deleted to make space for new
> bits arriving.

I was thinking that we could instead update the stable tree as we went,
moving a package over whenever it was deemed stable enough.  Stable would be
kept stable, but not static: as packages were determined to be stable, they
would be moved into stable.
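
As a rough sketch of what that "move" could look like with the pool layout
from the earlier sketch (paths still hypothetical), promoting a package
would just mean pointing stable's symlink at the same pool file unstable
already links to:

    import os

    def promote(package_file, pool="/debian/pool",
                stable_tree="/debian/dists/stable"):
        # A real script would also drop stable's link to the previous
        # revision of the same package; omitted here for brevity.
        os.symlink(os.path.join(pool, package_file),
                   os.path.join(stable_tree, package_file))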

All packages in unstable would remain there for a testing period so that
bugs can be found.  If no release-affecting bugs are open against a package
after that testing period, it can be moved to stable; until all bugs of that
severity are closed it can't be moved (more or less the same rule as not
releasing hamm till it's done).  For a low-urgency package I figure 4-6
weeks should be sufficient for testing, and less for higher-urgency
packages, since higher urgency usually indicates serious or security bugs
were fixed.
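
In code, the rule I have in mind would look something like this sketch (the
urgency names, day counts and bug query are my assumptions, not anything
that exists today):

    import datetime

    # roughly the 4-6 weeks mentioned above for low urgency, less for higher
    TESTING_DAYS = {"low": 42, "medium": 21, "high": 7}

    def ready_for_stable(upload_date, urgency, open_release_bugs):
        """upload_date: datetime.date of the upload to unstable;
        urgency: 'low', 'medium' or 'high' (from the changelog);
        open_release_bugs: count of open release-affecting bugs."""
        waited = (datetime.date.today() - upload_date).days
        period = TESTING_DAYS.get(urgency, TESTING_DAYS["low"])
        return waited >= period and open_release_bugs == 0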

All that would remain is making official releases, which are essentially
installable snapshots of stable.  That's an issue I haven't fully worked out
yet, but it seems that a short-term freeze of the stable tree, long enough
to be sure it installs cleanly, would do the job.  I hope.
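
For example, a snapshot could be as simple as recording which pool file each
symlink in stable currently points at, so the release stays reproducible
while stable keeps moving (same hypothetical layout as before):

    import os

    def snapshot(stable_tree, manifest_path):
        """Record which pool file each symlink in stable points at."""
        with open(manifest_path, "w") as manifest:
            for name in sorted(os.listdir(stable_tree)):
                link = os.path.join(stable_tree, name)
                if os.path.islink(link):
                    manifest.write("%s -> %s\n" % (name, os.readlink(link)))

    # e.g. snapshot("/debian/dists/stable", "/debian/releases/2.0/Manifest")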


> There are some obvious issues surrounding how a structure like this gets
> mirrored, but it has several technical and procedural advantages.  I
> always hated the Perl/FTP mirroring code, and figured that doing something
> radical with the dist tree that wasn't well supported by the traditional
> mirroring techniques might motivate me to go write a suite of tools to do
> the job better.  One of the really cool aspects of a structure like this
> is that there is very little risk in trying a new rev of a package, since
> with sufficient disk space on the archive, there would always be a few
> prior revs to fall back on as need be.  It also makes it really easy to
> live on the bleeding edge, since latency from upload to mirroring could be
> very small.

I hadn't actually considered keeping old revisions around; it sounds like
something apt could make work well.  I would suggest some method of keeping
n old versions of packages in certain sections at least, with base being the
first that comes to mind.  I'm not yet sure how apt would handle this.
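
Something along these lines is what I mean, with the per-section counts and
the section lookup being pure guesses on my part:

    import os

    KEEP_DEFAULT = 2
    KEEP_BY_SECTION = {"base": 4}    # keep more fallback revisions for base

    def prune_pool(pool, section_of):
        """section_of: function mapping a package name to its section."""
        revisions = {}                      # package -> [(version, filename)]
        for fname in os.listdir(pool):
            if not fname.endswith(".deb"):
                continue
            parts = fname[:-4].split("_")   # package_version_arch.deb
            if len(parts) < 2:
                continue
            revisions.setdefault(parts[0], []).append((parts[1], fname))
        for pkg, revs in revisions.items():
            keep = KEEP_BY_SECTION.get(section_of(pkg), KEEP_DEFAULT)
            revs.sort()                     # naive; dpkg ordering in real life
            # a real pruner would also keep anything a release tree still
            # points at, as Bdale describes above
            for _version, fname in revs[:-keep]:
                os.remove(os.path.join(pool, fname))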


> One specific failing of the current system that I lament is that any new
> package installation into the distribution tree causes us to lose the
> previous version.  This makes fallback tough, depending on the maintainer
> or other random folk to be able to reproduce/reupload a prior version of a
> package. It also causes us to be wickedly cautious about installing new
> packages... my vision would have Incoming packages dropped into the pool
> immediately if the signatures and sums all matched for a registered
> developer...  I'd also drop the notion of subsections entirely, and merge
> the 'standard' and 'important' priorities into one priority.

Often downgrading involves purging the new package and reinstalling the old
one anyway.  I'm not sure how to fix that problem; downgrading would most
likely still be a manual process.


> I still don't have time to work on this idea... if I did, it might be the
> genesis of a yet-another Debian-derived distribution... since it would be
> easy to track unstable and Incoming to seed an independently managed
> release tree even if Debian didn't adopt the notion... and I suspect it's
> radical enough to be a tough sell these days... too many developers to
> convince!

Some of us do.  =>  I can't claim I'm one of them at the moment, but I can
help and wouldn't mind doing so.  I don't want another Debian-derived dist,
though; I'd much rather see Debian itself become better, because it can.


> As my current-favorite game says on the box... "so many pedestrians, so
> little time..." :-) Or, as the HP calculator software guys used to say,
> "life is short, and the ROM is full"...  :-)

=>


