Revamped Linux Kernel Numbering Concluded

kernel_dan writes "Following on the heels of a prior discussion about a kernel numbering scheme, KernelTrap has the conclusion. From the summary: 'Linus Torvalds decided against trying to add meaning to the odd/even least significant number. Instead, the new plan is to go from the current 2.6.x numbering to a finer-grained 2.6.x.y. Linus will continue to maintain only the 2.6.x releases, and the -rc releases in between. Others will add trivial patches to create the 2.6.x.y releases. Linus cautions that the task of maintaining a 2.6.x.y tree is not going to be enjoyable.' Torvalds suggested specific guidelines to alleviate burn-out of the .y maintainer, and Greg KH volunteered to take on maintainership."
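
For readers who want the two schemes side by side, here is a minimal, purely illustrative sketch (in Python; nothing here comes from the kernel tree or the article) of the old odd/even rule versus the new 2.6.x / 2.6.x.y split described above:

```python
# Illustrative sketch only: a toy classifier for the two numbering
# schemes discussed in the summary. Nothing here is kernel code.

def old_scheme(version: str) -> str:
    """Old convention: an even second number (2.4, 2.6) marked a
    stable series, an odd one (2.3, 2.5) a development series."""
    minor = int(version.split(".")[1])
    return "stable series" if minor % 2 == 0 else "development series"

def new_scheme(version: str) -> str:
    """New plan: 2.6.x releases (plus the -rc kernels in between)
    stay with Linus; 2.6.x.y releases carry only trivial follow-up
    fixes and are handled by a separate maintainer."""
    return "Linus release" if version.count(".") <= 2 else "stable-fixes release"

print(old_scheme("2.5.75"))    # development series
print(new_scheme("2.6.11"))    # Linus release
print(new_scheme("2.6.11.1"))  # stable-fixes release
```
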
  • by Anonymous Coward on Saturday March 05, 2005 @02:12AM (#11851004)
    The *.x.y kernels are unstable. The *.x only kernels are stable.

    Won't there be a 28 day cycle for stability on the *.x only kernel?
  • Burnout (Score:4, Funny)

    by philovivero ( 321158 ) on Saturday March 05, 2005 @02:13AM (#11851008) Homepage Journal
    Torvalds suggested specific guidelines to alleviate burn-out of the .y maintainer
    Did he say anything about the .NET maintainer? That'd take a serious toll on your sanity.

    How *DO* you write a Linux device driver in C#?
    • Re:Burnout (Score:3, Interesting)

      by Mark_MF-WN ( 678030 )
      Hey, don't laugh. If Java can be used for both realtime systems and driver development, anything's possible.

      Besides, Mono can probably compile to machine code, just like anything else.

    • Writing Linux device drivers in C# is merely pointless. It causes nowhere near the brain damage that would be caused by trying to maintain the yacc.NET version.
  • by Xpilot ( 117961 ) on Saturday March 05, 2005 @02:14AM (#11851011) Homepage
    You can find it in his own subdirectory on kernel.org at:

    http://www.kernel.org/pub/linux/kernel/people/gregkh/v2.6.11/ [kernel.org]

    It includes tiny fixes such as a Dell laptop keyboard fix and a raid6 compilation fix for ppc.

  • Numbering... eek. (Score:5, Insightful)

    by Faust7 ( 314817 ) on Saturday March 05, 2005 @02:16AM (#11851015) Homepage
    Others will add trivial patches to create the 2.6.x.y releases. Linus cautions that the task of maintaining a 2.6.x.y tree is not going to be enjoyable.

    2.6.x...
    2.6.x.y...
    2.6.x.y.z...

    Kind of a Zeno's Paradox, isn't it?
  • Here's an idea... (Score:4, Interesting)

    by rekoil ( 168689 ) on Saturday March 05, 2005 @02:24AM (#11851033)
    Why not do 3.x, 4.x, ... like every other software developer in the world (well, except Microsoft and Apple...)?

    Honestly, I don't understand the insistence on keeping everything at 2.x, 2.x.y, etc. If someone can explain the rationale to me, I'd be quite interested.
    • Re:Here's an idea... (Score:5, Interesting)

      by MarcQuadra ( 129430 ) * on Saturday March 05, 2005 @02:31AM (#11851055)
      Because bumps to the major version number indicate HUGE-scale rewrites, while the minor (.6 in this case) defines feature-complete stable branches, and the trailing number at the end is for bugfixes and minor enhancements.

      This is the way software SHOULD be versioned. It's the way Apple is versioning now, and it's the way Microsoft versions its core systems (Windows XP SP2 = NT 5.1.2600).

      Personally, I'd like for the odd-minor devel releases to go away and find some better way of versioning those, but everything else to-date has been sensible and sane, and I've been compiling my own kernels since the 2.1 series.
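
      As a small illustrative aside (the helper below is hypothetical, not taken from any real tool), one practical consequence of this tiered layout is that such versions compare componentwise as integers rather than as plain strings, so 2.6.10 sorts after 2.6.9:

```python
# Hypothetical helper, not taken from any real tool: sort
# kernel-style version strings componentwise as integers.

def version_key(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

releases = ["2.6.9", "2.6.10", "2.4.29", "2.6.11.1", "2.6.11"]
print(sorted(releases, key=version_key))
# ['2.4.29', '2.6.9', '2.6.10', '2.6.11', '2.6.11.1']
```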
      • by NitsujTPU ( 19263 ) on Saturday March 05, 2005 @03:16AM (#11851161)
        Windows XP SP2 = NT 5.1.2600

        Funny. Someone HAS to have planned that one.
      • You do realise that 2.6 is a massive rewrite of 2.4. Whole subsystems are completely different, along with wholly new APIs to talk to devices, etc. Perhaps we should have gone to 3.0 instead of 2.6; then there would be less hassle, with dev work going on in 3.1 and interesting stuff backported to 3.0 once it's working properly...
        • I was always under the impression that it would go to 3.x when it broke binary compatibility with 2.x, i.e. major version number indicates binary compatibility as in a "Linux 2 binary" or "Linux 3 binary".
      • by Cthefuture ( 665326 ) on Saturday March 05, 2005 @09:22AM (#11851774)
        It's not exactly the same. It's version 5.1 build 2600. A build number is a lot different than an actual version number.

        Personally I think many projects (especially open-source) are getting out of hand with the version numbers. Just look at some of the version information on some Debian packages.

        I mean, you have stuff like version "testing-4.3.0.dfsg.1-12.0.1". Does no one else see that as insane? I know each part has a purpose, but it seems more like improper project management. Is this possibly caused by a lack of proper scheduling? Things take too long and people start branching off into separate sub-versions.

        It's the same thing with the kernel. I see no reason for official kernels to have so many frickin sub-versions. KISS, please. I think you reach a certain point and then the versions have lost all meaning. End-users can't keep track of all this stuff. Please implement better project management practices and just do actual releases with a simple versioning scheme.
    • Consistent and straightforward versioning is the only solid rationale. If you can explain a system in a few sentences and things don't get hairy, it's probably good enough. Beyond that, the choice of versioning scheme, as you know, is arbitrary.

      Linus has his preference. As long as I don't have to start maintaining the kernel, this won't affect me at all. I will sort of miss the old even/odd dichotomy though ;)
    • by MidnightBrewer ( 97195 ) on Saturday March 05, 2005 @04:44AM (#11851304)
      I suspect the trend in recent years of staying attached to point-point-point releases, especially for those projects that take forever and a day to hit 1.0, isn't so much an honesty thing as a subconscious desire to avoid responsibility for mistakes. I'm not referring to legal liability so much as professional pride. "Of course it has bugs, it's still not 1.0!" I'm sorry, but that's not realistic. People don't get paid to be perfectionists; that's a conceit to be enjoyed on your own time.

      You do your best, you release it as 1.0, and then you start all over again to fix bugs and work towards the next full release. Making the numbers smaller doesn't change the quality of your software, it just helps a programmer live with the perceived embarrassment of not writing the perfect piece of code. In the final analysis, the numbers are all arbitrary; any sense of pride in your work or shame about your mistakes is a personal issue. Take Apple as an example. You could strip the 10 off of 10.3.8 and say that they are on version 3.8 of OS X. That means that version 4.0 is just around the corner, and that makes their turn-around cycle sound that much more impressive. To those who protest that a full point release demands unbelievable innovation and "drastic code re-writes," I have to ask, "Where is that written?" In the final analysis, versioning is all in your head. :)
      • Re:Here's an idea... (Score:3, Informative)

        by Sique ( 173459 )
        That's basically what Sun was doing with their release numbers. Solaris 10 is in fact Solaris 2.10, and the kernel of Solaris 2.10 is SunOS 5.10.

        Solaris was released as "2" because it was the first software distribution from Sun Microsystems that was SVR4-compliant, thus distinguishing it from the previous SunOS releases, which were based on BSD. Solaris 2 got a rewrite of the old SunOS kernel, which was at 4.3.1 before Solaris. So the kernel was SunOS 5.0. With every new Solaris Software Release there w
  • by AdamHaeder ( 798675 ) on Saturday March 05, 2005 @02:28AM (#11851045) Homepage
    What was wrong with .4 being stable and .5 being test? Why not start a .7?

    I haven't been following the kernel mailing list, but as a regular linux user from way back, I'm not clear on why the old way was dropped. This way seems a lot more confusing to me.
    • by A beautiful mind ( 821714 ) on Saturday March 05, 2005 @02:34AM (#11851067)
      The developers just felt there is no urgent need for 2.7 yet, and also that 2.6 can accept more features in a semi-stable state before there is truly a need for 2.7.
      • Well, eventually you have to lock it down and call it stable. Their problem stems from trying to get too much mileage out of each version. Just because 2.4 and 2.6 were huge leaps doesn't mean that will always be the way to go. They should lock down 2.6, put the "semi-stable" features into 2.7, and release 2.8 in a year or a year and a half. Save the big changes for 3.0. Unless they have some secret plan to konquer the world, nothing good will come from the current process.
      • by pikine ( 771084 ) on Saturday March 05, 2005 @05:31AM (#11851376) Journal
        Indeed, I feel that 2.6 was pushed out prematurely, but many features in it are desperately needed for publicity (for example, a working ACPI), so the kernel needs the "stable" status to give people an incentive to use it.

        The fact that kernel developers are still adding new features suggests that it is still a development kernel. Stable kernels are for bug fixes. If they need new features to fix existing bugs, that's when they should bump up the stable version number.

        However, I think version numbers are already obsolete for Linux kernels. We should be able to manage patchsets as if they're software packages, complete with dependency and conflict information that are automatically computed. When you want a "patch" to be included in your kernel, it looks for patches it depends on, checks to see whether that results in a conflict, and applies the patches. Periodically, "metapatches" are updated to depend on the most recent patches for a given feature. More intricacies need to be worked out.

        Assuming (0) that there is a demand for such a patch manager, I think the problem with developing it is that (1) it's difficult to develop a realistic test project from the ground up using the patch manager, so the patch system can show that its design is useful, and (2) if we use an existing large software project (such as the Linux kernel), programmers for the patch manager would spend too much time following the development of that other project, rather than getting useful work done; they might not want to do it. In general, we want to test the patch manager on a big project, but we also risk wasting too much time on the test project.

        It would be best if the developers of a large project (one can also think of the Linux kernel) would take the initiative in developing a patch manager, since they have a demand for it (or can be convinced to have a demand), already have a realistic software product, and are willing to follow the development of their own project.

        I'm saying that there is a seed for an innovative patch management and revision control system from maintaining a Linux kernel. They should do something about it.
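
        As a rough sketch of the kind of dependency-aware patch manager described above (every patch name and structure below is invented purely for illustration), the bookkeeping could look something like this:

```python
# Toy patch-manager sketch; the patch names and structure are
# invented for illustration and match no real project.

PATCHES = {
    "acpi-fixes":      {"depends": [], "conflicts": []},
    "new-sched":       {"depends": [], "conflicts": ["old-sched-tweak"]},
    "sched-tuning":    {"depends": ["new-sched"], "conflicts": []},
    "old-sched-tweak": {"depends": [], "conflicts": ["new-sched"]},
}

def resolve(wanted):
    """Return the full, ordered patch set for `wanted`, pulling in
    dependencies first and refusing conflicting selections."""
    selected = []

    def add(name):
        if name in selected:
            return
        for dep in PATCHES[name]["depends"]:
            add(dep)              # dependencies are applied first
        selected.append(name)

    for name in wanted:
        add(name)
    for name in selected:
        for other in PATCHES[name]["conflicts"]:
            if other in selected:
                raise ValueError(f"{name} conflicts with {other}")
    return selected

print(resolve(["sched-tuning", "acpi-fixes"]))
# ['new-sched', 'sched-tuning', 'acpi-fixes']
```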
    • by Atzanteol ( 99067 ) on Saturday March 05, 2005 @02:57AM (#11851118) Homepage
      They're trying for a more rapid development cycle. 2.6 hasn't been feature-frozen like in the past.

      It seems to be what the vendors want. Red Hat's 2.4 kernels have so much 2.6 stuff back-ported that they're barely 2.4 anymore.
    • by Malor ( 3658 ) on Saturday March 05, 2005 @04:14AM (#11851263) Journal
      I've only ever had one comment modded down as Flamebait.... this may be #2.

      As near as I can tell from reading recent comments on this particular decision, the single biggest reason they don't want to do 2.7 is because not enough people will test it. Only by calling it 'stable' can they get enough testers. Of course, the fact that it will now never really BE stable seems to have been lost on them.

      This is better than what they have been doing, but only slightly. What Linus seems to really want is for everyone in the whole world to be using the very most recent kernel. He wants, in essence, everyone in the world to be beta testers. By putting out new code and calling it 'stable', he gets hundreds of thousands of testers, and is able to shake out bugs much faster.

      Apparently, the possibility that it might be banks and hospitals that are discovering these bugs didn't occur to them. Discovering a bug is an EXTREMELY PAINFUL PROCESS for someone who isn't expecting one. So instead of doing the nasty hard work of maintaining separate stable and development branches, they push that pain onto everyone else in the world.

      Personally, I want software that works more than I want the latest whizbang feature. That's why I got onto Linux in the first place, a decade ago... I was frustrated with Windows. It was such a delight to run software that never, ever crashed. It was crude, it was simple, but it was *incredibly* reliable, and that more than any other single thing is why I switched.

      I find it quite ironic that Windows 2003, in the hands of capable admins, with all its design flaws and warts, is substantially more stable than is Linux. There's a reason Ars Technica switched from Linux to Windows, and stayed there. If anyone on the planet is competent, it's those guys. And from the sound of it, they're very happy with the results.

      At this point, I'm so disgusted with this state of affairs that I'm running a test installation of FreeBSD. Their development cycle is much saner. They don't have as many features, but the ones they DO have, seem to work. Maybe they should add a new motto: "Software by Adults, for Folks Who Could Lose Their Job if it Breaks".

      *sigh*
      • I've only ever had one comment modded down as Flamebait.... this may be #2.

        That's because your post smells like a troll.

        Personally, I want software that works more than I want the latest whizbang feature

        So, Linus barged into your office with a gun, demanding you run the latest kernel?

        For what you want (sane development cycle), there are DISTRIBUTIONS. What's wrong with distributions, I ask you?

        I'm a RedHat (not Fedora) man myself, but the sysadmins at work prefer Debian. To each his own, but ther

      • by Hackeron ( 704093 ) on Saturday March 05, 2005 @06:25AM (#11851471) Journal
        Ars Technica, you say? Isn't it ironic that their site was down for at least 5 hours about a week back?

        Also, look at their uptimes on Netcraft. Their average uptime plummeted to about half since they switched to Windows. Sure, it's still "good enough", but how can you possibly say 2003 is more stable than Linux? Especially substantially more stable?
        • This is probably going to sound patronising (and it's not intended like that), but surely it's true that stability isn't the only thing affecting uptime. Windows Server 2003 still needs to be restarted on some patches -- that's not a stability flaw, it's a design flaw affecting uptime. Also, Linux can (apparently) handle a higher network load than a comparable Windows machine -- that's not stability, that's resistance to network pummelling. And if you're basing uptime stats on reachability, that'll affect i
      • by moonbender ( 547943 ) <moonbenderNO@SPAMgmail.com> on Saturday March 05, 2005 @06:31AM (#11851478)
        There's a reason Ars Technica switched from Linux to Windows, and stayed there.

        Yes, there is. Quoted from their article on the redesign [arstechnica.com]:
        Q. Why did you change over from Linux?


        A. This is a loaded question, so we'll be brief. Ars started out on Windows NT back in 1998, but shortly after that we moved to FreeBSD, and then later, Linux. We ran Linux until March of 2004, when we made the move to Windows Servers. Linux and Apache had served us quite well, but when we turned to look at building our new CMS, .NET was simply so attractive for our needs that we felt it warranted the switch. If there are enough requests, we may do an article later documenting our thought process, but for now I'll say that the decision was largely a programming one, with the added benefit of the fact that more of us support Windows in our real lives than Linux.
        I don't know - did they ever release that article documenting the thought process?
        • Re: (Score:3, Funny)

          Comment removed based on user account deletion
        • I'd like to read it if they did -- it would be nice to see technically-minded people explaining the benefits of a system like .NET, even if only to make the detractors of the system here and elsewhere hunker down and give a more reasoned argument. There are times it's not given the credit it deserves; it is (in general) one of MS's actual successes.
      • "There's a reason Ars Technica switched from Linux to Windows, and stayed there. If anyone on the planet is competent, it's those guys."

        From the sounds of things, everyone competent there was utterly against the Windows NT switch, which was introduced by management, caused horrible delays in shipping all their products, and caused most of the technical guys to leave.

        But it sounded so much better as a sound bite
      • Apparently, the possibility that it might be banks and hospitals that are discovering these bugs didn't occur to them.

        Please name the banks or hospitals that upgrade kernels every time Linus makes a point release. If they really exist, I want to be sure to stay clear of them.
    • They thought there was too much back-and-forth: patches for .5 that were accepted into the stable .4 tree and then had to be almost completely rewritten.
  • by pergamon ( 4359 ) on Saturday March 05, 2005 @02:31AM (#11851052) Homepage
    ...welcome our new many.version.levels.over.lor.ds.
  • by SpaceLifeForm ( 228190 ) on Saturday March 05, 2005 @02:34AM (#11851064)
    The key is to make sure that the patches for the .y version are clean and really make sense from a stability standpoint *ONLY*. New functionality does not belong here.

    Right now, I consider 2.6 not stable enough for my own use. If I cannot compile and boot a Linus kernel on a simple install of GNU/Linux (whether SuSE or Debian) without major headaches and/or chasing down patches, well, that's not stable enough for me. YMMV.

    Back in 2.4, I wasn't really happy until 2.4.18, and with all of the changes in 2.6, I won't be surprised if it doesn't meet my definition of stable until 2.6.20 at the current pace.

    So, I'm hoping that this new approach will really help.

    • Yeah, I agree, Linus kernel is not good for production use. Give Linus a good thwack on the forehead and he may die. Or he may not. It's unpredictable and unacceptable.

      Now back in my day, kernels were only found in nuts.
    • What exactly are you doing that 2.6 isn't cooked-enough for your needs yet?

      I'm really curious because I felt that after the disaster that the early 2.4 series was, the kernel team really pushed a good 2.6 release out and it's been quite smooth from 2.6.5.

      Are you running strange hardware or binary-only drivers or something?

      • "Are you running strange hardware or binary-only drivers or something?"

        I have had severe problems with consoles on the Radeon framebuffer device (fixed since 2.6.10), and also serious trouble with IDE CD writers (which is partly a kernel issue, partly client software).

        In 2.6.11, I can get a CD writer to work if I put it as master at /dev/hdc, on its own IDE bus with no hdd. The same configuration is never a problem under 2.4.

        Other than these specific issues, 2.6 has not been a problem for me, but both t
        • severe problems with consoles on the Radeon framebuffer device

          I did too, but once I switched to VESA framebuffer, everything was fixed; it's much less of a headache to use VESA unless you need a hardware-accelerated framebuffer. I just use the FB for the console, so hardware acceleration seemed like it wasn't worth the headache.

          As for CD burning, mine's much better since 2.6 came out. I particularly like that I no longer have to emulate SCSI over IDE to get my burner up-and-running. Have you tried ano
      • I have gotten various random crashes with every 2.6.x I've tried, as well as one reproducible one that's binary-only-driver related. But unloading that driver still leaves me with the others. Plus my CD burner won't work because they got rid of ide-scsi emulation (does anyone have a patch to add that back in? If so I might try another 2.6)
      • What exactly are you doing that 2.6 isn't cooked-enough for your needs yet?

        In my case, a large chunk of the IBM stack had issues. You could do a bit of hacking to get DB2 to work, but WebSphere and WSAD were a PITA on 2.6. They may have patches out there, but that was the main reason I finally called uncle on my dev box and rolled back to 2.4.
    • Right now, I consider 2.6 not stable enough for my own use. If I cannot compile and boot a Linus kernel on a simple install of GNU/Linux (whether SuSE or Debian) without major headaches and/or chasing down patches, well, that's not stable enough for me. YMMV.

      I have the same philosophy regarding kernel stability, yet I've been compiling and installing 2.6, in the manner you describe, since 2.6.0, and have had no issues at all. No patches, purely the major 2.6 releases.

      If you aren't running it, an

    • Any given 2.6.x.y tree is going to be stopped as soon as 2.6.x+1 comes out. If there's enough spacing between the 2.6.xs it will work, but we could have the same situation as now when there are pretty much no stable 2.6 kernels because there are new features being introduced before all the bugs have been squashed. I think there should be separate maintainers for each 2.6.x.y and they should carry on for as long as it takes until that version is stable.
  • I wonder... (Score:4, Informative)

    by Anonymous Cumshot ( 859434 ) on Saturday March 05, 2005 @02:37AM (#11851072)
    If this will make Andres Salomon's security & bug-fix patchset [rpi.edu] obsolete, since it pretty much focuses on the same things that Linus wants to see in the 2.6.x.y releases...

    FYI, Andres Salomon's patchset provides the foundation for Debian's kernels and has been discussed recently on KernelTrap here [kerneltrap.org] and here [kerneltrap.org].

    • by Anonymous Coward
      Hopefully Andres won't stop.

      Linus only wants security patches, and patches for bugs that cause a kernel hang, oops, or compile failure, to get into the 2.6.x.N patches:

      - some very _technical_ and objective rules on patches. And they should
      limit the patches severely, so that people can never blame the sucker
      who does the job. For example, I would suggest that "size" be one hard
      technical rule. If the patch is more than 100 lines (with context) in
      size, it's not trivial any more. Really. Two big scree
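
      The size rule quoted above is easy to check mechanically; here is a minimal sketch (my own illustration, not any official kernel or lkml tooling):

```python
# Illustrative check of the "more than 100 lines (with context)
# is not trivial any more" rule quoted above; not an official tool.

def patch_line_count(patch_text: str) -> int:
    """Count the lines of a unified diff, context included,
    skipping only the per-file header lines."""
    return sum(
        1
        for line in patch_text.splitlines()
        if not line.startswith(("diff ", "index ", "--- ", "+++ "))
    )

def is_trivial(patch_text: str, limit: int = 100) -> bool:
    return patch_line_count(patch_text) <= limit

example = """--- a/somefile.c
+++ b/somefile.c
@@ -1,3 +1,4 @@
 unchanged line
+added line
 unchanged line
"""
print(is_trivial(example))  # True: 4 counted lines, well under 100
```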

      • Andres also patches things that are plain broken, e.g. "sound doesn't work any more". By the law Linus established above, those kinds of patches are forbidden from going into 2.6.x.N.

        By that logic, the Dell Inspiron broken-keyboard fix shouldn't have gone in, but it is in 2.6.11.1, and is fully covered with holy penguin pee.

  • by J_Omega ( 709711 ) on Saturday March 05, 2005 @02:38AM (#11851075)
    I'd have preferred r-theta polar coordinates.
    • Re:Why use x-y? (Score:2, Interesting)

      by aliasptr ( 684593 )
      Having been doing a lot of complex analysis lately, I enjoyed this comment! Totally off topic and all, but Cartesian coordinates are "overrated"; granted, they are the most "natural", but anyone doing any kind of math/science/engineering has no doubt seen the incredible usefulness of other coordinate systems. "Alternative" three-dimensional coordinate systems prove very useful for lots of integrals because of the symmetry. Anyway yeah... moving on.
      • Re:Why use x-y? (Score:3, Insightful)

        by J_Omega ( 709711 )
        Depends on what you consider "natural" though, right? I'm willing to bet that the first human use of a coordinate system was polar, i.e. "go 1000 paces, that-a-way".
    • by fbjon ( 692006 )
      I'd have preferred r-theta polar coordinates.

      True, they are much better for the development cycle

    • by MarkRose ( 820682 ) on Saturday March 05, 2005 @06:11AM (#11851446) Homepage
      That, my friend, is a rad idea!
  • Dunno (Score:3, Interesting)

    by ThisNukes4u ( 752508 ) <tcoppi@gmail. c o m> on Saturday March 05, 2005 @02:39AM (#11851076) Homepage
    I don't know what's up with the kernel devs, but I for one just want a stable kernel without having to resort to specific distro kernel patches. They have not been able to provide that on the mainline since 2.6 was released (in my opinion, from my observations). They should have forked 2.7 a while ago if they were going to be pulling this, and put the new code in there. Hopefully this new way of distinguishing between stable and unstable releases will help a bit, but I'm not keeping my hopes up. It may be time to switch to a BSD if they can't get their act together.
    • Unavoidable (Score:5, Interesting)

      by Mark_MF-WN ( 678030 ) on Saturday March 05, 2005 @02:51AM (#11851105)
      I think this is the unavoidable result of the Linux kernel's versatility. It's designed to be able to run on such a wide variety of hardware, from wee little embedded chips to multiprocessor monstrosities. It's able to run with so much old, obsolete hardware, cutting-edge hardware, specialized hardware, etc. There's constantly new hardware coming out that needs to be supported, specific security requirements, etc. There's no way for the kernel team to have it be everything to everyone at once. The natural result is that it's up to a distributor to put it all together and choose appropriate combinations of patches.

      There's nothing that wrong with depending on an organization (be it commercial like Mandrake or non-profit like Debian) to put together an appropriate kernel for you. That's not to say you shouldn't give BSD a crack (diversity encourages vigour, after all), but I don't think there's anything wrong with the way kernel development is taking place. Those who need a rock-solid, unflinching kernel can always use a 2.4 series kernel, or use BSD (as you suggested).

      • It's not unavoidable though, because they were able to avoid it in 2.4. Which, until this decision anyway, I intended to stick with until 3 versions after the release of 2.7, because that's what it takes to stabilise (generally).
      • So, what you're saying is that we should all stop using the vanilla kernel? And instead encourage even more fragmentation than there already is? Great.

        Look, making a stable vanilla 2.6 kernel isn't hard: all they would have to do is take all the kernels with features added after 2.6.0, and rename them 2.7.x, and then just stick all the security and stability fixes into 2.6.0. I still don't see what was wrong with the old numbering scheme, but I do see what's wrong with this one.
    • It may be time to switch to a BSD if they can't get their act together.

      It may be time. The old odd/even strategy was a bit goofy at times, but it at least had some semblance of stable/development branches. Its main problems were Linus' late branching and the propensity to add new features to the even branches.

      But now even that has been blown to the wind. They're doing development work on their stable branch, because they only have one branch. Expect major disruptions if they ever decide to revamp some ke
      • Re:Dunno (Score:3, Informative)

        by tehdaemon ( 753808 )
        "Expect major disruptions if they ever decide to revamp some kernel component like paging or smp."

        If you would recall a recent interview Linus did, he said that there probably wasn't going to be anything like that in the near future, except possibly the tty stuff. Mostly just work on drivers and such. I would not be too surprised if the real reason that there is no 2.7 branch now is because there simply isn't any major system that needs a rewrite.

    • Re:Dunno (Score:2, Interesting)

      by LnxAddct ( 679316 )
      Linus has openly stated that stock kernels are not what they used to be and never will be. He said the responsibility now lies in the hands of distributions. Personally, this doesn't change much for most, but it's important to note that Linus's goal is no longer to make a kernel that is easy to use straight from the source through compilation to actual usage; that burden is now distributed amongst the distros. Linus still attempts to achieve it, but it is no longer a priority; his goal is to simply advance the k
  • by Anonymous Coward
    /me anxious awaits the release of kernel 3.14.1.59.265.35.897.93238462643383279...
  • by Hohlraum ( 135212 ) on Saturday March 05, 2005 @02:44AM (#11851090) Homepage
    that I regularly have seen breakages with stable hardware upon upgrading from one "stable" kernel release to the next. Granted, most of them have been ACPI... which is just a joke. All I gotta say is: 2.7, please.
    • ACPI is very important. I had to use a hack to get the screen to reload on my laptop every time I made it sleep, as none of the drivers support the new ACPI model very well.

      This was fixed recently in 2.6.11

  • by Dragon Rojo ( 843344 ) <Dragon.Rojo@NOSpam.gmail.com> on Saturday March 05, 2005 @02:53AM (#11851110)
    I have just finished compiling this 2.6.11.0.0.0.0.1 kernel. Damn, 2.6.11.0.0.0.0.2 is out; time to recompile.
  • You know (Score:5, Insightful)

    by mcc ( 14761 ) <amcclure@purdue.edu> on Saturday March 05, 2005 @03:03AM (#11851131) Homepage
    Linus Torvalds keeps insisting he's just a coder and nothing more, and Alan Cox and everybody keep insisting he's just a coder and nothing more, but watching him in situations like this... he really is disturbingly competent as a project manager. Like, to a degree that betrays a large amount of talent. I think he and others really sell him short... but of course one of the reasons he's so effective is that the relatively unassuming way in which he approaches things means people's attention is diverted elsewhere, thus allowing him to actually get stuff done :P
    • Re:You know (Score:3, Funny)

      by marcushnk ( 90744 )
      I'll agree with that... I've watched a few of the lkml threads, and his skill at herding cats is scary...
    • Linus has, thus far, been abso-fucking-lutely scary in choosing what and who is relevant with regard to the Linux kernel.
      I'm rather frightened by this, but I trust Linus is not a n00b, and realises the implications of his decisions.

      There comes a time when the one who wears the crown is forced to realise that the kingdom is better off with a new leader, and ignores this fact to his peril. I pray that Mr. Torvalds has the wisdom and humility to pass on the torch when that time comes.

      Soko
    • Actually, I always found Alan Cox much more centered and down-to-earth when it comes to (seemingly) big decisions like this. Linus is a great developer, but he's a flawed individual like everyone else; he can be wrong, you know....
  • Okay, you've got your x.0 release. These are for major releases of the software. 1.0, 2.0, etc.

    Then you've got your x.x.0 release. Basically, you subdivide your release schedule according to the major tasks needed to get there. So, for instance, if you're creating a video player, 0.1.0 would be to get something proof-of-concept running some basic video codec. 0.2.0 would be for major GUI additions, 0.3.0 would be for extra codecs, etc. These should adhere to a strict roadmap.

    Next, you've got the x.x.x re
    • 1) Some people might suggest that it's a good idea to put the md5 sum in the version. I couldn't agree more. In these days of rip-offs and trojans, you can't be too careful. I haven't been doing this with dml2xml2004 just yet, but it should become standard in one of the upcoming 3 1/2 releases.

      2) Others might think that based on the way debian does things, you might want to put "stable" or "unstable". With all due respect to the folks at debian, I personally find this overkill.
    • Was your naming convention inspired by tla (Tom Lord's Arch) [srparish.net]?

      There, you get {archives}/2005-foo/2005-foo--mainline/2005-foo--mainline--0.1/ for the 0.1 version of the trunk of project "foo" as developed in 2005. Version 0.33 of the experimental gui branch would be {archives}/2005-foo/2005-foo--expgui/2005-foo--expgui--0.33/ etc.
    • You're joking but you have a point. 2.6.11a, 2.6.11b, 2.6.11c would fit in more with existing projects than 2.6.11.1, 2.6.11.2, 2.6.11.3.
  • I don't like it (Score:3, Insightful)

    by erroneus ( 253617 ) on Saturday March 05, 2005 @03:23AM (#11851175) Homepage
    I think Linus is very right that it will create a lot of headache for a lot of well-meaning people. It will also create a bunch of little dictators whose mark in Linux history will be more important to them than the continual growth and evolution of the one main kernel progression.

    I think instead it is better to identify any kernel branch by the maintainer or distribution it comes from... pretty much as it already is. When I first started using Linux, I thought nothing of compiling a new kernel and getting things all tweaked out, installing patches and stuff like that. But lately, I see value in following structure in systems, such as seeking out RPMs rather than compiling new things. It is far simpler and, at times, a lot less frustrating than trying to keep up with my own set of kernel patches. (Oh, I cannot upgrade to the newest kernel because the so-and-so patch hasn't been updated yet.) While the same is true or even slightly worse when it comes to RPM dependencies, there is at least some structure and predictability to be found.

    I predict that the change will be short lived as it will be found that people will become frustrated with keeping up with all these kernel revisions.
  • or do I take a chance and mangle my current stuff? hello 2005 calling!!!
  • Linus rox (Score:2, Interesting)

    No kidding, even Linus's most offhand comments are so well thought out and plain-spoken that they're a pleasure to read. Wanna know why he's in charge of the Linux kernel? Just read. He's so common-sense and matter-of-fact about everything that it's easy to see why everyone gravitates to him. And no, I'm no kernel hacker, just a Linux geek. But just reading his occasional emails is more than enough to make me want to convert everything on the network. Sometimes I get caught up in the issue du jour, but
  • Really stable? (Score:3, Interesting)

    by GrouchoMarx ( 153170 ) on Saturday March 05, 2005 @04:35AM (#11851290) Homepage
    Does this mean that 2.6.x releases will actually be stable and reliable again? After getting burned by 2.6.8 and 2.6.9 (both of which had show-stopping bugs that, for instance, kept my CD burner or various USB-based devices from working, all of which magically worked again in 2.6.10), I'm now very wary of new "stable" kernel versions. On the one hand I'd like to stay up to date to get the latest security patches, but on the other I really don't need my USB ZIP drive to stop working every other kernel version. Handling individual security patch files is more trouble than it's worth for a home system, frankly (I'd rather have a life), so that's out. So what's a moderately security-minded user who wants a reliable system to do?

    If going down another point level for bug fixes will help the problem, then I'm all for it. Just make it clear what people like me should be downloading. :-)
  • Scared of 3 (Score:2, Insightful)

    by squoozer ( 730327 )

    What is with people? Most open-source projects seem to be scared of the number 1, so every piece of software is 0.x.y.z; now the kernel people have become afraid of 3 (or maybe 7). Either way, this is just silly. I can see it now: in twenty years' time we will be up to 2.9.9.9.9.3.8.1 because nobody will take the plunge and call it 3. At least the Emacs people got a grip and just dropped the 0.

    • Re:Scared of 3 (Score:2, Insightful)

      by GoCoGi ( 716063 )
      No, that's okay. It should only become 3 when it becomes binary-incompatible with userspace applications.
    • Re:Scared of 3 (Score:2, Informative)

      by Ulric ( 531205 )
      The numbering system used by many, not all, projects is major.minor.teeny:
      • Bugfixes increment the teeny number.
      • New, backwards compatible features increment the minor number and set teeny to 0.
      • Changes that break backwards compatibility increment the major number and set the minor and teeny numbers to 0.

      And of course, the major number starts out as 0, so the only way the major number could be anything else is by introducing incompatible changes.

      Libraries use similar rules, or at least rules with the sa
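
      Those three rules translate almost directly into code; here is a small illustrative helper (the function name and change labels are my own invention) that applies them:

```python
# Illustrative only: apply the major.minor.teeny bump rules
# described in the list above.

def bump(version: str, change: str) -> str:
    major, minor, teeny = (int(p) for p in version.split("."))
    if change == "bugfix":        # bugfixes increment teeny
        teeny += 1
    elif change == "feature":     # compatible features bump minor, reset teeny
        minor, teeny = minor + 1, 0
    elif change == "breaking":    # incompatible changes bump major, reset the rest
        major, minor, teeny = major + 1, 0, 0
    else:
        raise ValueError(f"unknown change type: {change}")
    return f"{major}.{minor}.{teeny}"

print(bump("1.4.2", "bugfix"))    # 1.4.3
print(bump("1.4.2", "feature"))   # 1.5.0
print(bump("1.4.2", "breaking"))  # 2.0.0
```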

  • by 10am-bedtime ( 11106 ) on Saturday March 05, 2005 @07:37AM (#11851564)

    Well, I'm not a big fan of prescriptive labeling [glug.org] in general.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Saturday March 05, 2005 @07:45AM (#11851573)
    Comment removed based on user account deletion
  • Maybe now is the time to drop the leading "2." in the kernel release number, as Sun did with Solaris [wikipedia.org] after 2.6...
