
Kernel Fork For Big Iron? 155

Boone^ writes: "ZDNet is running an article on the future of Linux when used on Big Iron. Just a bit ago we read about running Linux on a large-scale Alpha box, and SGI wants NUMA support in Linux so it can support their hardware configuration. The article talks about how memory algorithms used with 256GB machines would hamper performance on 386s with 8MB of RAM. So far Linus et al have been rejecting kernel patches that provide solutions for Big Iron scaling problems. How soon before a Big Iron company forks the kernel?"
  • Maybe this is just an excuse by Big Blue^H^H^H^HIron companies to escape from the OS community somewhat and have their "own" kernel? I know how we all love to think that once these companies adopt Linux they undergo a significant "change of heart", but do corporations ever really change in this sort of way? I bet the real attitude is, "Let's 'adopt' Linux for now, and exploit the OS community until we can get a better grip on this thing and create our own version."
  • by MemRaven ( 39601 ) <kirk.kirkwylie@com> on Wednesday September 27, 2000 @12:15PM (#748966)
    Not really. Forks are also justified if the maintainer has effectively abandoned the project and refuses to relinquish it (and someone else has to "seize" control to make sure that it continues to go forward).

    Just as importantly, forks are probably necessary when a significant part of the user/developer base disagrees with the direction of the project. This usually implies that the forked version and the original version are aiming at solving different problems within the same vein. If the original project wants to continue in the original direction and some people want to use the source to solve a slightly different problem, then they pretty much have to fork in order for the project to achieve its maximal result of being most useful to the most people.

    This isn't a bad thing if it's done right. It's just that most of the big forks you hear of are at least partially the result of bitter, angry wars (OpenBSD anyone?). You don't hear that much about the ones which are completely amicable.

  • It's not feasible to just slap in a CD-ROM and install on *ANY* system.

    Just because a system is running doesn't mean it is running to full capacity. With any OS the default kernel/device drivers will get the system running, but updates need to be applied to get optimal performance. A few hours spent after installation will save hours in the long run.

    It's amazing how many don't do this, though! I've seen whole networks of machines running IDE in PIO mode on hardware that could run UDMA/33. I also supported an OS with statically configured communications buffers and cache sizes; most customers again left these at their conservative defaults. And so on.

    Basically, some administrators are lazy. They can't be bothered to tweak their system, or install critical updates. If I had my way, anyone who didn't bother to install current security updates would be fired for basic incompetence and not be hired again!

  • As I understand it, upward scalability on Linux is a major problem - which is why Solaris et al are still more popular for true Enterprise-class applications.

    The more I think about it and the more Linux distributions I try, the more convinced I become that one size does not fit all.

    Linux as a workstation is big and clunky. Linux as a datacenter server isn't scalable enough. Linux on handhelds and wireless devices isn't as efficient as operating systems built specifically for that purpose.

    As it stands, Linux is great for web, file and print serving - which is what it's mostly used for.
    X11 creates a massive overhead for desktop users and the kernel doesn't scale to so-called Big Iron.

    As an aside, I've tried both BeOS and the 1.44MB floppy version of QNX and am convinced that those scale downwards to handheld and wireless devices.
    I'm not so sure of Linux in this regard, especially Linux with X11 and at least 4 different graphics toolkits (GTK, Qt, FLTK, FOX, etc.).

    I think it's time that Linus et al accept patches which do fork the kernel for so-called Big Iron and also for handheld/wireless devices, creating three kernel streams.

    It has to happen eventually, and at least if Linus takes the initiative, the community will contain and control the changes. It will still be Linux; the brand will be strengthened rather than weakened.

    I think this is really necessary if Linux is to achieve the stated aim of "World Domination".


  • I see this as the same problem the guys that want to do RT-Linux or micro-Linux face. Why not have someone step up to the plate and say "Here is BigIron Linux! We will manage it and maintain it. Send us your patches
    I agree, except that I think Linus and the kernel team should manage the different types of kernel. That way, it's still Linux rather than SGI's or Sun's or IBM's or whoever's flavour of Linux.


  • Not quite true; SunOS 5.x is based on a combination of SysV and BSD with Sun extras.

    The Solaris name has been used for SunOS 4.x releases as well. Solaris 1.1 has SunOS 4.1.3 as its core OS component.

    Solaris is the name used to refer to everything that gets installed from the OS CDs; this is more than SunOS 5.x + Openwin + CDE + X11Rx.

    Solaris is essentially a marketing name; SunOS is the value used in the utsname structure - i.e. what you get back from uname -s.
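
    For reference, the utsname value mentioned above is what the POSIX uname() call reports; a quick, hedged sketch of checking it (standard C, nothing Solaris-specific):

    #include <stdio.h>
    #include <sys/utsname.h>

    /* Prints the same value `uname -s` reports: "SunOS" on a Solaris box,
     * "Linux" on Linux, regardless of the marketing name on the CD. */
    int main(void)
    {
        struct utsname u;
        if (uname(&u) == 0)
            printf("sysname: %s, release: %s\n", u.sysname, u.release);
        return 0;
    }
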
  • what was your command line to get that number?

    I saw the .5 million and freaked out and I am not getting quite the same number as you. Wondering what you are using and if you have like 2 kernels worth of files or something.
  • You forget that ancient machines are not the only places 386's and 486's appear. Embedded systems generally don't need a heap of processing power, so you can get things done cheaper (and cooler) with a 386- or 486-level chip.

    To take your points one by one:

    1. Earlier platforms generally had no CD-ROM. Most Linux distros . . . come on CD-ROMs.

    1. You install at the factory onto ROM/flash/whatever. No need for a distribution's install CD.

    2. Earlier machines usually had a 5 1/4" floppy disk . . .

    2. See above.

    3. Earlier machines had RAM limitations . . .

    3. So what? Even without limiting oneself to embedded systems, there's no real need for huge amounts of RAM besides the RAM companies saying "BUY MORE RAM". I ran Linux on a 386 with 8MB at a summer job a few years back with little trouble, and that only in the setup. (On the other hand, it would be nice to see a libc that wasn't as bloated as glibc...)

    4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes.

    4. Repeat after me: Linux does not use the BIOS. The BIOS is only used at boot time (and by DOS). And as far as embedded systems go, you can use a modern BIOS that works, or just write something simple that starts up Linux on your box. After all, embedded systems don't need to worry about being general.

    5. Earlier machines had ISA, EISA, etc.

    5. Modern embedded systems probably use PCI if they need anything at all.

    6. Earlier network cards are not all supported . . .

    6. Modern embedded systems can use supported hardware.

  • I agree, but Linus and the kernel team should manage the forks. That way, it's not vendor-specific.


  • I really can't see forking being successful. Linux has grown because of the sacrifice of the individuals trying to run a powerful modern OS on limited hardware. Remember the price gap between a Unix workstation and a PC when the whole party started? It was rather significant.

    Linux is still driven by ppl working in their spare time on their home computers. As far as I know, most ppl still have
    Now, I haven't done any kernel hacking myself, but if I were working on the kernel I'd feel kinda taken advantage of if the IBMs and SGIs of the world were to fork the kernel, and focus all their efforts on scaling the system, without contributing to the areas that make a difference on affordable machines (ie sub-$100K)

    Linux is the People's OS. Created by the people, for the people. Yes, it is free, but I don't see any reason to sacrifice the needs of the many to enable the few who can afford machines more expensive than my house.

    In fact, if I see sacrifices being made in the kernel so that it runs effectively on big iron, I'd be all for a fork to keep things running on real hardware (in the sense that you would feel ok storing your pron collection on it).



    I mean, if (thanks to SCO's free[beer] licensing of the old Unix(tm) sources) there can be an effort to get a usable 4.3BSD distro running for old VAXen, I'm sure you can find ppl willing to keep a fork of the Linux kernel that remains true to its roots and ideals...
  • Why would they need different interfaces? You're still doing the same thing, just doing it in a different way.

    It's called "modularity", and it's a Very Good Thing.

    If introducing modularity requires re-writing code that uses the module, that code is Broken to start with and should be fixed.

    All the memory management stuff should be done in one place, and only exposed via necessary interfaces (such as malloc()), and kernel internals should use those interfaces instead of talking directly to the mm code.

    Assuming, of course, well designed interfaces to start with.

    - Aidan
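
    A minimal sketch (user-space C, made-up names, not actual kernel code) of the separation the comment above argues for: callers include only a narrow allocation interface, and the implementation behind it can be swapped without touching them.

    /* mm_iface.h -- the only header callers are allowed to include. */
    #include <stddef.h>
    void *mm_alloc(size_t size);
    void  mm_free(void *ptr);

    /* mm_simple.c -- one implementation of that interface. A big-iron build
     * could link a different file exporting the same two symbols; callers
     * never notice the difference. */
    #include <stdlib.h>
    #include "mm_iface.h"
    void *mm_alloc(size_t size) { return malloc(size); }
    void  mm_free(void *ptr)    { free(ptr); }

    /* caller.c -- talks only to the interface, never to the allocator guts. */
    #include "mm_iface.h"
    int main(void)
    {
        int *p = mm_alloc(sizeof *p);
        mm_free(p);
        return 0;
    }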

  • You don't need to fork the whole kernel, just make it support ``big iron'' as a configurable feature.

    If the same code cannot handle both kinds of machines, then you eventually need both pieces of code in the same codebase, not a fork.

    Forking is essential for experimentation. That's why we have tools like CVS which encourage forking for making stable releases and for experimenting with new features.
  • I don't see why it's significant if the big iron patches are a 'big deal'... It's not as if anybody putting together a system like that isn't already paying big $$$ to somebody for a service/support deal. If anybody is going to pay more than you make in a year on a computer, surely they can spend a little more time/money patching the kernel to take advantage of it.
  • Hey, this was to win a bet. Can't blame me if I want to get some money off of a poor sap, can ya?
  • At this point, you are a fork just renamed. Anything that is added to 'foo.c' would then have to be merged into 'foo-garbage.c' or 'foo-bigiron.c'. Soooo what have you gained? Nothing.
    There is absolutely nothing wrong with a fork if whoever forks takes on the responsibility of forever merging the tip changes onto the fork.

    On a personal note, it is nice to see you GPL boys now possibly having to code with real-world issues like forks and specials. I don't mean it as a slam, just an observation.
  • The last thing Linux needs is more installation and setup complexity.
  • That is not what I meant.

    I thought I avoided that confusion by saying "platforms (or architectures)."

    To be clear, there will next be an abstraction of platforms (architectures), not to be confused with an abstraction of platforms (OSes).

    What I meant is that the next level is something like Transmeta's code morphing software, which will "abstract out" (or "make transparent") the hardware architecture. This will make it irrelevant to OSes (be it Windows 2006 or Linux 4.2.x) whether the system is a sub-palm with a scant 256 megs of RAM, or a supercomputer with 1/4 T of RAM (or whatever.) This is the subtext, which you seem to have missed, which makes my post relevant to the article, which is about Linux possibly having to fork to properly support the growing rift in supported architectures.

    I'm not really sure why I am even replying, since your sig clearly demonstrates that you are going out of your way to misunderstand.

    -Peter
  • Fork! Fork!

    Maybe we can get changes to the VFS and VM system now!



  • by jjr ( 6873 ) on Wednesday September 27, 2000 @02:02PM (#748983) Homepage
    What I believe would happen for the big iron machines is that they would have a different directory for them, and the memory management code would be under there. So now you have the memory management for the big guys and the other memory management code under the same source tree. When you compile your kernel, it looks for the proper management code. I know it's not that simple and there is more to it than that, but that is what will happen if they fork and come back together.
  • What about an optional switch during the install? Include support for both memory management methods and allow the user to choose. Of course the default would be standard and optional would be BigHonkinMem.

  • Yes ... after all, it's not as if people are looking to run Quake on big iron.

    /me pauses to look at the Alpha thread

    Never mind ...

    =) [on a serious note, I agree ... so *what* if development forks, would it really impact the average user all that much?]

  • According to the latest stable kernel's Release Notes [linux.org.uk], there are separate source trees for MIPS, ARM, 68k and S/390. Looks reasonable too; it's a wildly different architecture. Heck, IBM themselves could maintain it.
  • I'm surprised that RedHat or some other big dollar linux company hasn't already begun R&D on something like this. Perhaps no one believes the market would support the amount of money it would take to develop and support it?

  • Where can I find this kernel patch for instant-on? I am definitely interested.
  • 2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies. Most people are not going to spend money and time ripping out an old floppy.

    It's probably just me, but I've never seen a 5 1/4" floppy disk drive on anything except 286's and below. Hmm.. or maybe once.. yes.. I did see it on a 386 once. But only once.

    Most 286's had 3.5" too .. at least those I used.

    So, THAT is not a problem, and besides, it's untrue ;D


    --
  • by earlytime ( 15364 ) on Thursday September 28, 2000 @03:37AM (#748990) Homepage
    on the subject of forking...
    why do we need one huge kernel anyway? Probably several kernels are needed. One for big-ass servers, one for tiny-ass routers, one for mainstream workstations, and one TBD. Having one all-encompassing kernel makes building the kernel a pain in the ass. I've been using Linux for four years, and I still have to build my kernels a couple times before I get it right. So many freakin' options, I'm bound to get something wrong.

    but that's just my opinion...

    -earl

  • No, that's how Microsoft tries to cope with some truly abysmal hardware out there, something which Linux doesn't do. For example, any fool can throw together a sound card from one or two chips and some analogue glue these days - the results are sold NEW for $15-20. No-one in their right mind would try to use one, and the WHQL exists to speed up the process of finding out which are the turkeys.
  • I hope I'm not too redundant: I don't agree. One of my issues with Microsoft is that MS is in bed with HW manufacturers, so that they can sell the faster and better hardware that Windows requires.
    I agree that we do need faster CPUs, but not just because the OS demands it. Linux is the best example and proof: you can add features to the OS while still being able to install it on your 486.

    I myself have two 486s running Slackware, and they do their job amazingly well, just as an Athlon 1 GHz would have done. But I don't want to be forced to buy an Athlon 1 GHz just because someone decided that I don't need my 486 PCs anymore.

  • by josepha48 ( 13953 ) on Wednesday September 27, 2000 @02:13PM (#748993) Journal
    They do this now for 1 Gig mem limitations. The problem is that there are so many #ifdef and #ifndef's in the linux kernel now that some people do not want added kernel options (more #ifdefs).

    One of the issues that people seem to fail to realize is that Linus is not necessarily rejecting the patches because of what they do, but because of how they are implemented. If patch code is submitted to Linus and the patch is going to make maintaining that system difficult (read: messy, unmaintainable code), Linus will reject it. Linus does not like large patches either. He likes bits and pieces and clean fixes. Hey, he started this whole thing; I think he has that right.

    Another thing to think of is that ZDNet is a news network. Everyone has been saying that the kernel will fork and blah blah. There are already forks in the kernel but people just don't realize this.

    Redhat kernels: Have you ever tried to apply a patch to a stock Redhat kernel? I know that since RH5.2 they ship the Linux kernel with their own patches.

    SuSE kernels: The last SuSE I installed (5.3) had both a stock Linux kernel and a custom SuSE kernel with custom SuSE patches.

    Corel: never tried them, but they patched KDE and made it hard to compile other KDE software with their distro.

    Point? There are already forks in the Linux community, yet it goes on. That is the whole thing about open source. There can be forks. If an idea is good it gets into the mainstream kernel. But these 'forks' need to be tried first and become tested and cleaned up in such a manner that they can exist with the rest of the Linux kernel.

    If you think that everyone is running P200 or P500 or GigHz machines you are wrong. I am sure that there are lots of people out there that are running old 386s / 486s with Linux as routers, firewalls, etc. After all, you do not need a superfast machine for a firewall if all you are going to firewall is 3 or 4 other machines.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • I can continue to run my favourite ZX Spectrum games under MAME under MS-DOS under DOSEMU under Linux under S/390 !
  • First fork!
  • Huh????

    Solaris is based on SysV 4.x. SunOS was based on BSD, but the current BSDs are not based on it; they are based on the same code it was based on.
  • My 800MHz Athlon system has a 5.25" floppy drive "just in case"!
    It's because:
    a) I upgrade my systems rather than throw them out
    b) I started with an old 486-33 system and this drive and the keyboard are the last remnant of it.
    c) I write software for embedded systems and you'd be surprised how long some systems remain in operation.....
  • You miss the central issue. The problem isn't the memory thing, but whether developers at these companies should fork the kernel to take advantage of their hardware. You can't decide this issue by issue; you should have a grand plan for it. Otherwise, the decision process slows down development of the kernel.
  • I see the original poster already replied, but here's my answer: sure. I have 3 boxes at home with a fourth being built, and I wish I had more. Not because one can't do everything I need it to, but because I want to play around with a real network, not the typical linux server + windows workstation that seems to be fairly common. I now have freebsd, debian, beos, redhat, qnx, and soon, windows me all loaded on different partitions, but how am I supposed to learn about the different ways they interact if I only have 1 or 2 machines? That, and I always load one machine as my server to handle email and internet access, and then don't touch it anymore, other than updates. I can then load and explode everything else with impunity, and not have to worry about whether or not someone is trying to send me email whilst I'm frantically reloading my drive after a failed experiment with dd or fsck :-) Anyways, just thought I'd drop a line explaining why *some* of us have no problem "wasting" gobs of electricity on multiple computers. Pure hack value ;-)

  • I've seen many articles about why Linus won't include this or that. Don't forget Linus is one of the greatest computer experts on this planet; he doesn't just say no because he's having a bad hair day.
    Linus is probably rejecting this because:
    • 2.4 is frozen and is not accepting any more major changes. If he doesn't stick to his guns here we won't see 2.4 for another year or so.
    • The 2.5/2.6 list of kernel changes and additions is still being drawn up.
    • The patch probably needs improving so as not to cause slowdowns and problems for 99% of users. Some more thought needs to go in here. Yes, an extra feature might be really good, but Linus wants to keep the kernel nice and not turn it into one huge blob; he's already talking about structural changes to increase the modularity while maintaining the efficient monolithic core of Linux, and this will take some work. Maybe Linux 3.0?
    So let's not complain, but ask why. Linus is a logical guy, but sometimes we can't always see his reasoning.
    I do think, however, that Linux is getting so big that Linus will have to change the way patches are integrated and accepted; he's going to have to delegate more and become more concerned with Linux's overall direction and working with the big companies while also working in the interest of the community. I think the US government should even pay for this - why not, they pay for NASA and Linux is a lot more use to citizens than a space shuttle.
  • Read the link, and I quote:

    9. Linus agrees in principle to take this code in. It has already been reviewed by Ingo and Andrea. Linus wants to clean up the page allocation data structures a bit before imposing this code on top of it; I am trying to help him do that. New: As of 2.3.31, this code is in under CONFIG_DISCONTIGMEM.

    There will be no kernel split - Linus has agreed to put it in.

    Geesh...

  • by Anonymous Coward
    S/390s, Starfires, Wildfires, SP clusters. All scale to 1000 processors+, many gigabytes of RAM, pushing a petabyte of store (soon)

    Itty-bitty palm-top, wear-on-your-wrist PDAs. One processor, 2Mb RAM, no permanent store.

    Linux runs at both extremes. Inefficiently, but it runs.

    Now, you want to manage this scalability with the preprocessor. Well, that's nice. Off you go.

    One day, you may get to work on a large software project. Clearly, you haven't done so far.
  • Are you saying that they are forking it (as the headline suggests) or simply guessing that some day they may fork it? Just fear-mongering?

    If they produce stable patches that can compile cleanly in with everything else, especially after the new kernel revs are done in 2.3 and 2.4 is stable, I bet they WOULD make it into the mainstream.
    They simply don't add everything just because it's just starting. Lots of great features started out as separate kernel patches and eventually made it into the main tree.

    And if they want to fork, what's the big deal? Who cares? They are more than free to do so, and produce their own. It's not like it would be any less open.. and heck, a third party can always glue them back together and ship his 'complete linux' or whatever...

    Sheesh. Hard up for topics today?
  • Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.
    A 486 has a lot more in common with the computer I'm running now than does anything with 256GB of RAM. None of the patches for big iron have anything to offer me or the vast majority of people who run Linux on modest hardware.

    If 486's weren't supported it probably wouldn't be that big a deal -- there's little lost in running a 2.0 kernel, and in the future that will probably remain true. (We should face it -- the kernel is really rather boring) But getting rid of 486 support wouldn't help much.
    --

  • Ok, one of the threats of open source is that it will fork. We've all seen the forks in BSD, and it certainly hasn't killed that. Why not a fork for big iron machines? It doesn't even have to be maintained by Linus. We have the crypto patches and the AC patches; how about a big-ass computer patch?
  • by DebtAngel ( 83256 ) on Wednesday September 27, 2000 @12:22PM (#749006) Homepage
    I am constantly putting Linux onto old hardware. Need a quick, dirty, and cheap NAT box? Throw Linux on a DX2/66.

    MP3 file server for the geeks in IT? Throw in a big drive, but a 486 will do.

    Hell, my company's web server is running on a low end PII, and I think it's a horrendous waste! It could be doing *so* much more.

    Linux is a UNIX for cheap Intel hardware first. That's where its roots are, and I don't see why it should sacrifice its roots for big iron that can quite happily run a UNIX designed for big iron.

    Neither does Linus, apparently.
  • by matman ( 71405 ) on Wednesday September 27, 2000 @12:23PM (#749007)
    So many things are distributed as kernel patches that it doesn't really matter. Anyone with that kind of hardware will obviously have the expertise and the money to install an appropriate kernel patch. No box that big is going to run an out-of-the-box kernel anyway; if you're using that sort of hardware, you're going to want to tweak it. As long as there is not a division in the majority of users' needs, there is not likely to be a major fork.
  • I think one reason that some stuff isn't going into the official standard kernel is that there's no way to put code into the official kernel such that people who don't want it don't have to download it. It would be really helpful if you could run a configuration pass, and then download only those files that you were actually going to use. That way the kernel sources could get really big, containing all the patches and versions of stuff that are probably good ideas, without making it impractical to get and unpack.

    There's no real reason there can't be different official memory managers for low memory and high memory situations, since there are clearly different issues. Of course, at this point, lots of people testing a single one is important.
  • by MemRaven ( 39601 ) <kirk.kirkwylie@com> on Wednesday September 27, 2000 @12:24PM (#749009)
    (sorry for the double post, this is to the first half of the comment).

    It depends on how pervasive the code changes have to be. If it involves #ifdeffing every single file, then it's going to be very difficult to maintain that, and it's going to be very unlikely that the maintainers of the project are going to allow that feature to remain part of the major distribution.

    That problem is a dual-edged sword. It also means that maintaining one big patch is a complete nightmare. Every version of the kernel that comes out has to be separately patched, with two important considerations:

    • The code which needs to be inserted has to be reinserted. If this is all separate files, that's easy, but if it's not that's a complete nightmare. And the code to call into that separate file is then a nightmare.
    • Any changes which have broken the patch have to be investigated and possibly changed. If you're working on filesystem patches, for example, someone working on the core fs work may have broken your patch without your knowing it, because they're not including your code in their coding/debugging process. So every time there's a change to the kernel, you have to figure out whether that change will potentially break your work.
    The only way to resolve the second is to keep the patch inside the actual kernel, so that the authors of the rest of the system are aware of it, and will either try their best not to break it, or will do first-round of changing the new functionality to work with their changes.

    Basically, it comes down to how pervasive the work has to be. If it's a really pervasive change which touches on almost everything, then the only option from a software engineering perspective is a fork. Anything else is being done from a feel-good PR perspective, because it just doesn't make any sense from a technical perspective to try to maintain a huge patch that covers everything.

  • Even in the cases where Linus has outright rejected BigIron patches, nothing stops a hardware vendor from patching the source after the fact - almost every major Linux distribution does this now for x86/ppc/sparc etc. It is not a matter of patches, it is a matter of designing the kernel. Have you seen what happens to the Linux kernel above four processors? Nothing; it flatlines, no improvement. What about other Unices? SCO and Solaris are both scalable. Even that dread OS from Redmond, Windows NT, can scale well up to 8 processors. Here comes my point. The current Linux kernel runs fine for my machines at home, and some servers here at work. But the BIG IRON, no way. What was the big iron two years ago? A Pentium II/450. What is the average workstation at my workplace? A Pentium 450. My point is that the Big Iron of today is the workstation of tomorrow. We need to get the kernel working now for these machines.

  • Right now, for example, if you want Apache HTTPD threads at the kernel level, there is an option (in make menuconfig at least) to include this support; it isn't included by default.

    Couldn't it be worked out like that? Include the patch in the kernel, have it disabled by default, but have an accessible method of readily adding support for it?

    No, I am not a kernel hacker or (real) programmer; that is why I am asking.


  • You are forgetting about possible embedded uses for Linux. There are a number of devices with 386 clones in them running Linux in the consumer market with only 2MB of RAM. Soon cellphones, PDAs, etc. everywhere will be running Linux.

    "Evil beware: I'm armed to the teeth and packing a hampster!"
  • If 'linux' wants to be a mainstream desktop os? 'it' shouldn't fork?

    This is the problem, folks.. Linux isn't an 'it'. It's a plural, it's an ideology, and a relatively loosely defined codebase.

    We have compatibility between distributions right now by *fluke*, because no one has seen a need to change that. There is no 'rule' that says it has to stay this way.

    If the community wants linux to be on the desktop, then THAT IS WHERE IT WILL GO. Period. Regardless of who forks what. If we need a way to distinguish between our 'community' supported stuff that runs on 'true' linux, and the forks, we will do so. It's no big deal, really.
  • Solaris IS SunOS.

    SunOS V4.x was based on BSD.
    SunOS V5.x was based on SysV 4.x

    Solaris is the name for SunOS 5 + OpenWin
  • All these talks about linux on big iron.. so when will I finally be able to run Apache on a tire iron ?

    Imagine a beowulf cluster of tire irons. Now that would make a great stress reliever.
  • by mindstrm ( 20013 ) on Wednesday September 27, 2000 @02:31PM (#749016)
    Because now is not the time.
    A great many features start out as independent kernel patches.
    When things stabilize down the road, I'm sure they will gladly put 'Big Iron' flags in the compile stuff.

    The point is, Linus (et al) can't just stick everything everyone submits, big OR small, into the main kernel, especially if it's not even developed yet!
    Also... the feature set for current kernels is already listed... and this isn't one of them.

    You don't just add shit to a project partway through because someone wants you to.

    I'm sure that by the time 2.5 kicks up, we'll see a 'big iron' flag in the main kernel options.
  • I'd hazard this kernel patch [sch.bme.hu] is the one in question.

    I've not tested it, but now that I've got a laptop and Windows people to impress, I just might give it a go ;)
    -
    sig sig sputnik

  • by account_deleted ( 4530225 ) on Wednesday September 27, 2000 @12:24PM (#749020)
    Comment removed based on user account deletion
  • ...for servers with more than X gigs of RAM to use this algorithm, and for servers with less than X gigs of RAM use that algorithm, etc...?
  • Ifdef's are one of the hacks in C that are nice if used in moderation, but you can see where this might go....

    Say Linus puts one BigIron patch in; then he won't have any reason for not putting the rest in, and when you do that, you get a nest of #ifdefs and #endifs (because they are fundamentally different than PCs, there would be a lot of changes -- the style of the kernel might have to be changed in order for the patches to be applied and keep it in a usable state).

    What this means is that it is significantly harder for kernel hackers to read the code. That is a Bad Thing (tm). As I read in another post, Linus will put these things in, just not in the 2.4 kernel.
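
    Purely to illustrate that worry, here is a hypothetical sizing helper (none of these CONFIG_ symbols are real kernel options) after a few "just one more #ifdef" merges; every new option multiplies the combinations a reader and a tester have to keep in mind:

    #include <stdio.h>

    /* Hypothetical example only: picks a page-cache budget depending on
     * which invented config options were enabled at compile time. */
    static long cache_budget_pages(long total_pages)
    {
    #if defined(CONFIG_BIGIRON_VM) && defined(CONFIG_NUMA)
        return total_pages / 8;     /* spread the cache across NUMA nodes */
    #elif defined(CONFIG_BIGIRON_VM)
        return total_pages / 16;
    #elif defined(CONFIG_SMALL_RAM)
        return total_pages / 64;    /* leave room on an 8MB box */
    #else
        return total_pages / 32;    /* the "normal" case */
    #endif
    }

    int main(void)
    {
        printf("cache budget: %ld pages\n", cache_budget_pages(4096));
        return 0;
    }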

  • What are you replying to? I always block port access to anything unless it is explicitly required.

    Is it regarding my comment about incompetent sysadmins not installing critical security updates, who should basically be dismissed for gross incompetence? If so, I stand by this. A lot of security breaches are made through known open ports. Broken CGI scripts, holes in wu-ftp that a truck could be driven through, default passwords and so on.

    I've seen the after-effects of incompetent staff, and had to clean up after them. Yuck!

  • ...but the name of that option should be "CONFIG_AWW_YOU_BASTARD_BIGMEM"
  • In said same conversation, they mention that "I would be surprised if we had any serious problem at 32 or 64 CPUs."

    This issue of scalability of Linux has been put to rest, IMHO.

    So... the scalability of Linux has been put to rest because of an opinion on how it might work out?

    In my opinion, the scalability of Linux can be put to rest if someone proves it by running it on 32 or 64 processors, and gets the same kind of scalability as other OSes that run on such numbers of processors.

    -- Abigail

  • IBM sure is ambitious about their embedded Linux toys, aren't they? I just hope we don't see headlines when some idiot pokes an eye out: Linux fork's too sharp; downgrade to MS Spork.
  • FWIW, grep and wc report more than half a million #ifdefs in the 2.2.16 kernel.

    That must have been a huge increase since 2.2.13 then.

    $ find /usr/src/linux-2.2.13 -name '*.[ch]' | xargs grep '^# *if' | wc -l
    22022
    $

    -- Abigail

  • This is probably true in the long run, but expecting current Linux kernel maintainers to maintain code for machines they'll never see is unrealistic. These sorts of changes are going to occur at first experimentally in-house at a large corporation. That will be a fork for at least a little while. Presumably they'll be GPL'd (they better be!) so the changes can always be brought in if people want them. And hopefully the unnamed corporation will want the good karma they'll achieve by later hiring someone to help with folding the resulting code back into the regular kernel.

    With GPL'd code, I don't find a (possibly temporary) fork to do something extremely specialized all that threatening; if anything it sounds like a practical necessity at the moment.

  • Perhaps the time has come to fork the older machines

    Sun Microsystems delivers a kernel that runs from Sparc Classics to E10ks, without forking off "older machines".

    Which is great, as you can develop stuff on low end machines and run it on large production servers, without the need to have costly development servers around.

    -- Abigail

  • The 2.2 series kernels will still be maintained for several years after the release of 2.4. Linus has already said that he will stop supporting systems with less than 4MB of RAM in the 2.2 series. Why not raise that cap with 2.4 or 2.5/2.6? I see very little reason to run a more recent kernel on a 486. Is anyone going to have USB or AGP on a 486? What about a new network card or RAID controller? Old kernels are still being maintained, so security is not an issue. It will benefit more people to raise the standard than to keep it low.
  • Now, I haven't done any kernel hacking myself, but if I were working on the kernel I'd feel kinda taken advantage of if the IBMs and SGIs of the world were to fork the kernel, and focus all their efforts on scaling the system, without contributing to the areas that make a difference on affordable machines (ie sub-$100K)

    Just to make things clear, IBM and SGI don't want to fork. It would be unfair to accuse IBM and SGI of taking advantage of you if they make the effort of writing non-trivial patches, offering those patches back to the community, but see those patches rejected.

    IBM and SGI want Linux to succeed. On both the big iron and the simple workstations (programs for big iron machines have to be developed somewhere, and you don't think every developer for a big iron machine has 32 CPUs stacked under his/her desk, do you?), but to do the former, changes have to be made. They don't demand that anyone make those changes - they made them themselves. But if the people in charge reject them, what can IBM and SGI do?

    -- Abigail

  • by Anonymous Coward
    However, if you are forking an OS, especially for the reasons mentioned (e.g. mongo huge memory management), you wouldn't necessarily have to change the API at all, rather the underlying implementation. If I use a malloc(sizeof(int)) somewhere within my software, and proceed to compile it into a binary, when the malloc system call gets switched to, it would use the IBM-developed access-my-1-terabyte-of-RAM algorithm and then return. When I compile this on my dinky little 386, when I run it, it uses the memory allocation routine which is currently written into the kernel. I doubt that there would be any addition in the number of system calls, so as to maintain POSIX compliance, just the bells and whistles which are hidden well under the hood. just my own os perspective. matt winkle_m.at.NOFRIGGINSPAM.denison.NOSPAMDAMNIT.edu
  • Duh. Actually, Java is (in theory) an abstraction of platform. If you're hoping for Transmeta to do that, then I have a bridge to sell you.

    "<BR><BR>"Sig

  • by gwernol ( 167574 ) on Wednesday September 27, 2000 @01:00PM (#749047)

    It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.

    The biggest potential problem of forking an OS is binary and API incompatibility. The reason most people use computers is to run specific applications. I want to be able to walk into my local CompUSA/log on to Egghead and get a copy of application X and run it on my computer. I don't really care what the OS is, as long as it runs application X.

    If I've got Linux on my system, I'd like all applications that run on Linux to run on my system. The more forks that introduce binary or API incompatibilities, the less chance I have of being able to run the apps I want, and the more reason I have for removing Linux from my computer.

    If Linux wants to be a mainstream desktop OS, it needs to make sure it doesn't fork too much. That was a big part of the reason desktop UNIX failed to take off in the late 80's/early 90's.

  • I totally disagree. The power of Linux stems from the vast array of machines I can use it on, from my XT (I have a boot disk for the 1.0 kernel series), to my 486 NAT box, to my mail/LDAP server (AMD/400).

    What is bothering me about the current distributions is that they are forgetting about old hardware. I can't install Mandrake on a system with 8 megs of RAM, but the system will run. How screwed is that - the installer needs more RAM than the OS!

    If this Linux bloat continues, I'll just keep moving more of my boxes to the BSDs (Free and Net are my personal favs - gotta love the daemon!)

    Just don't forget that Linux has prided itself on excelling on hardware that most people would call "old". As we go forward, we can't forget the past.

  • Why are you downloading the whole sourcetree?

    You download it once after installation. After that, just download the patches. 1 Mb should be pretty fast, even on a slow connection.

    And if the distributions would just give us a kernel source that wasn't patched to heck, it would be even easier (although I think Redhat's .src.rpm keeps the pristine source and the patches separate before compilation).
  • All long-lived software projects go through architectural crises as their scope expands. This is normal. It goes a bit like this:

    Joe Hacker writes a small application to solve a simple problem. Simple data structures are used. Unless Joe is a genius (and maybe not so productive) no thought is spent on interactions between components.

    The thing proves useful. Other developers contribute patches. Initially the patches merely fix things and fill gaps, so the overall quality of the software rises. The software becomes a "polished" package.

    After some time, there is a significant amount of contributed functionality that was outside of the original scope of the package. The underlying data structures don't quite fit it. As there is no coherent model for the interactions between components, people tend to just add things where it seems most immediately convenient. Quality suffers. The project is in a crisis of architecture.

    The way to get out of the crisis is to take a step back, look at the new scope of the project, look at the way the current components ought to interact, including foreseeable extensions, and design a new architecture. This is not a case of throwing the code out and starting again, but a refactoring of how things hang together. There will be a period of instability, but it will be relatively short.

    These architectural crises are normal and the only way to have a successful long-lived project. Some approaches that don't work are:

    The ivory tower: Design the whole architecture in the beginning and never change it. X11 is the best example.

    The clean slate: Throw out all or most of the project and rewrite the central framework from scratch. See Mozilla.

    The ancient ship: Do nothing. Continue to add functionality in whatever way each developer sees fit. Eventually the software resembles a sci-fi starship that has been patched here, expanded there, re-plumbed somewhere else...

    As for Linux, it appears to do this very well with the even/odd release pattern. Every odd kernel release affords the developers a planned architectural crisis, so they can accommodate a new set of functionality cleanly. I am confident that the developers will find whatever architectural tool is needed (#ifdefs, macros, templates, modules...) to maintain everything from embedded to high-end systems in one code base. It may be till 2.5 though...

    Pavlos

  • by lwagner ( 230491 ) on Wednesday September 27, 2000 @12:42PM (#749068)

    Yes, it is nice that it will still run on a 386, but there are other factors to consider:

    1. Earlier platforms generally had no CD-ROM. Most Linux distros (except for fringe distros) come on CD-ROMs. Most people do not want to buy a CD-ROM for their 386, 486s. There are places that offer small "floppy-disk-sized" Linux distros, but they are obviously chopped. 1400K on a 500MB HDD.

    2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies. Most people are not going to spend money and time ripping out an old floppy.

    3. Earlier machines had RAM limitations, aside from the fact that no one wants to really waste the money on putting more EDO memory into an obsolete machine.

    4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes; Most people will not research whether the particular BIOS is okay to determine whether or not to spend money on the first three items.

    5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?

    6. Earlier network cards are not all supported to get around many of these limitations... I tried to get around not having a CD or a 3.5" floppy in an old 486 by using some sort of older ISA-based network card.

    Obviously, there are many issues to consider before nodding one's head to allow Linus to try to preserve performance in ancient boxen for nostalgic purposes.

    Lucas



    --
    Spindletop Blackbird, the GNU/Linux Cube.
  • by Foogle ( 35117 ) on Wednesday September 27, 2000 @01:20PM (#749073) Homepage
    It's not excessive to expect someone to recompile their kernel to get optimal performance under extreme circumstances. It would be excessive to expect someone to recompile over tiny differences, but we're talking about the difference between 64-128 megabytes and 256 gigabytes of memory. People setting up machines that use such enormous amounts of RAM won't be put too much out of their way to recompile with an ENORMOUS_MEMORY option.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • You're assuming that merging this code would be as simple as adding a BigIron.o module... I really doubt that this is the case.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Surely checking the amount of memory at runtime and using a different algorithm based on that value is not too hard.
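
    A rough user-space sketch of that idea, with invented function names and thresholds: probe the machine's memory once at startup and point a function pointer at the appropriate algorithm, so the rest of the code never has to check again.

    #include <stdio.h>

    /* Two hypothetical cache-sizing policies: one tuned for small boxes,
     * one for machines with huge amounts of RAM. */
    static long small_box_cache_pages(long total_pages) { return total_pages / 64; }
    static long big_iron_cache_pages(long total_pages)  { return total_pages / 4;  }

    /* Chosen once at startup; everything else calls through the pointer. */
    static long (*cache_pages)(long total_pages) = small_box_cache_pages;

    static void mm_init(long total_pages)
    {
        const long big_iron_threshold = 524288;   /* 2GB worth of 4KB pages */
        if (total_pages >= big_iron_threshold)
            cache_pages = big_iron_cache_pages;
    }

    int main(void)
    {
        long total_pages = 16777216;               /* pretend: a 64GB machine */
        mm_init(total_pages);
        printf("cache budget: %ld pages\n", cache_pages(total_pages));
        return 0;
    }
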
  • by Private Essayist ( 230922 ) on Wednesday September 27, 2000 @12:07PM (#749080)
    From the article:

    The process of non-standard kernel patches is just fine with Torvalds. "On the whole we've actually tried to come up with compromises that most people can live with," he said. "It's fairly clear that at least early on you will see kernel patches for specific uses -- that's actually been going on forever, and it's just a sign of the fact that it takes a while to get to a solution that works for all the different cases." He continued:

    "That's how things work in Open Source. If my taste ended up being the limiting factor for somebody, the whole point of Open Source would be gone."

    It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.

  • by OrenWolf ( 140914 ) <ksnider@flarERDOSn.com minus math_god> on Wednesday September 27, 2000 @12:07PM (#749082) Homepage
    If you've followed the SGI/Linux debate on K-T, it's obvious that they intend to incorporate the option to enable BigIron features in the future, just not for 2.4 - as has been traditional with Linux.

    Even in the cases where Linus has outright rejected BigIron patches, nothing stops a hardware vendor from patching the source after the fact - almost every major Linux distribution does this now for x86/ppc/sparc etc. (NFSv3 is a great example)

  • Quite true, our Solaris boxes here on campus report themselves as running both SunOS 5.7 and Solaris 7.

    Here at ASU the Sun guys I know in IT refer to the SysV versions of SunOS as Solaris and the previous BSD based versions which came before as just SunOS. What the vernacular terms are where you are at I don't know.

    Lee
  • by fgodfrey ( 116175 ) <fgodfrey@bigw.org> on Wednesday September 27, 2000 @06:53PM (#749085) Homepage
    So Quake isn't the big issue here. Oracle is the big issue. As are Sybase and DB2, etc. The problem is, at what point will ISVs say "this isn't Linux anymore"? The whole reason that large companies like us (SGI) and IBM, et al. are going to Linux is to get more applications. If we issue the SGI patch for monster systems, we could do all kinds of things like rearrange and add locks, add kernel threading types, and make the kernel preemptible. Is that really still the Linux kernel in the eyes of Oracle? Probably not. Then we lose, 'cause customers aren't going to buy from us to run Oracle if they can't get support from Oracle (whether they will buy from us to run Oracle anyway is another question).

    The other reason that we are scared of the monster systems patch is the number of Linux kernels that come out. How often do we recheck the patch? Which kernels do we release the patch officially for? How do we decide? There are no really good answers to any of those questions which is why the big patch is to be avoided if at all possible.

  • If they fork all it would cause is a temporary fork. It will be incorporated back into the kernel any how if it is any good. If it is not good people will not use it. If they want to fork let them.

    No, you're missing the point. They would need to fork because the memory management techniques for "Big Iron" machines are fundamentally different from those for low-end home machines. You need to use different techniques on machines that are so different, so they won't get incorporated back into a single kernel. You will get (for some reasonably long timeframe) two different kernels as a result of this.

    Now, whether that's a bad thing or not is a different question.

  • I had Windows 2000 Professional running on my computer, and I print a lot of stuff on my printer.

    I installed SP1NETWORK.EXE (service pack 1) like a good user, and now when I print, it takes over 2 minutes per black and white page of text, whereas before service pack 1 it was fast as usual. I was already running the latest printer drivers for my model of printer - I checked their website.

    When I installed SP1 I chose to save automatically so I could uninstall it if I had to. When I went to uninstall it, I got the error message "Windows will uninstall the Service Pack 1 but will not uninstall the Service Pack 1." I wish I had a screenshot of it.

    Now my only option is to save to disk and print somewhere else, or follow THE USUAL MICROSOFT SOLUTION - RRR = Reboot, Reformat, Reinstall.

    And I can't believe I paid fucking money for this piece of shit.
  • Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.

    It's not a question of older but of smaller, and if you've ever compiled a kernel from scratch, you know how insanely flexible the choices are. Kernel Traffic, as others have mentioned, is a must-read [linuxcare.com] if you want to understand the design decisions being made.

    For systems with limited resources -- embedded systems, or those mini-distributions with under 16MB of storage (flash) and RAM -- the decisions made for the kernel in general are the same as for larger systems with a few gigs of RAM and multiple processors. Read a few comments on these in KT, and the reasons will become more obvious.

    I agree with others who said that this is just Ziff-Davis making an issue out of nothing, and that nearly everything can be a patch or an ifdef -- no fork needed.

  • The issue isn't really the 386; that's just an exaggeration of the problem. The same patches that help these massive machines will hurt performance on most machines. The article mentions that allocating memory for caching is a problem for machines with little RAM; 15 megs of pure cache can really hurt on, say, a Celeron with 64 megs. Not a big problem for, say, a 31-CPU Alpha with 256GB of RAM. Compared to monsters like that, most desktop machines running Linux are about as powerful as that 386 or 486.

    I can see Linux eventually dropping mainstream support for the 386, but right now there's no huge gain to be made by not supporting it. Killing performance on most x86 machines out there on the other hand is a bad thing.
    treke

  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Wednesday September 27, 2000 @01:36PM (#749095) Homepage
    If the system gets wedged up with a whole lot of #ifdefs, that makes it more and more difficult to maintain. LOTS of them can make software impossible to maintain.

    I wouldn't be shocked if the stretching of boundaries that comes from:

    • "Big Iron" changes, as well as
    • Embedded System changes
    winds up turning into there being some clear demands for forking.

    The fundamental problem with a fork comes in the code that you'd ideally like to be able to share between the systems. Device drivers float particularly to mind.

    After a 2-way fork, it becomes necessary to port device drivers to both forks, which adds further work.

    And if a given driver is only ported to one fork, and not the other, can it correctly be said that

    Linux supports the device
    or do we need to be forever vague about that?
  • by Spoing ( 152917 ) on Wednesday September 27, 2000 @01:39PM (#749096) Homepage
    Soo.. Bleh, I hate it when people talk about Linux like he's omnipotent.

    Read KT [linuxcare.com]. Read KT [linuxcare.com] often.

  • by mikpos ( 2397 ) on Wednesday September 27, 2000 @01:40PM (#749097) Homepage
    Well, you could put (the analog of) ifdefs into the Makefile, e.g. if there were big differences between conventional and big iron ways of doing things with feature 'foo', you would have 'foo-garbage.c' and 'foo-bigiron.c' and have make figure things out accordingly.

    Of course, then you would have to ensure that both offer a similar interface so that either can be used transparently. This *could* be a maintenance nightmare. I think there are a lot of ways that this *could* be done, but it depends greatly on the details involved whether it will be practical or not. I find it hard to believe that Linus would have overlooked something as obvious as ifdefs or makefile tricks, so he's probably used his (undoubtedly god-like) judgement to decide that it would be a bad idea in the long run.
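
    Roughly what that trick could look like in C (all file and symbol names here are invented): both variants implement one small shared header, and the build compiles exactly one of them, so the rest of the tree is none the wiser.

    /* cache_policy.h -- the shared interface both variants must satisfy. */
    long cache_budget_pages(long total_pages);

    /* cache-small.c -- selected by the build for ordinary machines. */
    #include "cache_policy.h"
    long cache_budget_pages(long total_pages) { return total_pages / 64; }

    /* cache-bigiron.c -- selected instead when a big-iron option is chosen,
     * e.g. by adding cache-bigiron.o rather than cache-small.o to the
     * Makefile's object list. Callers see the identical prototype either way. */
    #include "cache_policy.h"
    long cache_budget_pages(long total_pages) { return total_pages / 4; }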

  • by BlowCat ( 216402 ) on Wednesday September 27, 2000 @12:10PM (#749098)
    I don't understand why anybody should fork code only because it has to behave differently on different systems. Why not use ifdef's? If too many ifdef's would be needed it may be better to have separate files and an option in "menu config". Even the current configuration system can handle it.

    Forks are usually justified only if the original maintainer pollutes the source with hacks or changes the license.

  • by Shotgun ( 30919 ) on Wednesday September 27, 2000 @12:11PM (#749099)
    Why are the patches being rejected? Couldn't they be conditionally compiled in?

    There is a patch out there that stores the computer's state to disk before shutdown and then gives you an instant boot. My home machine is used that much, and my UPS needs repair. This patch would be useful for me, but I'd have to patch it in by hand and then I'd be out of sync with the official Mandrake kernel. That means I'd have to patch in security updates by hand.

    The problem is, this patch is as useless to Big Iron as support for 256GB of memory is to me (right now). But why can't both Big Blue and I have our way with conditional compiles? All it would take is a couple more menu selections in xconfig.

    Do you have more than 2G of memory?
    Would you like instant-on?

  • by technos ( 73414 ) on Wednesday September 27, 2000 @12:11PM (#749100) Homepage Journal
    Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.

    I don't know, it depends on where the split of cost/benefit falls.. ZD doesn't say...

    `Sides, having a Compaq/SGI/IBM 'approved' kernel patch doesn't hurt much..
  • by systemapex ( 118750 ) on Wednesday September 27, 2000 @12:12PM (#749101)
    I'm not claiming to be a kernel expert, but forking the kernel so that there would be kernels specialized for specific applications only seems logical. A builder doesn't go around hammering everything in sight, because the hammer obviously isn't the correct tool in every situation. It's great for pounding nails into 2x4s, but isn't so good when it comes to painting walls.

    Specialized kernels are good, so long as the support behind all of these kernels remains great enough. I don't think I need to point out the possible pitfalls of forking the kernel and thus, effectively, forking the developers behind the kernel into two or more camps. But at some point, the Linux kernel that runs on a 386 should be different than the one that runs on the XYZ supercomputer, just because the latter can take full advantage of all the wonderful scalability that the XYZ supercomputer offers.

    Anyway, as I said I'm not an expert but this just seems logical.
  • by Svartalf ( 2997 ) on Wednesday September 27, 2000 @12:12PM (#749103) Homepage
    There are kernel "forks" for hard (deterministic) real-time (RT-Linux, etc.), and kernel "forks" for non-MMU machines (ELKS, uCLinux, etc.). So why not a "fork" for big iron? If the fork for big iron doesn't hinder current modern machines, or improves overall operation, it will become the main fork, with the one that just supports the older machines becoming like the other "forks" we see today.
  • Your points are valid, but unfortunately don't apply here.

    All the examples you've chosen are either processor-architecture or device-driver related. Most of the code that both of these classes use in the "core" of the OS is not architecture-dependent, and is coded for best general use.

    Forking the kernel for "big iron" may be required because utilizing that many resources effectively requires different algorithms at the very core of the OS - scheduling, virtual memory, caching, etc.

    The advantage of forking the kernel is the simplicity of maintaining the code for either. However, the major disadvantage -- as seen with the BSDs -- is that features in either tree just end up getting implemented twice.

    It's a tough question to answer, and both choices have major long term implications.

    -Jeff
  • You don't really give too much info on your setup, but aren't you consuming a tremendous amount of power by running 3 or 4 machines when one new machine could do the whole thing?
  • The problem is that it needs different data structures, and the different algorithms might well need different interfaces too; a rough sketch of what that could look like follows.
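
    For instance (purely hypothetical names, not real kernel symbols), a NUMA-aware allocator has to be told which memory node the caller wants, so the call signature itself changes, not just the code behind it:

        /* hypothetical sketch of diverging allocator interfaces */
        #include <stdio.h>
        #include <stdlib.h>

        /* conventional interface: one flat pool of memory */
        static void *alloc_page_flat(void)
        {
            return malloc(4096);
        }

        /* big-iron interface: the caller must name a memory node */
        static void *alloc_page_on_node(int numa_node)
        {
            /* a real implementation would consult per-node free lists;
             * this stub just records which node was requested */
            printf("allocating from node %d\n", numa_node);
            return malloc(4096);
        }

        int main(void)
        {
            void *a = alloc_page_flat();
            void *b = alloc_page_on_node(1);
            free(a);
            free(b);
            return 0;
        }

    Every caller that cares about locality has to change along with the interface, which is exactly the kind of churn a simple #ifdef can't hide.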
  • I don't think Linus is going to listen to you. It doesn't look like you've read what he's said on these issues already, or have spent much time compiling kernels on different systems.

    The issues you raise are packaging issues, important to people putting together distributions -- not kernel development or design issues. Even though that is the case, most distributions tend to load specific hardware support as a module.

    If you roll your own kernel, you have ultimate control over what disk, BIOS, and bus types are supported. Very little in the Linux kernel is mandatory. That's why it runs on such wildly different systems.

  • by DFX ( 135473 ) on Wednesday September 27, 2000 @01:53PM (#749114)
    Let me clear up a few things here.

    1. Earlier platforms generally had no CD-ROM.
    Install via NFS or on a pre-formatted hard disk with all the necessary files. Been there, done that.

    2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies.
    You can boot from a 5.25" floppy disk just as well as from a 3.5" one. Aside from booting for the installation, there is no need at all for a floppy drive.

    3. Earlier machines had RAM limitations
    Many old 3/486s can use up to 16 or even 32 MB RAM. That's more than enough for a small (slow) home-sized server. Even 8 MB does the job.

    4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes
    Y2K is only an issue during boot-up; after that, you can set the system's time to whatever you want. From what I've seen, Linux deals better with really old motherboards than with some brand-new ones.

    5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?
    There are very good SVGA cards for ISA, although running XFree with a "modern" window manager on such an old box is suicide. However, any kind of video card does the job for a "server" type of computer.

    6. Earlier network cards are not all supported to get around many of these limitations
    Granted, very old ISA cards might not work well, but many cards do. NE2000, old 3Com cards? No problem; they work fine and deliver good speeds, too.

    To make a long story short, killing support for old systems is a Bad Thing IMHO, and it isn't necessary either; all it would accomplish is making the kernel tarball smaller. I'm all for conditional compiles, and I've actually wondered why some of the kernel patches out there (like the openwall patch) haven't been put into the mainstream kernel as a 'make config' option. If they can put in accelerator thingies for Apache, why not this?
  • by pete-classic ( 75983 ) <hutnick@gmail.com> on Wednesday September 27, 2000 @12:12PM (#749115) Homepage Journal
    Okay, first there were systems, and they were all different.

    Then someone "abstracted" them with "BIOS"

    Then there were lines of systems, and they were all different.

    Then someone "abstracted" them with "C"

    Then there were platforms, and they were all different.

    Someone (Transmeta?) will come up with a way of abstracting platforms (or architectures) and make them "seem" the same.

    This relates directly with performance increases. When you find yourself wondering what is going to make a 10GHz system better than a 1GHz system I think the answer is the level of abstraction.

    Any number of quibbles can be made with the above statements, but I am illustrating a point, not being a historian.

    -Peter
  • The kernel DOES NOT need to fork over this. If someone did fork the kernel, it's only because they can't design the patch well enough.
    If the patch is getting rejected by Linus, it's not because he favors 386s over 40k-BogoMips machines; it's because it is bad design. Besides, if they want to make changes to the kernel that help some people (but hurt the majority), they need to design them in a way that they can be a compile-time feature, in the same way that 1GB or 2GB support is a compile-time option right now.
    Of course, it's always easier said than done. Another solution would be to forever maintain a big-iron-patch.tgz, but the reason they want to fork the kernel is probably that a patch like that is too hard to maintain.
    Another solution would be to start another branch (alpha, MIPS, Intel, and BIG-IRON), but big-iron support involves more than CPU-specific code, so that would be an issue.
  • by d.valued ( 150022 ) on Wednesday September 27, 2000 @12:13PM (#749118) Journal
    This was bound to happen sooner or later. The Linux kernel's flexibility is being taken to the limit, and people are forgetting the easiest way to improve performance for their particular rig: customize your kernel! You can add all the code in the universe, and then you pick and choose the particular things you need or don't need.

    Say I run a 486/25 with 16 MB RAM as an IP Masq router. The hard drive is an old IDE with 600 megs of space. I have two network cards, and that's about it. Do I need SCSI support? Do I need to support joysticks, X, Pentiums, AX.25, or anything else? No! I compile a kernel specifically to run the IP Masq, and run it well.

    My P100 laptop, on the other hand, needs a bit more. I use it for packet, so I need AX.25. It uses PCMCIA, so PCMCIA support needs to go in. I use XWS to run Netscape and the GIMP, so I need graphics. But my HD is not SCSI, so I yank out SCSI. My CPU is subject to the 0xf00f bug, so the workaround gets included. I brew a custom kernel, and boot time is a lot shorter.

    My big rig is a C433. I need just about everything, as I have a 3dfx card for Quake3, XWS, a SCSI scanner, and a connection to my Packet base station. I optimize compilation for the higher-end computers. I plan on getting a Cube from Apple and putting SuSE on it. Again, by optimizing the options I optimize my system.

    Get the point? If you want a one-size-fits-all kernel, use Windows. If you want a kernel which can be adjusted for your particular and peculiar environment, use Linux and customize your kernel! Now, for my laptop.
  • by Mark F. Komarinski ( 97174 ) on Wednesday September 27, 2000 @12:14PM (#749119) Homepage
    Uhmmm....There would be a few problems:

    1) Is the resulting code still Linux?
    This is a BIG question, especially for IBM and SGI who want to say they're Linux supporters. If Linus doesn't grant use of the Linux name to their OS, they're back to naming the resulting kernel something other than Linux. Big PR problem.

    2) Will the "Linus approved" patches make it into the follow up kernels released by IBM and SGI?
    I'd be willing to bet both companies are willing to do the right thing and include them, but how big can this fork get?

    Now, all that aside, distros have been doing small-scale forks for a while now. I think SuSE had a 1GB mem patch, and Red Hat frequently patches the kernels they distribute. Nothing bad for most users.
