Kernel Fork For Big Iron?

Hemos posted more than 13 years ago | from the what-to-do dept.


Boone^ writes: "ZDNet is running an article on the future of Linux when used on Big Iron. Just a bit ago we read about running Linux on a large scale Alpha box, and SGI wants NUMA support in Linux so it can support their hardware configuration. The article talks about how memory algorithms used with 256GB machines would hamper performance on 386s with 8MB of RAM. So far Linus et al. have been rejecting kernel patches that provide solutions for Big Iron scaling problems. How soon before a Big Iron company forks the kernel?"


Just an excuse? (1)

Trevor Goodchild (187368) | more than 13 years ago | (#748965)

Maybe this is just an excuse by Big Blue^H^H^H^HIron companies to escape from the OS community somewhat and have their "own" kernel? I know how we all love to think that once these companies adopt Linux they undergo a significant "change of heart", but do corporations ever really change in this sort of way? I bet the real attitude is, "Let's 'adopt' Linux for now, and exploit the OS community until we can get a better grip on this thing and create our own version."

Forks and the maintainer (4)

MemRaven (39601) | more than 13 years ago | (#748966)

Not really. Forks are also justified if the maintainer has effectively abandoned the project and refuses to relinquish it (and someone else has to "seize" control to make sure that it continues to go forward).

Just as importantly, forks are probably necessary when a significant part of the user/developer base disagrees with the direction of the project. This usually implies that the forked version and the original version are aiming at solving different problems in the same vein. If the original project wants to continue in the original direction and some people want to use the source to solve a slightly different problem, then they pretty much have to fork in order for the project to achieve its maximal result of being most useful to the most people.

This isn't a bad thing if it's done right. It's just that most of the big forks you hear of are at least partially the result of bitter, angry wars (OpenBSD anyone?). You don't hear that much about the ones which are completely amicable.

Re:You don't know what you're asking. (1)

msew (2056) | more than 13 years ago | (#748971)

what was your command line to get that number?

I saw the .5 million and freaked out and I am not getting quite the same number as you. Wondering what you are using and if you have like 2 kernels worth of files or something.

How about embedded systems? (2)

achurch (201270) | more than 13 years ago | (#748972)

You forget that ancient machines are not the only places 386's and 486's appear. Embedded systems generally don't need a heap of processing power, so you can get things done cheaper (and cooler) with a 386- or 486-level chip.

To take your points one by one:

1. Earlier platforms generally had no CD-ROM. Most Linux distros . . . come on CD-ROMs.

1. You install at the factory onto ROM/flash/whatever. No need for a distribution's install CD.

2. Earlier machines usually had a 5 1/4" floppy disk . . .

2. See above.

3. Earlier machines had RAM limitations . . .

3. So what? Even without limiting oneself to embedded systems, there's no real need for huge amounts of RAM besides the RAM companies saying "BUY MORE RAM". I ran Linux on a 386 with 8MB at a summer job a few years back with little trouble, and that only in the setup. (On the other hand, it would be nice to see a libc that wasn't as bloated as glibc...)

4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes.

4. Repeat after me: Linux does not use the BIOS. The BIOS is only used at boot time (and by DOS). And as far as embedded systems go, you can use a modern BIOS that works, or just write something simple that starts up Linux on your box. After all, embedded systems don't need to worry about being general.

5. Earlier machines had ISA, EISA, etc.

5. Modern embedded systems probably use PCI if they need anything at all.

6. Earlier network cards are not all supported . . .

6. Modern embedded systems can use supported hardware.

How about #ifdef CONFIG_BIG_IRON? :) (2)

Kaz Kylheku (1484) | more than 13 years ago | (#748976)

You don't need to fork the whole kernel, just make it support ``big iron'' as a configurable feature.

If the same code cannot handle both kinds of machines, then you eventually need both pieces of code in the same codebase, not a fork.

Forking is essential for experimentation. That's why we have tools like CVS which encourage forking for making stable releases and for experimenting with new features.
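To make that concrete, here is a minimal sketch of what such a compile-time switch could look like. CONFIG_BIGMEM_ALLOC and both allocator bodies are invented for illustration; this is the shape of the approach, not an actual kernel patch.

    #include <stddef.h>
    #include <stdlib.h>

    #ifdef CONFIG_BIGMEM_ALLOC
    /* Built only when the hypothetical big-iron option is selected: accept
     * the extra bookkeeping a 256GB machine needs.  (Stub body only.) */
    static void *alloc_pages_strategy(size_t pages)
    {
            return malloc(pages * 4096);  /* stand-in for a NUMA-aware allocator */
    }
    #else
    /* Default build: keep the path short and the data structures small,
     * so an 8MB 386 is not penalized.  (Stub body only.) */
    static void *alloc_pages_strategy(size_t pages)
    {
            return malloc(pages * 4096);  /* stand-in for the ordinary allocator */
    }
    #endif

    int main(void)
    {
            void *p = alloc_pages_strategy(4);
            free(p);
            return 0;
    }

The catch, as other posters note, is that once such switches multiply across many files the code becomes much harder to read and maintain.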

Re:this is kinda kewl (1)

Sanchi (192386) | more than 13 years ago | (#748978)

Hey, this was to win a bet. Can't blame me if I want to get some money off of a poor sap, can ya?

Bad usability for server admins and users alike. (1)

EatenByAGrue (210447) | more than 13 years ago | (#748980)

The last thing Linux needs is more installation and setup complexity.

Woohoo! (2)

1010011010 (53039) | more than 13 years ago | (#748982)

Fork! Fork!

Maybe we can get changes to the VFS and VM system now!




Re:Let them fork (3)

jjr (6873) | more than 13 years ago | (#748983)

What I would expect to happen for the big iron machines is that they would get their own directory, with the memory management code under it. So you would have the memory management for the big guys and the regular memory management code under the same source tree, and when you compile your kernel it picks up the proper management code. I know it's not that simple and there is more to it than that, but that is what will happen if they fork and come back together.

Re:Why not? (1)

silicon_synapse (145470) | more than 13 years ago | (#748984)

What about an optional switch during the install? Include support for both memory management methods and allow the user to choose. Of course the default would be standard and optional would be BigHonkinMem.

Re:Why not? (1)

arthurs_sidekick (41708) | more than 13 years ago | (#748985)

Yes ... after all, it's not as if people are looking to run Quake on big iron.

/me pauses to look at the Alpha thread

Never mind ...

=) [on a serious note, I agree ... so *what* if development forks? Would it really impact the average user all that much?]

Isn't it ALREADY forked? (1)

JCCyC (179760) | more than 13 years ago | (#748986)

According to the latest stable kernel's Release Notes [linux.org.uk], there are separate source trees for MIPS, ARM, 68k and S/390. Looks reasonable too; S/390 is a wildly different architecture. Heck, IBM themselves could maintain it.

Surprised (1)

Geccoman (18319) | more than 13 years ago | (#748987)

I'm surprised that RedHat or some other big dollar linux company hasn't already begun R&D on something like this. Perhaps no one believes the market would support the amount of money it would take to develop and support it?

Re:Why reject? (1)

ldm314 (105638) | more than 13 years ago | (#748988)

Where can I find this kernel patch for instant-on? I am definitely interested.

Re:hmm.. (1)

haggar (72771) | more than 13 years ago | (#748992)

I hope I'm not too redundant: I don't agree. One of my issues with Microsoft is that MS is in bed with HW manufacturers, so that they can sell the faster and better hardware that Windows requires.
I agree that we do need faster CPUs, but not just because the OS demands it. Linux is the best example and proof: you can add features to the OS while still being able to install it on your 486.

I myself have two 486s running Slackware, and they do their job amazingly well, just as an Athlon 1 GHz would have done. But I don't want to be forced to buy an Athlon 1 GHz just because someone decided that I don't need my 486 PCs anymore.

Re:Why not have a kernel option... (4)

josepha48 (13953) | more than 13 years ago | (#748993)

They do this now for the 1 gig memory limitation. The problem is that there are so many #ifdefs and #ifndefs in the Linux kernel now that some people do not want added kernel options (more #ifdefs).

One of the issues that people seem to fail to realize is that Linus is not necessarily rejecting the patches because of what they do, but because of how they are implemented. If patch code is submitted to Linus and the patch is going to make maintaining the system difficult (read: messy, unmaintainable code), Linus will reject it. Linus does not like large patches either. He likes bits and pieces and clean fixes. Hey, he started this whole thing; I think he has that right.

Another thing to think of is that ZDNet is a news network. Everyone has been saying that the kernel will fork and blah blah. There are already forks in the kernel, but people just don't realize this.

Red Hat kernels: Have you ever tried to apply a patch to a stock Red Hat kernel? I know that since RH 5.2 they have shipped the Linux kernel with their own patches.

SuSE kernels: The last SuSE I installed (5.3) had both a stock Linux kernel and a custom SuSE kernel with custom SuSE patches.

Corel: I never tried them, but they patched KDE and made it hard to compile other KDE software with their distro.

Point? There are already forks in the Linux community, yet it goes on. That is the whole thing about open source. There can be forks. If an idea is good it gets into the mainstream kernel. But these 'forks' need to be tried first, and become tested and cleaned up in such a manner that they can coexist with the rest of the Linux kernel.

If you think that everyone is running P200 or P500 or GHz machines, you are wrong. I am sure there are lots of people out there running old 386/486 boxes with Linux as routers, firewalls, etc. After all, you do not need a superfast machine for a firewall if all you are going to firewall is 3 or 4 other machines.

I don't want a lot, I just want it all!
Flame away, I have a hose!

First fork! (1)

clgoh (106162) | more than 13 years ago | (#748995)

First fork!

Re:speaking of code forks (2)

leereyno (32197) | more than 13 years ago | (#748996)

Huh????

Solaris is based off SysV 4.x. SunOS was based off BSD, but the current BSDs are not based off SunOS; they are based off the same code it was based off.

Re:Why not have a kernel option... (2)

be-fan (61476) | more than 13 years ago | (#748998)

You miss the central issue. The problem isn't the memory thing, but whether developers at these companies should fork the kernel to take advantage of their hardware. You can't decide this issue by issue; you should have a grand plan for it. Otherwise, the decision process slows down development of the kernel.

Linux Kernel (1)

dale@shiraz (70141) | more than 13 years ago | (#749000)

I've seen many articles about why Linus won't include this or that. Don't forget Linus is one of the greatest computer experts on this planet; he doesn't just say no because he's having a bad hair day.
Linus is probably rejecting this because:
  • 2.4 is frozen and is not accepting any more major changes. If he doesn't stick to his guns here, we won't see 2.4 for another year or so.
  • The 2.5/2.6 list of kernel changes and additions is still being drawn up.
  • The patch probably needs improving so as not to cause slowdowns and problems for 99% of users. Some more thought needs to go in here. Yes, an extra feature might be really good, but Linus wants to keep the kernel nice and not turn it into one huge blob; he's already talking about structural changes to increase the modularity while maintaining the efficient monolithic core of Linux, and this will take some work. Maybe Linux 3.0?
So let's not complain, but ask why. Linus is a logical guy, but sometimes we can't always see his reasoning.
I do think, however, that Linux is getting so big that Linus will have to change the way patches are integrated and accepted. He's going to have to delegate more and become more concerned with Linux's overall direction, working with the big companies while also working in the interest of the community. I think the US government should even pay for this; why not, they pay for NASA, and Linux is a lot more use to citizens than a space shuttle.

Over to you then. (1)

Anonymous Coward | more than 13 years ago | (#749002)

S/390s, Starfires, Wildfires, SP clusters. All scale to 1000+ processors, many gigabytes of RAM, pushing a petabyte of store (soon).

Itty-bitty palm-top, wear-on-your-wrist PDAs. One processor, 2MB RAM, no permanent store.

Linux runs at both extremes. Inefficiently, but it runs.

Now, you want to manage this scalability with the preprocessor. Well, that's nice. Off you go.

One day, you may get to work on a large software project. Clearly, you haven't so far.

I'm confused. (2)

mindstrm (20013) | more than 13 years ago | (#749003)

Are you saying that they are forking it (as the headline suggests), or simply guessing that some day they may fork it? Just fear-mongering?

If they produce stable patches that can compile cleanly in with everything else, especially after the new kernel revs are done in 2.3 and 2.4 is stable, I bet they WOULD make it into the mainstream.
They simply don't add everything just because it's just starting. Lots of great features started out as separate kernel patches and eventually made it into the main tree.

And if they want to fork, what's the big deal? Who cares? They are more than free to do so, and produce their own. It's not like it would be any less open... and heck, a third party can always glue them back together and ship his 'complete Linux' or whatever...

Sheesh. Hard up for topics today?

Re:hmm.. (2)

Ian Bicking (980) | more than 13 years ago | (#749004)

Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.
A 486 has a lot more in common with the computer I'm running now than does anything with 256Gb of RAM. None of the patches for big iron have anything to offer me or the vast majority of people who run Linux on modest hardware.

If 486's weren't supported it probably wouldn't be that big a deal -- there's little lost in running a 2.0 kernel, and in the future that will probably remain true. (We should face it -- the kernel is really rather boring) But getting rid of 486 support wouldn't help much.
--

Why not? (2)

evilned (146392) | more than 13 years ago | (#749005)

OK, one of the threats of open source is that it will fork. We've all seen the forks in BSD, and it certainly hasn't killed that. Why not a fork for big iron machines? It doesn't even have to be maintained by Linus. We have crypto patches and the AC patches; how's about a big ass computer patch.

Speak for yourself (5)

DebtAngel (83256) | more than 13 years ago | (#749006)

I am constantly putting Linux onto old hardware. Need a quick, dirty, and cheap NAT box? Throw Linux on a DX2/66.

MP3 file server for the geeks in IT? Throw in a big drive, but a 486 will do.

Hell, my company's web server is running on a low end PII, and I think it's a horrendous waste! It could be doing *so* much more.

Linux is a UNIX for cheap Intel hardware first. That's where its roots are, and I don't see why it should sacrifice its roots for big iron that can quite happily run a UNIX designed for big iron.

Neither does Linus, apparently.

not too bad (3)

matman (71405) | more than 13 years ago | (#749007)

So many things are distributed as kernel patches that it doesn't really matter. Anyone with that kind of hardware will obviously have the expertise and the money to install an appropriate kernel patch. No box that big is going to run an out-of-the-box kernel anyway; if you're using that sort of hardware, you're going to want to tweak it. As long as there is not a division in the majority of users' needs, there is not likely to be a major fork.

Cleaner kernel trees (1)

iabervon (1971) | more than 13 years ago | (#749008)

I think one reason that some stuff isn't going into the official standard kernel is that there's no way to put code into the official kernel such that people who don't want it don't have to download it. It would be really helpful if you could run a configuration pass, and then download only those files that you were actually going to use. That way the kernel sources could get really big, containing all the patches and versions of stuff that are probably good ideas, without making it impractical to get and unpack.

There's no real reason there can't be different official memory managers for low memory and high memory situations, since there are clearly different issues. Of course, at this point, lots of people testing a single one is important.

Ifdefs imply accepted patch. (5)

MemRaven (39601) | more than 13 years ago | (#749009)

(sorry for the double post, this is to the first half of the comment).

It depends on how pervasive the code changes have to be. If it involves #ifdeffing every single file, then it's going to be very difficult to maintain that, and it's going to be very unlikely that the maintainers of the project are going to allow that feature to remain part of the major distribution.

That problem is a double-edged sword. It also means that maintaining one big patch is a complete nightmare. Every version of the kernel that comes out has to be separately patched, with two important considerations:

  • The code which needs to be inserted has to be reinserted. If this is all separate files, that's easy, but if it's not that's a complete nightmare. And the code to call into that separate file is then a nightmare.
  • Any changes which have broken the patch have to be investigated and possibly changed. If you're working on filesystem patches, for example, someone working on the core fs work may have broken your patch without your knowing it, because they're not including your code in their coding/debugging process. So every time there's a change to the kernel, you have to figure out whether that change will potentially break your work.
The only way to resolve the second is to keep the patch inside the actual kernel, so that the authors of the rest of the system are aware of it, and will either try their best not to break it, or will do the first round of adapting the new functionality to work with their changes.

Basically, it comes down to how pervasive the work has to be. If it's a really pervasive change which touches on almost everything, then the only option from a software engineering perspective is a fork. Anything else is being done from a feel-good PR perspective, because it just doesn't make any sense from a technical perspective to try to maintain a huge patch that covers everything.

But what is 'it'. (2)

mindstrm (20013) | more than 13 years ago | (#749013)

If 'Linux' wants to be a mainstream desktop OS, 'it' shouldn't fork?

This is the problem, folks: Linux isn't an 'it'. It's a plural, it's an ideology, and a relatively loosely defined codebase.

We have compatibility between distributions right now by *fluke*, because no one has seen a need to change that. There is no 'rule' that says it has to stay this way.

If the community wants Linux to be on the desktop, then THAT IS WHERE IT WILL GO. Period. Regardless of who forks what. If we need a way to distinguish between our 'community' supported stuff that runs on 'true' Linux and the forks, we will do so. It's no big deal, really.

Re:speaking of code forks (2)

mindstrm (20013) | more than 13 years ago | (#749014)

Solaris IS SunOS.

SunOS V4.x was based on BSD.
SunOS V5.x was based on SysV 4.x

Solaris is the name for SunOS 5 + OpenWin

Re:Why reject? (3)

mindstrm (20013) | more than 13 years ago | (#749016)

Because now is not the time.
A great many features start out as independent kernel patches.
When things stabilize down the road, I'm sure they will gladly put 'Big Iron' flags in the compile stuff.

The point is, Linus (et al) can't just stick everything everyone submits, big OR small, into the main kernel, especially if it's not even developed yet!
Also... the feature set for current kernels is already listed... and this isn't one of them.

You don't just add shit to a project partway through because someone wants you to.

I'm sure that by the time 2.5 kicks up, we'll see a 'big iron' flag in the main kernel options.

Re:Why reject? (1)

swotl (24969) | more than 13 years ago | (#749017)

I'd hazard this kernel patch [sch.bme.hu] is the one in question.

I've not tested it, but now that I've got a laptop and Windows people to impress, I just might give it a go ;)
-
sig sig sputnik

Why not make it an option (1)

Krellan (107440) | more than 13 years ago | (#749019)

Why not make the large memory algorithms an option, that can be enabled by the user at will? That way, both algorithms can remain in place.

You could have something like these two kernel command lines:

linux mem=8M

linux mem=256G memalgo=large

Then, different users could pick the right one for their needs.
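As a rough sketch of how the kernel could pick such a flag up at boot, assuming the 2.4-era __setup() mechanism for command-line options; "memalgo" itself is an invented parameter, not an existing one:

    #include <linux/init.h>
    #include <linux/string.h>

    /* Invented flag: 0 = standard memory management, 1 = large-memory algorithms. */
    static int memalgo_large;

    static int __init memalgo_setup(char *str)
    {
            if (str && strcmp(str, "large") == 0)
                    memalgo_large = 1;
            return 1;               /* option consumed */
    }
    __setup("memalgo=", memalgo_setup);

Of course, a boot-time flag only helps if both sets of algorithms are compiled into the same image, which is exactly the complexity Linus is quoted elsewhere in this discussion as not wanting on 'normal' machines.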

I sure hope the kernel doesn't fork. We do not need NetLinux, FreeLinux, OpenLinux...

ZDNet's tendencies to sensationalize at work? (5)

Ross C. Brackett (5878) | more than 13 years ago | (#749020)

So far Linus et al have been rejecting kernel patches that provide solutions for Big Iron scaling problems.


This makes it sound like Linus has been rejecting them because they provide solutions for Big Iron scaling problems. Having read Kernel Traffic and the linux-kernel list enough, I find this statement immediately suspicious. I have never seen Linus ever purposely reject a patch that's an all-around good fix for a problem. Usually it's "Well, Linus rejected my patch even though it does all this cool stuff and fixes all these problems, so it's probably because he just doesn't like such-and-such feature/platform/interface," and then Linus replies, "no, I rejected them because you're a dumbass and your patch sucked."

The link to the SGI page somewhat confirms this:


9. When will this code be added into 2.3?

Linus agrees in principle to take this code in. It has already been reviewed by Ingo and Andrea. Linus wants to clean up the page allocation data structures a bit before imposing this code on top of it; I am trying to help him do that. New: As of 2.3.31, this code is in under CONFIG_DISCONTIGMEM.


I just kinda heavily doubt that Linus wouldn't want awesome NUMA support if the potential was there. My best bet is that the people pushing for it just aren't on exactly the same wavelength as Linus (is anyone?) and it's slowing down progress.

Another quote that points in this direction:

Linus: "A lot of the problems, especially with NUMA, are that the solutions tend to add complexity that simply isn't needed at all on 'normal' machines,"


I don't think Linus means any solution, just the solutions presented to him.

Re:It only makes sense (1)

pyros (61399) | more than 13 years ago | (#749021)

A builder doesn't go around hammering everything in site because the hammer obviously isn't the correct tool in every situation. It's great for pounding nails into 2x4s, but isn't so good when it comes to painting walls.

True, but "when your only tool is a hammer, all your problems start to look like nails." (Not sure where that's from, I read it on someone else's email sig)

Why not have a kernel option... (2)

Squeezer (132342) | more than 13 years ago | (#749022)

...for servers with more than X gigs of RAM to use this algorithm, and for servers with less than X gigs of RAM to use that algorithm, etc.?

Recompile? (1)

ResHippie (105522) | more than 13 years ago | (#749023)

Why can't the BigHonkinMem option be part of the kernel config options? Don't have it standard, like a lot of options that aren't standard, but make it a choice.

Also, I'm assuming Big Iron has something to do with clusters, or huge ass servers, but would someone mind posting an explicit definition?
Thanks.

Linus is too finicky about what he lets in (1)

The Big Bopper (150305) | more than 13 years ago | (#749024)

A fork isn't all that bad an idea, but it should be more of a coup by people who are more interested in seeing Linux grow.

It is silly that every time a new kernel gets installed, I have to patch it with MOSIX before I compile. Why can't MOSIX code just be included and compiled in with a switch set in my Makefile?

Re:What's wrong with ifdef's? (2)

Dante Aliegri (119831) | more than 13 years ago | (#749025)

Ifdef's are one of the hacks in C that are nice if used in moderation, but you can see where this might go....

Say Linus puts one BigIron patch in; then he won't have any reason for not putting the rest in, and when you do that, you get a nest of #ifdefs and #endifs (because these machines are fundamentally different from PCs, there would be a lot of changes -- the style of the kernel might have to be changed in order for the patches to be applied and keep it in a usable state).

What this means is that it is significantly harder for kernel hackers to read the code. That is a bad thing (tm). As I read in another post, Linus will put these things in, just not in the 2.4 kernel.

Re:ZDNet's tendencies to sensationalize at work? (2)

h2odragon (6908) | more than 13 years ago | (#749029)

...but the name of that option should be "CONFIG_AWW_YOU_BASTARD_BIGMEM"

Re:Good Thing (1)

mholve (1101) | more than 13 years ago | (#749031)

Like the other fellow said - if you don't like my comments, then don't read/reply to them.

Now go run along and be a good little slashbot.

First watches, and now forks? (2)

Froid (235187) | more than 13 years ago | (#749033)

IBM sure is ambitious about their embedded Linux toys, aren't they? I just hope we don't see headlines when some idiot pokes an eye out: Linux fork's too sharp; downgrade to MS Spork.

Re:What's wrong with ifdef's? (2)

Pig Bodine (195211) | more than 13 years ago | (#749035)

This is probably true in the long run, but expecting current Linux kernel maintainers to maintain code for machines they'll never see is unrealistic. These sorts of changes are going to occur at first experimentally in-house at a large corporation. That will be a fork for at least a little while. Presumably they'll be GPL'd (they better be!) so the changes can always be brought in if people want them. And hopefully the unnamed corporation will want the good karma they'll achieve by later hiring someone to help with folding the resulting code back into the regular kernel.

With GPL'd code, I don't find a (possibly temporary) fork to do something extremely specialized all that threatening; if anything it sounds like a practical necessity at the moment.

Re:Supporting 386s: Some Problems... (2)

THB (61664) | more than 13 years ago | (#749037)

The 2.2 series kernels will still be maintained for several years after the release of 2.4. Linus has already said that he will stop supporting systems with less than 4MB of RAM in the 2.2 series. Why not raise that cap with 2.4 or 2.5/6? I see very little reason to run a more recent kernel on a 486. Is anyone going to have USB or AGP on a 486? What about a new netcard or RAID controller? Old kernels are still being maintained, so security is not an issue. It will benefit more people to raise the standard than to keep it low.

Re:Speak for yourself (1)

rgmoore (133276) | more than 13 years ago | (#749038)

I am constantly putting Linux onto old hardware...

Linux is a UNIX for cheap Intel hardware first. That's where its roots are, and I don't see why it should sacrifice its roots for big iron that can quite happily run a UNIX designed for big iron.

OTOH, there's no reason why you can't keep around an old distribution of Linux based on a 2.0 or 2.2 kernel and use that for your old hardware. After all, a big driving force behind the development of new versions of the kernel is to add support for new hardware, so it makes little sense to cripple that forward development by demanding perfect backward compatibility. It's not as though Linus is going to stop providing the old kernels and demand that you upgrade (as some monopolistic OS vendors one could name are apt to do).

In fact, you could view the current continued development of the 2.0 series kernels as being, in effect, a Linus approved fork for old hardware. They're just getting set to release 2.0.39, so the older versions are still under active if slow development to squash bugs. It's not as though you're going to be putting most of the features of the new kernel, like USB and AGP support, into use on old hardware anyway.

Re:ZDNet's tendencies to sensationalize at work? (1)

B.B.Wolf (42548) | more than 13 years ago | (#749040)

I think you are the first poster so far to notice the journalistic bias. I noticed it because I am always suspicious of anything in Zip Data, who have shown so often how willingly they get on their knees to suck Bill Gates' FUD. Many ZD Linux articles, while seeming to praise the penguin's strengths, are at the same time attempting to make any OS other than the stuff from Rearmount sound too complicated for the non-geek user. This article strikes me as similar, but aimed at the quasi-tech-savvy-investor type. They are saying, "Ignore those `Linux on Big Iron` stories, because we have proof that it won't work. Invest in M$ instead. Trust us."

Re:Is The Fork Neccessary? (1)

Desdinova77 (184164) | more than 13 years ago | (#749043)

Also, if a Big Iron fork does take place, will the public support a corporate-driven project, or will we look to Linus, Alan, and the bunch to maintain that tree as well? How much access do those guys have to Big Iron anyway? Not much, I guess. --- Hell, I'll maintain the fork if IBM and company will give me the hardware.

Re:A brief history of computing. (2)

Enoch Root (57473) | more than 13 years ago | (#749044)

Duh. Actually, Java is (in theory) an abstraction of platform. If you're hoping for Transmeta to do that, then I have a bridge to sell you.

"<BR><BR>"Sig

Kernel Forks and Patch Madness (1)

darkcyde (9561) | more than 13 years ago | (#749045)

I don't claim to be an expert on the subject, but SGI might as well create a new fork of the kernel. The advantage to the Open Source Model has always been the ability to create the right tool for the job.

Provided they aren't competing with the other kernel maintainers, it doesn't seem like it would be much of a problem to let them do it their way, provided they keep up with the new features in later kernel versions. Besides, the kernel that powers a Cray should have different / more / better options than the one that supports an i3/4/586, simply because there are more complex and advanced features on the Cray.

As long as SGI keeps with the program and doesn't start adding all kinds of "closed source" mumbo jumbo in their version of the kernel, then all should be happy.

Anyway, that's my two cents..

Speaking of Big Iron (1)

dan the person (93490) | more than 13 years ago | (#749046)

http://linuxcare.com.au/anton/e10000/maketime_24.shtml

Re:Inevitable, but not so bad (3)

gwernol (167574) | more than 13 years ago | (#749047)

It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.

The biggest potential problem of forking an OS is binary and API incompatibility. The reason most people use computers is to run specific applications. I want to be able to walk into my local CompUSA/log on to Egghead and get a copy of application X and run it on my computer. I don't really care what the OS is, as long as it runs application X.

If I've got Linux on my system, I'd like all applications that run on Linux to run on my system. The more forks that introduce binary or API incompatibilities, the less chance I have of being able to run the apps I want, and the more reason I have for removing Linux from my computer.

If Linux wants to be a mainstream desktop OS, it needs to make sure it doesn't fork too much. That was a big part of the reason desktop UNIX failed to take off in the late 80's/early 90's.

Re:Speak for yourself (1)

luge (4808) | more than 13 years ago | (#749048)

I don't see why it should sacrifice its roots for big iron that can quite happily run a UNIX designed for big iron.

Umm, maybe because the real roots of Linux are "why have reverse compatibility when you can make the OS more useful?" Yes, it is nice that Linux will run on older boxes. But there is no reason to impede progress for everyone just because a few people don't want to blow $50 for a Pentium.

Re:hmm.. (2)

StudentAction.CA (167871) | more than 13 years ago | (#749049)

I totally disagree. The power of Linux stems from the vast array of machines I can use it on, from my XT (I have a boot disk for the 1.0 kernel series), to my 486 NAT box, to my mail/LDAP server (AMD/400).

What is bothering me about the current distributions is that they are forgetting about old hardware. I can't install Mandrake on a system with 8 megs of RAM, but the system will run it. How screwed is that -- the installer needs more RAM than the OS!

If this Linux bloat continues, I'll just keep moving more of my boxes to the BSDs (Free and Net are my personal favs -- gotta love the daemon!)

Just don't forget that Linux has prided itself on excelling on hardware that most people would call "old". As we go forward, we can't forget the past.

Ok, that answers my question (1)

p3d0 (42270) | more than 13 years ago | (#749050)

This guy [slashdot.org] seems to have a clue.
--
Patrick Doyle

Re:Speak for yourself (1)

StudentAction.CA (167871) | more than 13 years ago | (#749051)

But there is no reason to impede progress for everyone just because a few people don't want to blow $50 for a Pentium.

Why should I waste money on a new system when the 486 that runs my firewall is running fine, the 386 that I screw around with works great, etc....

Buying new hardware is a waste if the old hardware will do it... throwing computers out creates a lot of garbage in landfills...

Perhaps we need the slogan "make faster code, not waste!"

Re:ZDNet's tendencies to sensationalize at work? (1)

elmegil (12001) | more than 13 years ago | (#749052)

It seems to me (as an interested outsider) that, while it makes sense to have the kernel be as accessible to all levels of machines as possible, Linus will have to bend eventually in some way.

One problem is that #ifdefs have no way to tell what the memory size of your machine is going to be, and it seems (to me at least) a bit excessive to expect a recompile just to get a kernel that's efficient for a given sized machine.

I realize this is probably the wrong hammer (possibly too big) but what prevents the possibility of having two sets of memory management code, the "standard" set and the "wait we've got >256 Meg of RAM" set? Yah, very complex, but if kernel forking is a Bad Thing (tm) then what prevents that approach from working?
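One conventional way to keep two sets of code behind a single interface is an operations table selected once at boot. The sketch below is plain, compilable C with invented names; it shows the pattern only, not anything from the actual kernel.

    #include <stdio.h>

    /* A tiny "ops" table: every memory-management entry point the rest of
     * the system is allowed to call.  Both implementations are empty stubs. */
    struct mm_ops {
            const char *name;
            void (*init)(void);
            void (*reclaim)(void);
    };

    static void std_init(void)    { /* set up small-machine structures */ }
    static void std_reclaim(void) { /* simple reclaim pass */ }
    static void big_init(void)    { /* set up per-node structures, big caches */ }
    static void big_reclaim(void) { /* NUMA-aware reclaim pass */ }

    static const struct mm_ops standard_mm = { "standard", std_init, std_reclaim };
    static const struct mm_ops bigmem_mm   = { "bigmem",   big_init, big_reclaim };

    /* Chosen once, early in boot, from the detected memory size. */
    static const struct mm_ops *mm;

    int main(void)
    {
            unsigned long long detected_mb = 192;   /* pretend value */

            mm = (detected_mb > 256) ? &bigmem_mm : &standard_mm;
            mm->init();
            mm->reclaim();
            printf("using %s memory management\n", mm->name);
            return 0;
    }

The cost is the one raised throughout this thread: both implementations still live in the same source tree and have to be kept in sync with every other change.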

Two points everyone seems to overlook (1)

Anonymous Coward | more than 13 years ago | (#749053)

First, how many millions of $ do you think companies like IBM will pour into Linux before they want some direct control over the OS? I'm not saying this is bad, good, or anything else; it's just a natural outcome of any business betting that heavily on anything, particularly technology. And making a Blue Hat Linux would instantly fork the code base.

Second, this wouldn't be a bad thing. Who says that the Linux that boots from single-floppy rescue disks and runs in routers has to be the same basic kernel as all other Linuxes, on any hardware? There's bad forking, such as what we saw run Unix into the ground, and then there's specialization for different uses and hardware. Maybe we need to stop calling the latter forking, since it's such a different thing, and has such different results.

LOL! (1)

mholve (1101) | more than 13 years ago | (#749067)

But Hemos doesn't know a nine iron from big iron! Hehehehe. :)

Supporting 386s: Some Problems... (4)

lwagner (230491) | more than 13 years ago | (#749068)

Yes, it is nice that it will still run on a 386, but there are other factors to consider:

1. Earlier platforms generally had no CD-ROM. Most Linux distros (except for fringe distros) come on CD-ROMs. Most people do not want to buy a CD-ROM for their 386, 486s. There are places that offer small "floppy-disk-sized" Linux distros, but they are obviously chopped. 1400K on a 500MB HDD.

2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies. Most people are not going to spend money and time ripping out an old floppy.

3. Earlier machines had RAM limitations, aside from the fact that no one wants to really waste the money on putting more EDO memory into an obsolete machine.

4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes; Most people will not research whether the particular BIOS is okay to determine whether or not to spend money on the first three items.

5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?

6. Earlier network cards are not all supported to get around many of these limitations... I tried to get around not having a CD or a 3.5" floppy in an old 486 by using some sort of older ISA-based network card.

Obviously, there are many issues to consider before nodding one's head to allow Linus to try to preserve performance in ancient boxen for nostalgic purposes.

Lucas



--
Spindletop Blackbird, the GNU/Linux Cube.

Isn't that what "make menuconfig" is for? (1)

p3d0 (42270) | more than 13 years ago | (#749069)

Why couldn't they implement both algorithms, and have users choose between them with "make menuconfig"?
--
Patrick Doyle

Re:Speak for yourself (1)

elmegil (12001) | more than 13 years ago | (#749070)

So use the forked small kernel then! Why is this a problem? It makes good sense to me...

Re:this is kinda kewl (1)

fsck (120820) | more than 13 years ago | (#749071)

P.S. I am having a problem with X. It crashes when I look at it wrong. It happened when I changed my video card from my GeForce to my Voodoo3. (And it works fine with my GeForce.) E-mail if you can help please.

Now I know what that one guy's .sig means "The wheel is turning but the hampster is dead."

You don't know what you're asking. (1)

Static (1229) | more than 13 years ago | (#749072)

#ifdefs are a lot of trouble. Linus has posted to LK several times about reducing the number of #ifdefs.

FWIW, grep and wc report more than half a million #ifdefs in the 2.2.16 kernel.

Wade.

Re:ZDNet's tendencies to sensationalize at work? (3)

Foogle (35117) | more than 13 years ago | (#749073)

It's not excessive to expect someone to recompile their kernel to get optimal performance under extreme circumstances. It would be excessive to expect someone to recompile over tiny differences, but we're talking about the difference between 64-128 megabytes and 256 gigabytes of memory. People setting up machines that use such enormous amounts of RAM won't be put too much out of their way to recompile with an ENORMOUS_MEMORY option.

-----------

"You can't shake the Devil's hand and say you're only kidding."

Re:There is a point: One size rarely fits all. (2)

Foogle (35117) | more than 13 years ago | (#749074)

You're assuming that merging this code would be as simple as adding a BigIron.o module... I really doubt that this is the case.

-----------

"You can't shake the Devil's hand and say you're only kidding."

Why not detect memory size at runtime? (2)

Steven Reddie (237450) | more than 13 years ago | (#749075)

Surely checking the amount of memory at runtime and using a different algorithm based on that value is not too hard.
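The detection itself really is simple; here is a minimal, compilable userspace illustration (the 1GB threshold and the strategy names are made up for the example):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            long pages = sysconf(_SC_PHYS_PAGES);
            long psize = sysconf(_SC_PAGESIZE);
            unsigned long long bytes =
                    (unsigned long long)pages * (unsigned long long)psize;

            /* Invented threshold: call anything above 1GB "big". */
            const char *strategy = (bytes > (1ULL << 30))
                    ? "large-memory algorithms" : "standard algorithms";

            printf("detected %llu MB of RAM, would select: %s\n",
                   bytes >> 20, strategy);
            return 0;
    }

The harder part, as other posters point out, is that the two algorithms drag in different data structures and bookkeeping, so branching at runtime still means carrying both in the same kernel.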

Re:ZDNet's tendencies to sensationalize at work? (1)

JanKotz (228776) | more than 13 years ago | (#749076)

My guess would be that in most cases, it's not feasible to just slap in a CD-ROM, power up the system, and install Red Hat onto a "big iron" machine. Building a new kernel is probably not a big deal, especially considering how fast it would compile, and would definitely be the least of the inconveniences.
--

Re:fork in kernel... (1)

Anonymous Coward | more than 13 years ago | (#749077)

or am i smoking crack

There's a simple way to find out. Do you currently have moderator points? If so, you're probably smoking crack. If not, you're probably clean.

(Note to moderators: This post is meant as a joke, not a flame. It's not flamebait. I have nothing against you, and I'm sure you're all very nice people, when you're not high on crack.)

I am the real Anonymous Coward. Anyone else who claims to be me is an imposter.

Let them fork (1)

jjr (6873) | more than 13 years ago | (#749078)

If they fork, all it would cause is a temporary fork. It will be incorporated back into the kernel anyhow if it is any good. If it is not good, people will not use it. If they want to fork, let them.

Inevitable, but not so bad (3)

Private Essayist (230922) | more than 13 years ago | (#749080)

From the article:

The process of non-standard kernel patches is just fine with Torvalds. "On the whole we've actually tried to come up with compromises that most people can live with," he said. "It's fairly clear that at least early on you will see kernel patches for specific uses -- that's actually been going on forever, and it's just a sign of the fact that it takes a while to get to a solution that works for all the different cases." He continued:

"That's how things work in Open Source. If my taste ended up being the limiting factor for somebody, the whole point of Open Source would be gone."

It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.
________________

Not an issue (5)

OrenWolf (140914) | more than 13 years ago | (#749082)

If you've followed the SGI/Linux debate on K-T, it's obvious that they intend to incorporate the option to enable BigIron features in the future, just not for 2.4 - as has been traditional with Linux.

Even in the cases where Linus has outright rejected BigIron patches, nothing stops a hardware vendor from patching the source after the fact - almost every major Linux distribution does this now for x86/ppc/sparc etc. (NFSv3 is a great example)

Fear of Forking (1)

gavinhall (33) | more than 13 years ago | (#749087)

Posted by polar_bear:

I don't think providing patches for memory management on big iron is really any cause for concern. This isn't the same as the Unices forking and becoming almost wholly incompatible - this seems to be a patch or line of development for specific hardware that wouldn't really cause any disruption to the rest of kernel development, and no disruption whatsoever to the remainder of the tools that make up Linux distributions.

The danger would be if Red Hat or SuSE or someone like that started doing ugly things to make their distribution incompatible with the others in mainstream use. Since Big Iron servers are such a rarified environment, they're not very important to the average user.

Re:Let them fork (2)

gwernol (167574) | more than 13 years ago | (#749088)

If they fork all it would cause is a temporary fork. It will be incorporated back into the kernel any how if it is any good. If it is not good people will not use it. If they want to fork let them.

No, you're missing the point. They would need to fork because the memory management techniques for "Big Iron" machines are fundamentally different from those for low end home machines. You need to use different techniques on machines that are so different, so they won't get incorporated back into a single kernel. You will get (for some reasonably long timeframe) two different kernels as a result of this.

Now, whether that's a bad thing or not is a different question.

Re:Who cares? (2)

fsck (120820) | more than 13 years ago | (#749089)

I had Windows 2000 Professional running on my computer, and I print a lot of stuff on my printer.

I installed SP1NETWORK.EXE (service pack 1) like a good user, and now when I print, it takes over 2 minutes per black and white page of text, whereas before service pack 1 it was fast as usual. I was already running the latest printer drivers for my model of printer - I checked their website.

When I installed SP1 I chose to save automatically so I could uninstall it if I had to. When I went to uninstall it, I got the error message "Windows will uninstall the Service Pack 1 but will not uninstall the Service Pack 1." I wish I had a screenshot of it.

Now my only option is to save to disk and print somewhere else, or follow THE USUAL MICROSOFT SOLUTION - RRR = Reboot, Reformat, Reinstall.

And I can't believe I paid fucking money for this piece of shit.

Re:hmm.. (2)

Spoing (152917) | more than 13 years ago | (#749090)

Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.

It's not a question of older but of smaller, and if you've ever compiled a kernel from scratch, you know how insanely flexible the choices are. Kernel Traffic, as others have mentioned, is a must-read [linuxcare.com] if you want to understand the design decisions being made.

For systems with limited resources -- embedded systems, or those mini-distributions with under 16MB of storage (flash) and RAM -- the decisions made for the kernel in general are the same as for larger systems with a few gigs of RAM and multiple processors. Read a few comments on these in KT, and the reasons will become more obvious.

I agree with others who said that this is just Ziff-Davis making an issue out of nothing, and that nearly everything can be a patch or an ifdef -- no fork needed.

Re:The obvious solution: the kernel does have to f (1)

jallen02 (124384) | more than 13 years ago | (#749091)

Oh oh, is this one of those "praise Linux" and get a mod point posts?

You talk like just because Linus says it's bad design it HAS to be terrible. He is ONE person and he is NOT perfect.

Big Iron is easy: you download the patches at a distro level, have a few kernels with the patches applied, and/or have precompiled stuff (not that common) :)

Soo.. Bleh, I hate it when people talk about Linus like he's omnipotent.

Jeremy

Re:Supporting 386s: Some Problems... (1)

poodlemaster (237451) | more than 13 years ago | (#749092)

Lucas
... one of the fundamental facts of Linux ... right up there with the ability to compile your own kernel ... is that no matter whether you nod or shake yer head or even get it madly spinning around ...

Linus will do as he pleases.
Get used to it ;).

CC

Re:Supporting 386s: Some Problems... (1)

Daniel_ (151484) | more than 13 years ago | (#749093)

Old (486) Intel boxes are in frequent use!

These boxes make ideal single-purpose servers. Only install the service it's supposed to run and strip everything else. Secure, cheap and hassle free (once installed).

With a working NIC (I usually use an old ISA card), you can install Linux quite effectively via FTP (especially on a LAN). You'd be surprised how many "broken" 486's can be made to work with the right NIC. Even if that fails, install on a HD using another computer, then swap disks. Linux was designed for the low end precisely because no one else would support these kinds of machines. It's more than just "nostalgia".

Re:Supporting 386s: Some Problems... (2)

treke (62626) | more than 13 years ago | (#749094)

The issue isn't really the 386; that's just an exaggeration of the problem. The same patches that help these massive machines will hurt performance on most machines. The article mentions that allocating memory for caching is a problem for machines with little RAM; 15 megs of pure cache can really hurt on, say, a Celeron with 64 megs. Not a big problem for, say, a 31-CPU Alpha with 256GB of RAM. Compared to monsters like that, most desktop machines running Linux are about as powerful as that 386 or 486.

I can see Linux eventually dropping mainstream support for the 386, but right now there's no huge gain to be made by not supporting it. Killing performance on most x86 machines out there, on the other hand, is a bad thing.
treke

Inappropriate Ifdefs: BAD (5)

Christopher B. Brown (1267) | more than 13 years ago | (#749095)

If the system gets wedged up with a whole lot of #ifdefs, that makes it more and more difficult to maintain. LOTS of them can make software impossible to maintain.

I wouldn't be shocked if the stretching of boundaries that comes from:

  • "Big Iron" changes, as well as
  • Embedded System changes
winds up turning into there being some clear demands for forking.

The fundamental problem with a fork comes in the code that you'd ideally like to be able to share between the systems. Device drivers float particularly to mind.

After a 2-way fork, it becomes necessary to port device drivers to both forks, which adds further work.

And if a given driver is only ported to one fork, and not the other, can it correctly be said that "Linux supports the device," or do we need to be forever vague about that?

Re:The obvious solution: the kernel does have to f (3)

Spoing (152917) | more than 13 years ago | (#749096)

Soo.. Bleh, I hate it when people talk about linux liek hes omnipotent.

Read KT [linuxcare.com] . Read KT [linuxcare.com] often.

Re:What's wrong with ifdef's? (3)

mikpos (2397) | more than 13 years ago | (#749097)

Well you could put (the analog of) ifdefs into the Makefile. e.g. if there were big differences between conventional and big iron ways of doing things with feature 'foo', you would have 'foo-garbage.c' and 'foo-bigiron.c' and have make figure things out accordingly.

Of course then you would have to ensure that both offer a similar interface so that either can be used transparently. This *could* be a maintenance nightmare. I think there are a lot of ways that this *could* be done, but it depends greatly on the details involved whether it will be practical or not. I find it hard to believe that Linus would have overlooked something as obvious as ifdefs or makefile tricks, so he's probably used his (undoubtedly god-like) judgement to decide that it would be a bad idea in the long run.
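A compilable sketch of that split, with invented file and function names; the whole trick is that both variants export exactly the same interface, so the rest of the tree never knows which object file the Makefile linked in.

    /* foo.h -- the shared interface both variants must provide */
    #ifndef FOO_H
    #define FOO_H
    #include <stddef.h>
    void *foo_alloc_pages(size_t count);
    void  foo_free_pages(void *p);
    #endif

    /* foo-small.c -- the default implementation, kept simple and small */
    #include "foo.h"
    #include <stdlib.h>
    void *foo_alloc_pages(size_t count) { return calloc(count, 4096); }
    void  foo_free_pages(void *p)       { free(p); }

    /* foo-bigiron.c -- selected instead by the Makefile on huge machines; it
     * must define the same two functions, but is free to use per-node pools,
     * bigger caches, and whatever else a 256GB box wants. */

The maintenance worry mentioned above is exactly the interface-drift problem: nothing but discipline keeps the two files' behaviour in step.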

What's wrong with ifdef's? (5)

BlowCat (216402) | more than 13 years ago | (#749098)

I don't understand why anybody should fork code only because it has to behave differently on different systems. Why not use ifdef's? If too many ifdef's would be needed it may be better to have separate files and an option in "menu config". Even the current configuration system can handle it.

Forks are usually justified only if the original maintainer pollutes the source with hacks or changes the license.

Why reject? (3)

Shotgun (30919) | more than 13 years ago | (#749099)

Why are the patches being rejected? Couldn't they be conditionally compiled in?

There is a patch out there that stores the computer's state to disk before shutdown and then gives you an instant boot. My home machine is used that much, and my UPS needs repair. This patch would be useful for me, but I'd have to patch it in by hand and then I'd be out of sync with the official Mandrake kernel. That means I'd have to patch in security updates by hand.

The problem is, this patch is as useless to Big Iron as support for 256GB of memory is to me (right now). But why can't both Big Blue and I have our way with conditional compiles? All it would take are a couple more menu selections in xconfig.

Do you have more than 2G of memory?
Would you like instant-on?

hmm.. (4)

technos (73414) | more than 13 years ago | (#749100)

Perhaps the time has come to fork the older machines.. Few of us run Linux on anything less powerful than a Pentium, and even fewer on a 486.

I don't know, it depends on where the split of cost/benefit falls.. ZD doesn't say...

`Sides, having a Compaq/SGI/IBM 'approved' kernel patch doesn't hurt much..

It only makes sense (3)

systemapex (118750) | more than 13 years ago | (#749101)

I'm not claiming to be a kernel expert, but forking the kernel so that there would be kernels specialized for specific applications only seems logical. A builder doesn't go around hammering everything in sight, because the hammer obviously isn't the correct tool in every situation. It's great for pounding nails into 2x4s, but isn't so good when it comes to painting walls.

Specialized kernels are good, so long as the support behind all of these kernels remains great enough. I don't think I need to point out the possible pitfalls of forking the kernel and thus effectively forking the developers behind the kernel into two or more camps. But at some point, the Linux kernel that runs on a 386 should be different from the one that runs on the XYZ supercomputer, just because the latter can take full advantage of all the wonderful scalability that the XYZ supercomputer offers.

Anyway, as I said I'm not an expert but this just seems logical.

this is kinda kewl (1)

Sanchi (192386) | more than 13 years ago | (#749102)

I would love to see how well big iron will perform running Linux. Linux's strongest point is stability and IO. Big Iron's strongest point is, well, stability and IO.

One concern that I have is that the GPL could deter changes from coming. Corporations don't like the idea of giving something away for free (although it looks like IBM, Compaq and SGI are pushing for support). Also it seems that because of the Open Source nature of Linux, the Military (at least where I'm stationed) will not touch it. Which is a shame, because I could have used it in my last project :(

Sanchi

P.S. I am having a problem with X. It crashes when I look at it wrong. It happened when I changed my video card from my GeForce to my Voodoo3. (And it works fine with my GeForce.) E-mail if you can help please.

Kernel fork for big iron? Why not? (3)

Svartalf (2997) | more than 13 years ago | (#749103)

There's kernel "forks" for hard (deterministic) real-time (RT-Linux, etc.). There's kernel "forks" for non-MMU machines (ELKS, uCLinux, etc...). So, why not a "fork" for big iron? If the fork for big iron doesn't hinder current modern machines or improves overall operation- it will become the main fork with the one that just supports the older machines becoming like the other "forks" we see today.

Re:Let them fork (1)

Meleschi (4399) | more than 13 years ago | (#749105)

How fundamentally hard is it to change the code so different memory management techniques are used for different architectures? I'm not a coder by heart, much less a low level c or assembly coder, so I don't even know if this is possible.

It seems to me if IBM submitted a patch that fundamentally changed memory management for all architectures, of course it would get thrown out. What's the problem for having additional or different code for "mainframe" type computers vs. "desktop" or "server" type computers?

Re:There is a point: One size rarely fits all. (2)

Jeff Mahoney (11112) | more than 13 years ago | (#749106)

Your points are valid, but unfortunately don't apply here.

All the examples you've chosen are either processor architecture or device driver related. Most of the code that both of these classes use in the "core" of the OS is non-architecture dependent, and is coded for best general use.

Forking the kernel for "big iron" may be required because utilizing that many resources effectively requires different algorithms at the very core of the OS - scheduling, virtual memory, caching, etc.

The advantage of forking the kernel is the simplicity of maintaining the code for either. However, the major disadvantage -- as seen with the BSDs -- is that features in either tree just end up getting implemented twice.

It's a tough question to answer, and both choices have major long term implications.

-Jeff

wasting old pc's or electricity? (2)

Lawrence_Bird (67278) | more than 13 years ago | (#749108)

You don't really give too much info on your setup, but aren't you consuming a tremendous amount of power by running 3 or 4 machines when one new machine could do the whole thing?

Re:Why reject? (1)

jlg (215187) | more than 13 years ago | (#749109)

How about loadable virtual memory modules? It seems difficult, but what is the alternative?

A forked kernel will make it difficult to write new features for both. This will mean that people will just write for the kernel they want to use and ignore the other one. Of course code could be ported by someone, but it's kind of a duplication of effort.

If Linux does fork, it should be the 8MB 386 crowd that goes off and does their own thing. Computers aren't getting any less powerful, even embedded ones.
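
If "loadable virtual memory modules" ever happened, the natural shape would be a table of function pointers that the rest of the kernel calls through, with the policy behind it swappable at run time. Nothing like that exists in the stock kernel as far as I know; the sketch below is purely illustrative, and none of the names are real interfaces.

/* Illustrative only: a pluggable "VM policy" chosen at run time via an
 * ops table.  None of these names correspond to real kernel interfaces. */
#include <stdio.h>
#include <string.h>

struct vm_policy_ops {
    const char *name;
    unsigned long (*pages_to_reclaim)(unsigned long free, unsigned long total);
};

/* Small-box policy: keep only a tiny reserve of free pages. */
static unsigned long small_reclaim(unsigned long free, unsigned long total)
{
    unsigned long reserve = total / 64;
    return free < reserve ? reserve - free : 0;
}

/* Big-iron policy: keep a much larger cushion for bursty workloads. */
static unsigned long bigiron_reclaim(unsigned long free, unsigned long total)
{
    unsigned long reserve = total / 8;
    return free < reserve ? reserve - free : 0;
}

static const struct vm_policy_ops policies[] = {
    { "small",   small_reclaim },
    { "bigiron", bigiron_reclaim },
};

int main(int argc, char **argv)
{
    const struct vm_policy_ops *p = &policies[0];
    int i;

    for (i = 0; i < 2; i++)
        if (argc > 1 && strcmp(argv[1], policies[i].name) == 0)
            p = &policies[i];

    printf("policy %s: reclaim %lu of %lu pages\n",
           p->name, p->pages_to_reclaim(1000, 1000000), 1000000UL);
    return 0;
}

The catch, and probably why nobody has done it, is that the VM sits on hot paths where an extra indirect call per decision is not free.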

Microsoft parallels? (1)

Global-Lightning (166494) | more than 13 years ago | (#749111)

From the article:
"Should the Linux kernel group be dictating to hardware manufacturers how to architect their systems? Of course not."

Ummm, isn't this exactly what Microsoft does with its Hardware compatibility list [microsoft.com] and bus/driver specifications? [microsoft.com]

Re:Supporting 386s: Some Problems... (2)

Spoing (152917) | more than 13 years ago | (#749113)

I don't think Linus is going to listen to you. It doesn't look like you've read what he's said on these issues already, or have spent much time compiling kernels on different systems.

The issues you raise are packaging issues, important to people putting together distributions -- not kernel development or design issues. Even though that is the case, most distributions tend to load specific hardware support as a module.

If you roll your own kernel, you have ultimate control over what disk, BIOS, and bus types are supported. Very little in the Linux kernel is mandatory; that's why it runs on such wildly different systems.
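
For anyone who hasn't rolled a module themselves: the boilerplate is tiny. A bare-bones module in the 2.2-era style looks roughly like this; treat it as a sketch, since the exact build flags depend on your kernel and compiler.

/* Bare-bones loadable module, 2.2-era style.  Compile against your kernel
 * headers, e.g.
 *   cc -O2 -D__KERNEL__ -DMODULE -I/usr/src/linux/include -c hello.c
 * then load with insmod and remove with rmmod. */
#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)          /* called by insmod */
{
    printk("<1>hello: module loaded\n");
    return 0;
}

void cleanup_module(void)      /* called by rmmod */
{
    printk("<1>hello: module unloaded\n");
}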

Re:Supporting 386s: Some Problems... (5)

DFX (135473) | more than 13 years ago | (#749114)

Let me clear up a few things here.

1. Earlier platforms generally had no CD-ROM.
Install via NFS or on a pre-formatted hard disk with all the necessary files. Been there, done that.

2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies.
You can boot from a 5.25" floppy disk as well as from a 3.5" one. Aside from booting the installer, there is no need for a floppy drive at all.

3. Earlier machines had RAM limitations
Many old 386s and 486s can take 16 or even 32 MB of RAM. That's more than enough for a small (slow) home server. Even 8 MB does the job.

4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes
Y2K is only an issue during boot-up; after that you can set the system's time to whatever you want. From what I've seen, Linux deals better with really old motherboards than with some brand-new ones.

5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?
There are very good SVGA cards for ISA, although running XFree86 with a "modern" window manager on such an old box is suicide. However, any kind of video card does the job for a "server" type of computer.

6. Earlier network cards are not all supported to get around many of these limitations
Granted, very old ISA cards might not work well, but many do. NE2000 or old 3Com cards? No problem; they work fine and deliver good speeds too.

To make a long story short, killing support for old systems is a Bad Thing IMHO, and it isn't necessary either; it would only make the kernel tarball smaller. I'm all for conditional compiles, and I've actually wondered why some of the kernel patches out there (like the openwall patch) haven't been put into the mainstream kernel as a 'make config' option. If they can put in accelerator thingies for Apache, why not this?

A brief history of computing. (3)

pete-classic (75983) | more than 13 years ago | (#749115)

Okay, first there were systems, and they were all different.

Then someone "abstracted" them with "BIOS"

Then there were lines of systems, and they were all different.

Then someone "abstracted" them with "C"

Then there were platforms, and they were all different.

Someone (Transmeta?) will come up with a way of abstracting platforms (or architectures) and make them "seem" the same.

This relates directly to performance increases. When you find yourself wondering what is going to make a 10GHz system better than a 1GHz system, I think the answer is the level of abstraction.

Any number of quibbles can be made with the above statements, but I am illustrating a point, not being a historian.

-Peter

Is The Fork Neccessary? (1)

Scooter[AMMO] (98851) | more than 13 years ago | (#749116)

What is preventing the Linux development gurus from compiling different memory algorithms or Big Iron necessities into the kernel depending on which architecture #defines have been set?

I can see how Linus and the crew don't want megs and megs of source code that is specific to certain architectures, and that as much of that code as possible should be consolidated, but is there *so* much code that has to be changed, in so many aspects of the OS, that it requires maintaining a completely separate tree?

Also, if a Big Iron fork does take place, will the public support a corporate-driven project, or will we look to Linus, Alan, and the bunch to maintain that tree as well? How much access do those guys have to Big Iron anyway? Not much, I guess.

This isn't a smug post, it's just a legit query. I'm not up to speed on many of the quirks involved in OS kernel development ;) ...

The obvious solution: the kernel does have to fork (2)

danpbrowning (149453) | more than 13 years ago | (#749117)

The kernel DOES NOT need to fork over this. If someone did fork the kernel, it would only be because they couldn't design the patch well enough.
If a patch is getting rejected by Linus, it's not because he favors 386s over 40k-BogoMips machines; it's because it's a bad design. Besides, if they want to make changes to the kernel that help some people (but hurt the majority), they need to design them so that they can be a compile-time feature, in the same way that 1GB or 2GB support is a compile-time option right now.
Of course, that's always easier said than done. Another solution would be to forever maintain a big-iron-patch.tgz, but the reason they want to fork the kernel is probably that maintaining a patch like that is too hard.
Yet another solution would be to start another branch (Alpha, MIPS, Intel, and BIG IRON), but the changes include more than CPU stuff, so that would be an issue.
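
That 1GB/2GB switch is a good example of how far a single compile-time option can reach: on i386 it moves the kernel/user split of the virtual address space, and the whole VM layer follows from that one constant. The sketch below mimics what I remember of the 2.2-era i386 code; the CONFIG name and values are from memory, so treat them as approximate.

/* Sketch of a compile-time memory-size option changing a core constant.
 * CONFIG_2GB mirrors the 2.2-era i386 option as I remember it; the exact
 * symbol and values may differ.  Build with "cc -DCONFIG_2GB split.c" to
 * see the alternative layout. */
#include <stdio.h>

#ifdef CONFIG_2GB
#define PAGE_OFFSET 0x80000000UL   /* kernel claims the top 2 GB of virtual space */
#else
#define PAGE_OFFSET 0xC0000000UL   /* default: the usual 3 GB user / 1 GB kernel split */
#endif

int main(void)
{
    printf("user space:   0x00000000 - 0x%08lx\n", PAGE_OFFSET - 1);
    printf("kernel space: 0x%08lx and up\n", PAGE_OFFSET);
    return 0;
}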

There is a point: One size rarely fits all. (5)

d.valued (150022) | more than 13 years ago | (#749118)

This was bound to happen sooner or later. The Linux kernel's flexibility is being taken to the limit, and people are forgetting the easiest way to improve performance for their particular rig: customize your kernel! You can add all the code in the universe, and then you pick and choose the particular things you need or don't need.

Say I run a 486/25 with 16 MB RAM as an IP Masq router. The hard drive is an old IDE with 600 megs of space. I have two network cards, and that's about it. Do I need SCSI support? Do I need to support joysticks, X, Pentiums, AX.25, or anything else? No! I compile a kernel specifically to run the IP Masq, and run it well.

My P100 laptop, on the other hand, needs a bit more. I use it for packet, so I need AX.25. It uses PCMCIA, so PCMCIA support needs to go in. I use XWS to run Netscape and the GIMP, so I need graphics. But my HD is not SCSI, so I yank out SCSI. My CPU is subject to the 0xf00f bug, so the workaround gets included. I brew a custom kernel, and boot time is a lot shorter.

My big rig is a C433. I need just about everything, as I have a 3dfx card for Quake3, XWS, a SCSI scanner, and a connection to my packet base station. I optimize compilation for the higher-end computers. I plan on getting a Cube from Apple and putting SuSE on it. Again, by optimizing the options I optimize my system.

Get the point? If you want a one-size-fits-all kernel, use Windows. If you want a kernel which can be adjusted for your particular and peculiar environment, use Linux and customize your kernel! Now, for my laptop.

Re:Inevitable, but not so bad (3)

Mark F. Komarinski (97174) | more than 13 years ago | (#749119)

Uhmmm....There would be a few problems:

1) Is the resulting code still Linux?
This is a BIG question, especially for IBM and SGI, who want to say they're Linux supporters. If Linus doesn't grant use of the Linux name to their OS, they're back to naming the resulting kernel something other than Linux. Big PR problem.

2) Will the "Linus-approved" patches make it into the follow-up kernels released by IBM and SGI?
I'd be willing to bet both companies are willing to do the right thing and include them, but how big can this fork get?

Now, all that aside, distros have been doing small-scale forks for a while now. I think SuSE had a 1GB mem patch, and RedHat frequently patches the kernels they distribute. Nothing bad for most users.