
High Performance Linux Kernel Project — LinuxDNA

timothy posted more than 4 years ago | from the squeezing-out-performance dept.

Intel 173

Thaidog submits word of a high-performance Linux kernel project called "LinuxDNA," writing "I am heading up a project to get a current kernel version to compile with the Intel ICC compiler and we have finally had success in creating a kernel! All the instructions to compile the kernel are there (geared towards Gentoo, but obviously it can work on any Linux) and it is relatively easy for anyone with the skills to compile a kernel to get it working. We see this as a great project for high performance clusters, gaming and scientific computing. The hopes are to maintain a kernel source alongside the current kernel ... the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
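For the curious, the kernel's build system already lets you override the compiler per invocation, and the project's approach is a wrapper script around icc, so with the patched tree the build itself is roughly the usual routine. A sketch only; the wrapper name below is a placeholder, the project documents its own:

make CC="icc-wrapper" bzImage modules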


can't you guys stop ripping people off? (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#27005289)

you can't even come up with an original fucking name! how stale.

Thank to this fast linux kernel I got first post (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#27005299)

Yippeee

Good job (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#27005373)

waste a mod point on an AC, what a good move, wanna try again?

Re:Thank to this fast linux kernel I got first pos (1)

McGiraf (196030) | more than 4 years ago | (#27005425)

hum... not that impressive ...

Re:Thank to this fast linux kernel I got first pos (-1, Troll)

Anonymous Coward | more than 4 years ago | (#27005489)

I want to eat your puckered ass.

Mmmmmm, twofo.

GCC compatibility (1, Interesting)

psergiu (67614) | more than 4 years ago | (#27005367)

Why don't they try to make ICC fully GCC compatible so we can recompile EVERYTHING with ICC and get an 8-9% to 40% performance gain?

Re:GCC compatibility (4, Insightful)

NekoXP (67564) | more than 4 years ago | (#27005395)

Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.

Re:GCC compatibility (0, Insightful)

Anonymous Coward | more than 4 years ago | (#27005471)

Too bad that C99 (et al.) isn't enough to write a high performance kernel... Not even close (no interrupts, no threads, etc, etc...)

Re:GCC compatibility (2, Informative)

NekoXP (67564) | more than 4 years ago | (#27005707)

Amazing. You have no idea what you're talking about :D

C99 doesn't stop you writing interrupt code OR threaded code.

Re:GCC compatibility (1)

smitty_one_each (243267) | more than 5 years ago | (#27007973)

Of course not. C99 is the standard. Has there ever been an executable standard? U R teh st00p3d.

Re:GCC compatibility (0)

dmp123 (547038) | more than 4 years ago | (#27005501)

*Compilers* should be written to standards too, not just code.

Don't let them off the hook too easily.

If all the compilers stuck rigidly to standards, you'd never be able to have code that compiled with one and not the other.

David

Re:GCC compatibility (4, Insightful)

NekoXP (67564) | more than 4 years ago | (#27005805)

:)

I think the point is that ICC has been made "gcc compatible" in certain areas by providing a lot of pre-baked defines, and accepting a lot of gcc arguments.

In the end, though, autoconf/automake and cmake and even a hand-coded Makefile could easily abstract the differences between compilers so that -mno-sse2 is used on gcc and --no-simd-instructions=sse2 on some esoteric (non-existent, I made that up) compiler. I used to have a couple of projects which happily ran on BSD or GNU userland (BSD make, GNU make, jot vs. seq, gcc vs. icc vs. amiga sas/c :) and all built fairly usable code from the same script automatically depending on the target platform.
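A hand-rolled Makefile version of that abstraction might look something like this (a sketch; the icc flag is illustrative, so check the compiler manual before trusting it):

# pick the SIMD-disabling flag per compiler instead of hardcoding gcc's
CC ?= gcc
ifneq (,$(findstring icc,$(CC)))
NOSIMD = -no-vec              # illustrative icc spelling, verify locally
else
NOSIMD = -mno-sse2            # the gcc spelling from the example above
endif

app: app.c
	$(CC) $(NOSIMD) -o $@ $<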

The Linux kernel's over-reliance on GCC and its hardcoded options means you have to port GCC to your platform first, before you can use a compiler which may already be written by/for your CPU vendor (a good example was always Codewarrior.. but that's defunct now).

Of course there is always configure script abuse; just like you can't build MPlayer for a system with fewer features than the one you're on without specifying 30-40 hand-added options to force everything back down.

A lot of it comes down to laziness - using what you have and not considering that other people may have different tools. And of course the usual Unix philosophy that while you may never need something, it should be installed anyway just because an app CAN use it (I can imagine using a photo application for JPEGs alone, but they will still pull in every image library using the dynamic linker, at load time.. and all these plugins will be spread across my disk).

Re:GCC compatibility (1)

Mad Merlin (837387) | more than 5 years ago | (#27006707)

The Linux kernel's over-reliance on GCC and its hardcoded options means you have to port GCC to your platform first, before you can use a compiler which may already be written by/for your CPU vendor (a good example was always Codewarrior.. but that's defunct now).

GCC itself is rather prolific... Is there any noteworthy platform that it doesn't already support?

Re:GCC compatibility (2, Funny)

Jurily (900488) | more than 5 years ago | (#27006995)

GCC itself is rather prolific... Is there any noteworthy platform that it doesn't already support?

Commodore 64?

Re:GCC compatibility (1)

NekoXP (67564) | more than 5 years ago | (#27007335)

None, but you should think about the hurdles of porting it to a non-POSIX operating system like AmigaOS (yes they did..) and MorphOS (which is like AmigaOS but the GCC port supports a bunch of craaazy extra options) and OMG think of the children!!!!!!!

Both of those had to rely on a special portability library (newlib port in the first instance, and the ancient "ixemul" library in the second instance) to get it to work, notwithstanding the actual platform features and ABI support.

Maybe they're not noteworthy but there's plenty of scope for a non-POSIX operating system in the embedded space, where having a custom compiler is part of daily life. What about when you're supporting a new architecture which isn't in mainline GCC, for instance, using CodeSourcery patches for a while to enable custom processor features?

Re:GCC compatibility (1)

BrokenHalo (565198) | more than 5 years ago | (#27007873)

GCC itself is rather prolific... Is there any noteworthy platform that it doesn't already support?

In any case, the Intel compiler, with its US$599 price-tag, is unlikely to be a top contender for inclusion in any distro.

Sure, there is a "free non-commercial" download available, but you don't have to be Richard Stallman to see the downside of that.

Re:GCC compatibility (1)

michaelmuffin (1149499) | more than 5 years ago | (#27007909)

gcc doesn't run on Plan 9, and Plan 9 is just wonderful for clusters. But as most HPC stuff is gcc-only, you either have to port every bit of gcc software you want to use or port gcc itself. Both are rather daunting tasks.

Re:GCC compatibility (1)

Nutria (679911) | more than 5 years ago | (#27007835)

A lot of it comes down to laziness - using what you have

No, it's called using your tools to their fullest capacity.

Re:GCC compatibility (1)

NekoXP (67564) | more than 5 years ago | (#27007933)

There's no reason you can't build your code to support all the tools you could possibly use to their fullest capacity, though. No reason at all. Except when one tool doesn't do something that the other does that you find important.

I very much doubt any C compiler shipping these days misses the features required to build the kernel, but the kernel developers only care about adding in GCC options and GCC pragmas and attributes.. in spite of those who would prefer to use some other compiler.

Re:GCC compatibility (1)

gzipped_tar (1151931) | more than 5 years ago | (#27008913)

(I can imagine using a photo application for JPEGs alone, but they will still pull in every image library using the dynamic linker, at load time.. and all these plugins will be spread across my disk)

I don't think this is how it's done... In your example, the functions in an imaging library only have their stubs in the PLT (procedure linkage table) loaded into the process. A stub is replaced by the real code once it gets called. If a non-JPEG backend is never used, it will not be loaded into memory. The argument about disk usage remains valid, though.
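For what it's worth, an app that wanted to avoid even mapping unused backends could defer them explicitly with dlopen() instead of linking them at build time - a different mechanism from the PLT lazy binding described above, and it addresses the disk-spread complaint too. A sketch, with libpng purely as a stand-in:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* the PNG backend is only mapped if we actually reach this call */
    void *h = dlopen("libpng.so", RTLD_LAZY);
    if (!h) {
        fprintf(stderr, "PNG support unavailable: %s\n", dlerror());
        return 1;
    }
    /* probe one well-known symbol just to show it resolved */
    printf("png_create_read_struct %s\n",
           dlsym(h, "png_create_read_struct") ? "found" : "missing");
    dlclose(h);
    return 0;
}

Build with: cc demo.c -ldl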

Yes! (3, Insightful)

Arakageeta (671142) | more than 4 years ago | (#27005791)

I completely agree. I ran into this when I was working as a software architect on a project that had been around for a while. Contracts required compiler compatibility instead of standards compatibility. It made updates to the dev environment much more complicated. The contracts should have specified standards, but their writers didn't know any better-- the customer had no need to stick to a compiler product/version. It also makes your code more dependent upon the compiler's quirks. I would mod you up if I had the points.

Re:GCC compatibility - Time to move to Java? (5, Funny)

Anonymous Coward | more than 4 years ago | (#27005923)

They should think about moving to a Java kernel. They could just bootstrap one of the new, clever "Just-In-Time" Virtual Machines at powerup.
These JVMs are able to dynamically optimize the running code in real-time, far beyond what could be achieved by C or C++ compilers, without any performance degradation.
A Java kernel would likely run at least 50 times faster than the very best hand coded assembler - and since the language is completely type-safe and doesn't implement dangerous legacy language features such as pointers or multiple-inheritance, it would be unlikely to ever crash.

Re:GCC compatibility - Time to move to Java? (-1, Redundant)

Anonymous Coward | more than 5 years ago | (#27008069)

Java is not a "systems language", meaning you don't write operating systems and systems level code in it for very good reasons.

One of them being: name me a processor that can run Java bytecode natively.

Re:GCC compatibility - Time to move to Java? (3, Informative)

Anonymous Coward | more than 5 years ago | (#27008173)

There are actually quite a few ARM processors that do. See Jazelle [wikipedia.org] .

Re:GCC compatibility - Time to move to Java? (4, Informative)

Ninnle Labs, LLC (1486095) | more than 5 years ago | (#27008357)

Java is not a "systems language", meaning you don't write operating systems and systems level code in it for very good reasons.

Funny cause Sun already did that like 13 years ago.

One of them being, name me a processor that can run Java bytecode nativly.

The ARM9E.

Re:GCC compatibility (3, Informative)

SpazmodeusG (1334705) | more than 5 years ago | (#27006501)

And what is the C99 standard way to tell the compiler to pack structures with 1-byte alignment?

(Hint: there is no standard way)

Re:GCC compatibility (4, Informative)

NekoXP (67564) | more than 5 years ago | (#27007317)

There isn't one, so what you do is use pragmas (I remember #pragma pack(1)) or attributes (__attribute__((packed)) or something similar).

Of course they're compiler-specific but there's no reason that code can't be written wrapped in defines or typedefs to stop compiler-specific stuff getting into real production code nested 10 directories down in a codebase with 40,000,000 lines.
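A minimal sketch of that wrapping (my example, assuming only compilers that accept the gcc attribute syntax - icc on Linux does - and failing loudly for anything else):

/* one header owns the compiler-specific spelling */
#if defined(__GNUC__) || defined(__INTEL_COMPILER)
#define PACKED __attribute__((packed))
#else
#error "define PACKED for this compiler"
#endif

struct PACKED wire_header {
    unsigned char  type;    /* offset 0 */
    unsigned short length;  /* offset 1 - no padding byte inserted */
};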

Linux does an okay job of this - but since coders usually reference the compiler manual to use these esoteric pragmas and types, they are usually told "this is specific to GCC" (GCC does a good job of this anyway) so they should be wrapping them by default to help their application be portable and maintainable to future compilers (especially if they change the attribute name or the way it works - as has been done on many a GCC, let alone other compilers).

What usually nukes it (and why linux-dna has a compiler wrapper) is because they're hardcoding options and doing other weird GCC-specific crap. This is not because they are lazy but because the Linux kernel has a "we use GCC so support that, who gives a crap about other compilers?" development policy and it usually takes some convincing - or a fork, as linux-dna is - to get these patches into mainline.
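The wrapper idea itself is simple enough; here's a sketch of the general shape (not linux-dna's actual script, and the flag list is purely illustrative):

#!/bin/sh
# strip gcc-only flags the other compiler would reject, pass the rest through
n=$#
i=0
while [ $i -lt $n ]; do
    a=$1; shift; i=$((i+1))
    case "$a" in
        -fno-stack-protector|-fno-delete-null-pointer-checks) ;;  # drop gcc-isms
        *) set -- "$@" "$a" ;;                                    # keep
    esac
done
exec icc "$@"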

Re:GCC compatibility (1)

SpazmodeusG (1334705) | more than 5 years ago | (#27007945)

Sure, I could put all the compiler directives inside #if-#else blocks, but how do I handle possible new compilers with new directives that I don't even know about yet?
Like the kernel authors, I could do everything you say and still have my code break in a compiler that does things differently.

The only real solution is for compilers to all start doing things in a fairly standard way. Which leads us back to the great-grandparent's suggestion...

Re:GCC compatibility (3, Interesting)

NekoXP (67564) | more than 5 years ago | (#27008553)

I find it hard to believe that the Linux kernel developers never heard of ICC. Or, to take another example, never used Codewarrior or XL C (IBM's PPC compiler, especially good for POWER5 and Cell) or DIAB (or Wind River Compiler or whatever they call it now). Or even Visual C++. Personally I've had the pleasure of using them all.. they all do things differently, but when you have a development team which is using more than one.. I once worked on a team where most of the developers had DIAB, but they didn't want to pay for licenses for EVERYONE, so it was just for the team leaders and release engineering guys, so we all got GCC instead. We had to be mindful not to break the release builds.. and the work ethic meant everything went pretty much fine all round.

All of them at one time produced, or still produce, much better code and have much better profiling than GCC, and are used a lot in industry. If the commercial compiler doesn't do what you want or is too expensive, GCC is your fallback. Linux turns this on its head because it "wants" to use as much free, GNU software as possible, but I don't think the development process should be so inhibited as to ignore other compilers - especially considering they are generally always far better optimized for an architecture.

As a side note, it's well known that gcc 2.95.3 generates much better code on a lot of platforms, but some apps out there refuse to compile with gcc 2.x (I'm looking at rtorrent here.. mainly because it's C++ and gcc 2.x C++ support sucks. This is another reason why commercial compilers are still popular :) and some only build with other versions of gcc, with patches flying around to make sure it builds with the vast majority. Significant amounts of development time are already "wasted" on compiler differences even on the SAME compiler, so putting ICC or XCC support in there shouldn't be too much of a chore, especially since they are broadly GCC compatible anyway.

Like the article said, most of the problem, and the reason they have the wrapper, is to nuke certain gcc-specific and arch-specific arguments to the compiler, and the internal code is mostly making sure Linux has those differences implemented. There is a decent white-paper on it here [intel.com] . The notes about ICC being stricter in syntax checking are enlightening. If you write some really slack code, ICC will balk. GCC will happily chug along generating whatever code it likes. It's probably better all round (and might even improve code quality generated by GCC, note the quote about GCC "occasionally" doing the "right" thing when certain keywords are missing) if Linux developers are mindful of these warnings, but as I've said somewhere in this thread, Linux developers need some serious convincing on moving away from GCC (I've even heard a few say "well, you should fix GCC instead", rather than take a patch to fix their code to work in ICC)

Re:GCC compatibility (1)

JohnFluxx (413620) | more than 5 years ago | (#27008923)

> I've even heard a few say "well, you should fix GCC instead"

Well what's wrong with that? If GCC is parsing "bad" code without giving warnings, then GCC should be fixed. The bad code can be fixed to avoid those warnings.

Re:GCC compatibility (-1, Troll)

Anonymous Coward | more than 5 years ago | (#27006781)

shut up, dick sucker.

Re:GCC compatibility (0, Offtopic)

NekoXP (67564) | more than 5 years ago | (#27007369)

yes, dad.

Re:GCC compatibility (0)

Anonymous Coward | more than 5 years ago | (#27007785)

You do realize that you just admitted to sucking dicks, and by implication, your father's?

Re:GCC compatibility (1)

NekoXP (67564) | more than 5 years ago | (#27007927)

yeah on a comments thread to some wanker who won't even get a Slashdot account..

in the grand scheme of things, not very important, wouldn't you say?

Re:GCC compatibility (4, Interesting)

forkazoo (138186) | more than 5 years ago | (#27006793)

Compilers shouldn't need to be compatible with each other; code should be written to standards (C99 or so) and Makefiles and configure scripts should weed out the options automatically.

Unfortunately, writing an OS inherently requires making use of functionality not addressed in the C standards. If you stick only to behavior well defined by the ISO C standards you *can* *not* write a full kernel. Doing stuff that low level requires occasional ASM, and certainly some stuff dependent on a particular hardware platform. I think that being as compiler-portable as it is hardware-portable should certainly be a goal. The ability to build on as many platforms as possible certainly helps shake out bugs and bad assumptions. But just saying "clean it up to full C99 compliance, and don't do anything that causes undefined behavior" would be ignoring the actual reality of the situation, and makes as much sense as porting the whole kernel to Java or Bash scripts.
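To make that concrete, here's the sort of thing no amount of ISO C buys you - masking interrupts on x86 takes inline assembly, in compiler-specific syntax. This is the gcc/icc spelling; a sketch in the spirit of what arch code does, not the kernel's exact source:

/* ISO C has no way to express this */
static inline void local_irq_disable(void)
{
        __asm__ __volatile__("cli" : : : "memory");
}

static inline void local_irq_enable(void)
{
        __asm__ __volatile__("sti" : : : "memory");
}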

Re:GCC compatibility (1)

NekoXP (67564) | more than 5 years ago | (#27007355)

See my other reply on the topic.

I fully understand the limitations of the C99 standard, but there are also ways to stop your code being tied to a compiler which it seems a lot of coders simply do not bother to use because supporting GCC is their only goal.

Re:GCC compatibility (2, Insightful)

Punto (100573) | more than 5 years ago | (#27006597)

Why don't they improve GCC to get an 8-9% to 40% performance gain? It's not like Intel has some kind of secret magical piece of code that lets them have a better compiler.

Re:GCC compatibility (0)

Anonymous Coward | more than 5 years ago | (#27006681)

They might, on their own hardware.

Re:GCC compatibility (3, Informative)

forkazoo (138186) | more than 5 years ago | (#27006847)

Why don't they improve GCC to have a 8-9 to 40% performance gain? it's not like intel has some kind of secret magical piece of code that lets them have a better compiler.

To a large extent, they have. ICC really no longer has the performance lead that it once did over gcc. There was absolutely a time when the difference was consistent, and significant. But a lot has changed since gcc 2.95, when egcs existed. The 4.x branch in particular has been about improving the optimisation capabilities of the compiler. These days, I generally recommend just going with gcc to anybody who asks me.

Re:GCC compatibility (4, Informative)

Bert64 (520050) | more than 5 years ago | (#27008945)

Depends on the CPU... gcc has reasonable performance on x86, but on ia64 or ppc the vendor-supplied compilers have a big advantage. Even on x86, icc leads by a considerable margin in some areas, especially on very new processors.

Re:GCC compatibility (1)

complete loony (663508) | more than 5 years ago | (#27007377)

Personally, I'm waiting for clang to reach feature / compatibility parity with gcc. It should be able to compile code faster than gcc and in many cases produce better optimised binaries. But there is still a lot of work to be done.

Re:GCC compatibility (1)

Crossmire (1393021) | more than 5 years ago | (#27008379)

Actually, it's quite likely they do. The beauty of software patents.

Re:GCC compatibility (1)

Bert64 (520050) | more than 5 years ago | (#27008911)

The performance gap is even bigger on IA64 too...

Portability.. (5, Insightful)

thesupraman (179040) | more than 4 years ago | (#27005407)

IMHO This is a great development, for one important reason.

Portability of the kernel.

GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code: the wider the range of compilers used, the more portable and robust the code should become.

I know there will be the usual torrent of its-just-not-open-enough rants, but my reasoning has nothing to do with that, it is simply healthy for the kernel to be compilable across more compilers.

It also could have interesting implications with respect to the current GCC licensing 'changes' enforcing GPL on the new plugin structures, etc.

GCC is a wonderful compiler, however it has in the past had problems with political motivations rather than technical ones, and moves like this could help protect against those in the future (some of us still remember the gcc->pgcc->egcs->gcc debacle).

Of course no discussion of compilers should happen without also mentioning LLVM, another valuable project.

Re:Portability.. (4, Insightful)

mrsbrisby (60242) | more than 5 years ago | (#27006227)

GCC is a great compiler, but relying on it excessively is a bad thing for the quality of kernel code ... it is simply healthy for the kernel to be compilable across more compilers.

Prove it.

The opposite (relying on GCC is a good thing for code quality) seems obvious to me. The intersection of GCC and ICC is smaller than GCC, so I would assume that targeting something big would afford greater flexibility in expression. As a result, the code would be cleaner, and easier to read.

Targeting only the intersection of ICC and GCC may result in compromises that confuse or complicate certain algorithms.

Some examples from the linked application include:

  • removing static from definitions
  • disabling a lot of branch prediction optimizations
  • statically linking closed-source code
  • tainting the kernel making debugging harder

I cannot fathom why anyone would think these things are "good" or "healthy", and hope you can defend this non-obvious and unsubstantiated claim.

(some of us still remember the gcc->pgcc->egcs->gcc debacle).

When pgcc showed up, it caused lots of stability problems, and there were major distribution releases that made operating a stable Linux system very difficult: 2.96 sucked badly.

The fact that gcc2 still outperforms gcc4 in a wide variety of scenarios is evidence this wasn't good for technical reasons, and llvm may prove RMS's "political" hesitations right after all.

I'm not saying gcc4 isn't better overall, and I'm not saying we're not better for being here. I'm saying it's not as clear as you suggest.

Re:Portability.. (0)

Anonymous Coward | more than 5 years ago | (#27006379)

...and llvm may prove RMS's "political" hesitations right after all.

What does this mean?

LLVM has enabled propritary forks of GCC (0)

Anonymous Coward | more than 5 years ago | (#27006983)

LLVM has enabled proprietary forks of GCC, effectively.

For example, Adobe has recently released a C -> FlashVM compiler. It leverages GCC's great front end. Some of the people most excited by Alchemy were the folks working on open flash engines (and projects like haXe); unfortunately, Alchemy uses LLVM to couple GCC's front end with a proprietary code generation backend.

So it looks like we're headed back to the bad old days where everything had its own proprietary and incompatible compiler. :(

dunno exactly (1)

emj (15659) | more than 5 years ago | (#27007179)

I might be completely wrong but:

RMS felt that making it easy to produce plugins for GCC would be a very bad idea, since closed source could exploit this. We really want GCC improvements to be free software, so his hesitation has some merit.

Exactly how this relates to LLVM I dunno..

Re:Portability.. (1)

LeafOnTheWind (1066228) | more than 5 years ago | (#27008105)

Try compiling your C Real Mode code in GCC and get back to me.

Re:Portability.. (2, Insightful)

thesupraman (179040) | more than 5 years ago | (#27009001)

Oh, wait a second, I see the problem here.

You are a moron.

What exactly do you think happens when GCC changes behavior (as it has done in the past, many times) within the C spec?

Perhaps we better freeze on version x.y.z of GCC?

The same would apply, for example, to assumptions about branch prediction - gcc can, and quite probably one day will, change behavior - do you really want major features of the kernel to change behavior when this happens?
The good effect this will have when addressed properly (and remember, what you are referencing above is a small group making a starting attempt to achieve this outcome..) is that anything worthwhile AND compiler-specific will become clearly marked and optional to the compiling process - therefore increasing the total quality of the kernel. Such assumptions should NEVER be simply spread through the code unmarked.

By supporting a range of compilers we help make the kernel MORE robust to such changes, and these are both highly competent compilers, so the 'intersection' of features is actually most of the C/C++ specs..

Of course you obviously have zero experience of such things. You seem to think 'better' means more highly tuned code - try maintaining a major project for more than 6 months, and you may well learn a thing or two.

pgcc, and more importantly egcs, were the only things that broke the complete stagnation and navel-gazing of gcc that was threatening to cause its death. Without the hard work and risk taken by the developers of both, gcc would not be nearly as strong as it is now.

Again, you don't seem to know what you are talking about. Do you perhaps measure compiler 'goodness' by Dhrystone MIPS?

40% faster kernel, but what overall performance? (3, Interesting)

whoever57 (658626) | more than 4 years ago | (#27005429)

Since all the userland code is still compiled with GCC, what overall performance improvement will this bring?

Ingo A. Kubblin is quoted as saying:

"... boost up to 40% for certain kernel parts and an average boost of 8-9% possible"

is that an 8-9% overall speedup of applications, or just of kernel tasks?

Re:40% faster kernel, but what overall performance (1)

jd (1658) | more than 4 years ago | (#27005559)

I would imagine that it means for the kernel. We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive. Anything that is malloc-intensive may be kernel-intensive if you're using a VM-based memory pool rather than a pre-allocated one.

I'm also wondering how this would compare to using Cilk++ and #defining the few keywords it has to the standard keywords when using vanilla GCC or ICC.

Perhaps there should be a table showing the relative performance of the different kernel subsystems under different compilation methods.

Re:40% faster kernel, but what overall performance (3, Interesting)

setagllib (753300) | more than 5 years ago | (#27006363)

If your program is malloc-intensive and you care about performance, you may as well just use a memory pool in userland. It is very bad practice to depend upon specific platform optimisations when deciding which optimisations not to perform on your code. Then you move to another operating system like FreeBSD or Solaris and find your assumptions were wrong and you must now implement that optimisation anyway.
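A toy version of such a userland pool, to show how little it takes (a sketch: fixed capacity, no per-object free, single-threaded):

#include <stddef.h>
#include <stdlib.h>

struct arena { char *base, *next, *end; };

static int arena_init(struct arena *a, size_t size)
{
        a->base = a->next = malloc(size);   /* one kernel round-trip, up front */
        a->end  = a->base ? a->base + size : NULL;
        return a->base != NULL;
}

static void *arena_alloc(struct arena *a, size_t n)
{
        void *p;
        n = (n + 15) & ~(size_t)15;         /* keep 16-byte alignment */
        if ((size_t)(a->end - a->next) < n)
                return NULL;                /* pool exhausted */
        p = a->next;                        /* hot path is a pointer bump */
        a->next += n;
        return p;
}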

Re:40% faster kernel, but what overall performance (1)

Jurily (900488) | more than 5 years ago | (#27007135)

We would then need to factor in how much time user applications spend in the kernel. Anything that is I/O-intensive is kernel-intensive.

What do you mean? I don't think icc will speed up my hard drive.

Re:40% faster kernel, but what overall performance (1)

jd (1658) | more than 5 years ago | (#27008535)

It won't speed up the hard drive, but it should reduce the latency of a context switch (something like 21 microseconds, isn't it?) and it should also reduce the latency involved in going through the various layers of the kernel.

Yes, this isn't much in comparison to the speed of the drive, but that's not the point. I didn't say it would speed it up by a lot, merely that it would speed up.

I don't know what the latency is within the kernel in the VFS layer or within the different filesystems (ignoring mechanical delays whether from reading the data or any metadata needed due to the FS algorithm), but I can be certain it won't be zero. I can also be certain that much of this latency won't be synchronized with the disk spinning, so it's not going to vanish in a spout of parallelism. Although I can't see any reason why this would be impossible if the FS and hard drive were designed in tandem. That's not the way it's usually done, though.

The practical upshot is that using ICC and getting the 8% savings in the kernel might give you a 0.0008% improvement in performance (assuming no savings via the drive cache). Not a whole lot, certainly not enough to show on any but the most sensitive of disk I/O performance gauges, but it's still a saving.

If the drive has an on-board RAM cache large enough to eliminate consideration of the mechanical components, then I/O savings would return to the more normal 8%.

Wasn't it just a couple of years ago... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#27005447)

that Democrats were pooh-poohing the US missile defense program, saying it was so complex a problem that it was totally infeasible? Now what do you have to say for yourselves? Man, if the world stood by and waited for Democrats to actually create solutions, we'd be in some kind of massive global financial meltdown or something...

http://abcnews.go.com/International/story?id=6965611&page=1 [go.com]

Re:Wasn't it just a couple of years ago... (1)

ClosedSource (238333) | more than 5 years ago | (#27006455)

Get back to us when the US missile defense system has actually destroyed a foreign missile (assuming that Slashdot is still around then).

Re:Wasn't it just a couple of years ago... (1)

thewils (463314) | more than 5 years ago | (#27006877)

It's not designed to do that. It's designed to suck up as much money as possible whilst simply threatening to down a missile.

gcc lock-in broken (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#27005491)

The C language was designed so that virtually anyone could write a compiler. Yet the GNU project managed to add enough extensions to bind major open source projects to their particular implementation.
I wonder if this spurs gcc development, now that they've lost the most prominent victim of their demands that everyone use the GNU/ prefix.

My post is 5-9% faster to read overall... (2, Interesting)

mattaw (718560) | more than 4 years ago | (#27005563)

...and 40% faster in parts. FACTS - give me some context to judge if this is good or bad.

Looking at Amdahl's law (golden oldie here) how much time does a PC spend on kernel tasks these days?
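Back-of-the-envelope, with purely illustrative numbers: Amdahl's law says

overall speedup = 1 / ((1 - p) + p / s)

so if a box spends p = 10% of its time in the kernel and the kernel part speeds up by s = 1.09 (the quoted 8-9%), the whole system gets 1 / (0.9 + 0.1/1.09) = about 1.008 -- under 1% overall. The 40% figure only matters if your workload lives in those "certain kernel parts".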

It's a Bad Idea. (4, Funny)

Anonymous Coward | more than 4 years ago | (#27005667)

Personally, I am looking forward to the Low Performance Linux Kernel project.

You see, I'm a consultant and am paid by the hour.

Re:It's a Bad Idea. (3, Funny)

PrescriptionWarning (932687) | more than 5 years ago | (#27006209)

You could just use Vista in that case... why wait for something slower! Oh wait, you like waiting. Ok, wait away.

Average Joe, the ubergeek (0)

Anonymous Coward | more than 4 years ago | (#27005679)

FTA: the mirror has 2.6.22 on it currently, because there are a few changes after .22 that make compiling a little harder for the average Joe (but not impossible). Here is our first story in Linux Journal."
..because the average Joe compiles his Linux 2.6.22 kernel with the Intel C compiler. On Gentoo Linux! His neighbour, Sixpack Fred, on the other hand, can compile his latest kernel with the Intel compiler. On a C64. From 7 feet away. While humming all the instruments in Ride of the Valkyries.

compilers? (2, Insightful)

Fackamato (913248) | more than 4 years ago | (#27005697)

So GCC is slow compared to the Intel compiler?

Re:compilers? (1)

TheThiefMaster (992038) | more than 4 years ago | (#27006077)

Does GCC run faster if compiled with ICC?

That would take the biscuit.

Re:compilers? (3, Interesting)

gzipped_tar (1151931) | more than 5 years ago | (#27006253)

I can't judge because my experience with ICC is minimal. GCC is constantly improving, but I feel it concentrates more on platform support than performance. The GCC team has to work on ARM/MIPS/SPARC/whatever while ICC only need to work on x86.

So I'm not surprised to see GCC falling behind Intel in x86 performance. In fact, only recently did GCC begin to support local variable alignment on the stack, which I think is a basic optimization technique. (See the 4.4 pre-release notes http://gcc.gnu.org/gcc-4.4/changes.html [gnu.org] , search for the phrase "align the stack" in that page)

Re:compilers? (2, Informative)

dfn_deux (535506) | more than 5 years ago | (#27007117)

The GCC team has to work on ARM/MIPS/SPARC/whatever while ICC only need to work on x86.

ICC supports IA-32, Itanium 1 & 2, x86-64, and XScale. Not that it takes much away from your argument, but if you are going to argue the point you should at least make it accurate. Ah yeah, almost forgot to mention all the extended instruction sets too... SSE, SSE2, SSE3, MMX, MMX2, etc...

Re:compilers? (1)

gzipped_tar (1151931) | more than 5 years ago | (#27007311)

hmm, you are right, I was using the term "x86" rather loosely..

Re:compilers? (1)

Cheapy (809643) | more than 5 years ago | (#27008145)

ICC is better at optimization than GCC.

Re:compilers? (1)

SSCGWLB (956147) | more than 5 years ago | (#27008175)

I have used both GCC (v3.x, v4.x) and the Intel compiler (v10.x and v11) on Intel and AMD CPUs. Exact same code; the only change was the compiler. I don't remember a single case where icc-compiled binaries were slower; in general they were significantly faster.

Overall, we saw a 1% to 30% decrease in execution time. The performance improvement was application specific and most significant when doing a lot of math. As expected, applications that were not CPU bound received little if any performance improvement. Changing the compiler does not speed up your network, disk, or fix poor design decisions :)

Given the price of intel compilers, I think GCC is all the majority of the linux world needs.

Will this kernel run fast on AMD processors? (5, Interesting)

steveha (103154) | more than 4 years ago | (#27005817)

A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

http://techreport.com/discussions.x/8547 [techreport.com]

But perhaps they have cleaned this up before the 10.0 release:

http://blogs.zdnet.com/Ou/?p=518 [zdnet.com]

steveha

Re:Will this kernel run fast on AMD processors? (4, Interesting)

Jah-Wren Ryel (80510) | more than 5 years ago | (#27006195)

A few years ago someone figured out that Intel's compiler was engaged in dirty tricks: it inserted code to cause poor performance on hardware that did not have an Intel CPUID.

It wasn't necessarily malicious; all the compiler did was default to a "slow but safe" mode on CPUIDs that it did not recognize. Intel's reasoning was that they only tweaked the code for CPUs that they had qual'd the compiler against. Seeing as how they were Intel, they were not particularly interested in qualing their compiler against non-Intel chips. In hindsight, what they should have done is add an "I know what I'm doing dammit!" compilation flag that would enable optimizations anyway.

Re:Will this kernel run fast on AMD processors? (2, Insightful)

Anonymous Coward | more than 5 years ago | (#27006613)

It was completely intentional. Intel's CPUID protocol defines how to determine the capabilities of a CPU. AMD follows this protocol. Intel could have checked the CPUID for the level of SSEx support, etc. Instead they checked for the "GenuineIntel" string before enabling support for extra instructions that speed up many diverse activities (e.g. copying memory).

Perhaps your gullibility meter needs recalibration.
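The documented check really is trivial, for what it's worth. A sketch using the <cpuid.h> helper that gcc and icc ship (bit 26 of EDX from leaf 1 is SSE2):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    /* leaf 1 returns feature flags; works identically on Intel and AMD */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 26)))
        puts("SSE2 present: take the fast path");
    else
        puts("no SSE2: fall back");
    return 0;
}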

Re:Will this kernel run fast on AMD processors? (3, Insightful)

Tokerat (150341) | more than 5 years ago | (#27008993)

Ok I'll bite. By your logic, Intel should:

  • Spend the time and money to test competitors current CPUs against their compiler.
  • Take the blame when their compiler causes unforseen problems on current or newer models due to changes, or aspects they did not know to test for.

While I agree that something like --optimize_anyway_i_am_not_stupid would have been a good idea, does it make more sense for Intel to spend money and time making their competition faster? You'd need to make a lot of assumptions to think that optimizations for one CPU will work well for another, even from the same manufacturer. Besides, doesn't AMD have their own compiler?

Re:Will this kernel run fast on AMD processors? (5, Interesting)

Anonymous Coward | more than 5 years ago | (#27006783)

It wasn't necessarily malicious

Like Hell it wasn't. Read this and see if you still believe it wasn't malicious.

http://yro.slashdot.org/comments.pl?sid=155593&cid=13042922 [slashdot.org]

Intel put in code to make all non-Intel parts run a byte-by-byte memcpy().

Intel failed to use Intel's own documented way to detect SSE, but rather enabled SSE only for Intel parts.

Intel's C compiler is the best you can get (at least if you can trust it). It produces faster code than other compilers. So, clearly the people working on it know what they are doing. How do you explain these skilled experts writing a byte-by-byte memcpy() that was "around 4X slower than even a typical naive assembly memcpy"?

People hacked the binaries such that the Intel-only code paths would always be taken, and found that the code ran perfectly on AMD parts. How do you then believe Intel's claims that they were only working around problems?

I'm pissed at Intel about this. You should be too.

Re:Will this kernel run fast on AMD processors? (1)

thesupraman (179040) | more than 5 years ago | (#27008599)

It's their compiler, they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

Of course, developers are also free to therefore ignore the compiler, and hence this situation righted itself pretty quickly and naturally.

I wonder, do you consider RMS's current moves on GCC to also be 'malicious', since they could in effect result in lower performance for end users than is possible, and are defined along political lines?

Re:Will this kernel run fast on AMD processors? (2, Insightful)

palegray.net (1195047) | more than 5 years ago | (#27008883)

It's their compiler, they are damn well allowed to do what they want - call me when AMD pours that kind of resource into having their own compiler.

Sure, they can do what they want. But it's generally a bad idea to lie about what you've done once you're caught red-handed. You go from losing a lot of respect to nearly all respect in the minds of many customers.

Re:Will this kernel run fast on AMD processors? (3, Informative)

JohnFluxx (413620) | more than 5 years ago | (#27008939)

> Its their compiler, they are damn well allowed to do what they want - call me when AMD pour that kind of resource into having their own compiler.

ARM put money into GCC. That's far better than them trying to make their own compiler.

Re:Will this kernel run fast on AMD processors? (2, Informative)

Anonymous Coward | more than 5 years ago | (#27006403)

Nope, they have not changed that, and I think it is quite bad behavior for Intel.

However they do _not_ insert _bad_ code. What they do is prevent code optimized for the newest Intel CPUs from running on non-Intel CPUs, even if all the used instructions are present. I think -xW (use SSE, SSE2, optimize for Pentium4) is the highest that will run on AMD.

However, in almost all cases the Intel compilers will still produce the fastest binaries on AMD. Not only compared to GCC, but also compared to other commercial compilers like PGI (which has specific optimization flags for the latest AMD CPUs).

Re:Will this kernel run fast on AMD processors? (0)

Anonymous Coward | more than 5 years ago | (#27007925)

However they do _not_ insert _bad_ code.

Read this, and pay close attention to the discussion of the byte-at-a-time memcpy.

http://yro.slashdot.org/comments.pl?sid=155593&cid=13042922 [slashdot.org]

What they do is this (in C-like pseudocode):

if (cpuid() == "GenuineIntel")
        run_efficient_code();
else
        run_horrible_inefficient_code();

You could replace all of the above with

run_efficient_code();

So I think it is fair to say that they insert bad code.

Also, they use the GenuineIntel test to decide whether to enable SSE or not, instead of asking the chip if it supports SSE, as you noted.

Do the community a favor... (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#27005845)

And implement the performance benefits in GCC as well...

SSE, SSE2, SSE3, SSE4, etc. (1)

Enderandrew (866215) | more than 4 years ago | (#27005985)

I've always wondered: has anyone spent time developing kernel optimizations that kick in when specific instruction sets are detected?

letme google that for ya. (1)

emj (15659) | more than 5 years ago | (#27006107)

Not much.
http://www.google.com/codesearch?q=SSE2+package%3Akernel.org [google.com]

but do you really want to?

Re:letme google that for ya. (0)

Anonymous Coward | more than 5 years ago | (#27006173)

Errr, why not? If it gives more performance, why not? Are there any negative sides?

Re:letme google that for ya. (2, Informative)

setagllib (753300) | more than 5 years ago | (#27006421)

It severely cripples maintenance. Any optimisation, especially one that forks you into multiple parallel implementations (raw C, x86 asm, amd64 asm, amd64 ASM with SSE4, PPC, ....), has to be carefully weighed against its extra maintenance cost.

The parts that do benefit from optimisation, such as RAID parity calculation, symmetric encryption, etc. are already optimised. At any rate I think the kernel developers know a lot more about this than you or I do.

Re:letme google that for ya. (1)

Enderandrew (866215) | more than 5 years ago | (#27006265)

Both Intel and AMD have contributed code before. You figure if anyone knows how to optimize code for specific processor instruction sets it would be them. It would be a neat way for them to contribute.

We're going about the problem the wrong way (1)

doofusclam (528746) | more than 5 years ago | (#27006109)

Wouldn't it be better to fix GCC so it has the same optimisations?

Re:We're going about the problem the wrong way (2, Informative)

setagllib (753300) | more than 5 years ago | (#27006435)

That's being done too. GCC 4.3 with Profile-Guided Optimisation is SWEET. I don't think plain PGO can be run on a kernel (but that would be an awesome project), but it would definitely close the gap between ICC and GCC. ICC's PGO is not as good; or rather, ICC itself is better at making the kind of fuzzy predictions that PGO makes definite.
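For anyone who hasn't tried it, the gcc side is a two-pass build along these lines (file names are placeholders):

gcc -O2 -fprofile-generate -o app app.c   # instrumented build
./app typical-workload                    # run it to collect profile data
gcc -O2 -fprofile-use -o app app.c        # rebuild using the profile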

Who cares? (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#27006121)

No one uses Linux for anything important.

For desktops, OS X rules.
For servers, Windows Server 2008 is unbeatable.

Linux is for people too dumb or too cheap to use the right tool for the job.

Re:Who cares? (0)

Anonymous Coward | more than 5 years ago | (#27006259)

Looking around, without leaving my chair, I see a commercial firewall/VPN/router box running Linux. I see a commercial wireless access point running Linux. I see a commercial PBX running Linux. And I see two different embedded boards running Linux (admittedly our private design).

Yup...I guess you're right. No one uses Linux for anything important.

Re:Who cares? (1)

Troy Baer (1395) | more than 5 years ago | (#27006339)

No one uses Linux for anything important.

Other than every supercomputer on the planet worth talking about, that is...

Unimpressed with ICC (4, Interesting)

Erich (151) | more than 5 years ago | (#27007033)

We tried ICC on our simulator. The result: 8% slower than GCC. On Intel chips. Specifying the correct architecture to ICC.

We were not impressed.

Re:Unimpressed with ICC (1)

thesupraman (179040) | more than 5 years ago | (#27008625)

I call BS. There are cases where GCC can beat ICC, but there are many more where ICC is significantly better.

My bet: either you are full of BS, or you 'tried' a rather specific and limited codebase.

I also suspect your codebase was developed under gcc and then just thrown at icc? Hmmmm?

ICC is a VERY impressive compiler; GCC is a quite good compiler. We are lucky to have both (and then a few other options as well).

This is ancient (2, Insightful)

scientus (1357317) | more than 5 years ago | (#27007299)

This kernel is so ancient that any possible performance gains are outweighed by newer kernels' performance, bug fixes, and improved driver support. Plus, why would someone want to toss away their freedom by using a non-free compiler? Also, does the Intel compiler work with AMD processors?

There is so much against this that it is useless. Until Intel open-sources it, it can work with up-to-date kernels, and it can work on all x86 and x86_64 compatible hardware (I'm not sure if this is a problem), I'm not interested.

Re:This is ancient (0)

Anonymous Coward | more than 5 years ago | (#27008585)

You don't even know the validity of your own objections? How can you expect to be taken seriously?

Javascript (1)

a09bdb811a (1453409) | more than 5 years ago | (#27007559)

As a certified and accredited software engineer, I think it's time for Linux to be re-written in Javascript. The competition between Chrome, Firefox, IE and Safari has resulted in incredibly fast Javascript interpreters, and if Axl Torvalds mandates a switch to JS, the kernel could automatically take advantage of these improvements. After all, the OS and the web are becoming one, and within 10 years all applications will be in the cloud, delivered via the raintubes.

Re:Javascript (1)

MichaelSmith (789609) | more than 5 years ago | (#27008789)

As a certified and accredited software engineer, I think it's time for Linux to be re-written in Javascript. The competition between Chrome, Firefox, IE and Safari has resulted in incredibly fast Javascript interpreters, and if Axl Torvalds mandates a switch to JS, the kernel could automatically take advantage of these improvements. After all, the OS and the web are becoming one, and within 10 years all applications will be in the cloud, delivered via the raintubes.

That way Apple will never be able to block you from booting Linux on the iPhone.

Thank you, I look forward to trying this. (1)

urbanriot (924981) | more than 5 years ago | (#27007847)

This is very relevant to my interests. We tried a while back to compile a Linux kernel with ICC and had issues too numerous to list. We do a lot of work with fluid dynamics and it's ALL CPU based - any increase in speed would be appreciated. With the economy the way it is, and a lot of companies shelving projects, budgeting for new clusters isn't on the list of priorities.

Re:Thank you, I look forward to trying this. (2, Informative)

gzipped_tar (1151931) | more than 5 years ago | (#27008263)

I'm afraid the kernel speedup won't help you much. Since you're doing fluid physics, I guess the hotspots are in the floating-point math, and your code doesn't context-switch often. In that case, kernel speed isn't that important.

Well, I'm just saying it. I hope I'm wrong :)

Re:Thank you, I look forward to trying this. (2, Insightful)

thesupraman (179040) | more than 5 years ago | (#27008643)

It depends: if the system is distributed, the hotspots (i.e. performance bottlenecks) could quite easily be in network latency and throughput, something that could reasonably be impacted here.

Of course if it's not, you are 100% right. However, don't underestimate the proportion of CPU time the kernel spends in some situations (databases and distributed apps, for example).
