Linux x32 ABI Not Catching Wind

Soulskill posted about 3 months ago | from the try-a-bigger-sail dept.

Software 262

jones_supa writes "The x32 ABI for Linux allows the OS to take full advantage of an x86-64 CPU while using 32-bit pointers and thus avoiding the overhead of 64-bit pointers. Though the x32 ABI limits the program to a virtual address space of 4GB, it also decreases the memory footprint of the program and in some cases can allow it to run faster. The ABI has been talked about since 2011 and there's been mainline support since 2012. x32 support within other programs has also trickled in. Despite this, there still seems to be no widespread interest. x32 support landed in Ubuntu 13.04, but no software packages were released. In 2012 we also saw some x32 support out of Gentoo and some Debian x32 packages. Besides the kernel support, last year we also saw support for the x32 Linux ABI land in Glibc 2.16 and GDB 7.5. The only Linux x32 ABI news Phoronix had to report on in 2013 was of Google wanting mainline LLVM x32 support and other LLVM project x32 patches. The GCC 4.8.0 release this year also improved the situation for x32. Some people don't see the ABI as worthwhile: it still requires 64-bit processors, and the performance benefits aren't convincing enough across workloads to justify maintaining an extra ABI. Would you find the x32 ABI useful?"
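For the curious, the quickest way to see what the summary is talking about is to build one trivial program against each target and compare basic type sizes. A minimal sketch, assuming a multilib GCC with the x32 runtime libraries installed (-mx32 is GCC's x32 target; library availability varies by distro):

<ecode>
/* abi_sizes.c -- build the same file three ways and compare.
 *
 *   gcc -m64  abi_sizes.c -o sizes64    # x86-64: 64-bit pointers, 16 GPRs
 *   gcc -mx32 abi_sizes.c -o sizesx32   # x32:    32-bit pointers, 16 GPRs
 *   gcc -m32  abi_sizes.c -o sizes32    # i386:   32-bit pointers, 8 GPRs
 */
#include <stdio.h>

int main(void)
{
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(size_t) = %zu\n", sizeof(size_t));
    return 0;
}
</ecode>

The -m64 build prints 8 for all three; the -mx32 and -m32 builds print 4, the difference between those two being that x32 code still uses the full amd64 register set and instruction encodings.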

262 comments

no (4, Insightful)

Anonymous Coward | about 3 months ago | (#45778949)

no

Re:no (4, Insightful)

mlts (1038732) | about 4 months ago | (#45779627)

For general computing, iffish.

For embedded computing where I am worried about every chunk of space, and I can deal with the 3-4 GB RAM limit, definitely.

This is useful, and IMHO, should be considered the mainstream kernel, but it isn't something everyone would use daily.

Subject (1, Insightful)

Daimanta (1140543) | about 3 months ago | (#45778951)

With memory being dirt cheap I ask: Who cares?

Re:Subject (3, Insightful)

mellon (7048) | about 3 months ago | (#45779103)

Memory? What about cache? Is cache dirt cheap?

Re:Subject (0)

Anonymous Coward | about 4 months ago | (#45779293)

Yes. And the next iteration of CPUs will DOUBLE it (or quadruple it).

Re:Subject (1)

TheRealMindChild (743925) | about 4 months ago | (#45779315)

Sort of. It will be in the form of L4 or even a next layer, L5 cache. While this is still faster than grabbing system memory, we are approaching the point where it isn't.

Re:Subject (4, Insightful)

mellon (7048) | about 4 months ago | (#45779327)

In answer to my question, no, it is not dirt cheap. For any size cache you will get fewer cache misses if your data structures are smaller than if they are larger. Until the cache is so big that everything fits in it, you always win if you can double what you can cram into it.
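To put rough numbers on mellon's point, here's a small sketch: a typical doubly-linked node carries two pointers and a key, which is about 24 bytes under the 64-bit ABI but only 12 under x32's ILP32 model, so roughly twice as many nodes fit in each 64-byte cache line.

<ecode>
/* node_density.c -- illustrative only; compile with -m64 and -mx32
 * (if your toolchain has x32 libraries) and compare the output. */
#include <stdio.h>

struct node {
    struct node *next;   /* 8 bytes on x86-64, 4 bytes on x32 */
    struct node *prev;
    int key;
};

int main(void)
{
    /* ~24 bytes per node on x86-64 (with padding), ~12 bytes on x32,
     * i.e. roughly 2 vs 5 nodes per 64-byte cache line. */
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    printf("nodes per 64-byte cache line ~ %zu\n", 64 / sizeof(struct node));
    return 0;
}
</ecode>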

Re:Subject (1)

ultranova (717540) | about 4 months ago | (#45779593)

Until the cache is so big that everything fits in it, you always win if you can double what you can cram into it.

Which is all nice and good except this implies your data structure was mostly pointers to begin with, so if you want to increase cache efficiency forget about pointer size and redesign them for better locality.

I suspect this is the real reason why this ABI has not caught wind: anyone who cares has already taken steps that render it pointless.

Re:Subject (4, Informative)

dmbasso (1052166) | about 4 months ago | (#45779651)

Which is all nice and good except this implies your data structure was mostly pointers to begin with

And that's exactly the case of scripting languages, where every structure (say, a Python object) is a collection of pointers to methods and data.

Re:Subject (5, Interesting)

KiloByte (825081) | about 3 months ago | (#45779105)

For some workloads it's ~40% faster vs amd64, and for some, even more than that vs i386. In the typical case, though, you'll see around a 7% speed and 35% memory improvement over amd64.

As for memory being cheap, this might not matter on your home box where you use 2GB of 16GB you have installed, but vserver hosting tends to be memory-bound. And using bad old i386 means a severe speed loss due to ancient instructions and register shortage.

Re:Subject (1)

Anonymous Coward | about 4 months ago | (#45779645)

This needs to be modded up since these two points are exactly where the benefit lies and how non-trivial the benefit is!

vs amd64: much lower memory bandwidth usage and much higher cache density
vs i386: more registers

Despite what people seem to think, most binaries will probably never need 64-bit addressing. After all, look at your current process list: how many of those processes are anywhere near 4GiB of virtual size?

Memory might be cheap to buy but it sure as hell isn't cheap to access (especially when you have several cores fighting for it on a bus shared with a GPU and a display controller powering a high-res display).

Re:Subject (4, Interesting)

Evan Teran (2911843) | about 3 months ago | (#45779107)

It's not just about "having enough RAM". While that certainly is a factor, it's not the only one. As you suggest, pretty much everyone has enough RAM to run just about any normal application with 64-bit pointers.

But if you want speed, you also have to pay attention to things like cache lines. 64-bit pointers often mean larger instructions have to be encoded to do the same work, and larger instructions mean more cache misses. This can be a large difference in performance.

Re:Subject (0)

Anonymous Coward | about 3 months ago | (#45779111)

Cache is not cheap.

Re: Subject (1)

Anonymous Coward | about 4 months ago | (#45779165)

Cache requires cash.

Re: Subject (0)

Anonymous Coward | about 4 months ago | (#45779247)

Cache is king

Re: Subject (0)

Anonymous Coward | about 4 months ago | (#45779253)

A kernel requires a colonel.

Re: Subject (0)

Anonymous Coward | about 4 months ago | (#45779423)

/popcorn sir!

Re:Subject (0)

Anonymous Coward | about 4 months ago | (#45779235)

You can still work with 32 (16,8) bit _data_ just fine on x86-64, and you still have short jumps, and you have plenty of registers to keep the data in on amd64.

Added cache pressure is not that huge, really.

Re:Subject (0)

Anonymous Coward | about 4 months ago | (#45779169)

With memory being dirt cheap I ask: Who cares?

To put into some embedded thing-a-whatcha-ma-call-it that has limited memory?

Re:Subject (1)

ProzacPatient (915544) | about 4 months ago | (#45779179)

Desktop memory is cheap but ECC server memory can be very expensive

Re:Subject (2, Insightful)

Anonymous Coward | about 4 months ago | (#45779453)

ECC memory is artificially expensive. Were ECC standard as it ought to be, it would only cost about 12.5% more. (1 bit for every byte) That is a pittance when considering the cost of the machine and the value of one's data and time. It is disgusting that Intel uses this basic reliability feature to segment their products.

Sometimes efficiency and performance come first (0)

Anonymous Coward | about 4 months ago | (#45779205)

- Having smaller data structures is much better for the small 64-byte cache lines of modern CPUs.
- And sometimes it really seems a waste to use 8 bytes to do some addressing... And you can't always use offsets, and then you'd still need a base pointer. I'm coding a behavior tree with continuations and function pointers, and those pointers take up a lot of space in the behavior tree stream and the evaluation stacks...

See Andrei Alexandrescu's Facebook NYC talk on C++ performance, specifically the part about using 32-bit data in 64-bit compiled C++.

regards,
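A hedged sketch of the trick the parent is describing for that behavior tree (all names here are invented for illustration): keep the whole tree in one arena and store 32-bit offsets from its base instead of raw pointers, widening them only at the point of dereference. That recovers most of the space saving on a plain x86-64 build, no special ABI required.

<ecode>
/* offset_tree.c -- 32-bit offsets into a contiguous arena instead of
 * raw 64-bit pointers; names and layout are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BT_NONE UINT32_MAX          /* sentinel for "no node" */

struct bt_node {
    uint32_t first_child;           /* byte offset from the arena base */
    uint32_t next_sibling;
    uint32_t action;                /* index into a function table, not a pointer */
};

static inline struct bt_node *bt_at(void *arena, uint32_t off)
{
    return off == BT_NONE ? NULL : (struct bt_node *)((char *)arena + off);
}

int main(void)
{
    void *arena = calloc(1, 1 << 16);        /* one big allocation */
    if (!arena) return 1;
    struct bt_node *root = bt_at(arena, 0);  /* ...tree would be built here... */
    root->first_child = BT_NONE;
    printf("per-link cost: %zu bytes vs %zu for a raw pointer\n",
           sizeof(uint32_t), sizeof(void *));
    free(arena);
    return 0;
}
</ecode>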

Re:Sometimes efficiency and performance come first (1)

Rockoon (1252108) | about 4 months ago | (#45779375)

Having smaller data structures is much better for the small 64-byte cache lines of modern CPUs.

If your data structure includes pointers that you actually use, then you are randomly accessing memory anyway. If you aren't using those pointers, then I suggest 0-sized pointers, which are compatible with x64.

Re:Subject (3, Informative)

Reliable Windmill (2932227) | about 4 months ago | (#45779391)

You've not understood this correctly. x32 is an enhancement and optimization, primarily for performance, for executables that do not require gigabytes of RAM. It has nothing to do with the availability or lack of RAM in the system, or how much RAM costs in the computer store.

of course not (1)

rhubarb42 (887861) | about 3 months ago | (#45778967)

No. Time will fairly quickly diminish the value as 64-bit CPUs get faster.

Re:of course not (0)

Anonymous Coward | about 3 months ago | (#45779049)

Yeah, anybody who cares about this micro performance is rolling their own tuned gentoo anyway and getting far better performance. Who would want this, some niche embedded guys? I mean, more power to them, but this sort of post belongs on their mailing list.

Re:of course not (2)

ShanghaiBill (739463) | about 3 months ago | (#45779149)

Who would want this, some niche embedded guys?

Not many NEGs are using 64 bit processors, and this ABI offers too little advantage to bother with. Most embedded systems run a single primary process. If that process fits in a 4GB address space (as is required to use this ABI), then the system would just use a native 32 bit ABI on a 32 bit CPU, not this 32 bit ABI on a more expensive 64 bit CPU.

Eh? (3, Insightful)

fuzzyfuzzyfungus (1223518) | about 3 months ago | (#45778973)

If I wanted to divide my nice big memory space into 32-bit address spaces, I'd dig my totally bitchin' PAE-enabled Pentium Pro rig out of the basement, assuming the rats haven't eaten it...

Re:Eh? (0)

Anonymous Coward | about 4 months ago | (#45779199)

But with x32 you would still get the full AMD64 instruction set.

Stupid (-1, Troll)

fnj (64210) | about 3 months ago | (#45778983)

Absolutely the stupidest idea in the history of computing. Utterly worthless.

It has some value for embedded systems (1)

Anonymous Coward | about 3 months ago | (#45779041)

It has value for embedded cost-sensitive systems, of which there are many.

If it came out a few years earlier, it would have been more prevalent.

ARM (1)

tepples (727027) | about 4 months ago | (#45779163)

I thought "embedded cost-sensitive systems" would be using ARM CPUs, not Intel or AMD x86-64 CPUs with 32-bit pointers.

Re: It has some value for embedded systems (1)

jmauro (32523) | about 4 months ago | (#45779177)

I think the embedded systems that need this would be better off just getting a faster 32-bit processor.

Re:Stupid (2)

mjrauhal (144713) | about 3 months ago | (#45779057)

x32 at least has some merit, unlike your grasp of the history of computing. (Just not very much and probably not worth the trouble; you can probably relate.)

Re:Stupid (0)

fnj (64210) | about 4 months ago | (#45779283)

Utterly pointless. Just use either i686 or x86_64. Not a shitty design that combines the disadvantages of both and is in no way better.

Re:Stupid (0)

Anonymous Coward | about 4 months ago | (#45779673)

"Disadvantage" of x86 in x32 - size of code and data reduced to ~75% of x86_64 version due to narrower pointers
"Disadvantage" of x64 in x32 - extra 16 registers meaning less shuffling for temp vars between memory and CPU.

Are you sure you know what "disadvantage" means?

Re:Stupid (2)

s.petry (762400) | about 3 months ago | (#45779075)

I would not go that far, since I'm sure a special case may exist, but that's exactly what it would be for. Hence the lack of massive wide-scale adoption, or of applications written for this, becomes a (what should be) obvious outcome.

If I'm custom Joe and see a workload that benefits from 32-bit vs. 64-bit OS constraints, I load a 32-bit OS. The reason we went to larger memory, however, means those special cases are extremely rare today. They happen more because "we can't get new hardware" than by choice.

Re:Stupid (0)

Anonymous Coward | about 3 months ago | (#45779137)

Absolutely the stupidest idea in the history of computing. Utterly worthless.

No, Linux x32 ABI breaking wind is the stupidest idea in the history of computing.

Re:Stupid (2)

Reliable Windmill (2932227) | about 4 months ago | (#45779407)

You've just misunderstood it. It is in essence a performance enhancement, and you would benefit from it simply by selecting the x32 target (instead of x86-64) when compiling.

You got it. (1)

Qwertie (797303) | about 3 months ago | (#45779001)

Some people don't see the ABI as being worthwhile when it still requires 64-bit processors

There's your answer. If I'm writing a program that won't need over 2GB, the decision is obvious: target x86. How many developers even know about x32? Of those, how many need what it offers? That little fraction will be the number of users.

Re:You got it. (0)

Anonymous Coward | about 4 months ago | (#45779185)

Dear clueless guy, if you had even a smidgen of background in assembly programming, you'd know about these things called "registers". AMD64 CPUs have a fuck-ton more registers than i686 CPUs. A fuck-ton of registers equals a fuck-ton of increased performance.

Re:You got it. (1)

Chalnoth (1334923) | about 4 months ago | (#45779445)

True. But for the vast majority of applications, that greater number of registers only translates into a small performance increase. I can potentially see x32 being useful for a rather small amount of heavily hand-optimized code (e.g. a massively optimized math or physics library), but for the vast majority of applications this performance benefit will be tiny.

To me, the real problem for the adoption of x32 is that so few programs on PC's need to worry that much about optimization. When it does become worthwhile for them to worry about optimization, there are likely to be many things that are more worthwhile to tackle for improving performance (e.g. algorithmic inefficiencies, using excessive I/O).

Re:You got it. (0)

Anonymous Coward | about 4 months ago | (#45779213)

Well, your decision-making is already flawed if you think that the only feature AMD64/EM64T adds is a bigger address space.

Re:You got it. (0)

Anonymous Coward | about 4 months ago | (#45779511)

It's not the only feature, but it is the lack of this feature that makes x32 utterly useless.
Do you know what ASLR stands for?
A large address space is the main barrier keeping bugs in applications from being exploited.
With x32, even with everything fully randomized, an attacker with a memory corruption/ROP bug can simply guess where the memory he is interested in will be and get in after a bit over a million attempts. If they have a few bots, your web server can be taken in minutes.
In real world implementations where the address space is further constrained, your numbers would be far worse than that.
Compare that to 1 success per 4.5E15 attempts, now tell me you don't need that address space.
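For what it's worth, the parent's numbers check out under the idealized assumption that every 4 KiB-aligned address in the space is an equally likely load address (real ASLR implementations randomize fewer bits, as the parent notes). A quick back-of-the-envelope check:

<ecode>
/* aslr_slots.c -- idealized count of page-aligned placements an attacker
 * would have to guess among; real-world entropy is lower than this. */
#include <stdio.h>

int main(void)
{
    unsigned long long slots32 = 1ULL << (32 - 12);  /* 2^20 ~ 1.05 million */
    unsigned long long slots64 = 1ULL << (64 - 12);  /* 2^52 ~ 4.5e15       */
    printf("32-bit space / 4 KiB pages: %llu placements\n", slots32);
    printf("64-bit space / 4 KiB pages: %llu placements\n", slots64);
    return 0;
}
</ecode>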

A simple reason... (0)

Anonymous Coward | about 3 months ago | (#45779017)

Gentoo isn't popular anymore. People don't feel the need to mess with compiler flags to try and squeeze 0.3% more performance out of the compiled program.

Re:A simple reason... (-1)

Anonymous Coward | about 4 months ago | (#45779193)

Most of us don't mess with compiler flags, idiot. A lot of us just disable building and linking to GNOME, GTK, and their crowd of shitty groupies.

Nice concept (3, Insightful)

Anonymous Coward | about 3 months ago | (#45779025)

I do not see many cases where this would be useful. If we have a 64-bit processor and a 64-bit operating system, then it seems the only benefit to running a 32-bit binary is that it uses a slightly smaller amount of memory. Chances are that is a very small difference in memory used. Maybe the program loads a little faster, but is it a measurable, consistent amount? For most practical use cases it does not look like this technology would be useful enough to justify compiling a new package. Now, if the process worked on existing 64-bit binaries and could automatically (and safely) decrease their pointer size, then it might be worthwhile. But I'm not going to re-build an application just for smaller pointers.

Re:Nice concept (0)

Anonymous Coward | about 3 months ago | (#45779119)

Well, the Wikipedia page says "On average x32 is 5–8% faster on the SPEC CPU integer benchmarks compared to x86-64 but it can as likely be much slower." So for some specific integer-heavy applications you might actually get a meaningful performance boost, while others can actually be slowed down.

I'm still not sure if x32 is worth all the hassle, but I said this to underline that it's not only about a smaller memory footprint.

Re:Nice concept (1)

loufoque (1400831) | about 3 months ago | (#45779143)

Any application that does heavy numerical computation should not be affected much by the ABI, if at all. All function calls are inlined inside the critical loop.

Re:Nice concept (1)

cnettel (836611) | about 4 months ago | (#45779233)

Any application that does heavy numerical computation should not be affected much by the ABI, if at all. All function calls are inlined inside the critical loop.

The ABI here also defines the size of all pointers. All pointers are 32-bit here. Any purely compute intensive application will not be affected much, but something including some complexity in data structures, with pointers, could possibly benefit a lot. On the other hand, if all your code does is traversing trees, you should seriously consider allocating them in one bunch and using internal indices (of smaller integer type) rather than native pointers anyway.
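A minimal sketch of that index-based layout (assuming the whole tree fits in one allocation; names are made up): children become 32-bit indices into a single node array, so the per-node cost stays the same whether you build for x32 or plain x86-64.

<ecode>
/* index_tree.c -- binary tree stored as one array with uint32_t child
 * indices instead of child pointers; sketch only. */
#include <stdint.h>
#include <stdlib.h>

#define NIL UINT32_MAX              /* sentinel for "no child" */

struct tree_node {
    int32_t  key;
    uint32_t left;                  /* index into tree.nodes, or NIL */
    uint32_t right;
};

struct tree {
    struct tree_node *nodes;        /* one contiguous allocation */
    uint32_t count, capacity;
};

static uint32_t tree_new_node(struct tree *t, int32_t key)
{
    uint32_t i = t->count++;        /* caller ensures count < capacity */
    t->nodes[i] = (struct tree_node){ .key = key, .left = NIL, .right = NIL };
    return i;
}

int main(void)
{
    struct tree t = { malloc(16 * sizeof(struct tree_node)), 0, 16 };
    uint32_t root = tree_new_node(&t, 42);
    t.nodes[root].left = tree_new_node(&t, 7);
    free(t.nodes);
    return 0;
}
</ecode>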

Re:Nice concept (1)

loufoque (1400831) | about 4 months ago | (#45779371)

Number crunching rarely involves any pointers in the critical parts; the only exception I can think of is sparse matrices, which are actually usually done with fixed-size indexes rather than pointers.
Game engines however probably have a lot of trees of pointers for their scene graph, so they could be affected. But if they're well-optimized, they're designed so that each level fits exactly inside a cache line, and changing the size of the pointers will mess that up.
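That cache-line constraint is easy to make explicit in code, and doing so shows why a blind x32 recompile isn't automatically a win for such engines. A hedged sketch with an invented layout: the node below is hand-padded to exactly 64 bytes under the 64-bit ABI, and the assertion fires under -mx32 until the padding is retuned.

<ecode>
/* scene_node.c -- a node sized to one 64-byte cache line for the LP64 ABI;
 * shrink the pointers (e.g. build with -mx32) and the assertion fails. */
#include <stdint.h>

struct scene_node {
    struct scene_node *parent;        /* 8 bytes each on x86-64 */
    struct scene_node *first_child;
    struct scene_node *next_sibling;
    float    transform[9];            /* 36 bytes */
    uint32_t flags;                   /* 4 bytes -> 64 total */
};

_Static_assert(sizeof(struct scene_node) == 64,
               "scene_node no longer fills exactly one cache line");

int main(void) { return 0; }
</ecode>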

Re:Nice concept (1)

mjrauhal (144713) | about 3 months ago | (#45779121)

You misunderstand the desired impact. "Loads a little faster" doesn't really enter into it. It's rather that system memory is _slow_, and you have to cram a lot of stuff into CPU cache for things to work quickly. That's where the smaller pointers help, with some workloads. Especially if you're doing a lot of pointer-heavy data structure computing, where you often compile your own stuff to run anyway.

Still not saying it's necessarily worth the maintenance hassle, but let's understand the issues first.

Re:Nice concept (2, Informative)

maswan (106561) | about 3 months ago | (#45779133)

The main benefit is that it runs faster. 64-bit pointers take up twice the space in caches, and especially L1 cache is very space-limited. Loading and storing them also takes twice the bandwidth to main memory.

So for code with lots of complex data types (as opposed to big arrays of floating point data), that still has to run fast, it makes sense. I imagine the Linux kernel developers' No. 1 benchmark of compiling the kernel would run noticeably faster with gcc in x32.

The downside is that you need a proper, fully functional multi-arch system, like the one slowly being adopted by Debian, in order to handle multiple ABIs. And then you get into iffy questions of whether you want the faster /usr/bin/perl or one that can handle 6-gig lists efficiently...

Re:Nice concept (2)

sribe (304414) | about 4 months ago | (#45779291)

So for code with lots of complex data types (as opposed to big arrays of floating point data), that still has to run fast, it makes sense.

Well, here's the problem. Code that is that performance-sensitive can often benefit a whole lot more from a better design that does not have so many pointers pointing to itty-bitty data bits. (For instance, instead of a binary tree, a B-tree with nodes that are at least a couple of cache lines, or maybe even a whole page, wide.) There are very very few problems that actually require that a significant portion of data memory be occupied by pointers. There are lots and lots of them where the most convenient data structure uses lots of pointers, but if you're going to optimize how much you can cram in cache at once, eliminating pointers is better than shrinking them. Also, in many cases (such as the example I mentioned earlier), chunking things instead of pointers to individual items can greatly improve locality of access. And finally, of course, the irony is an awful lot of problems that are so performance-sensitive need the high performance precisely because they're dealing with large amounts of data. So yeah, it could be useful--but the problems where it is really useful are probably extremely limited.

The downside is that you need a proper, fully functional multi-arch system, like the one slowly being adopted by Debian, in order to handle multiple ABIs. And then you get into iffy questions of whether you want the faster /usr/bin/perl or one that can handle 6-gig lists efficiently...

You also get into the problem that having two sets of libraries in use is not exactly good for cache pressure ;-)
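Picking up sribe's B-tree suggestion above, here's a rough sketch of what "chunking instead of pointers" can look like (sizes invented for illustration): keys and 32-bit child links are packed into a node two cache lines wide, so most of the node is payload and the layout doesn't change between ABIs at all.

<ecode>
/* btree_node.c -- a wide B-tree-style node sized to two 64-byte cache
 * lines, with 32-bit child indices; sketch only. */
#include <stdint.h>

enum { BT_KEYS = 15 };

struct btree_node {
    uint32_t nkeys;
    int32_t  keys[BT_KEYS];           /* 60 bytes of key payload */
    uint32_t children[BT_KEYS + 1];   /* 64 bytes of 32-bit links */
};                                    /* 4 + 60 + 64 = 128 bytes total */

int main(void) { return 0; }
</ecode>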

Re:Nice concept (2)

Rockoon (1252108) | about 4 months ago | (#45779513)

64-bit pointers take up twice the space in caches, and especially L1 cache is very space-limited.

L1 cache is typically 64KB, which is room for 8K 64-bit pointers or 16K 32-bit pointers. Now riddle me this.. if you are following thousands or more pointers, what are the chances that your access pattern is at all cache friendly?

The chance is virtually zero.

Of course, not all of the data is pointers, but that actually doesn't help the argument. The smaller the percentage of the cache that is pointers, the less important their size actually is, for after all when 0% are pointers then pointer size cannot have any performance impact.

So the best case for your argument is when there are literally 8192 pointers sitting in the cache, where you would be able to instead fit 16384 pointers if they were 32-bit. But surely the act of following 16384 pointers in your access pattern is actually going to make the L1 cache 100% completely moot with a cache miss at literally every follow...

Why isn't it done dynamically? (1)

Anonymous Coward | about 4 months ago | (#45779473)

It's no big surprise that takeup is low when developers are forced to make a conscious choice between x32 ABI and full 64-bit operation for their entire program. It's the wrong approach.

A far better approach would have been to enhance the 64-bit ABI to allow 32-bit pointers to be used wherever the compiler can guarantee that pointer operations will remain within the 32-bit range. There is no shortage of such situations even in pointer-flexible C, and it's even easier to find such small-range use in more tightly constrained languages. It's even possible to start off a pointer as x32 and then promote it to 64-bit on casts or wherever it is no longer possible to track where it's pointing --- that would make 32-bit pointers usable part of the time in almost all programs.

Done that way, there would be no complaints of lack of x32 adoption. Everyone would be using it and benefiting from it to the greatest extent possible in their programs, without losing access to the full 64-bit space.

The either-or choice of 32-bit or 64-bit ABIs was a mistake.

Re:Nice concept (1)

LWATCDR (28044) | about 4 months ago | (#45779475)

Simple.
It is just as fast.
Takes less drive space.
Uses less memory.
As to rebuilding apps, it should be just a simple recompile, and yes, while memory is cheap it is not always available even today. What about x86 tablets on Atom? I mean, really, does ls need to be 64-bit? What about more?

Nope (0)

Anonymous Coward | about 3 months ago | (#45779027)

Dumb idea, we went 64 bit for a reason.

More than one reason for x86-64 (4, Interesting)

tepples (727027) | about 4 months ago | (#45779271)

we went 64 bit for a reason.

We went to x86-64 for three reasons: 64-bit integer registers, more integer registers, and 64-bit pointers. Some applications need only the first two of these three, which is why x32 is supposed to exist.

Who cares if I'll use it? (4, Interesting)

93 Escort Wagon (326346) | about 3 months ago | (#45779045)

The maintainer(s) find it interesting, and they're developing it on their own dime... so I don't get the hate in some of these first few posts. No one's forcing you to use it, or even to think about it when you're coding something else.

If it's useful to someone, that's all that matters.

It's not only RAM (4, Informative)

jandar (304267) | about 3 months ago | (#45779071)

The company I work for compiles almost all programs as 32-bit on x86-64 CPUs. It's not only cheap RAM that matters; it's also expensive cache, which is wasted on 64-bit pointers and 64-bit ints. Since 3 GB is much more than our programs are using, x86-64 would be foolish. I'm eagerly waiting for an x32 SuSE version.

Re:It's not only RAM (-1)

Anonymous Coward | about 4 months ago | (#45779187)

Well, this all reminds me of what Bruce Perens, the inventor of Perl, said about Linux:

"Linux is only free if your time has no value."

All this extra work for nothing. I run everything on a Hyper V cluster and have none of these issues. Things just work, and frankly they blaze.

Re:It's not only RAM (0)

Anonymous Coward | about 4 months ago | (#45779259)

but... I've heard* that it is generally better to compile in 64-bit mode, because the 32-bit part of the CPU is "legacy" and potentially less efficient than the 64-bit operations. Still, I agree with the 8-byte pointer and int concern, and I think we should benchmark those 3 approaches: 32-bit build, x32 build, and 64-bit build.

* http://vimeo.com/55639112
    Facebook NYC Tech Talk - Andrei Alexandrescu "Three Optimization Tips for C++"
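If anyone does want to run that three-way comparison, a naive pointer-chasing loop is enough to expose the difference the thread is arguing about. A sketch, assuming a multilib GCC with i386 and x32 libraries available:

<ecode>
/* chase.c -- naive pointer-chasing microbenchmark; build it three ways:
 *   gcc -O2 -m32  chase.c -o chase32
 *   gcc -O2 -mx32 chase.c -o chasex32
 *   gcc -O2 -m64  chase.c -o chase64
 * and time each run, e.g. "time ./chase64". */
#include <stdio.h>
#include <stdlib.h>

struct link { struct link *next; long pad; };

int main(void)
{
    const size_t n = 1u << 22;                    /* ~4M nodes */
    struct link *nodes = malloc(n * sizeof *nodes);
    size_t *order = malloc(n * sizeof *order);
    if (!nodes || !order) return 1;

    /* Build one long random cycle so the walk defeats the prefetcher. */
    for (size_t i = 0; i < n; i++) order[i] = i;
    srand(1);
    for (size_t i = n - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (size_t i = 0; i < n; i++)
        nodes[order[i]].next = &nodes[order[(i + 1) % n]];

    struct link *p = nodes;
    for (long i = 0; i < 100000000L; i++)         /* 100M dependent loads */
        p = p->next;

    printf("finished at node %td\n", p - nodes);  /* keep p observable */
    free(order);
    free(nodes);
    return 0;
}
</ecode>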
 

x32 is a premature optimization (3, Interesting)

bheading (467684) | about 3 months ago | (#45779081)

The idea makes sense in theory. Build binaries that are going to be smaller (32-bit binaries have smaller pointers compared with 64-bit) and faster (because the code is smaller, in theory cache should be used more efficiently and accesses to external memory should be reduced).

But I suspect the problem is that the benefits simply outweigh the inconvenience of having to run with an entirely separate ABI. I doubt the average significant C program spends a lot of time doing direct addressing, and as such I suspect the size benefits of using 32-bit pointers are overstated.

Re:x32 is a premature optimization (1)

mysidia (191772) | about 3 months ago | (#45779123)

But I suspect the problem is that the benefits simply outweigh the inconvenience of having to run with an entirely separate ABI.

Well; if the benefits outweigh the inconvenience --- then it seems x32 should be catching on more than it is.

Personally I think it is a bad idea because of the 4GB program virtual address space limit; which applications will be frequently exceeding, especially the server applications that would otherwise benefit the most from optimization.

Very excited and still waiting.. (0)

Anonymous Coward | about 3 months ago | (#45779095)

Seriously and no humor. I'm extremely excited about x32. I've been eagerly awaiting an install I can just "plonk" in and run with both for debian and "hopefully" one day NetBSD and OpenBSD where memory constrained devices are oft present and speed is always appreciated.

No.. seriously. Get the distro rolling guys. I'm excited and know several others that are moderately interested in giving it a go. :)

*hugs* .. Appreciate your efforts!

Maybe (1)

cold fjord (826450) | about 3 months ago | (#45779113)

It depends on the delta. There are still many 32bit problems out there, and there are plenty of cases where having extra performance helps. If you have enough of the right size problems you could even reduce the number of systems that you would need.

It looks like it could allow packing a single system tighter with less wasted resources.

Reducing the footprint of individual programs could also have some benefits from system performance / management, especially in tight resource situations.

One minor drawback is that you would need to structure your user execution and runtime environment to account for the additional executable format.

Pulling some of the architectural advantages of the 64bit architecture (number of registers, etc.) into 32bit land should be gravy. A lot of that will depend on exactly how they behave in 32bit mode.

Wont use Linux without it! (-1, Offtopic)

Billly Gates (198444) | about 3 months ago | (#45779153)

Until a stable ABI is available I will keep using Windows. I won't use Linux. It is FreeBSD, Windows Server, or Solaris (ugh, maybe not in this day and age!)

I came to slashdot as a BSDI and FreeBSD geek in 1999. I learned Linux afterwards. I know, unusual, as it's usually the other way around, but I never liked Linux as much other than as a quick way to try a desktop GUI out. Linux lacks sorely in this area.

In Unix I can run 15 year old apps no problem in FreeBSD and Solaris. Why? ABI. In Windows I can run updates and they won't break anything unless some app cough JAVA cough uses a security exploit for functionality. Why? Windows has an ABI. I can recompile and run 20 year old SunOS apps no problem with OpenSolaris. Try that with Linux?

Linux is the worst OS for desktops for this reason. I once worked a 2nd job in a PC shop and they won't touch Linux. Hairyfeet mentioned he tried Linux and people kept calling back angry that their printer stopped working after an Ubuntu update.

I did not even know it existed? I will keep Linux on a VM I suppose, but only CentOS, as Redhat likes to make somewhat-stable ABIs that do not break after each freaking update!

Re:Wont use Linux without it! (2, Insightful)

Anonymous Coward | about 4 months ago | (#45779241)

My dad drives a Ford and your dad drives a Chevy. Your dad sucks.

Didn't we do this already? Like when we were twelve years old.

Re:Wont use Linux without it! (2)

mjrauhal (144713) | about 4 months ago | (#45779295)

I could get into specifics but I shan't, because what you're blathering about has zero relevance for x32. It's not a replacement-to-be for the usual amd64 ABI, nobody is going to break amd64 to make x32 run. It's mostly a specialist tool for specific workloads (aside from being a hacker's playground, as are many things). Whether thinking it's useful as such is misguided or not, you're more so.

Great for smart phones (1)

MobyDisk (75490) | about 4 months ago | (#45779161)

This could have a home on smart phones. A smaller memory footprint is *key* on smartphone apps.

Seems reasonable. (1)

gallondr00nk (868673) | about 4 months ago | (#45779173)

There's plenty of applications around still without a 64 bit binary. From what I understand this layer just allows 32 bit programs to utilize some performance enhancing features of 64 bit architecture. It seems a genuinely good idea.

Re:Seems reasonable. (1)

cnettel (836611) | about 4 months ago | (#45779265)

There's plenty of applications around still without a 64 bit binary. From what I understand this layer just allows 32 bit programs to utilize some performance enhancing features of 64 bit architecture. It seems a genuinely good idea.

It allows 32-bit programs, which are *recompiled*, to benefit from those features. You still need the source and x32 builds of all dependencies. However, sometimes I guess there could be porting issues due to pointer size assumptions (but no other hard assumptions of x86 ABI behavior). Those codebases could not be recompiled for x64, but might port to x32 more easily.

Too little, too late (1)

TeknoHog (164938) | about 4 months ago | (#45779227)

x32 would have been nice as the first transition away from x86-32, but memory needs keep increasing, and we are far too used to full 64-bit spaces. In fact, it feels like we're finally over with the 32-64 bit transition, and people no longer worry about different kinds of x86 when buying new hardware. So introducing this alternative is a needless complication. As others have pointed out, it's too special a niche to warrant its own ABI.

Re:Too little, too late (1)

Reliable Windmill (2932227) | about 4 months ago | (#45779325)

It's not a complication, it's an enhancement. A majority of software does not need a 64-bit address space and can thus be streamlined while still getting the benefits of doing fast 64-bit integer math, among other things. Obviously you just select the target when compiling and that's that, it's like enabling an optimization, so what are you talking about?

freedom to do what you want is good. (0)

Anonymous Coward | about 4 months ago | (#45779237)

you just have to realize, that left to their own devices, most people do really stupid shit for no good reason.
it's just the way linux is, kinda like forcing the greatest democracy in the world to be run by a bunch of immigrants in the U.S.
if you don't appreciate the joke the universe is playing on you, you are probably very befuddled by the whole thing.

Is kernel still 64bit? (1)

ThePhilips (752041) | about 4 months ago | (#45779261)

General question about the x32 ABI: can the OS still use more than 4GB of RAM without penalties? IOW, is the kernel still 64-bit and only userspace x32? Or can x32 and pure 64-bit run alongside each other?

Anyway. Most performance-sensitive programs went 64-bit anyway - since RAM is cheap and there are a bunch of faster but memory-hogging algorithms.

Re:Is kernel still 64bit? (1)

mjrauhal (144713) | about 4 months ago | (#45779333)

The kernel needs to be an amd64 one for x32 to work, at least as things stand now. The most common situation would _probably_ be an amd64 system with some specialist x32 software doing performance intensive stuff. (Or possibly a hobbyist system running an all-x32 userspace for the hack value.)

Yeah, working with big data is unlikely to benefit, and data _is_ generally getting bigger.

Re:Is kernel still 64bit? (1)

Reliable Windmill (2932227) | about 4 months ago | (#45779351)

Of course the OS is still 64-bit in that regard, it's just the address space of that particular application which is reduced to 32-bit to streamline it. The majority of all executable files do not require several gigabytes of RAM, hence it makes sense to streamline their address space.

Re:Is kernel still 64bit? (1)

ThePhilips (752041) | about 4 months ago | (#45779505)

The majority of all executable files do not require several gigabytes of RAM, hence it makes sense to streamline their address space.

I know that. Many commercial *NIX systems are doing it. Though... having a 32-bit "cat" doesn't really change anything.

That's why I mentioned the memory-hungry algorithms. Many applications are doing that these days. Needless to say, java these days is started almost exclusively with "-d64".

The market for a 4GB address space is really small, because modern programming practices generally disregard resources, RAM in particular. (The (number of) CPUs being the most disregarded resource.)

What about shared libraries? (3, Insightful)

billcarson (2438218) | about 4 months ago | (#45779267)

Wouldn't this require all common shared libraries (glib, mpi, etc.) to be recompiled for both x86-64 and x32? What am I missing here?

ABI? (0)

Anonymous Coward | about 4 months ago | (#45779269)

ABI = Application Brogramming Interface?

The main use cases are vertically integrated (1)

BusterB (10791) | about 4 months ago | (#45779525)

Think Atom processors running Android, or high-performance computing applications. Neither of these requires a huge external ecosystem, but if you get a 30-40% boost in some workload, they are worth it. It's my understanding that small-cache Atoms benefit from this more than huge Xeons.

Supplant 32-bit ABI (-1)

Anonymous Coward | about 4 months ago | (#45779583)

The performance benefits of this approach are non-trivial for something that is just a recompile.

Eventually, I assume that all binaries which don't need 64-bit addressing (which will probably always be more than 90% of them) will switch to this ABI since having access to the extended register set without the overhead of all the bus bandwidth and cache space lost to store lots of zeroes is a HUGE win with zero cost.

While this is largely x86-specific (since x86 was badly lacking registers), I wonder if it is the only place where it is worth looking at it or if ARM64 would see a similar benefit due to changes in their register set, as well.

Re:Supplant 32-bit ABI (1, Interesting)

0123456 (636235) | about 4 months ago | (#45779679)

Eventually, I assume that all binaries which don't need 64-bit addressing (which will probably always be more than 90% of them) will switch to this ABI since having access to the extended register set without the overhead of all the bus bandwidth and cache space lost to store lots of zeroes is a HUGE win with zero cost.

Uh, no.

Really, no.

It's just not going to happen.

90+% of applications are not CPU-intensive, so they don't give a crap. 90% of the other applications that are CPU-intensive would benefit far more from removing pointer accesses than from making the pointers half the size. Only the remaining 1% are going to go through the hassle of dicking around with a complete second set of libraries on their system just so they can halve the size of their pointers.

There's simply no benefit at all from compiling the vast majority of desktop x86 applications in anything other than x86-64. Which is why no sane x86 distro is even going to consider using this kludge.
