
Red Hat Engineer Improves Math Performance of Glibc

jones_supa writes: Siddhesh Poyarekar from Red Hat has taken a professional look into the mathematical functions found in Glibc (the GNU C library). He has been able to provide an 8x performance improvement to the slowest path of the pow() function. Other transcendentals got similar improvements, since the fixes were mostly in the generic multiple-precision code. These improvements have already gone into glibc 2.18 upstream. Siddhesh believes that a lot of the low-hanging fruit has now been picked, but that this is definitely not the end of the road for improvements in multiple-precision performance. There are other, more complicated improvements, such as limiting the worst-case precision of the exp() and log() functions based on the results of the paper Worst Cases for Correct Rounding of the Elementary Functions in Double Precision (PDF). One still needs to prove that those results apply to the Glibc multiple-precision bits.
  • by Anonymous Coward

    I don't know much ASM, but a friend whose professional career is in programming told me that to really speed things up, you sometimes have to go the ASM way.

    Is that true?

    • by gangien ( 151940 ) on Friday January 02, 2015 @06:14PM (#48720889) Homepage

      It may, but it's pretty rare that it's worth it, and it also increases the maintenance cost. Though a function in glibc might be an exception.

      • It may, but it's pretty rare that it's worth it, and it also increases the maintenance cost. Though a function in glibc might be an exception.

        There's nothing rare about it. SIMD vectorization is useful in tons of applications.

        • by gangien ( 151940 )

          Really nothing rare about it? Maybe in your line of work. But since we're just talking about general programming here, it's quite rare.

        • by Pseudonym ( 62607 ) on Friday January 02, 2015 @08:49PM (#48721875)

          Indeed it is; however, it's still rare that you have to go to ASM in those cases. In simple cases the compiler already generates SIMD instructions for code that can benefit from them, and for almost all other cases there are C intrinsics.

        • by PhunkySchtuff ( 208108 ) <kai&automatica,com,au> on Friday January 02, 2015 @09:07PM (#48721957) Homepage

          It may, but it's pretty rare that it's worth it, and it also increases the maintenance cost. Though a function in glibc might be an exception.

          There's nothing rare about it. SIMD vectorization is useful in tons of applications.

          Yes, and modern compilers are quite good at generating code that takes advantage of extended instruction sets.

        • There's nothing rare about it. SIMD vectorization is useful in tons of applications.

          That's got little to do with ASM: GCC and others offer compiler intrinsics so you can access the vector instructions from C or C++.

        • There's nothing rare about it. SIMD vectorization is useful in tons of applications.

          Vectorization is not the same as assembler. You can do vectorization just fine in a high-level language. Actually, SSE3 is an absolute pain in assembler, and trying to use it that way is misguided.
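
          To make the "you don't need assembler for SIMD" point concrete, here is a minimal sketch using x86 SSE intrinsics from plain C. The intrinsics come from <immintrin.h>; the function name and the alignment/length assumptions are mine, not from any particular library:

          #include <immintrin.h>
          #include <stddef.h>

          /* Add two float arrays four elements at a time with SSE intrinsics.
           * Sketch only: assumes n is a multiple of 4 and 16-byte-aligned pointers. */
          void add_arrays(float *dst, const float *a, const float *b, size_t n)
          {
              for (size_t i = 0; i < n; i += 4) {
                  __m128 va = _mm_load_ps(&a[i]);              /* load 4 floats */
                  __m128 vb = _mm_load_ps(&b[i]);
                  _mm_store_ps(&dst[i], _mm_add_ps(va, vb));   /* vector add + store */
              }
          }

          The compiler schedules the registers and instructions for you; you never touch assembler, yet the generated code uses the same SSE instructions you would have written by hand.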

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Compilers can't really compete with experts on performance, given enough time. If you are unfortunate enough to look into GCC, you can find some assembly here and there in performance-critical portions of the code. The major downsides are: you have to write it for each architecture you want to support (GCC supports many archs, so this is a really big deal), fewer people can read/write/fix it, bugs slip in more easily, and it takes longer to write.

      • by Pseudonym ( 62607 ) on Friday January 02, 2015 @08:55PM (#48721909)

        Not just every architecture. In general, you may need to write it for every major revision of every architecture. As CPU pipelines and instruction sets change, the hand-crafted assembler may no longer be optimal.

        (Exercise: Write an optimal memcpy/memmove.)
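
        For reference, the textbook version that all the hand-tuned, per-CPU implementations are competing against looks roughly like this (a sketch; real implementations copy a word or a vector at a time, align the pointers first, and pick prefetch strategies per microarchitecture):

        #include <stddef.h>

        /* Naive byte-at-a-time copy: correct and portable, but far slower than
         * a tuned memcpy on any modern CPU. */
        void *naive_memcpy(void *dst, const void *src, size_t n)
        {
            unsigned char *d = dst;
            const unsigned char *s = src;
            while (n--)
                *d++ = *s++;
            return dst;
        }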

        • by perpenso ( 1613749 ) on Friday January 02, 2015 @11:02PM (#48722491)

          Not just every architecture. In general, you may need to write it for every major revision of every architecture. As CPU pipelines and instruction sets change, the hand-crafted assembler may no longer be optimal.

          (Exercise: Write an optimal memcpy/memmove.)

          I have some math code that I optimized in assembly for the Pentium Pro (686) back in the day. The performance improvements vs. C have been getting smaller and smaller, but they are still positive, at least through the Core Duo, which was the last time I had access to that code. Whenever we upgraded compilers or development systems, I would test my old assembly code against the reference C code.

          Regarding a case like your memcpy example: an assembly version may still be warranted. A piece of software may need to optimize for the low-end computers out there. So if the assembly is a win for the low end and neutral or a slight loss for the high end, it may still be the way to go. The low end is where the attention is needed to expand the pool of computers that meet the minimum system requirements; think video games. You have to optimize for the three-year-old computer, since it's the one having performance problems, not the new computer. And if it does matter on the high end, it's simple enough to have earlier generations of an architecture use the assembly and later generations use the reference C code. Fear of future systems is no reason to leave current systems hobbled.

          • Regarding a case like your memcpy example. An assembly version may still be warranted. A piece of software may need to optimize for the low end computers out there.

            Mac OS X has memcpy built into the operating system. At boot time, the OS determines the processor and copies a memcpy (and memmove, and memset) function optimised for that processor to a fixed memory location. If you copy megabytes, you will see it doing _very_ interesting stuff with cache prefetches and so on. C++ doesn't use these, unfortunately.

      • by mwvdlee ( 775178 )

        Compilers can't really compete with experts on performance, given enough time.

        This.
        The thing is that the vast majority of us aren't assembly experts.
        Compilers kick my ass except for the rarest of highly specific cases (all of them SIMD).

    • by RingDev ( 879105 ) on Friday January 02, 2015 @06:32PM (#48721019) Homepage Journal

      99.9% of the time, no.

      The purpose of the compiler is to identify and optimize the code structures of higher-level languages. Many, many tools and generations of compilers have been dedicated to just that. For the vast majority of cases, the compiler will do a better job and leave you with the much easier task of maintaining a high-level-language codebase.

      That said, there are specific operations, most frequently mathematical in nature, that are so explicitly well defined and unchanging that writing them in ASM may actually allow the author to take procedural liberties that the compiler doesn't know about or can't prove safe.

      The end result of such code is typically virtually unreadable. The syntax masks the math, and the math obfuscates the syntax. But the outcome is a thing of pure beauty.

      -Rick

      • 99.9% of the time, no.

        Said by someone who doesn't do anything involving multimedia or DSP. Compilers are horrendous at vectorization. It's why things like ffmpeg, x264, libjpeg-turbo, and pretty much any video/audio/image codec with decent performance have SIMD assembly for all platforms that support it. Otherwise they would be well over an order of magnitude slower.

        If you don't believe me, compile x264 without ASM optimizations, even at your compiler's highest optimization level, and watch how it slows to a crawl.

        • by JBMcB ( 73720 )

          Not to mention Intel Performance Primitives, which is basically a library of ASM blobs tailored for various combinations of SSE/AVX, and even for cache and pipeline characteristics, I believe.

        • Shouldn't it be possible to automate the optimization based on architecture constants? Number of registers, execution width, matrix dimensions, and depth of operation? I feel like if someone with ML experience profiled it, they'd have no trouble reducing the dimensionality and coming up with some tunables.

    • Comment removed based on user account deletion
      • As long as you can just guess the correct answer 100% of the time, why bother with calculations?
        Call it "The God Optimization".
      • sure thing, if you want to rewrite your code for every cpu architecture ...

        Nope. I only need to write assembly for the architectures I care about; all other architectures can have the reference C implementation.

        ... (and preferably also every generation of said architecture)

        Probably not. Performance tuning may only be necessary for the older architectures in order to lower required/recommended system requirements so the potential market is larger.

        Also assembly is often more durable, more long lived, than you might imagine. I have some math code that I wrote in assembly targeting the Pentium Pro (686) back in the day. Every once and a

      • by debiansid ( 881350 ) on Friday January 02, 2015 @11:09PM (#48722517) Homepage
        FWIW, glibc string functions are hand-written in assembly for each CPU variant that has a new feature it can use. Like SSE, AVX, AVX512, etc. Likewise for non-x86 processors.
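
        The per-CPU selection is usually done with GNU indirect functions (ifuncs): a resolver runs when the symbol is first bound and returns the variant to use. A minimal sketch of the mechanism for GCC on x86 (the my_memcpy_* variants are placeholders, not glibc's actual routines):

        #include <stddef.h>
        #include <string.h>

        /* Placeholder variants; in glibc these would be hand-written assembly
         * tuned for SSE2, AVX2, and so on. */
        static void *my_memcpy_generic(void *d, const void *s, size_t n) { return memcpy(d, s, n); }
        static void *my_memcpy_avx2(void *d, const void *s, size_t n)    { return memcpy(d, s, n); }

        /* Resolver: runs once at symbol-binding time and picks the variant
         * based on the CPU we are actually running on. */
        static void *(*resolve_my_memcpy(void))(void *, const void *, size_t)
        {
            __builtin_cpu_init();
            if (__builtin_cpu_supports("avx2"))
                return my_memcpy_avx2;
            return my_memcpy_generic;
        }

        /* The public symbol is an indirect function bound through the resolver. */
        void *my_memcpy(void *d, const void *s, size_t n)
            __attribute__((ifunc("resolve_my_memcpy")));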
    • by TheGavster ( 774657 ) on Friday January 02, 2015 @07:52PM (#48721561) Homepage

      I think that saying "This piece of code is going to be called a lot, so I'll implement it in assembler" is inadvisable. The more reasoned approach is "after profiling, my program spends a lot of time in this routine, so I'll go over the assembler the compiler generated to make sure it optimized correctly". The upshot being, it is useful to be able to read and write assembler when optimizing, but it would be rare that you would produce new code in assembly from whole cloth.

      • On that note, I am pretty sure the optimization was done in C in this case. The art is in the careful analysis of precision.

    • ASM for certain inner loops can be a good sixth step in the optimization process. It only matters after the algorithm is optimized, then optimized again, and then the system-level stuff is right (i.e., look at how the storage structures used by the program translate into actual physical operations).

    • by sjames ( 1099 )

      At one time it was absolutely true. Nothing could beat hand assembly. These days, the compiler can do a better job MOST of the time, and asm is only used for specialized implementations of low-level functions, like locking, that C wasn't designed to handle.
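
      To illustrate the kind of primitive that's meant, here is a rough sketch of a test-and-set spinlock using GCC inline assembly on x86. This is not glibc's actual lock (its lowlevellock adds futexes, backoff, and more careful memory ordering); the names and structure are mine:

      /* Minimal x86 spinlock: xchg is implicitly atomic, so keep swapping 1
       * into the lock word until we read back 0 (meaning we acquired it). */
      static void spin_lock(volatile int *lock)
      {
          int val;
          do {
              val = 1;
              __asm__ __volatile__("xchgl %0, %1"
                                   : "+r"(val), "+m"(*lock)
                                   :
                                   : "memory");
          } while (val != 0);
      }

      static void spin_unlock(volatile int *lock)
      {
          __asm__ __volatile__("" ::: "memory");  /* compiler barrier */
          *lock = 0;                              /* plain store suffices on x86 */
      }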

    • I have on several occasions taken assembler code and sped it up significantly by turning it into normal high-level-language code.

      To make an algorithm fast, you need to improve the algorithm (so that the number of operations it needs to perform is minimised) and improve the number of operations per time unit it can perform. Assembler _may_ help with the second task. However, writing assembler code is much, much more time consuming, so in a fixed amount of development time, mu
  • why ? (Score:5, Funny)

    by Anonymous Coward on Friday January 02, 2015 @06:09PM (#48720849)

    Who needs glibc anymore? We have systemd now.

  • excellent (Score:4, Interesting)

    by buddyglass ( 925859 ) on Friday January 02, 2015 @06:11PM (#48720861)
    I love reading stories like this. I'd love it if a concerted effort were made to further optimize various non-FP glibc methods as well. I assume they're already fairly well optimized, but I would be shocked if there weren't still some gains to be made. +10 internets to you, Mr./Ms. Poyarekar.
    • Given how long these libraries have existed, I am surprised there are still opportunities for improvement such as that described in the TFA.

      Back in the mid-80s I was involved in the design of a "mini Cray" supercomputer. We did not yet have any hardware to run on, but we did have a software simulator, and we wanted to publish some "whetstone" numbers. We got some numbers, were not too happy with them, and really dug in to analyze what we could do to improve them. The Whetstone code was in C, and used a f
      • This is just one data point, but I recently compared the BSD and GNU qsort() code. Built both with LLVM (on a Mac), tried both -O2 and -O3, and did some crude benchmarking. IIRC the BSD code was slightly more efficient, not to mention drastically simpler. BSD basically uses the Bentley & McIlroy code from "Engineering a Sort Function" with a few small optimizations to make it more adaptive (and exploit the fact that the algorithm is tail-recursive). So the BSD version is still recursive, whereas GNU
        • by sshir ( 623215 )
          Recursion makes for elegant code, but in production code it should be avoided, at least when you have no control over the input data. The reason is that there is a fairly tight restriction on stack size (you have to explicitly tell your OS that you need a large stack). As a result, you will core dump if your input is the right kind of nasty.
          • Agreed. It would be interesting to know the smallest number of crafted sort elements needed to cause BSD's qsort() to exceed a "normal" stack size, whatever that happens to be. If an array with that number of elements and a reasonable element size (probably sizeof(void*) since elements are almost always pointers) will always be larger than the largest system memory size then, in theory, we wouldn't need to worry about exceeding the available call stack.

            It may be the case, though, that it is possible to
            • by sshir ( 623215 )
              If the algorithm is made very unlucky with its selection of the pivot, then it will need one stack frame per two data elements. So yeah, I think it's doable (if pivot selection is stupid in some predictable way).
                • They use the method described in Bentley & McIlroy's paper, which uses a "median of medians" for large arrays, a simple median of first/last/middle for mid-sized arrays, and the mid-point for small arrays. See page 1255 here [skidmore.edu]. No idea how effective that is. They also short-circuit to insertion sort when N < 7 for a given sort window.
                • Slashdot ate my less-than symbol. They short-circuit to insertion when N < 7.
                • by sshir ( 623215 )
                  Very deterministic. Thus, easy to screw with. Just populate sample points with that worst case pivot value and voilà. Well... it needs to be done for all consecutive rounds so it's not completely trivial but doable.
                  • It should be noted that since both BSD and GNU versions rely on a stack (and neither chooses its pivot point randomly, so far as I know) they're both vulnerable to crafted data that's designed to cause the algorithm to exhaust its stack. It's possible GNU dynamically grows its custom stack when it nears the point of exhaustion, but I kind of doubt it.
                    • by sshir ( 623215 )
                      I would guess that that "custom stack" lives in the heap, so in that case the limitation on the program's stack size does not apply.
                    • It looks like GNU actually allocates its custom stack *on the stack*. However, in the comments they claim their algorithm guarantees that the stack need only be as large as log(sort elements), so they allocate a stack that is (in theory) guaranteed to be big enough. You can check out the code here [columbia.edu]. I'm not sure what version of glibc that's from, but it looks similar to what I extracted from 2.10 a while back.
                    • by sshir ( 623215 )
                      Yes, but they are clever about what and when they put on that stack. In their case it's indeed no more than log base 2 frames (non-overlapping intervals, with the biggest one larger than the rest combined, and the "rest" following the same pattern).
            • It's quite trivial in the quicksort algorithm to limit the stack size. Split the range you are sorting using a pivot element, then push the larger range on the stack and start again with the smaller range. The new range is always half the size of the original range or less, so the stack depth is at most log2(n).
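
              A minimal sketch of that trick in C, assuming a naive middle-element pivot (recurse into the smaller partition, loop on the larger one, so the pending-call depth never exceeds about log2(n)):

              /* Quicksort with recursion depth bounded by O(log n). */
              static void quicksort(int *a, int lo, int hi)   /* sorts a[lo..hi] */
              {
                  while (lo < hi) {
                      int pivot = a[lo + (hi - lo) / 2];
                      int i = lo, j = hi;
                      while (i <= j) {                /* partition around pivot */
                          while (a[i] < pivot) i++;
                          while (a[j] > pivot) j--;
                          if (i <= j) {
                              int t = a[i]; a[i] = a[j]; a[j] = t;
                              i++; j--;
                          }
                      }
                      /* Now a[lo..j] <= pivot <= a[i..hi], with j < i. */
                      if (j - lo < hi - i) {
                          quicksort(a, lo, j);        /* smaller side: recursion */
                          lo = i;                     /* larger side: loop */
                      } else {
                          quicksort(a, i, hi);
                          hi = j;
                      }
                  }
              }

              Call it as quicksort(arr, 0, n - 1); even adversarial input can only push the call depth to roughly log2(n), not n.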
          • As the algorithm is "tail recursive", it is safe to assume the compiler will apply tail-recursion optimization, which means it converts the recursive call into a loop.

          • Back when I was doing my PhD, the prevailing instruction to undergraduates was to favour iteration over recursion because it's faster. One of my colleagues rewrote some code to a single loop rather than recursive calls and found that on Linux it got slower, on Windows it stayed the same. Looking at the assembly, it turned out that managing his own stack on the stack generated significantly worse code with GCC than just using the call stack. On Windows, it was even more interesting: the Microsoft compiler

            • by sshir ( 623215 )
              On the stack size issue: the point was about malicious/pathological input.
      • so how much better was it?

    • by hublan ( 197388 )

      I was shocked to find how poor the performance of expf() was compared to exp() in glibc. Turns out that in a handful of functions, they are changing the rounding mode of the FPU, which flushes the entire FPU state, obliterating performance. After switching to a different version -- from another library -- that didn't change rounding modes, performance was back on par.
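
      For context, "changing the rounding mode" means <fenv.h> calls like the ones below; on many FPUs any write to the control/status word serializes or flushes floating-point state, which is where the cost comes from. A sketch of the save/compute/restore pattern (the computation itself is a stand-in):

      #include <fenv.h>

      #pragma STDC FENV_ACCESS ON

      /* Compute under round-to-nearest, then restore the caller's mode.
       * The fesetround() calls are the part that can be surprisingly costly. */
      double compute_with_known_rounding(double x)
      {
          int old_mode = fegetround();
          fesetround(FE_TONEAREST);
          double result = x * x + 1.0;   /* stand-in for the real computation */
          fesetround(old_mode);
          return result;
      }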

      It's perfectly understandable why rounding mode changes are necessary, since the FPU can be in any rounding mode coming in, and some guarantee

    • I assume they're already fairly well optimized

      You assume wrong. glibc is pretty poorly optimized in reality for most things. It works on god knows how many systems and is generally consistent, but optimization is not its strong point.

      If speed is your concern and you have a choice between Linux with glibc and a commercial OS, you pick the commercial OS and its libraries almost every time. I can't actually think of a case where this doesn't apply.

      It could be a lot worse, but it's certainly not as fast as it could be just about anywhere. I'm fairly cer

    • I don't know what the situation is like in GNU land, but in FreeBSD libc, we basically have two guys who care about libm (the library that implements the math.h stuff). One is a retired mathematics professor; I'm not sure what the background of the other is. There are a few other people who occasionally contribute to it, but no one else really wants to change it, because floating-point code is really hard to understand. You need to find someone who:

      • Is obsessive compulsive and likes optimising code.
      • Has a
  • How much benefit? (Score:4, Insightful)

    by scheme ( 19778 ) on Friday January 02, 2015 @06:36PM (#48721037)

    It looks like the slowest paths of the transcendental functions were improved by a lot. But how often do these paths get used? The article doesn't say, so the performance benefits may be insignificant.

    • Re:How much benefit? (Score:5, Informative)

      by Baloroth ( 2370816 ) on Friday January 02, 2015 @06:49PM (#48721117)

      It looks like the slowest paths of the transcendental functions were improved by a lot. But how often do these paths get used? The article doesn't say, so the performance benefits may be insignificant.

      From TFA, it sounds like the functions in question (pow() and exp()) work by first trying a look-up table/polynomial approximation to see if that gives an accurate-enough value, then running through a series of multiplications to calculate the result if the table/approximation method wouldn't be accurate enough. Since this work improved the actual calculation part, my guess is that it will improve quite a few cases. TFA does say the lookup-table method works in the "majority of cases", though it doesn't say exactly how big a majority, so it's hard to say.
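
      A rough sketch of that structure (a paraphrase of TFA's description, not glibc's actual code; the helper names, the error estimate, and the threshold are all invented for illustration):

      #include <math.h>

      #define ROUNDING_SAFETY_BOUND 1e-22   /* made-up threshold, illustration only */

      /* Placeholder stand-ins so the sketch compiles; the real fast path is a
       * table lookup plus polynomial, and the real slow path is glibc's own
       * multiple-precision arithmetic. */
      static double fast_table_poly_approx(double x, double y, double *err)
      {
          *err = 0.0;                /* pretend the fast result is always exact */
          return pow(x, y);
      }
      static double multiple_precision_pow(double x, double y) { return pow(x, y); }

      /* Schematic shape of a correctly rounded pow(): try the cheap approximation
       * first, fall back to multiple precision only when the result might round
       * incorrectly. The fallback is the path the Red Hat work made ~8x faster. */
      double my_pow(double x, double y)
      {
          double err;
          double approx = fast_table_poly_approx(x, y, &err);
          if (err < ROUNDING_SAFETY_BOUND)
              return approx;
          return multiple_precision_pow(x, y);
      }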

      • Curiously, the Red Hat dev did not comment on the average-case performance improvement, only on the slow-path improvement. I initially missed that in a quick reading, as, I suspect, did many others.

        • Curiously, the Red Hat dev did not comment on the average-case performance improvement, only on the slow-path improvement. I initially missed that in a quick reading, as, I suspect, did many others.

          It is difficult to compute the impact of this work on the average case because we don't know precisely how many of the inputs in the entire domain (i.e. all unique FP representations in an IEEE 754 double) are serviced by the slow path. I wasn't able to get the libm code to go down the mp path for the log function after a week of throwing different inputs at it. pow, on the other hand, hit it fairly regularly within an hour. Even the 'fairly regular' is about 1 in 1000 inputs, so that is not much. We know that it

    • It seems it's only the multiple-precision (mp) functions that are subject to this optimization, not the normal FP ones. My guess is that these aren't even used for soft floating point on FPU-limited systems (since knowing the exact size of an FP type allows writing more efficient routines).

      So the impact is very low, I guess.

    • The slow paths are "several thousand times" slower, according to the article. You only need to hit them rarely to see a significant degradation of performance.

      I was recently dealing with code that spent most of its time in pow(). Some basic algebra significantly simplified the math, and I could do away with the call to the function, but this shows its performance is a real-life concern for some people.
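
      Typical examples of the kind of algebra that removes pow() calls (hypothetical but representative; integer and half-integer exponents are the usual offenders):

      #include <math.h>

      /* pow(x, 2.0) is just a multiply. */
      double square(double x)  { return x * x; }

      /* pow(x, 1.5) == x * sqrt(x); sqrt is far cheaper than a general pow. */
      double pow_3_2(double x) { return x * sqrt(x); }

      /* pow(a, b) * pow(a, c) == pow(a, b + c) for a > 0: one transcendental
       * call instead of two. */
      double combined(double a, double b, double c) { return pow(a, b + c); }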

      • by thogard ( 43403 )

        When I was in high school we had a FORTRAN class, and one of the assignments was to print out as many Pythagorean triples as you could in the allowed 1 minute of run time. Most students would start with the power and square-root functions, which would produce about a page and a half of results, of which one was wrong because of rounding errors. Going from A^2 to A*A would get you far more pages. The system had a multiply-accumulate function that worked very well, so a few changes in a formula could double the number
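
        A sketch of the integer-only approach in C rather than FORTRAN (the point is the same: a*a + b*b == c*c needs no power function, no square root, and no rounding):

        #include <stdio.h>

        /* Brute-force Pythagorean triples using integer multiplies only,
         * so there are no rounding errors and far less work per candidate. */
        int main(void)
        {
            for (int c = 1; c <= 100; c++)
                for (int b = 1; b < c; b++)
                    for (int a = 1; a <= b; a++)
                        if (a * a + b * b == c * c)
                            printf("%d %d %d\n", a, b, c);
            return 0;
        }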

    • It looks like the slowest paths of the transcendental functions were improved by a lot. But how often do these paths get used? The article doesn't say, so the performance benefits may be insignificant.

      They don't get used very often. I don't know exactly how often, though, because that would require running the functions for the entire domain of inputs, which would take years for the univariate functions and pretty much forever for pow. If you want anecdotal data then I have that: I didn't hit the slow path of log for the entire week I threw different inputs at it. pow hit quite a few in an hour of running, but even that was about 1 in 1000 or less. However, since there is no pattern to determine which interval of i

  • by thogard ( 43403 ) on Friday January 02, 2015 @11:10PM (#48722525) Homepage

    Newer hardware can make use of newer features, which changes what should be considered the best optimisations. Addition used to be much faster than multiplication until they put barrel multipliers in chips. Once floating-point cores were added, other things became faster, but the early FPUs could do things like add and multiply, and anything else could be very slow. I wrote a floating-point library for OS-9 on the Radio Shack Color Computer, which had a 2 MHz 8-bit CPU with good 16-bit instructions and no floating-point hardware, and I could do trig and log functions faster than a 4.77 MHz 8087 floating-point unit. I could use tricks like bit shifting and de-normalising floating-point numbers for quick adds. There was one function where the typical Taylor series used a /3 + /5 + /7 type thing, but there was an alternate that used /2 + /4 + /8 and took more steps; since an integer CPU can divide by a power of 2 something like 50 times faster than it can divide by an odd number, doing the extra steps was faster. My library also took advantage of precision shortcuts, like simply not dealing with some of the low-order bits when that precision was going to be lost in the next step or two, which is something you simply can't do efficiently with current floating-point hardware.
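
    The power-of-two trick in miniature: in integer or fixed-point code, dividing by 2, 4, or 8 is a single shift, while dividing by 3, 5, or 7 needs a real division (or a multiply-by-reciprocal), which an 8-bit CPU with no divider had to do in a slow software loop. A small sketch, with a made-up fixed-point type:

    #include <stdint.h>

    typedef int32_t fix16;   /* fixed-point value with 16 fractional bits */

    /* Dividing by a power of two is one arithmetic shift (rounding differs
     * slightly from / for negative values). */
    static fix16 div_by_8(fix16 x) { return x >> 3; }

    /* Dividing by an odd constant needs an actual divide, which on an old
     * 8-bit CPU meant a long software routine. */
    static fix16 div_by_7(fix16 x) { return x / 7; }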
