
Linux Gets Kernel-Based Modesetting 81

An anonymous reader writes "Next month when Fedora 9 is released it will be the first Linux distribution with support for kernel mode-setting, which is (surprisingly) a feature end-users should take note of. Kernel-based modesetting provides a flicker-free boot process, faster and more reliable VT switching, a Linux BSOD, and, of most interest, much-improved suspend/resume support! The process of moving the modesetting code from the X.Org video driver into the Linux kernel isn't easy, but it should become official with the Linux 2.6.27 kernel, and the Intel video driver can already use this technology. Phoronix has a preview of kernel-based modesetting covering more of this new Linux feature, accompanied by videos showing the dramatic improvements in virtual terminal switching."
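For readers curious what this actually exposes to userspace, here is a minimal sketch (not from the article) of enumerating connectors and display modes through the kernel's DRM/KMS interface via libdrm. It assumes a modesetting-capable driver, a /dev/dri/card0 node, and libdrm's mode-setting API; details may differ from what ships alongside 2.6.27.

/* Sketch: list KMS connectors and their modes via libdrm.
 * Assumes a kernel driver with modesetting support; build with
 * something like: gcc kms_list.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);    /* primary graphics device */
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    drmModeRes *res = drmModeGetResources(fd);  /* CRTCs, encoders, connectors */
    if (!res) {
        fprintf(stderr, "driver does not expose KMS resources\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED) {
            for (int m = 0; m < conn->count_modes; m++)
                printf("connector %u: %s @ %u Hz\n",
                       conn->connector_id,
                       conn->modes[m].name,
                       conn->modes[m].vrefresh);
        }
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}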
  • Oh boy! (Score:5, Funny)

    by Malevolyn ( 776946 ) * <{signedlongint} {at} {gmail.com}> on Saturday April 19, 2008 @08:08PM (#23131094) Homepage
    Just what we need: a Linux BSOD!
    • Re: (Score:3, Interesting)

      by PenisLands ( 930247 )
      Indeed. It would be nice to actually have some helpful error messages so users can work out what on earth has gone wrong.
    • BSOD?!? So what happened? All the underpaid Micro$haft coders decide to play with Linux? The next thing you'll see is all Linux apps starting with i... iKonqueror, emac.. err imacs, even ililo? If I ever see Tux in a rainbow jacket munchin' an apple, I quit!
      • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Sunday April 20, 2008 @01:52AM (#23132852) Homepage Journal

        iKonqueror
        A Mac-style web browser using a KHTML fork? We already have that, except the i is at the other side of "Safari".

        emac.. err imacs
        Apple has made both iMac and eMac computers. The eMac was introduced when some teachers found the Luxo Jr. style iMac G4 not durable enough for the K-12 market.

        even ililo?
        You'll get iLilo only if you buy Apple's embroidery machine, the iStitch. Thanks to a deal between Apple and Disney, brokered with the help of Disney plurality shareholder Steve Jobs, the iStitch comes preloaded with Disney patterns.
      • by jcast ( 461910 )
        No, no, no. All Linux apps start with g!
    • by mrmeval ( 662166 )
      Yea what moron wanted that?
    • Just what we need: a Linux BSOD!
But we've had that for ages. System>Preferences>Look and feel>Screensavers...
  • Waitasec... (Score:5, Funny)

    by jamstar7 ( 694492 ) on Saturday April 19, 2008 @08:12PM (#23131124)
    A BSOD? Lemme guess, that patch came from Novell, right?
    • Funny as you may have been, I for one am looking forward to the BSOD patch. It would be quite helpful to actually be able to see a panic message even though an X server is running. Not that I'm encountering more than one every two years or so, but nonetheless.
  • by Bloater ( 12932 ) on Saturday April 19, 2008 @08:13PM (#23131130) Homepage Journal
It's about time. KGI was a patch to Linux many, many years ago that enhanced Linux graphics support in much the same way as combining this kernel modesetting with DRI (except that KGI had decent security measures designed in right from the start).

    As usual the old guard says something like "Graphics isn't relevant" and holds back progress for years on end.
    • by jd ( 1658 ) <imipak@ y a hoo.com> on Saturday April 19, 2008 @08:31PM (#23131258) Homepage Journal
KGI was a damn good system - somewhat overshadowed by GGI and other similar efforts, though, as the argument of the time was that the kernel shouldn't do what userspace can do. KGI might have stood a better chance if development had been faster, if some significant card could not have been made to work correctly in userspace, or if there had been a demonstrable vulnerability implied.

As I recall, there was also the argument that graphics in the kernel risked instability that would impact the system and be hard to trace. I can sympathize with this argument a bit more, but in the end it is true of all hardware drivers - hence the efforts of microkernel and exokernel developers to move such stuff into isolatable containers. It's a good idea, not terribly efficient because of all the message passing, but I can understand the reasoning.

      • by techno-vampire ( 666512 ) on Saturday April 19, 2008 @10:16PM (#23131906) Homepage
As I recall, there was also the argument that graphics in the kernel risked instability that would impact the system...


        How true that is! I once worked at a shop where everybody was on NT4, and my box kept blue-screening because of a bug in the graphics driver. Putting drivers like that in kernel space just to get a little more speed is downright stupid, especially when you consider that NT 4 was largely marketed as a server OS where graphics weren't exactly important. I can't help but feel, given my experience, that this isn't exactly the best of ideas.

        • by Bloater ( 12932 ) on Saturday April 19, 2008 @10:50PM (#23132066) Homepage Journal
KGI never put graphics into the kernel; it only put mode setting into the kernel and provided a means to communicate with graphics hardware other than exposing dumb MMIO to userspace. Individual drivers could do graphics in the kernel, but for most cards either dumb mapping could be used where it was secure, or userspace could fill a buffer with a list of writes to be done and the driver would check them for safety and then just perform the described writes. Most of the cards that would need a full kernel graphics driver were slower than software rendering.
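          To make that write-list idea concrete, here is a rough, hypothetical sketch (not actual KGI code - all names are invented for illustration): userspace fills a buffer describing register writes, and a trusted checker applies only the writes that fall inside a whitelisted register window.

/* Hypothetical illustration of a validated command buffer (not KGI code).
 * Userspace fills an array of register writes; the trusted side applies only
 * writes whose offsets fall inside a safe, whitelisted register window. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct reg_write {
    uint32_t offset;   /* byte offset into the device's register space */
    uint32_t value;    /* value the client wants written */
};

/* Pretend register window that is known to be safe to poke from userspace. */
#define SAFE_REG_BASE  0x1000u
#define SAFE_REG_LIMIT 0x2000u

/* Stand-in for the real MMIO write the kernel driver would perform. */
static void device_write(uint32_t offset, uint32_t value)
{
    printf("write 0x%08x to register 0x%04x\n", value, offset);
}

/* The "kernel side": validate every entry before touching the hardware. */
static int submit_writes(const struct reg_write *list, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t off = list[i].offset;
        if (off < SAFE_REG_BASE || off >= SAFE_REG_LIMIT || (off & 3))
            return -1;  /* reject the whole batch on any unsafe write */
    }
    for (size_t i = 0; i < count; i++)
        device_write(list[i].offset, list[i].value);
    return 0;
}

int main(void)
{
    struct reg_write batch[] = {
        { 0x1004, 0xdeadbeef },   /* fine: inside the safe window */
        { 0x0010, 0x00000001 },   /* unsafe: outside the window */
    };
    if (submit_writes(batch, 2) < 0)
        printf("batch rejected by checker\n");
    return 0;
}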
Thank you. I'm a software geek, not hardware, and I'm not into systems programming so much as luser support. I pick things up here and there that are useful, but I'd never claim to know everything. Live and learn, that's my motto!
        • by Bert64 ( 520050 )
And how many NT4 "servers" sat permanently displaying one of the crappy built-in OpenGL screensavers, using 100% CPU to do it?
      • by RAMMS+EIN ( 578166 ) on Sunday April 20, 2008 @12:19AM (#23132522) Homepage Journal
        ``KGI was a damn good system - somewhat overshaddowed by GGI and other similar efforts, though, as the argument of the time was that the kernel shouldn't do what userspace can do.''

        There is a point to that. On the other hand, it is questionable whether, in Linux, userspace _should_ be able to do all the things needed to drive the graphics card. Userspace directly accessing hardware and reading and writing arbitrary memory locations?

On the gripping hand, the reason this is unsafe is only that the languages we use are unsafe. They don't guarantee that processes don't access things that aren't theirs. Essentially, we solve this by imposing a sort of dynamic type checking: we run these unsafe processes in a restricted mode, where the hardware limits their access to memory and I/O ports. Of course, sometimes you _do_ need more than this restricted access, and that's what the kernel is for. We trust the kernel to do it right. But now we've introduced indirection into the process: to access the hardware, a process needs to go through the kernel, which (hopefully) restricts the process to only doing benign things to the hardware. This is, of course, slower than it could be, especially on x86, where switching from user mode to kernel mode is quite an expensive operation. This is the real reason why microkernels are slow.

        An alternative would be to have the compiler perform or insert the checks that, in current systems, are performed by the kernel and the hardware at run-time. This way, processes don't have to run in restricted mode and go through the kernel anymore, because they aren't going to do any of the things the kernel would prevent them from doing anyway. Of course, this requires a rather safer type system than C's, and it shifts trust from the kernel to the compiler - which raises issues about how you can know that the code you want to run was indeed compiled by a trustworthy compiler. However, these issues can be solved, and you end up with a system that can be more modular _and_ more efficient.
        • by jd ( 1658 )
          It doesn't help that GCC 4.2.x is getting a bad rap for potentially optimizing out some bounds-checking. A third option would be to require three rings, rather than two, and have the third ring perform potentially hazardous (to the OS) operations. This would require much faster context switching and better communications. Or maybe not. If you have a 4 core CPU and dedicate 1 core to the kernel, 1 core to hazardous operations, and leave the other 2 open to the user, you have no context switches, you only hav
        • by tepples ( 727027 )

          Userspace directly accessing hardware and reading and writing arbitrary memory locations?

          They're not so arbitrary. The kernel lets a display server process mmap two specific parts of memory, namely VRAM and the I/O registers. How is it any different from implementing a file system in user space [sourceforge.net]?

          Of course, this requires a rather safer type system than C's, and it shifts trust from the kernel to the compiler - which raises issues about how you can know that the code you want to run was indeed compiled by a trustworthy compiler.

Unfortunately, the common way to do this is to require that all code executing on a system be signed by the system's manufacturer. This is the case for TiVo, iPhone, and all video game consoles other than the HTPC, and such manufacturers tend to require contractual terms that shut out hobbyist developers
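          As a concrete illustration of the "mmap two specific parts of memory" point above, here is a small sketch using the long-standing Linux framebuffer interface (/dev/fb0), where the kernel exposes exactly one region - the framebuffer - for a process to map. It is only an example of the general mechanism, not of how X maps VRAM, and it needs the right permissions on /dev/fb0.

/* Sketch: the kernel lets a process map one specific region (the framebuffer)
 * rather than arbitrary memory. Uses the standard fbdev ioctls from linux/fb.h.
 * Needs read/write access to /dev/fb0 (usually root or the video group). */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }

    struct fb_fix_screeninfo fix;
    struct fb_var_screeninfo var;
    if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0 ||
        ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0) {
        perror("ioctl");
        close(fd);
        return 1;
    }

    /* The kernel grants access to this one region only, nothing arbitrary. */
    unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    printf("mapped %u bytes of VRAM, %ux%u @ %u bpp\n",
           fix.smem_len, var.xres, var.yres, var.bits_per_pixel);

    memset(fb, 0, fix.line_length);   /* clear the first scanline */

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}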

        • Re: (Score:3, Insightful)

          by ThePhilips ( 752041 )

          [...] the reason this is unsafe is only that the languages we use are unsafe.

          Yeeees. Right. Absolutely.

This has nothing to do with sloppy programming and a bunch of incompetent monkeys who managed to get to a keyboard because it is cool.

The difference between a good developer and a bad developer is that a good developer always listens to user feedback. To system developers, application developers are the users. To application developers, end-users are the users. To hardware developers, system developers

        • by Excors ( 807434 )

          An alternative would be to have the compiler perform or insert the checks that, in current systems, are performed by the kernel and the hardware at run-time. This way, processes don't have to run in restricted mode and go through the kernel anymore, because they aren't going to do any of the things the kernel would prevent them from doing anyway. Of course, this requires a rather safer type system than C's, and it shifts trust from the kernel to the compiler - which raises issues about how you can know that

        • Run time dynamic "type checking" versus a user mode to kernel mode context switch...I think there are probably ways to speed graphics up without ripping out the MMU and designing a new language to write the operating system in. You could, for example, use a soft real time scheduler, and guarantee that the process writing to mmio()'ed video memory be executed before refresh with enough time to actually write out. If your scheduling is good enough you can do away with the context switch altogether and just
      • by Kjella ( 173770 ) on Sunday April 20, 2008 @01:13AM (#23132708) Homepage

        as the argument of the time was that the kernel shouldn't do what userspace can do.
Well, from what I can read out of the description this has absolutely zero benefit for servers, so I figure the discussion in 1998 went a little differently. KDE 1.0 was released in July 1998 and Gnome 1.0 wasn't out either, and things like a "smooth graphical booting process" probably weren't a major issue, to say the least. There's always a balance between creating layers and hindering features - like ZFS, for example, which breaks the traditional file system model. At the time, I think it was probably right for Linux, as they had more important things to focus on.
        • Re: (Score:3, Informative)

          by Bloater ( 12932 )
          "They" (one group of kernel devs) didn't have more important things to work on than security in the face of 3d accelerated application support - which is why that group of kernel devs wrote KGI.

          Unfortunately "They" (a larger group of kernel devs who only switched out of EGA mode for multiple terminals on one screen, a group that seems to have included Linus Torvalds) thought that companies who were paid to provide realtime 3d rendered displays of data/media for whatever reasons (eg medical visualisation, pi
  • by Enleth ( 947766 ) <enleth@enleth.com> on Saturday April 19, 2008 @08:30PM (#23131250) Homepage
I've been trying it out since it became usable at all in the relevant git trees, with the Intel driver of course - and it works wonders. Probably one of the best inventions since sliced bread. Well, seriously, it will definitely help the authors of graphics drivers by providing a unified framework for all the modesetting kludges and simplifying the actual drivers, especially direct rendering. AFAIK all the new Radeon drivers (those made with the specifications AMD released) will be using it, as well as DRI2, so it's not only Intel GMA users who will benefit very soon.
    • Re: (Score:3, Funny)

      by JackieBrown ( 987087 )
      I wish I hadn't just bought a nvidia card.

      Being a Linux user, I never thought I'd say that.
      • by makomk ( 752139 )
        Nouveau (the reverse engineered open source driver) is making nice progress with pre-8000 series GeForce cards, and I think they're planning to eventually move to kernel modesetting. Of course, you don't get 3D support, and support for 8000-series and up still isn't in a usable state.

        This is all no thanks to NVidia, who haven't released any sort of specs (not even for modesetting, which they can't easily argue would help their competitors or reveal any big secrets).
  • by jensend ( 71114 ) on Saturday April 19, 2008 @08:36PM (#23131306)
    Does anybody have some insights on how this will affect those not using Linux kernels with this patch?
    Are the *BSDs and commercial Unices planning on similar work? Will support for modesetting eventually be dropped from X drivers?
  • Let's see:

    Left hand: Better Suspend/Resume Support
Right hand: Microsoft-style reliability, blue screens, and weird crash codes.

Did someone seriously think about this and decide it was a good idea? Rarely have I had a desire to criticize what Red Hat is doing, but between SELinux and BSODs, they really have me wondering what is going on over there.

    Here's to hoping this is one of those weekend articles that's just plain wrong.
    • BSOD explanation (Score:2, Informative)

      by Anonymous Coward
      BSOD here does not mean "Microsoft-style reliability".

Currently, if the kernel panics while X is shown, the machine just locks up.

With kernel mode-setting, the kernel will be able to switch out of X and print the panic to the screen. This is very helpful to developers and for bug reports.

      The downside is not decreased reliability, but that the normal user will panic too (and not just the kernel).

Of course, the more code we have in the kernel, the more reasons to oops, but that hardly happens on distribution kernels
  • by daffmeister ( 602502 ) on Sunday April 20, 2008 @07:08AM (#23133688) Homepage
    The final impediment removed to allow "the year of linux on the desktop".
This is an important feature for improving user experience, if nothing else, but I worry. New things are untested. Untested things have bugs. Buggy things don't always work well. Today's VT switching is flawless, even if it's a bit ugly. I worry about the reliability of this in crash scenarios.

    Suppose X is locked up. Today I can perform a VT switch and get a pretty responsive console, if the keyboard still responds. From there the offending program, or X, can be killed and the system recovered. Can I still do that?
    • Can I still do that? Probably, but I worry.

      Why do you worry? Why don't you simply find out whether what you want can still be done and, if the answer is no, then worry?...

    • by makomk ( 752139 )
      If anything, this should improve VT switching when X locks up. Currently, only X itself knows how to VT switch out of X, so if it really, totally locks up, you can't VT switch. With kernel modesetting, the kernel knows how to switch the mode back correctly even if X has died totally.
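      For anyone wondering what forcing that switch from outside looks like (e.g. over ssh when X is wedged), here is a small sketch using the standard console ioctls - essentially what chvt(1) does. It needs sufficient privileges on /dev/tty0 and is independent of kernel modesetting itself; with KMS the kernel can additionally restore a sane video mode when it performs the switch.

/* Sketch: programmatically switch virtual terminals, e.g. from an ssh session,
 * using the standard console ioctls (essentially what chvt(1) does).
 * Requires sufficient privileges to open /dev/tty0. */
#include <fcntl.h>
#include <linux/vt.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int vt = (argc > 1) ? atoi(argv[1]) : 1;   /* target VT number, default 1 */

    int fd = open("/dev/tty0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/tty0");
        return 1;
    }

    if (ioctl(fd, VT_ACTIVATE, vt) < 0 ||      /* ask the kernel to switch */
        ioctl(fd, VT_WAITACTIVE, vt) < 0) {    /* wait until the switch completes */
        perror("VT switch");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}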
  • Why is there a motherboard obscuring half the screen in those videos? Did he just put the camera down without even looking to see what it was looking at? Be a little more professional, for goodness sake.
  • I still compile my own kernels and it's quite time consuming, but it'll be worth it to turn these features off. I'm sure all the major distros will offer kernel packages with these misfeatures disabled.

    • by feld ( 980784 )
      there's this cool thing called modules. you should read about them.
      • by turgid ( 580780 )

        there's this cool thing called modules. you should read about them.

        Yes, I've written a couple. Some parts of the kernel are enabled/disabled by a compile-time option. They are bigger than modules. Part of the infrastructure.

  • Ever tried out a new game and it borked up so bad you had to ssh in from somewhere else and do a reboot? Well, in my opinion, the best part about this feature is that from now on, you should be able to just bring up a VT with Ctrl-Alt-Fx and just kill the rogue process! Of course putting more into the kernel, like complex rendering operations, would be a mistake, but setting the video mode should have been in there years ago. Now all we need is a better system for getting notifications of the vertical re
