
KDE and Canonical Developers Disagree Over Display Server

samzenpus posted about 9 months ago | from the no-meeting-of-the-minds dept.


sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that: 'Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it’s actually probably the least important component, at least to a user.' KDE developers, who have long experience with Qt (something Canonical is moving towards for its mobile ambitions), have disputed Bob's claims and said that the display server does matter."


oh good (-1, Flamebait)

Anonymous Coward | about 9 months ago | (#46565543)

I guess Linux will continue to have a shitty ui and amateurish graphics.

Re:oh good (1)

Teun (17872) | about 9 months ago | (#46565757)

Too easy to downmod you.

From your comment about a shitty UI, one can only conclude you have never used KDE.
Although better graphics would be nice, calling them amateurish is rather silly.

I've used KDE... (-1)

Anonymous Coward | about 9 months ago | (#46565839)

.. And it IS indeed shitty and amateurish. Why do you think most distros choose something slightly less shitty by default?

Re:oh good (-1)

drinkypoo (153816) | about 9 months ago | (#46565959)

Although better graphics would be nice, calling them amateurish is rather silly.

Why? The KDE desktop looks like the state of the art from, say, 1993. If I wanted my desktop to look like Xaw3d, I'd just fall through a time warp and go back there. At least the music was better.

trolololol overrated. (-1, Troll)

drinkypoo (153816) | about 9 months ago | (#46566023)

If you don't want people to make fun of your crappy desktop, don't make it so crappy. Fanboys with modpoints are always sad, but KDE fanboys are extra sad.

Re:oh good (4, Insightful)

hawguy (1600213) | about 9 months ago | (#46566671)

Although better graphics would be nice, calling them amateurish is rather silly.

Why? The KDE desktop looks like the state of the art from, say, 1993. If I wanted my desktop to look like Xaw3d, I'd just fall through a time warp and go back there. At least the music was better.

I'm pretty happy with my KDE desktop, but I use it as a tool to get work done, not because it looks pretty.

I bought a hammer from the hardware store that looks almost exactly like the 1920's era hammer my great grandfather used (though the handle is fiberglass instead of wood), but it works well and gets the job done. Just because a desktop "looks" old doesn't make it useless. I tried Unity and Windows Metro and found them to be much less usable for my developer/operations tasks.

Re:oh good (1)

MightyYar (622222) | about 9 months ago | (#46567723)

But the professionals have moved on to nail guns.

Re:oh good (2)

GumphMaster (772693) | about 9 months ago | (#46568405)

Good luck extracting a nail with your nail gun Mr. Professional.

Re:oh good (0)

Anonymous Coward | about 9 months ago | (#46568521)

Good luck extracting a nail with your nail gun Mr. Professional.

pro gets it right first time, different tools for different fools said mr t

Re:oh good (1)

JoeMerchant (803320) | about 9 months ago | (#46568847)

Miss once, try again... miss twice, keep on shootin' - pulling nails is a waste of time, time is money, professionals aren't paid to save nails.

Actually, "professionals" will miss the stud with 3 of 4 nails that are supposed to hold the sheathing, and "keep on rollin' 'till the day is done." This is why owner-built houses survive hurricanes and the same design built by a contractor doesn't.

Re:oh good (1)

MightyYar (622222) | about 9 months ago | (#46568985)

I don't know nuthin' about construction, but I know the sound of a hammer and the sound of a nail gun, and it's been a long time since I've walked by a construction site and heard the former. I have a feeling they have it worked out.

Re:oh good (1)

GumphMaster (772693) | about 9 months ago | (#46569247)

Next time look at what is hanging from the tool belt of nearly every woodworker on the site. They don't carry a hammer for decoration.

@MightyYar - Re:oh good (1)

nukenerd (172703) | about 9 months ago | (#46569287)

and it's been a long time since I've walked by a construction site and heard [a hammer]

You should have walked past my place 4 weeks ago. Contractors were re-roofing my house and they used hammers. Made a good job too.

Re:oh good (-1, Troll)

BitZtream (692029) | about 9 months ago | (#46566273)

While I can appreciate that you have an opinion, acting like KDE is that great kind of ruins any credibility you might have had.

KDE is only impressive when you compare it to the other crap in its class. You're comparing it to Gnome, not Aqua or the Win32 API. Sure, it's impressive compared to Gnome, but what isn't?

Re:oh good (2)

Megol (3135005) | about 9 months ago | (#46567003)

Compared to the Win32 API the GEOS one looks clean and nice. I mean the 8 bit GEOS BTW.

Re:oh good (0)

Anonymous Coward | about 9 months ago | (#46567017)

Also impressive compared to McOS, and Windoze ate.

And odds are you're a few years behind.

Re:oh good (4, Interesting)

jones_supa (887896) | about 9 months ago | (#46566593)

Too easy to downmod you.
From your comment about a shitty UI, one can only conclude you have never used KDE.
Although better graphics would be nice, calling them amateurish is rather silly.

I actually see KDE as the best Linux desktop right now: fast, feature-rich and stable. However I recently watched an interesting criticism piece [youtube.com] regarding some funky and misleading behavior of this desktop environment. The user experience could be improved.

Re:oh good (5, Insightful)

Tough Love (215404) | about 9 months ago | (#46566691)

KDE 4 is great except for Akonadi, which killed KMail.

Re:oh good (1)

richlv (778496) | about 9 months ago | (#46567883)

It (KDE) could have a few fewer bugs, though ;)

Re:oh good (1)

devent (1627873) | about 9 months ago | (#46568525)

1. New Folder
Already fixed; the dialog now shows "New Folder 1", "New Folder 2", etc.
2. New Text File
Fixed, see 1.
3. Rename Dialog
Not fixed; behaves the same way as in the video.
4. Copy file
Fixed; it now says "Paste one file".
5. Dialog
Not fixed; behaves the same way as in the video.
6. Trash/delete dialog
I disagree; showing the user all options in the menu is a good thing. But it was changed anyway: now Shift+menu shows "Delete", and without Shift it shows "Move to trash". Bad change.
7. Open and Edit Trashed file
Fixed.
8. Trash moving/error
Not fixed.
9. Trash Create new File/Folder/Rename File
Not fixed.

Re:oh good (0)

MightyMartian (840721) | about 9 months ago | (#46565867)

Because Metro sure knocked the socks off of everything...

It points to a new direction; you know, one where UI designers cut the tops off their skulls, take an ice cream scooper, remove about two thirds of the brains, and put the top of the skull back on.

Re:oh good (1)

Anonymous Coward | about 9 months ago | (#46566397)

Metro may not be good, but e.g. Windows 7 sure has a better working and cleaner UI than most Linux distros I've used. I mostly use XFCE (Xubuntu) nowadays, but it's still far from good even though it's the best one out there IMHO.

Re:oh good (2)

denmarkw00t (892627) | about 9 months ago | (#46566901)

Windows 7 sure has a better working and cleaner UI

Eh, maybe? If I turn off all the Aero or whatever and make it look like 9x, I can live with that. The translucent borders, Big Button start menu and "pins," the god-awful switcher...7 is probably my only choice for Windows moving forward, if I have to have it, but it won't look like it does after a fresh install for very long.

Re:oh good (3, Interesting)

David_Hart (1184661) | about 9 months ago | (#46566595)

Because Metro sure knocked the socks off of everything...

It points to a new direction; you know, one where UI designers cut the tops off their skulls, take an ice cream scooper, remove about two thirds of the brains, and put the top of the skull back on.

Metro UI concepts are actually showing up in more and more places. The biggest problem with it was that Microsoft, in their wisdom, tried to force a touchscreen interface on desktop users. The interface itself isn't the problem; the lack of choice for the primary user was.

Re:oh good (0)

Anonymous Coward | about 9 months ago | (#46567211)

This is pure inertia. Metro concepts are showing up because "UI (or is it UX) experts" only use Windows as their operating system and think that since it's on their Microsoft desktop it must be good. Had Apple won the desktop early on, the concept of "right-clicking" would not be so pervasive.

Re:oh good (1)

willoughby (1367773) | about 9 months ago | (#46566903)

They do that deliberately just to keep you away.

logic (5, Insightful)

Anonymous Coward | about 9 months ago | (#46565545)

If they don't matter, why Mir?

Re:logic (3, Insightful)

batkiwi (137781) | about 9 months ago | (#46568903)

They're saying that it doesn't matter to an app developer if you're using a middleware framework, as most developers do, because the eventual output on the display will be the same.

The reasons for introducing Mir are performance, the ability to run on low-footprint devices, and cross-device compatibility.

So their point is that X11 vs. Wayland vs. Mir vs. framebuffer vs. blakjsrelhasifdj doesn't matter to a developer using the full Qt stack. They write their app against Qt, and the Qt developers write the backend that talks to whatever the end user is running. It's more work for Qt and the other frameworks, but "should" be "no" more work for an app developer.
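
The claim above (that the toolkit hides the display server from the app) can be sketched with a toy backend interface. These class and function names are hypothetical, not Qt's real API; in actual Qt 5 this role is played by QPA platform plugins, selectable at runtime with the QT_QPA_PLATFORM environment variable:

```cpp
#include <memory>
#include <string>

// Hypothetical sketch of a toolkit's display-server abstraction.
// The application only ever talks to the toolkit; the toolkit
// selects one of these backends at startup.
struct DisplayBackend {
    virtual ~DisplayBackend() = default;
    virtual std::string name() const = 0;
};

struct XcbBackend : DisplayBackend {
    std::string name() const override { return "xcb"; }
};
struct WaylandBackend : DisplayBackend {
    std::string name() const override { return "wayland"; }
};
struct MirBackend : DisplayBackend {
    std::string name() const override { return "mir"; }
};

// The toolkit, not the app, picks the backend (illustrative logic).
std::unique_ptr<DisplayBackend> pickBackend(const std::string& requested) {
    if (requested == "wayland") return std::make_unique<WaylandBackend>();
    if (requested == "mir")     return std::make_unique<MirBackend>();
    return std::make_unique<XcbBackend>();  // default fallback
}

// Application code: identical no matter which backend was chosen.
std::string drawWindow(DisplayBackend& b) {
    return "window drawn via " + b.name();
}
```

The point of the sketch is that `drawWindow` never changes when the backend does; only the toolkit's selection logic (and the backend implementations themselves) carry the per-display-server work.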

Re:logic (1)

phoenix_rizzen (256998) | about 9 months ago | (#46569251)

The reasons for introducing mir are performance, ability to run on low footprint devices, and cross device compatability.

Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance.

Personal blog (2, Informative)

Severus Snape (2376318) | about 9 months ago | (#46565571)

NOTHING to do with Canonical at all. Yay for the let's all hate Canonical bandwagon.

Re:Personal blog (4, Insightful)

sfcrazy (1542989) | about 9 months ago | (#46565677)

He is a Canonical developer, and it's not a post about his family cat.

Re:Personal blog (0)

Anonymous Coward | about 9 months ago | (#46565743)

And the KDEvelopers write an official response to a non-story blog post, get it posted to Slashdot (probably cost them less than a drink at Starbucks), and generate more attention to the original blog than it would've had before.

Clearly KDE's crowd is terrified of people actually believing Robert's claim (of the 4 Roberts I have met, none liked the name Bob), and have accidentally Streisand-effected this.

My stance: I don't trust KDE or Canonical to develop a useful UI, one is too stuck on supporting fringe uses at the cost of any possible performance and the other has shown a hostility toward any user customization. (Ok, for full disclosure, I installed KDE after accidentally upgrading to a 'Unity' Ubuntu build. The system went from ugly to crashing more than Windows ME. LXDEd it later and everything is back to useful.)

Re:Personal blog (2)

Bill, Shooter of Bul (629286) | about 9 months ago | (#46565879)

They are terrified because it would mean more work for them and less advancement of the Linux graphics stack. Having three display servers (X.org, Wayland, Mir) increases the number of code paths everything and everyone has to deal with.

It's not trivial, whatever Robert suggests; and more importantly, it doesn't increase Robert's workload.

If there is one thing that's really annoying, it's someone telling you how easy your really difficult job is. So I understand the frustration apparent in the KDE blogs.

Re:Personal blog (2)

geek (5680) | about 9 months ago | (#46567169)

They are terrified, because it would mean more work for them and less advancement of the linux graphics stack. Having three display servers ( Xorg, Wayland, Mir) increases the amount of code paths everything and everyone has to deal with.

No it doesn't. No one but Canonical will be supporting Mir and Xorg will go away. Leaving Wayland for the adults. No one besides Canonical gives two shits about Mir and once Wayland is stable enough for primary use people will switch to it faster than they did to systemd.

Re:Personal blog (1)

Bill, Shooter of Bul (629286) | about 9 months ago | (#46568933)

Users using applications on Ubuntu will care when those applications break because of the Mir backend. They'll care. A number of them will probably report that the apps don't work to the application writers, when the real issue is in the Mir support for the toolkits that Ubuntu will have to write. Thus, app developers will have to spend some time troubleshooting the problem.

This is the argument the KDE guys are advancing. It makes sense to me, but I must admit, I don't know the guts, nuts, or bolts of Mir, Wayland, GTK, Qt, X.org, or the like.

Re:Personal blog (1, Insightful)

houstonbofh (602064) | about 9 months ago | (#46565881)

My stance: I don't trust KDE or Canonical to develop a useful UI, one is too stuck on supporting fringe uses at the cost of any possible performance and the other has shown a hostility toward any user customization. (Ok, for full disclosure, I installed KDE after accidentally upgrading to a 'Unity' Ubuntu build. The system went from ugly to crashing more than Windows ME. LXDEd it later and everything is back to useful.)

I do not trust any "GUI UI designer" to develop a useful UI. Why? Because their job depends on constant change, regardless of whether it is better or not. Take cars, for example. Can you imagine what the Unity team or KDE would do to your car? I can bet it would not have the wheel in the middle, wipers on the right, turn signals on the left, ignition on the dash on the right, headlights on the dash on the left, and gear selector on the right in the center (for left-hand drive). Nor would the pedals be clutch, brake, gas from left to right... Why? Because leaving something that works well alone does not validate your existence and contribution to the project.

Re:Personal blog (0)

Anonymous Coward | about 9 months ago | (#46566245)

Plus 1000 Insightful!

This is exactly the problem with the Linux community - there is too much self-choice and self-validation and developers/companies only develop what they find personally interesting, rather than what actually needs to be done.

Yes I can imagine. (0)

Anonymous Coward | about 9 months ago | (#46566357)

I do not trust any "GUI UI designer" to develop a useful UI. Why? Because their job depends on constant change, regardless of whether it is better or not. Take cars, for example. Can you imagine what the Unity team or KDE would do to your car? I can bet it would not have the wheel in the middle, wipers on the right, turn signals on the left, ignition on the dash on the right, headlights on the dash on the left, and gear selector on the right in the center (for left-hand drive). Nor would the pedals be clutch, brake, gas from left to right...

I see you've driven a Toyota.

Push the shift lever FORWARD to go BACK, and BACKWARDS to go FORWARD! It's intuitive! Just stand on your head (and that way you'll be able to see all the controls, too).

And let's just not even talk about the "flying bridge" in the 2012+ Priapus... or any recent model of "infotainment" system from any car vendor....

Re:Yes I can imagine. (1)

Megol (3135005) | about 9 months ago | (#46567037)

Why is it not intuitive? It's easier to move something towards oneself than away, so it is clearly better to have the common action (that of driving forward) mapped to the easiest move.

Re:Personal blog (1, Troll)

Tailhook (98486) | about 9 months ago | (#46565835)

NOTHING to do with Canonical at all.

Yet there is Mark Shuttleworth, replying the same day [google.com] to this supposedly "personal" blog with:

It was amazing to me that competitors would take potshots at the fantastic free software work of the Mir team

But hey... that's Google+, not ubuntu.com or whatever, so that's got nothing to do with Canonical either. Right?

Ubuntu is for losers (-1)

Anonymous Coward | about 9 months ago | (#46565599)

Ubuntu is for losers, why does anyone care what Canonical has to say?

Of course it matters (0)

Anonymous Coward | about 9 months ago | (#46565607)

I need to know who to blame when they screw it up again because the open-source management by committee model ensures they end up bloated mockeries of their original design goals.

Re:Of course it matters (5, Funny)

houstonbofh (602064) | about 9 months ago | (#46565801)

Hey! No need to bring systemd into this...

Open source pissing contest! Yay! (0)

Anonymous Coward | about 9 months ago | (#46565631)

I think the biggest advantage of open source is we get the entertainment of seeing the pissing contests.

Gotta love it when Linus cusses out someone on the LKML who crossed him*.

* - OT, but I love Linus when he does that. I hope it makes some poor butthurt pussy bleed. This world is way too full of overly sensitive pussies and I'm so glad Linus is doing his best to make sure they remove themselves from the gene pool.

Now go fuck yourself, your family dog, and the gopher hole out back. ;-)

Re:Open source pissing contest! Yay! (0)

Anonymous Coward | about 9 months ago | (#46565687)

Linus doesn't do that as AC.

Pussy.

Re:Open source pissing contest! Yay! (1)

jones_supa (887896) | about 9 months ago | (#46566033)

Well, you would be a meta-pussy as you also posted that as AC.

Re:Open source pissing contest! Yay! (1)

houstonbofh (602064) | about 9 months ago | (#46565815)

Ever since you started that no-fap pledge, you have been terribly stressed...

Closed source Canonical (0)

Anonymous Coward | about 9 months ago | (#46565717)

Obviously they are supervillains! Open source means no need to burn the witch to prove it's a witch!

I do like Ubuntu though. No witch burnings today.

no, really? (1, Funny)

X0563511 (793323) | about 9 months ago | (#46565745)

Interesting how KDE and those responsible for Unity have differing perspectives... who would have thought?

Bollocks (3, Insightful)

drinkypoo (153816) | about 9 months ago | (#46565771)

The display server is hugely important. The fact that the user doesn't know they're using it is irrelevant, because they're using it at all times.

Shh... (3, Insightful)

GameMaster (148118) | about 9 months ago | (#46565807)

You heard the man, it's not important. Now stop talking about it! That way Canonical can more easily save face when they cancel their failed cluster-fuck of a display server and switch back to Wayland...

Re:Shh... (2, Insightful)

squiggleslash (241428) | about 9 months ago | (#46566009)

X.org, not Wayland. Wayland is still under development. Wayland devs must be elated that Mir has made the debate "Wayland vs Mir" rather than "Tried, trusted, works, and feature complete X.org vs Wayland."

Re:Shh... (5, Insightful)

JDG1980 (2438906) | about 9 months ago | (#46566521)

X.org, not Wayland. Wayland is still under development. Wayland devs must be elated that Mir has made the debate "Wayland vs Mir" rather than "Tried, trusted, works, and feature complete X.org vs Wayland."

X.org is not "feature complete" in any meaningful sense. It is incapable of doing the kind of GPU-accelerated, alpha-blended compositing that is expected of a modern user interface. Sure, you can get around most of this by ignoring all the X11 primitives and using X.org to blit bitmaps for everything, with all the real work done by other toolkits. But in that case, it's those other toolkits doing the heavy lifting, and X.org is just a vestigial wart taking up system resources unnecessarily.

Re:Shh... (0, Troll)

jedidiah (1196) | about 9 months ago | (#46567021)

> X.org is not "feature complete" in any meaningful sense. It is incapable of doing the kind of GPU-accelerated, alpha-blended compositing

It's fascinating what the eggheads decide to fixate on. This is really where the problem starts. Instead of focusing on practical features, you're fixating on the most trivial sort of nonsense and eye candy possible. Most normal people ignore that stuff or turn it off completely.

It's nice that you are finally noticing these things, only about 15 years after the Enlightenment window manager was created, but some of us actually have work to do.

Re:Shh... (4, Informative)

Eravnrekaree (467752) | about 9 months ago | (#46567249)

This is all wrong. X has something called GLX, which allows you to do hardware-accelerated OpenGL graphics: GLX allows OpenGL commands to be sent over the X protocol connection. X protocol traffic travels over Unix domain sockets when both client and server are on the same system, bypassing the network stack entirely, so it is very fast; there is no network-transparency latency when X is used locally in this manner. The MIT-SHM extension additionally provides shared memory for transferring image data. Only when applications are being used over a network do they need to fall back to sending data over TCP/IP. Given this, the benefits of having network transparency are many, and there is no downside, because an application run locally can use Unix domain sockets, MIT-SHM, and DRI.

X has also had DRI for years, which has allowed an X application direct access to video hardware.

As for support for traditional X graphics primitives, these have no negative impact on the performance of applications which do not use them and use a GLX or DRI channel instead. It's not as if hardware-accelerated DRI commands have to pass through XDrawCircle, so the existence of XDrawCircle does not impact a DRI operation in any significant way. The amount of memory this code consumes is insignificant, especially when compared to the amount used by Firefox. Maybe back in 1984 a few kilobytes was a lot of RAM; that is when many of these misconceptions started, but the fact is, these issues were generally found with any GUI that would run on 1980s hardware. People are just mindlessly repeating a myth started in the 1980s which has little relevance today. Today, X uses far less memory than Windows 8 does, and the traditional graphics commands consume an insignificant amount that is not worth worrying about, and which is needed to support the multitude of X applications that still use them.
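
The local-transport point can be illustrated with a toy round trip over a Unix domain socket. This is a sketch, not the real X wire protocol: `socketpair()` stands in for the listening socket a real X server creates at /tmp/.X11-unix/X0, and the payload is just a placeholder string:

```cpp
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Sketch: a local "client" sends a request to a local "server" over a
// Unix domain socket, the same transport a local X client uses, so
// the bytes never touch TCP/IP at all.
std::string roundTrip(const std::string& request) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return "";

    // "Client" end writes an X-protocol-ish request...
    ssize_t wrote = write(fds[0], request.data(), request.size());
    if (wrote < 0) { close(fds[0]); close(fds[1]); return ""; }

    // ..."server" end reads it, all in-kernel, no network stack.
    char buf[256] = {0};
    ssize_t n = read(fds[1], buf, sizeof(buf) - 1);
    close(fds[0]);
    close(fds[1]);
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

When client and server sit on different machines, the same logical request would instead travel over TCP, which is where network transparency costs something; locally, it costs next to nothing.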

Re:Shh... (4, Insightful)

BitZtream (692029) | about 9 months ago | (#46567441)

Today, X uses far less memory than Windows 8

Nice, you just compared a single process on one OS to the entire OS and its subprocesses of another. Totally fair.

How about you compare X to the Win32 Desktop Window Manager instead? Which is a lot closer, though still not exact since Windows has this mentality that GUI in the kernel is a good idea.

My point however is that your comparison is not really a comparison.

Re:Shh... (2)

Eravnrekaree (467752) | about 9 months ago | (#46567453)

I also forgot to mention that X has had the Composite extension and the Render extension, which have allowed alpha-blending operations for quite some time. Your information is a bit out of date.
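
For reference, the blend at the heart of that alpha compositing is Porter-Duff "over". A minimal software version of the per-channel arithmetic looks like this (an illustrative sketch of the math, not the Render extension's actual API):

```cpp
#include <cstdint>

// Porter-Duff "over" for one 8-bit channel, non-premultiplied source:
// out = src * a + dst * (1 - a), with alpha scaled to the 0..255 range.
uint8_t overBlend(uint8_t src, uint8_t dst, uint8_t alpha) {
    return static_cast<uint8_t>((src * alpha + dst * (255 - alpha)) / 255);
}
```

A fully opaque source (alpha 255) replaces the destination; a fully transparent one (alpha 0) leaves it untouched; anything in between mixes proportionally, which is what translucent window borders and shadows are made of.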

Re:Shh... (1)

Eravnrekaree (467752) | about 9 months ago | (#46567601)

You did mention hardware-accelerated compositing, and I wanted to clarify that the X protocols can indeed support this; it is mainly internal improvements in the X server that may be needed. You don't really need an entirely new windowing system for this.

Re:Shh... (1)

Kjella (173770) | about 9 months ago | (#46568277)

Also, tear-free video seems to be one god-awfully big workaround for limitations in X. The stated goal of Wayland was a system in which "every frame is perfect, by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker." I doubt he'd say that if X had no tearing, lag, redrawing or flicker, which seem like rather huge deficiencies to me.
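
The "every frame is perfect" idea boils down to double buffering with an atomic swap: the compositor only ever scans out a fully drawn buffer, so a half-drawn frame (tearing) can never reach the monitor. A toy model of that, with strings standing in for pixel buffers and no real Wayland API involved:

```cpp
#include <string>
#include <utility>

// Toy double-buffered screen. The app draws into `back` (possibly in
// many partial steps), then commits with a single atomic swap; the
// "monitor" only ever sees `front`, which is always a complete frame.
struct Screen {
    std::string front = "frame 0";  // what the monitor scans out
    std::string back;               // what the app is drawing

    void draw(const std::string& frame) { back = frame; }  // may be mid-frame
    void commit() { std::swap(front, back); }              // atomic handover
};
```

Until `commit()` is called, the displayed frame is unaffected by drawing, which is exactly the guarantee single-buffered X rendering cannot make on its own.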

Re:Shh... (0)

Anonymous Coward | about 9 months ago | (#46568445)

Wayland is also not feature-complete, and neither is Mir. We should be fixing one of them instead of arguing over whose fault it is.

Re:Shh... (1)

gizmo2199 (458329) | about 9 months ago | (#46567729)

Apropos, does Wayland support hardware acceleration (VDPAU, VAAPI)? No point in having a newish GPU if you can't use those.

NIH forever (1)

Anonymous Coward | about 9 months ago | (#46565847)

The NIH-ness and "let's completely rewrite something for fun" mentality spawned this display server debacle, plus the wonderful systemd/upstart mess. How about we follow the Unix KISS principle, and the traditional modularity and openness it gave us?

- cynical FreeBSD user

Re:NIH forever (0)

Anonymous Coward | about 9 months ago | (#46567095)

Oh yeah, everything was completely good enough before those Linux upstarts came along. That's why all the free BSD operating systems are slavishly copying Linux's 3d rendering stack, and attempting to develop the features required to support Weston and modern desktop environments.

How are these things related? (1)

Rafael Jaimes III (3430609) | about 9 months ago | (#46565863)

Qt is a widget toolkit used for designing UIs. What does this have to do with a display server? As far as I know, the display servers are X and Wayland.

Re:How are these things related? (1)

armanox (826486) | about 9 months ago | (#46566209)

Well, there is also SurfaceFlinger and Quartz...

Re:How are these things related? (2)

DarkOx (621550) | about 9 months ago | (#46566255)

Because the toolkits (Qt, GTK, and others) don't provide a complete abstraction layer, at least not once your project gets to the point of doing anything 'fancy'. If all your application does is display some forms, fine; but for more complex stuff (window managers, media players with odd shapes and overlays, etc.) you have to interact with the display server directly or through its APIs anyway.

More display servers means more code paths, and it's not easy for one developer to test all of that. Sure, they can have a bunch of VMs, but now they have to know how to work with multiple systems. It would be a PITA. It's going to be bad enough for some with just X and Wayland to support.

My guess is many won't end up supporting multiple display servers, and if there are too many it will just fragment the Linux desktop worse than even in the bad old days.
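
The leak described above can be sketched as a function that grows one branch per display server the moment an app needs something the toolkit doesn't abstract. The backend strings and return values here are illustrative; `XShapeCombineRegion` is the real X Shape extension call used for odd-shaped windows:

```cpp
#include <string>

// Sketch of a "fancy" feature (an odd-shaped window) that leaks past
// the toolkit: each supported display server needs its own branch,
// and every new server adds one more path the developer must test.
std::string setOddWindowShape(const std::string& server) {
    if (server == "x11")     return "XShapeCombineRegion path";
    if (server == "wayland") return "wl_surface + compositor protocol path";
    if (server == "mir")     return "mir-specific path";
    return "unsupported";
}
```

With one server there is one path to test; with three, every shaped-window bug report has to be reproduced three times, which is the PITA the parent comment is describing.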

Re:How are these things related? (5, Informative)

slack_justyb (862874) | about 9 months ago | (#46567045)

The whole point of all of this, X/Wayland/Mir, is getting closer to the video card without having to yank one's hair out whilst doing it. Why would one need closer interaction with the bare metal? If you've ever used Linux and seen tearing while moving windows around, you've hit on one of the reasons why closer to the metal is a bit more ideal.

With that said, let's not fool ourselves and think, "OMG, they just want access to the direct buffers!" That wouldn't be correct. However, developers want an assured level of functionality in their applications' visual appearance. If the app shows whited-out menus for half a second, blink, and then there are your menu options, then something is very wrong.

It was pretty clear with X, politically speaking, that developers couldn't fix a lot of the problems, due to legacy and the foaming-at-the-mouth hordes that would call said developer out for ruining their precious X. You can already see those hordes in all the "take X and my network transparency from my cold dead hands" comments. It is, to a degree, those people, and a few other reasons, that provided the impetus for Wayland. You just cannot fix X the way it should be fixed.

Toolkits understand that display servers, and pretty much the whole display stack in general, suck. Granted, there are a few moments of awesome, but they are largely outweighed by the suck factor; usually when you code an application, you'll note that you gravitate to the "winning" parts of the toolkit being used versus the purely sucky ones. Qt has a multitude for all the OSes/display servers it supports, be that Windows, Mac, X11, and so on. Likewise for GTK+, but to a lesser extent, and that is what makes GTK+ a pretty cool toolkit. Because let's face it: no display stack is perfect in delivering every single developer's wish to the monitor. Likewise, no toolkit is perfect either. The GNOME and KDE people know this; they write specific code to get around some of the "weirdness" that comes with GTK+ or Qt. Obviously, that task is made slightly easier with Wayland and the way it allows a developer to send specifics to the display stack or even to the metal itself.

Projects like KDE and GNOME have to write window managers, and a lot of the time those window managers have to get around some of the most sucktacular parts of the underlying display server. However, once those parts are isolated, the bulk of the work left is done in the toolkit. So display servers matter a bit to the desktop environments, because they need to find all of the pitfalls of a given display server and work around them. Sometimes it can be as simple as a patch to the toolkit or the display server upstream; sometimes it can be as painful as a kludge that looks like the dream of a madman. It all depends on how far upstream a patch needs to go to be effective, and how effective it would be for other projects all around.

That leads into the problem with Mir. Mir seems pretty gravitated to its own means. If KDE has a problem with Mir that could be easily fixed with a patch to Mir, or horribly fixed by a kludge in KDE's code base, it currently seems that the Mir team wouldn't be so happy-go-lucky about accepting the patch if it meant potentially delaying Ubuntu or breaking some future feature unknown to anyone outside of Mir. Additionally, you have the duplicated-work argument as well, which I think honestly holds a bit of water. I fondly remember the debates over aRts and Tomboy. While I think it's awesome that Ubuntu is developing their own display server, I pepper that thought with, "don't be surprised if everyone finds this whole endeavor a fool's errand."

I think the NIH argument gets tossed around way too much, like it's FOSS McCarthyism. Every team has their own goals, and by that very nature they would all classify as NIH heretics. Canonical's idea is this mobile/desktop nexus of funafication; Mir helps them drive that in a way that is better suited to them. That being said, a few changes to their underpinning technology would let them do the exact same thing on Wayland. I'll add to the previous statement: while it is a few changes, those would be very large changes, changes that might not sit well in Canonical's stomach. However, I'd say the idea of using Mir versus Wayland comes not from technical matters but from ripping a page out of the Google playbook on how to write a display server. Making the display server theirs, and not subject to the, as someone in one of the comments above said, "open-source management by committee model ensures they end up bloated mockeries" flux, helps them woo would-be vendors. Because let's face it, when subject to committee, don't expect anything crystal clear to emerge (ooo, burn on XML).

X11 is legacy. I know everyone's going to be a hater, but X11 is just so huge. There is no turning this ship from the iceberg; it has become, even to its most feverish supporters, unfixable. Wayland is the obvious choice since it tries to apply a broad approach to the problems that exist in X11, while giving developers enough outs to undo some of the problems that Wayland has yet to invent for us, all the while giving developers the one thing they've honestly been asking for: a more consistent experience with applications. Mir serves that too, to an extent, but pretty much only for Canonical's goals. Qt and GTK+ developers, specifically KDE and the variety of GNOMEish DEs, like the appeal of Wayland because if there are parts they don't like, sending a patch upstream has thus far proved pretty painless; additionally, they have a couple of ways to get around Wayland fairly easily. Mir hasn't really had such a test yet, at least not one to speak of, with DE developers asking for patches to be sent upstream. Some of those DE developers are also basing that expectation on previous experience dealing with the Ubuntu developers, who haven't been the most friendly bunch. Granted, the Fedora and Red Hat people aren't the shit that smells like roses either.

So I know this has been pretty long-winded, but this whole debate is a complicated one because it has less to do with technical reasons and more with political ones. The toolkits are always working around the brain-dead assumptions that display servers make, and desktop developers are always working around the crazy assumptions that toolkits make. The ability to easily bypass all of that has been a pretty big goal for everyone, and Wayland/Mir stand to bang the drum on that pretty strongly. The main difference between Wayland and Mir is that they take different approaches to doing just that; trying to have code that works reasonably well on both would be a pain in the rear to support, and having code that "just works" defeats the whole purpose of going to Wayland/Mir in the first place. That in turn is the reason for the big scream in this debate: supporting both is either a no-go or defeats the whole point of leaving X.

Re:How are these things related? (2)

Uecker (1842596) | about 9 months ago | (#46568625)

You: "You just cannot fix X the way it should be fixed."
Reality: "... It's entirely possible to incorporate the buffer exchange and update models that Wayland is built on into X..." (Wayland FAQ)

And now?

Re:How are these things related? (0)

Anonymous Coward | about 9 months ago | (#46567891)

Because it doubles the work that they have to do to support both servers, and massively increases the complexity and bugs that come from working around inconsistencies between the two servers.

It's like supporting IE6, then suddenly you need to support the new IE and Webkit. And IE6, too, for legacy reasons. And they all use a non-standard document language.

Just one question... (3, Insightful)

Dcnjoe60 (682885) | about 9 months ago | (#46565909)

Just one question. If the display server is of such minimal importance in the big scheme of things, then why did Canonical develop their own?

Display server matters for some people (1)

loonycyborg (1262242) | about 9 months ago | (#46565977)

Namely GUI toolkit developers, driver developers, and DE developers. All of the above aren't very fond of Mir.

True and false. (1)

Kremmy (793693) | about 9 months ago | (#46566045)

Fact is that I can remotely control almost any computer running on almost any platform using countless variations on the theme of "render to the display - wait, let's render to an image instead and then send it over the wire."

The reason so much attention is being put on display servers is as a distraction from the real problems, such as the fact that so much attention is being put on the display servers. They're not the weak point; there are a lot of them, and one exercise that remains THROUGHOUT COMPUTING HISTORY is the task of updating software and porting it to another display server, because at the end of the day you're drawing colored rectangles on a screen.

Re:True and false. (1)

amorsen (7485) | about 9 months ago | (#46567243)

I believe the main difference is that remote X is rootless. People like that. Somehow they forget that remote X is non-persistent, uselessly slow, and that session integration is almost entirely missing.

Do not misunderstand me, I would love a persistent rootless remote display with decent performance and session integration. Alas, X is not it.

So why did Apple and Google toss it? (4, Interesting)

timeOday (582209) | about 9 months ago | (#46566161)

The most significant transition of a unix-style OS to the desktop is OSX. The most significant transition of a unix-style OS to handhelds is Android. X was left behind both times. Why did they re-invent the wheel if there was no need to do so?

Re:So why did Apple and Google toss it? (3, Insightful)

Anonymous Coward | about 9 months ago | (#46566361)

Not only that, but each example (NeXT/OSX and Android) are undeniable success stories.

X11 has severe limitations, like a cramped network abstraction layer that can't share windows or desktops with multiple people. Supposedly the NX server gets around this, but the X11 people haven't shown any interest in adopting the NX features.

People need displays that make it look like the computer is operating smoothly (instead of barfing text-mode logs here and there when transitioning between users, runlevels, etc.).

People need to share their windows (efficiently, not with VNC) for teleconferencing.

Both OS X and Windows achieved these by focusing on the display server. So, as much as I respect Canonical's work, I think this blogger/dev is somewhat clueless.

Re:So why did Apple and Google toss it? (-1, Troll)

jedidiah (1196) | about 9 months ago | (#46567191)

> People need to share their windows (efficiently, not with VNC) for teleconferencing.
>
> Both OS X and Windows achieved these by focusing on the display server.

No. MacOS did not achieve this. Quit trying the bullshit of associating your crap with Windows and expecting no one to call you out on it. Screen sharing on MacOS is a big festering pile.

Having used it is why I don't want Apple wannabes anywhere near my user experience. They're idiots blindly following even bigger idiots.

X may need replacing. Except Wayland and Mir are no X replacement. Neither is Quartz.

Re:So why did Apple and Google toss it? (0)

Anonymous Coward | about 9 months ago | (#46567959)

MacOS != OSX. If you think that, then your screen-sharing experience probably dates back to the days of Timbuktu.

Although the screen sharing service on OSX has a compatibility layer for connecting VNC clients, the native clients do not use the VNC protocol; they use ARDP (Apple Remote Desktop Protocol). ARDP is not a framebuffer-based protocol like VNC; it's graphic object-oriented like Microsoft RDP and Citrix. It works extremely well over remote links, complete with multi-monitor support.

Don't believe me? Try connecting to a remote OSX server (e.g. over ADSL) using an OSX client, then try the same thing using a VNC client (such as Tiger, Tight, rdesktop, etc.). VNC clients are so crap it takes them about 2 minutes just to paint the 2560x1440 login screen on a 27-inch iMac. The OSX client is sub-second.

Re:So why did Apple and Google toss it? (3, Interesting)

Uecker (1842596) | about 9 months ago | (#46566405)

And both are now incompatible ecosystems. Do we want to repeat this nonsense?

Re:So why did Apple and Google toss it? (1)

Anonymous Coward | about 9 months ago | (#46566885)

Do we want to witness another hugely successful deployment of Linux?

Why yes. Yes we do.

Re:So why did Apple and Google toss it? (1, Troll)

jedidiah (1196) | about 9 months ago | (#46567265)

Single-digit market share really isn't "hugely successful". MacOS based on Unix really isn't that much more successful than MacOS NOT based on Unix. Whatever "success" this alleged Unix has had really has nothing to do with its Unix-ness. What meagre success it has had has come from being tied to a well-established brand name that's about as far away from Unix as you can get.

What's the point of a "successful Linux" if it abandons all of the useful design ideas of Unix?

At best, something like that is redundant. You can go somewhere else and buy that if you really want that. There's no need to pervert someone else's platform.

Re:So why did Apple and Google toss it? (3, Interesting)

garyebickford (222422) | about 9 months ago | (#46566415)

WRT OSX, there is history. Back in the days of NeXT, Jobs & co. decided to use Display PostScript for a variety of reasons. A few of them: X back then was huge, ungainly, and a total beast to work with given the limited memory and cycles available (the NeXTstation used a 25MHz 68040); their team was never going to be able to morph X into an object-oriented platform, which NeXT definitely was; Display PostScript was Adobe's new hotness; the NeXT folks could write drivers for DPS that worked with the Texas Instruments signal processor (TM-9900? I forget), which was truly amazingly fast at screen manipulation; and the X architecture didn't fit well with either Display PostScript or that chip.

In 2001 I had a NeXTstation that I added some memory and a bigger disk to. The machine was by then more than 10 years old. For normal workstation duties, it was faster than my brand new desktop machine due entirely to the display architecture. But compiling almost anything on that 25MHz CPU was an overnight task - I had one compile that ran three days.

Re:So why did Apple and Google toss it? (0)

Anonymous Coward | about 9 months ago | (#46567067)

Nice history lesson, but you realize that Apple threw away DPS and created Quartz for OS X, right?

Re:So why did Apple and Google toss it? (1)

Anonymous Coward | about 9 months ago | (#46566613)

Because X is a hassle if you are a commercial entity and want to retain control of the GUI. It's much less of a hassle if you develop openly in collaboration with xorg. They also probably don't want their applications being a couple of compiler flags away from a Linux desktop, or their partners could pull a Valve whenever they felt like it.

Re:So why did Apple and Google toss it? (2)

Eravnrekaree (467752) | about 9 months ago | (#46567691)

It's the not-invented-here syndrome, plus the fact that Google and Apple want to create a fleet of applications that are totally incompatible with other platforms in order to lock users into their respective platforms. Obviously, business and political reasons, and nothing to do with technical issues. X would have been a fine display platform for either, but then the platforms would be compatible with mainstream Linux distros and you would have portable applications, so your users wouldn't be locked into your OS.

Doesn't matter (0)

Anonymous Coward | about 9 months ago | (#46566163)

Whether the display server matters or not, this debate doesn't matter. You need to support them both, or else you're doing a poor job.

Slow News Day (1)

SeaFox (739806) | about 9 months ago | (#46566191)

Is there an actual story here, or it just about two different groups of open-source developers having a difference of opinion on whether display servers are important or not? The summary doesn't suggest this disagreement is having any real ramifications on Ubuntu/Kubuntu.

He's Right (3, Insightful)

Luthair (847766) | about 9 months ago | (#46566239)

The Canonical developer said that users don't care, which I think is pretty accurate. The majority of users won't care as long as applications run and are responsive.

Re:He's Right (0)

geek (5680) | about 9 months ago | (#46566973)

I'm a user and I care.

Re:He's Right (0)

Anonymous Coward | about 9 months ago | (#46567069)

thing is, users *will* care once "applications run and are responsive" stops being true for the many applications which are not developed against or extensively tested against the display server they use.

Re:He's Right (0)

Anonymous Coward | about 9 months ago | (#46568505)

This already happened. The applications I use don't work in Wayland or Mir, so I don't care about either of them.

Never really believed in Mir (0)

Anonymous Coward | about 9 months ago | (#46566327)

Back when Canonical announced they were working on Mir, my first reaction was "are you guys serious?". Writing your own display server is not a trivial thing. I mean, it's not as simple as providing an API with a few classes and methods. It goes much further than that. You have to interface with the kernel, display drivers, and input devices, provide drawing contexts for 2D/3D graphics, manage things like video playback, provide a framework for IPC between applications, etc. And all this is not achieved simply by writing a few lines of C code. It requires a deep understanding of what is needed and proper design of the architecture, protocols, APIs, etc. So basically, you need a dedicated team of people who really know what they're doing and have plenty of time and resources. I'm not convinced that Canonical has such a team or a big stash of money to pay them.

Ubuntu and Canonical speak with forked tongue (1)

Anonymous Coward | about 9 months ago | (#46566427)

Ubuntu and Canonical say a lot, but you have to wonder if they have forgotten their roots. KDE is a rich and robust desktop ecosystem. It has everything from windowing, widgets, plasmoids, wallpapers, and artwork to grandly complex integrated applications. KDE, to this developer, is nothing short of Cool on steroids.

On the other hand, the desktop of a stock Ubuntu is crippled, lacking, stripped of options, and toyish. Unity? It's nothing more than a program loader.

 

KDE vs. Who? (1)

s.petry (762400) | about 9 months ago | (#46566513)

I've been a KDE user for a very long time and hated Gnome. Frankly, I hate Unity even more than Gnome (which is a lot). I've seen KDE do things that Microsoft can't, using less CPU and with overall better performance, and it's always been compatible with X. So now we have a next-gen X and Canonical wants to fragment the market. Nothing new there; they did it with Unity. Fragmentation is good for some people, and I have to wonder if Canonical gets paid to cause it. Sure, they have a product that is "theirs" too, but who really likes "theirs", and why is theirs better for consumers?

Look, if you happen to love Gnome you should have the same issues with this fragmentation as I do being pro-KDE. Years of getting Gnome X.X working properly and building enough traction with users, and then a company creates a rift. Ubuntu makes some things easier for people not experienced with Linux, but they don't do UIs, as Unity clearly demonstrates.

If they are unhappy with Gnome or KDE, why not put devs on those projects instead of backdooring their own?

wayland, systemd (5, Interesting)

bzipitidoo (647217) | about 9 months ago | (#46566719)

Figured systemd would get dragged into this.

One of the biggest problems with systemd is simply documentation. System administrators have a lot of learning invested in SysV and BSD, and systemd changes nearly everything. Changing everything may be okay, may even be good, but to do it without explanation is bad no matter how good the changes. I'd like to see a succinct explanation, with data and analysis to back it up. Likely there is such an explanation and I just don't know about it, but the official systemd site doesn't seem to have much. I'd also like to see a list with common system admin commands on one side and systemd equivalents on the other, like this one [fedoraproject.org] but with more. For example, to look at the system log, "less /var/log/syslog" might be one way, and in systemd it is "journalctl". To restart networking it might be "/etc/rc.d/net restart", and in systemd it's "systemctl restart network.service". Or maybe the adapter is wrongly configured, DHCP didn't work or received the wrong info, in which case it may be something like "ifconfig eth0 down" followed by an "up" with corrected IP addresses and gateway info.
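The kind of cheat sheet asked for above can even be sketched as a tiny shell lookup. This is only an illustration; the unit names are assumptions (for instance, Debian's network unit is "networking", not "network.service"):

```shell
# Hedged sketch: map a traditional admin command to a rough systemd
# equivalent. Unit names vary by distro; these are illustrative only.
systemd_equiv() {
    case "$1" in
        "less /var/log/syslog")   echo "journalctl" ;;
        "/etc/rc.d/net restart")  echo "systemctl restart network.service" ;;
        "chkconfig sshd on")      echo "systemctl enable sshd.service" ;;
        "shutdown -r now")        echo "systemctl reboot" ;;
        *)                        echo "unknown" ;;
    esac
}

systemd_equiv "less /var/log/syslog"    # prints: journalctl
systemd_equiv "/etc/rc.d/net restart"   # prints: systemctl restart network.service
```

Anything not in the table falls through to "unknown", which is roughly where sysadmins without documentation end up too.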

When information is not available, it looks suspicious. How can we judge whether systemd is ready for production? Whether it is well designed? Whether the designers aren't trying to hide problems, or letting their egos blind them to problems? To be brusquely told that we shouldn't judge it, we should just accept it, and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem, because we haven't invested the time to really learn it ourselves and don't know what we're talking about, doesn't sit well with me.

Same goes for Wayland and MIR. Improving X sounds like a fine idea. But these arguments the different camps are having-- get some solid data, and let's see some resolution. Otherwise, they're just guessing and flinging mud. Makes great copy, but I'd rather see the differences carefully examined and decisions made, not more shouting.

I've been using Wayland and systemd for nearly a m (1)

Phil Urich (841393) | about 9 months ago | (#46567247)

SailfishOS, running on the current Jolla device, is quite smooth and nice, in a way that my N9 (despite the slickness of the design of the UI) never was. Both were underpowered hardware for their times, but Wayland allows the kinds of GPU-accelerated and compositing-oriented display that allow for what people are increasingly used to from other OSes.

Now, in terms of systemd I'm more on your side, there's certainly a baseline of arrogance that the primary devs have shown. On the other hand, they seem sometimes to be justified, and while there was some shouting and mudflinging in the recent Debian decision, there were also some extremely thoughtful and thorough considerations that I read from Debian developers which convinced me that, despite some of its shortcomings, systemd is a needed improvement and is well thought out. Err, I can't seem to find any of them right now, but from a system administration perspective I do see this blog [utoronto.ca] as a fairly succinct list of reasons why systemd is good for sysadmins. As one myself, who until now has worked merely on SysV or Upstart systems, many of those reasons do seem pretty compelling to me. So far I've only toyed with systemd in the phone that now resides in my pocket, however, so I certainly can't speak from direct experience yet. But I'm very interested to try it out.

Re:I've been using Wayland and systemd for nearly (3, Insightful)

Golthur (754920) | about 9 months ago | (#46567887)

My main issue with systemd is that it is monolithic; it violates the fundamental Unix philosophy in a most egregious way, and whenever anyone comments on this, we are (to quote the GP) "brusquely told that we shouldn't judge it we should just accept it and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem, because we haven't invested the time to really learn it ourselves and don't know what we're talking about".

We used to have separate, replaceable systems for each aspect of systemd - e.g. if you didn't like syslog, there was syslog-ng, or metalog, or rsyslog; each different and meant for a different purpose. Now, it's "all or nothing" - except that it's becoming progressively more difficult to opt for "nothing" because it's integrating itself into fundamental bits like the kernel and udev.

displays have a server? (0)

Anonymous Coward | about 9 months ago | (#46566761)

i'm not familiar with Linux but i guess the displays are connected to the internet? I tried reading the article but I got confused.

Pathetic, sfcrazy. (0)

Anonymous Coward | about 9 months ago | (#46567435)

For including a link to Muktware, the only Linux site sleazier and more willing to publish sensationalistic crap than Phoronix.

Bikeshed (0)

Anonymous Coward | about 9 months ago | (#46568971)

What color would you like the bikeshed?

Display server does matter (4, Interesting)

Eravnrekaree (467752) | about 9 months ago | (#46569223)

Obviously, the display server does matter to users. If users cannot use a whole set of applications because they are not compatible with Distro X's display server, that is a problem for users. This can be addressed by distros standardizing around display servers that use the same protocol. It's also possible, though more complex, for distros using different display protocols to support each other's protocols by running a rootless copy of a display server that speaks the other protocol. Relying on widget sets to support all display protocols is too unreliable, as we are bound to end up with widget sets that do not support some of them. Needless to say, it is best to have a single standard; it would have been easiest and best if Canonical had gone with Wayland and actually worked with Wayland to address whatever needs they had.

It's also true that a new display protocol wasn't really necessary. The issue with X was the lack of vertical synchronisation. X already has DRI, XRender, XComposite, MIT-SHM, and so on for other purposes. An X extension could have been created to give an application the timing of the display: the milliseconds between refreshes, the time of the next refresh, etc. X applications could then use this timing information, starting graphics operations just after the last refresh, and use an X command to place a finished pixmap for a window into a "current completed buffer" for that window, allowing double buffering. This could be either a command providing the memory address, or a shared memory location where the address would be placed. All of the current completed buffers for all windows are then composited in the server to generate the master video buffer for drawing to screen. There is a critical section during which the assembly of the master video buffer occurs; any completed-buffer swap by an application during that time would have to be deferred to the next refresh cycle. A new XSetCompletedBuffer call could be created to provide a pointer to a pixmap. This is somewhat similar to XPutPixmap or setting the background of an X window, but given that XPutPixmap might do a memory copy it may not be appropriate, since the point is to provide a pointer to the pixmap that the X server would use in the next screen redraw. Said pixmaps would be used as drawables for OpenGL operations, traditional X primitives, and such, so this scheme would work with all of the existing X drawing methods. The pixmaps are of course transferred using MIT-SHM; it's also possible to use GLX to do rendering server-side. For X clients over the network, GLX is preferable, as otherwise the entire pixmap for the window would have to be sent over the network.
The GLX implementation already allows GL graphics to be rendered into a shared memory pixmap. Currently, however, some drivers only support GL rendering into a pbuffer, not a pixmap, and a pbuffer is not available in client memory at all; the DRI/GEM work is supposed to fix this, and the X server should be updated to support GLX drawing to a pixmap with all such DRI drivers.

Another issue is window position and visibility, and how it relates to vertical synchronization. Simplistically, the refresh cycle can be broken into an application render period and a master render period. If the X server has a whole pixmap buffer for a window, it grabs a snapshot of the window visibility/position state at the beginning of the master render period and uses that to generate the final master pixmap by copying visible regions of windows into the master buffer.

It can be a good idea to give applications the option to render only the areas of their windows that are visible; this saves CPU and avoids needless rasterization of offscreen vector data. To do this, applications would need access to visibility data at the beginning of the application render period. Instead of providing a single pixmap for the entire window, an application would then provide memory addresses for pixmaps of each visible rectangle it has rendered (possibly in the same re-used mmapped area), along with their coordinates. A snapshot of the window position state would need to be taken at the beginning of the application render period for use by apps, and used in the master render period as well. This could introduce a longer delay between a window visibility/position change and its appearance on screen than the former method.
