Zero Install: The Future of Linux on the Desktop?

SiegeX writes "Zero Install, which is part of the ROX desktop environment, is not just a new packaging system; it's a whole new way of thinking, and one that I believe is exactly what Linux needs to become a serious contender for Joe User's desktop. Zero Install uses an NFS filesystem to both run *and* install apps. Each app is self-contained in its own directory: binaries, docs, source code and all. Once an app has been downloaded, it's kept in a cache from that point on to minimize delay. The beauty becomes apparent when Zero Install is combined with ROX, which runs an application when you just click on the directory it was installed to. Deleting an application along with all its miscellaneous files is as simple as removing the directory that contains it. Partitioning applications into their own directories also makes installing multiple versions of any application trivial. This is something even the greatest of technophobes could understand and use with ease."
  • by SeanTobin ( 138474 ) * <byrdhuntr AT hotmail DOT com> on Saturday April 03, 2004 @01:33PM (#8756393)
    Someone should really point this out to Steve. I think using this type of installation on Macs would increase usability by leaps and bounds.
    • by expro ( 597113 ) on Saturday April 03, 2004 @01:43PM (#8756466)

      Could there be a difference here? Hopefully they are not putting code into virus-writable directories, as often happens on Apple.

      • Moderator trolls. (Score:5, Insightful)

        by expro ( 597113 ) on Saturday April 03, 2004 @03:18PM (#8757084)

        It is easier for moderators to mark things as a troll than to accept an OBVIOUS fact, since it flies in the face of the religion surrounding the infallibility of Apple.

        But for a critical thinker who uses a personal OS X machine (especially one who has installed a fair amount of software):

        Go to your Applications directory and ls -la to see just how many apps are owned by the primary user instead of root. Then see if the primary user also happens to be a member of the admin group, which has write access to all the files there owned by root/admin. This also applies to the Applications directory itself.

        On my powerbook, taking installation defaults, over 95% of the apps installed in the Applications directory are writable by the primary user.

        This seems inexcusable from a virus security perspective.

        On Linux, 0% of my apps are writable by the primary user.
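
        For anyone who wants to reproduce the check, something like the following works from a terminal (a sketch; /Applications and the admin group are the stock OS X defaults):

            # list applications in /Applications not owned by root
            ls -la /Applications | awk 'NR>1 && $3 != "root"'

            # check whether the current user is in the admin group
            groups | grep -w admin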

        • Partial solution (Score:4, Insightful)

          by plj ( 673710 ) on Saturday April 03, 2004 @04:04PM (#8757338)
          Remove your account from admin group, but keep it in wheel group. That way Finder* will ask you for admin/root passwd when you drag a new app bundle to Applications folder, so you can no longer put just anything there. This is what I'm doing.

          However, it still seems that the folders created there are owned by you, so this is a rather imperfect solution.

          Fast user switching is theoretically a better one, but not on my 12" PBook with 1024x768 resolution due to an almost Dock-class UI design failure.

          *) A Panther feature. In Jaguar you're forced to use a terminal in this case.
        • by .com b4 .storm ( 581701 ) on Saturday April 03, 2004 @05:13PM (#8757765)

          On my powerbook, taking installation defaults, over 95% of the apps installed in the Applications directory are writable by the primary user.

          This seems inexcusable from a virus security perspective.

          That sounds reasonable, until you remember that such places are writable only after the user authenticates. This means entering the administrator password, allowing installer X or operation Y in the Finder to go ahead and write to that directory. I don't see how that's any less secure than what most moderately experienced Linux users do - ./configure ; make ; sudo make install

    • by hak1du ( 761835 ) on Saturday April 03, 2004 @01:47PM (#8756497) Journal
      Yes, someone should indeed point that out to Steve Jobs. Many Mac applications these days come with installers that drop bits all over the file system, and many of those don't come with clean uninstallers, making the problem worse.
      • by SweetAndSourJesus ( 555410 ) <.moc.oohay. .ta. .toboRehTdnAsuseJ.> on Saturday April 03, 2004 @01:51PM (#8756519)
        Bitch to whoever decided that that app should have an installer.

        If MS Office can be a drag and drop install, almost anything can.
        • Bitch to whoever decided that that app should have an installer.

          You think people write installers for fun? They usually write them because they don't have a choice, because the OS lacks some piece of functionality or other that lets the system adapt dynamically.

          Besides, Mac-style drag-and-drop installs have their own problems: they don't get updated properly and they don't verify or deal with dependencies on install; they just dump the mess into the user's lap.

          If MS Office can be a drag and drop insta
    • by Daniel Dvorkin ( 106857 ) * on Saturday April 03, 2004 @01:54PM (#8756537) Homepage Journal
      [snicker]

      Seriously, as an Apple user, I'm glad to see a Linux desktop system copying the MacOS instead of Windows. I've felt for some time that it is a huge mistake for KDE and GNOME to try so hard to make themselves look like Windows when, in OS X, there is a much better example of a Unix-based desktop. Why waste your time copying less than the best?

      Yeah, yeah, user familiarity, etc. Look, folks, I guarantee you that if all you've ever used is Windows, if you sit down at a good OS X machine, it will take you about half an hour to get used to the differences and be up to speed -- and after that you'll be discovering new and better ways to do things and saying, "That's so cool! Why didn't Microsoft ever think of that?" If a Linux desktop can have some nifty non-Windows features too (and I really don't care if the developers rip them off from Apple or come up with them on their own) it will do a lot more to enhance Linux desktop growth than just coming up with a system that's "like Windows, only not exactly."

      Next response I anticipate: "Yeah, well, if Mac OS X is so much better, how come it hasn't beat Windows in the marketplace?" The answer, of course, is that there is a lot of mindless anti-Apple prejudice, and regrettably I don't expect that to change any time soon. But anti-Linux prejudice is much milder, I think. A good Linux desktop with Mac OS X's best features (and maybe some of its own) especially if it were backed by IBM, could be the best shot at breaking the Windows stranglehold on the corporate desktop.
      • by Cthefuture ( 665326 ) on Saturday April 03, 2004 @02:18PM (#8756703)
        Look, folks, I guarantee you that if all you've ever used is Windows, if you sit down at a good OS X machine, it will take you about half an hour to get used to the differences and be up to speed

        True.

        and after that you'll be discovering new and better ways to do things and saying, "That's so cool! Why didn't Microsoft ever think of that?"

        Uh, I had the opposite response. "Why in the f*&k did they do it that way?!" Keyboard control is practically nonexistent. And Finder blows... Damn, I hate the way it works. It doesn't let you see where you are in the filesystem very well. It's just awkward and slow to use.

        The killer feature Apple has is that they have a GUI for everything in the system and it hides a lot of complex stuff so the end-user doesn't have to worry about every little detail (of course sometimes that backfires when I simply can't do something because it's hiding the details).

        Back on topic...

        I've found ROX is very similar to Finder. I hate it also.

        The ROX "all in one directory" is the exact same concept as Apple bundles. I can't believe how many people don't even know what they are. I guess that's what happens when you hide all the details. Anyway, the bundle concept is pretty cool. Just copy/move a directory to install your application.
      • Or if you're like me and suffered through Windows 98 for years, you don't want to use anything remotely similar to Windows.

        I think Linux would win a lot more converts if KDE and GNOME were less like Windows. Especially in regards to Lindows, I think it will eventually end up making Linux a generic Windows in the eyes of potential users. Just go into any $1 or less store and you'll see what I mean: a great deal of the packaging resembles name brand packaging found in grocery stores. Sure you might get some lo

    • Apple has already implemented this architecture... albeit without the NFS... but since FTP servers mount as any remote disk would (in 10.3), the NFS is pretty much there too.
      All the various bits and pieces of a well-formed Mac app go into a 'package' (not RPM). See it here [apple.com]. A double-click on that folder launches the app, deleting the folder removes the app, and updates need only replace affected files within the app.

      When developers take the time to use this structure, it works really really well. Unfo
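
      For reference, a typical bundle looks something like this (an illustrative layout; only Info.plist and the executable under MacOS/ are strictly required):

          MyApp.app/                (shown in the Finder as a single icon)
              Contents/
                  Info.plist        (metadata: identifier, version, icon name)
                  MacOS/MyApp       (the actual executable)
                  Resources/        (icons, nibs, localized strings)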

  • waste? (Score:3, Insightful)

    by pholower ( 739868 ) <longwoodtrail@NosPam.yahoo.com> on Saturday April 03, 2004 @01:34PM (#8756398) Homepage Journal
    I like the idea, but I worry about speed regardless of what they say will occur. To me, it would be better to have the package load onto the HDD, and if there are any missing libraries, have it go and fetch them as well. This just seems like a waste of internet traffic to me.

    Isn't this already being done with apt-get? I just think Linux needs a more user-friendly updating service. I hate to say it, but Windows is much better at taking completely computer-stupid people and letting them screw up their own PCs, instead of having to call a family member to do it for them.

    • Re:waste? (Score:5, Informative)

      by JaredOfEuropa ( 526365 ) on Saturday April 03, 2004 @01:37PM (#8756422) Journal
      To me, it would be better to have to package load onto the HDD, and if there are any missing libraries, have that go and fetch them as well.
      That's exactly what is happening: the software is cached. From their website: "I've only got dial-up; can I still use Zero Install? Yes! Run each program you want while on-line and it will be cached. When you're off-line, the cached copy is used automatically."
      • by Adolph_Hitler ( 713286 ) on Saturday April 03, 2004 @01:44PM (#8756473)
        Also consider this: for the average person, not only is this a more secure form of distribution, it's more efficient, it's easier, and for 99% of the files people download it just works. Unless you are going to compile your kernel or do serious changing to your machine, you won't need apt-get. Just to download GAIM, or KWord or whatever, you only really need to drag, drop and run, or even just click and run. I see nothing wrong with this, and you could give the browser enhanced UI features to embed some of the apps into it in the future.
      • Re:waste? (Score:3, Interesting)

        That's exactly what is happening: the software is cached. From their website: "I've only got dial-up; can I still use Zero Install? Yes! Run each program you want while on-line and it will be cached. When you're off-line, the cached copy is used automatically."


        Sounds a lot like Java Web Start to me.
      • Re:waste? (Score:5, Interesting)

        by Lumpy ( 12016 ) on Saturday April 03, 2004 @03:15PM (#8757062) Homepage
        or simply run a STATICALLY compiled binary and not worry about dependencies...

        All my binaries are statically compiled for the downloaders... and I NEVER get a complaint that my apps don't work; in fact, I get comments on how my Linux binaries work every time, no matter what, in stark contrast to most of the other Linux stuff out there.

        the typical response is "your binaries work every time... why can't other OSS developers do that?"
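
        For anyone who hasn't tried it, static linking is a one-flag change (a minimal sketch; myapp.c stands in for your own source):

            # link libc and friends into the binary itself
            gcc -static -o myapp myapp.c

            # verify: ldd should report 'not a dynamic executable'
            ldd ./myapp

        The trade-off is bigger binaries, and no automatic benefit when a shared library gets a security fix.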

    • by Adolph_Hitler ( 713286 ) on Saturday April 03, 2004 @01:41PM (#8756447)
      After you download it, it's cached. Basically you have to download the app anyway to run it. If you downloaded, say, the new version of GAIM it would be fantastic. I'd just drag it from the browser onto my desktop and then click it. Apt-get is for nerds like you. Regular people want to accomplish a task in the least number of steps. If you can bring the task down to two steps, click 'n' run or drag 'n' drop, that is what people want.
  • by tepples ( 727027 ) * <tepplesNO@SPAMgmail.com> on Saturday April 03, 2004 @01:34PM (#8756402) Homepage Journal

    Slashdot has previously covered Rox here [slashdot.org].

    But one thing I wonder about Zero Install: what if you launch an application, it needs a piece that you don't have cached, and the server hosting it is down? Is it possible for a maintainer to unpublish an application?

    • by tal197 ( 144614 ) on Saturday April 03, 2004 @03:55PM (#8757291) Homepage Journal
      But one thing I wonder about Zero Install: what if you launch an application, it needs a piece that you don't have cached, and the server hosting it is down? Is it possible for a maintainer to unpublish an application?

      Zero Install can download from mirrors, peer-to-peer, etc, provided it gets the master index with the GPG signature from the main server.

      If you want to get the master index from a backup server, you need manual intervention (root needs to indicate that the backup server can be trusted).

      However, since the signature part is small (about 1K), a single trusted backup site (debian.org?) could easily host every index in the world. The rest of the data can come through peer-to-peer, etc.
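
      The verification step itself is the standard detached-signature pattern; roughly (file names are hypothetical):

          # check the mirrored index against the signature fetched from the main server
          gpg --verify index.xml.sig index.xml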

  • by gleffler ( 540281 ) * on Saturday April 03, 2004 @01:34PM (#8756403) Journal
    For anybody. It emulates the best aspects of the Mac's "packaging" system (bundles) while also making it easy to get new stuff.

    Hopefully, this takes off in more of the 'newbie oriented' distros so that we can say "Just type cp -r /software/openoffice /usr/software to install" instead of ./configure && make && make install. :)

    I still would like to know how they plan on fixing library dependencies, but ... assuming they get over that, I'll be very happy once this is released.
  • This is why... (Score:5, Insightful)

    by ghettoboy22 ( 723339 ) * <scott.a.johnson@gmail.com> on Saturday April 03, 2004 @01:35PM (#8756410) Homepage
    I *love* my PowerBook G4. Seriously, Apple has had this for years going back to the old System 9, 8, 7, etc.... it's nice to see someone major is finally trying to copy 'ole Steve Jobs's team. If you ever wondered what life would be like without the Windows Registry, this is it.

    • The full name is "Windows Registry Copy Protection and OS Degradation Scheme". It's part of the "Treat all customers like criminals because some are criminals" Initiative.
    • by SuperBanana ( 662181 ) on Saturday April 03, 2004 @02:33PM (#8756789)
      Apple has had this for years going back to the old System 9, 8, 7, etc

      Actually, it hasn't. Ask any Mac pro; applications started putting "library" files into the System Folder (or worse, programs like Norton Utilities insisted on putting libraries into the Extensions folder, which was not what Apple told developers it was for). Apple caved in, and 9.x started sprouting "Application Support" folders, a "Libraries" folder, etc. Developers just couldn't wrap their brains around the single-file, applications-don't-mess-with-the-system-folder model. Oftentimes, commercial programs would blatantly disregard Apple's filesystem guidelines. Extensions had such weird names that Casady & Greene developed an extension manager with a database of all the known files so you could figure out what the hell stuff was.

      While you tout OS X as better than Linux or Windows, as an experienced long-time Mac user I saw OS X as a step down from the old MacOS with regards to filesystem simplicity. Applications now install stuff into zillions of different places. Virtually none of their installers ask if you want to install just for your user (i.e. using your Library, Application etc. folders) or install system-wide (a few, VERY few, do). Application installers that have no business needing my password ask for it; why does Acrobat Reader need sudo to install itself into Applications? Answer: it doesn't, but it's probably saving some prefs file somewhere it shouldn't.

      Even worse... you can install packages using a "package system", but Apple will be damned if they'll give you a way to UNINSTALL a package, system or otherwise. Want to remove all the localization crap you forgot to turn off during system install? You have to download a third-party app to remove almost a gigabyte of files from your system, instead of just going into a "Software" panel and clicking remove. Windows has had that for years, its only flaw being that it calls the developer's uninstall program, which oftentimes doesn't work, especially if you've deleted the app folder but nothing else.

      Another side effect of the multiple-files problem is added complexity; the number of files in the filesystem has ballooned enormously, because instead of an application being one big file with a resource fork, it's now at least 3 folders, and often hundreds (or even thousands) of files. Moving an application used to be easy: you moved one big file, and the Finder just did a straight copy very efficiently. Now it has to copy hundreds of small files, so it takes forever (and amusingly, copying just a bunch of raw non-app files takes about 5 times longer in the Finder than it does via cp or ditto).

      Don't get too uppity about not having a registry. OS X uses a number of preference files, and even though they've changed to XML and the like, users are seeing the same problems as with OS 9: corrupt preference files causing odd behavior. Remove the naughty pref file, and things start working again. There are now third-party utils that specialize in checking these prefs; if they can do it, why can't it be part of the bootup process?

      Oh, and lastly- Apple has made it even more difficult to make a boot disk for your mac to do disk maintenance. It used to be you just copied over your system folder, removed all the extensions, control panels, prefs, etc you knew you didn't need. Now? You need some stupid shareware program to do it, and half of 'em still haven't been updated for 10.3.

      • by jc42 ( 318812 ) on Saturday April 03, 2004 @03:21PM (#8757094) Homepage Journal
        Application installers that have no business needing my password ask for it; why does Acrobat reader need sudo to install itself into Applications? Answer- it doesn't, but it's probably saving some prefs file somewhere it shouldn't.

        Exactly. And you have no way of knowing what it's doing with that password. If you're hooked up to the Net, chances are it's (then or later) being cached somewhere inside apple.com, too. Do you know of a way to convince me otherwise? If not, a sensible person would just assume that the password is now known to Apple.

        Similarly, when I first got my Powerbook, it had to be sent in for repairs after about a week. (The screen wouldn't come to life.) They wanted my password, of course, and I gave it to them. No problem, I thought; when I got it back, I'd just change my password.

        Lotta good that did me. Yeah, I can use my new password when I log in. But nearly everything in the system that asks me for my password will only accept the original one. I've found a few places that packages cache the password, and changed those. It lasts for a while, then one day it wants my original password again. I've found it necessary to keep a record of all the passwords I've used, because I generally have to try them one at a time until I find which one works with a given app.

        Your password is cached all over the place by OSX packages, so the only sensible approach is to assume that it's public knowledge, at least to Apple insiders.

        This is one reason that I'd never use OSX for any sensitive applications. I have to assume, from the way it handles passwords, that OSX systems are open to anyone at Apple, and to anyone able to bribe the right people at Apple, and to any intruder who knows where they are cached on my machine.

        I'd like to be proved wrong. But a mere assertion that I shouldn't worry my little head about it won't convince me. I want proof.

        So far, I've never seen Linux software playing fast and loose with passwords like this. I mean, Mozilla will cache passwords for you, but it asks, you can say "No", and it apparently honors that. And it doesn't ask for local passwords, only those demanded by web sites.

        Also, there are some linux apps that ask you for the root password because they need to run something as root. But you never have to give the root password. You can always kill the app and start it again under sudo. Then it won't know the password, and won't ask for it because it already has the right permissions, so you know the password couldn't be cached.

        • by zhenlin ( 722930 ) on Saturday April 03, 2004 @11:21PM (#8759490)
          Applications do not cache passwords. If you see the nice prompt asking you for an administrative password, it should be coming from the system. (There are ways of verifying this... Set your account to use the Blue colour scheme, set your (real) root account to use the Graphite colour scheme. Any dialogue boxes or windows with Graphite widgets are running as root)

          As for asking for the original password, that is because of Keychain. That one is encrypted with your original password.

          As for apple.com caching the password... Well, it is quite simple to prove/disprove that: put the OS X machine behind a firewall, and log any attempts to connect to a machine in apple.com network.
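
          Roughly (a sketch; en0 is the usual PowerBook interface name, and 17.0.0.0/8 is Apple's netblock):

              # log any traffic from this machine toward apple.com's network
              sudo tcpdump -n -i en0 'dst net 17.0.0.0/8'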
    • Re:This is why... (Score:5, Insightful)

      by erikharrison ( 633719 ) on Saturday April 03, 2004 @02:38PM (#8756820)
      Uh . . . . no.

      Classic Mac OS used the resource fork for storing associated files, but still had an OS-wide location for preference files (MacHD:System Folder:Preferences). Sure, no registry, but frankly, LOTS of OSes don't have a registry.

      Mac OS X has bundles, which resemble the AppDirs that ROX uses a great deal, but OS X got them from NeXT, not OS 9, and NeXT got the idea from RISC OS, which is the OS that ROX is trying to emulate in the first place. Mac OS emulation is the farthest thing from Thomas's mind, I assure you.

      The really interesting technology isn't the AppDirs anyway; it's Zero Install, which allows you to view the internet as a filesystem from which you can directly run applications.
  • by nurb432 ( 527695 ) on Saturday April 03, 2004 @01:37PM (#8756423) Homepage Journal
    This is the true power of Unix + X... running applications remotely from the workstation...

    This aids in system management, resource control, data security, platform independence, .. most everything that a data center does, this improves it..

    It *is* the future... (and ironically the past... remember VT100s and 3270s?) as it's the right way to do computing...
    • by Inspector Lopez ( 466767 ) on Saturday April 03, 2004 @02:00PM (#8756583) Journal
      As long as CPUs are so fast, RAM is so cheap, and disks are big ... and the net is relatively slow, thin clients will have only thin application.

      I *do* remember the good old days of VT100s, and they worked great; the thing that displaced VT100s in our research group was *Macintosh* --- those wascally little SEs and the occasional MacII had such nice software onboard, they were a delight to use. The Macs were in turn partially displaced by DEC RISC machines, which cost more but brought a lot of horsepower to the desktop.

      We used to use a Beowulf in our current project, but the blasted Pentia got so fast there was no point. Our real-time processor now relaxes on a single machine.

      It's not so hard to imagine the pendulum swinging back to thin clients (perhaps in the guise of wireless PDAs, or in a more sinister form via .NET), but there is no need for a thin client to run a word processor or mail client or www browser. Religious wars aside, our desktop software is quite capable, and getting more so.
    • by starseeker ( 141897 ) on Saturday April 03, 2004 @02:30PM (#8756778) Homepage
      I note the rather skeptical responses, and while I agree with you broadly, I think the picture is going to vary slightly.

      As has been pointed out, the primary weakness of a network based thin client system is there exists a single point of failure. So I would propose the following for a corporate computer network:

      Two mainframe systems, in physically remote locations, each completely capable of handling the corporate network. The mainframes will serve as the core of the network, but the thin clients will be slightly more than just monitors.

      A powerful thin client (I suppose thin client might not be the proper term) is what is needed to handle the reality of an iffy network. The thin client needs to be able to function independently in the short term. It needs to be able to hold all of its currently-in-use software and data in memory, and be engineered to make an emergency dump to some local non-volatile memory in case of a power failure. The key benefits to this "thick client" setup are: a) because it is not an independent PC with the ability to boot and load software on its own, it is not a candidate for theft; b) all data is preserved automatically at a central location except in the case of an emergency, and even then it is recoverable; c) software updates only have to be performed in one place to be deployed company-wide; d) maintenance is simpler, since the thin clients can in theory be made without moving parts (i.e. hard drive) if they use solid-state memory for the gig or two of non-volatile emergency storage they will need. They will be more expensive than a true thin client, but I rather suspect in bulk the economics would work, and certainly maintenance costs would provide more than enough incentive.
  • Well duh... (Score:5, Interesting)

    by Enonu ( 129798 ) on Saturday April 03, 2004 @01:39PM (#8756431)
    Sorry folks, we have the technology right now to support multiple versions of libraries at the same time, disk space is no longer an issue, and it just makes logical sense to keep everything related to an application together in a logical unit that can be administered with minimal effort. The /bin, /lib, /usr structure has to go. Applications locking themselves to configuration files scattered across the filesystem has to go. It's simply painful to use, and something like ROX here is the first step in the right direction.

    Not like this step hasn't been taken in the past by multiple other software solutions ...
    • Re:Well duh... (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Saturday April 03, 2004 @01:51PM (#8756518) Journal
      Sorry folks, we have the technology right now to support multiple version of libraries at the same time

      Why would you want to do that?

      On my Fedora box, if I upgrade glibc to fix a bug, I want *all* my applications to benefit.

      Oh, and disk space is not the reason for having shared libraries -- memory usage is.
      • Re:Well duh... (Score:5, Insightful)

        by rusty0101 ( 565565 ) on Saturday April 03, 2004 @02:16PM (#8756691) Homepage Journal
        One reason you might want to support multiple versions of a library is that when a major upgrade to a library occurs, say going from libc2.3 to libc3, backwards compatibility is not assured. If one of the applications you are using cannot use libc3, you as the user get the joy of re-compiling it to see if it will work with the new libc. If it doesn't, having a copy of the old libc2.3 lying around to run that application against would be handy. No?

        Granted as the last application requiring the old library is upgraded to being able to use the new library, the old library should be eliminated, but when a major upgrade breaking backwards compatibility happens, most people do not want to wait days or months for the application they have been using to be upgraded. They usually want to be able to continue to do the work that they need to do.

        Then again, I could be wrong. Perhaps most other people are happy to sit around on their thumbs.

        -Rusty
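
        For what it's worth, ELF sonames already allow exactly this side-by-side arrangement (libfoo is hypothetical):

            $ ls -l /usr/lib/libfoo.so.*
            /usr/lib/libfoo.so.2 -> libfoo.so.2.3.1
            /usr/lib/libfoo.so.3 -> libfoo.so.3.0.0

        An old binary that recorded the soname libfoo.so.2 keeps resolving to the 2.x copy, while newly built applications link against libfoo.so.3.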
    • Re:Well duh... (Score:4, Insightful)

      by lpontiac ( 173839 ) on Saturday April 03, 2004 @02:59PM (#8756966)
      and it just makes logical sense to keep everything related to an application together in a logical unit

      I propose we set aside a location on the system to hold subdirectories each dedicated to a single software package. Let's call it /opt.

    • If every application has its own directory, your PATH will be huge and the system will have to search each one of the PATH entries until it finds your application when you try to run it. How many searches before it finds it? 10, 20, 200?

      Oh, sorry, you *only* use a GUI and so click on the application. Well not everyone solely uses a GUI or wants to go searching through dozens of application directories for the specific binary which runs an application.

    • Re:Well duh... (Score:4, Informative)

      by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Saturday April 03, 2004 @06:02PM (#8758055) Journal
      The /bin, /lib, /usr structure has to go.

      This kind of proposal about scrapping the current directory structure has been discussed ad nauseam on the Filesystem Hierarchy Standard [pathname.com] mailing lists. Here is the Standard Rebuttal against scrapping /bin and /usr/bin:

      With each app in its own directory, your $PATH becomes a mile long, and too difficult to maintain.
      You can't have your cake and eat it too. Some have suggested the use of symbolic links in /bin and /usr/bin, but then you run into this Standard Counterargument:
      Different application packages can have identically-named binaries. Upgraded packages always have the same binary names.
      The best combination seems to be symbolic links to the most recently-installed apps, but overriding your $PATH in ~/.bash_profile for legacy versions.

      The Standard Rebuttal against scrapping /lib:

      Apps which depend on other apps for libraries won't know where to look. This is especially true if each installed version of a required app is stored in its own numbered directory.
      Another argument involves the use of 32-bit vs 64-bit libraries. Best practice seems to be making copies of the most recently installed libs in /lib and /usr/lib, and using environment variables ($LD_LIBRARY_PATH, e.g.) to run older apps.
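
      Concretely, that combination looks something like this (paths are illustrative):

          # each package in its own tree, preferred version symlinked onto $PATH
          ln -s /opt/gimp-2.0/bin/gimp /usr/local/bin/gimp

          # a legacy version run against its own bundled libraries
          LD_LIBRARY_PATH=/opt/gimp-1.2/lib /opt/gimp-1.2/bin/gimp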

      Rebuttals for getting rid of /usr (i.e., having a One (Partition) Size Fits All approach):

      #1: Some boxes have read-only disks for security (CD-ROM firewalls come to mind). Now you can't install new applications.

      #2: You have one 100GB partition and you get a power spike. Now you have to wait for the fsck to finish before you can troubleshoot the damage.
      #3: You're in a diskless environment with centralized, NFS-mounted applications. With no /usr, you have no suitable mount point.
      #3 is especially common in large enterprise and government environments. If you've ever talked to someone who admins 1,000 desktops for their department, you'll know what I mean.

      On the mailing lists, the use of /package (or /pkg) has also been discussed ad nauseam. Keep in mind that the filesystem hierarchy is designed so that non-local (commercial) packages don't step all over each other when installing. Local (enterprise) software installation can happen wherever the hell you want it to, as long as it doesn't have to play nice with COTS software.

      Executive summary: you can run whatever directory structure you want -- I won't stop you. Just expect to hear lots of complaints from your developers and sysadmins. The reason things are the way they are is partially due to industry inertia, but mostly due to the fact that they just work better that way. If you don't like it, go contribute [sourceforge.net].

  • by heyitsme ( 472683 ) on Saturday April 03, 2004 @01:40PM (#8756443) Homepage
    It has been implemented in OS X. This is what happens when you drag a .app file (really a folder; try to cd into one sometime) and copy it to any point on your hard disk (typically /Applications).

    Reminds me of an old joke...

    Microsoft: Where do you want to go today?
    Linux: Where do you want to go tomorrow?
    BSD (in this case, OS X): Are you guys coming or what?!?
  • by MajorDick ( 735308 ) on Saturday April 03, 2004 @01:42PM (#8756451)
    "a way that I believe is exactly what Linux needs to become a serious contender for Joe User's desktop"

    While I appreciate the poster's enthusiasm, this is not a panacea for getting Joe User to put Linux on the desktop. What is, in my opinion, is broad compatibility with both hardware and software. I mean, Joe User (or Joe Six Pack) only cares whether he can do what he needs to with the apps he wants, NOT what someone else tells him is a better application. He wants to play his games, surf the web, doodle with his digital photos and balance his checkbook. Tell me of any GOOD applications the average computer-illiterate user could use to do his checkbook, edit his pictures, etc. that are as brainless as developers make them for Windows/Mac? ZIP. There are GREAT apps for doing all those things, but in general they are for much more sophisticated users. When Joe can go to CompUSA and buy anything he wants (games, tools, etc.) that will run on Linux and has some support number he can call when he breaks shit, THEN Joe will use Linux on the desktop.


    • And those are the world's most popular games. Games are not the major issue. The major issue is being able to download your porn, being able to surf the web, being able to burn pirated software, movies and DVDs, being able to get on AIM or some IM client, and occasionally use a word processor.

      This is what 99% of internet users do. They don't run some esoteric application by Microsoft; 99% of people don't use all the features of Word or Office. Most of them wouldn't know the difference between Word Perfect,
  • Yes (Score:5, Informative)

    by mrsev ( 664367 ) <mrsev&spymac,com> on Saturday April 03, 2004 @01:44PM (#8756471)
    This sounds great. I'm no Linux guru, and the hardest thing I find is to install a programme that requires other files, where one version is required for one app and another version for another. In this age disk space is trivial, and stability and ease of use are much more important. Granted, many people like tinkering with their systems, but for me I just want to get my work done... (and then play games).
  • by Koyaanisqatsi ( 581196 ) on Saturday April 03, 2004 @01:46PM (#8756482)
    Flame as you want, but .Net assemblies not published to the GAC (Global Assembly Cache) are exactly like that: all of the application files are kept under a single directory, and all you need to set up the app is an "xcopy" of its files.

    Delete the directory and the app is gone.

    This is here now, and although .Net still has to catch on on the desktop, it is very much real on the server side. Gotta love it!
    • by FreeLinux ( 555387 ) on Saturday April 03, 2004 @02:34PM (#8756791)
      Microsoft had this in the very beginning. It was called DOS and DOS applications were completely self contained. When an application was installed all of its files remained in the applications own directory. To move an application, even to another PC, you simply copied the directory. To delete the application you simply deleted the directory.

      Then Microsoft got smart (too smart for their own good) and decided it was more "efficient" to use shared libraries and that all such libraries should be kept in the %SYSTEMROOT% folder. This meant that applications stored files in one directory, libraries in the system directory and configuration files who knows where. That's better, isn't it?

      After that Microsoft decided that it was too "troublesome" to have all of these separate configuration text files. They got smart here too (again too smart for their own good) and decided that it would be so much "better" to have all the settings in a single monolithic and monumentally fragile registry. (Watch out Gnome)

      After all that, installing and removing applications became a nightmare. So they decided that it would be best to have a package management system that managed all installations and removals. They established standards that required the proper use of this package management system for an application to be "Windows certified". Unfortunately for them, the package management system isn't so great, especially when it comes to the registry, and while many vendors do obey the "Microsoft standard", many do not. In fact, the worst offender for not properly using the package management system, and thereby polluting PCs with monumental amounts of cruft, is Microsoft themselves.

      So, now Microsoft is trying to implement an "even better" system with their .NET strategy. One that installs applications into their own directory for easy management and removal. A new system that, they conveniently choose to forget, is just like the system they used in 1982! Ooh, ahh. Consider me un-impressed!

  • by mst76 ( 629405 ) on Saturday April 03, 2004 @01:51PM (#8756525)
    Most of the time you're assumed to have root access, especially with rpm and deb. This is supposed to be a multi-user system, right? What if I want to give users the ability to install end-user apps in their own /home to try out? Should I tell them to download the source and tweak the makefiles so make install will behave correctly? Is there no better way to do this?
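
    For autoconf-style packages there is at least a conventional workaround (a sketch; the $HOME/apps prefix is arbitrary):

        ./configure --prefix=$HOME/apps/foo-1.0
        make && make install
        # then put it on the user's own PATH
        export PATH="$HOME/apps/foo-1.0/bin:$PATH"

    But the point stands: nothing packages this up nicely, and rpm/deb mostly assume root.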
  • by hak1du ( 761835 ) on Saturday April 03, 2004 @01:55PM (#8756545) Journal
    Yes, it's nice to include all the dependencies in a single directory. However, there is a reason why not every Gnome desktop accessory includes 500M of Gnome libraries--disk space is cheap, but it isn't that cheap.

    Something like Zero Install should be combined with some form of duplicate file detection or duplicate block detection and sharing. Furthermore, to avoid a lot of tricky bookkeeping, there should be copy-on-write. And that kind of functionality really is best implemented in the file system itself. So, something to think about for the next major release of "ext". (Note that Microsoft is implementing something like this, but they certainly weren't the first to come up with it.)

    Note that the same thing should also happen on downloads: you only download application components you don't already have locally. NFS isn't a good protocol for that, but WebDAV could handle it.
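
    Even without filesystem support, the duplicate-detection half is easy to prototype (a sketch; /apps is a hypothetical software root, and GNU uniq options are assumed):

        # group identical files by checksum
        find /apps -type f -exec md5sum {} + | sort | uniq -w32 -D

    Duplicates found this way could be replaced with hard links; real copy-on-write, as noted above, needs help from the filesystem itself.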
  • Like the DOS days (Score:4, Insightful)

    by superpulpsicle ( 533373 ) on Saturday April 03, 2004 @02:06PM (#8756621)
    Everything was just a matter of folder installs during the DOS days. You copy a binary and run a binary and delete a binary.

    Believe it or not, part of the reason why M$ went with the setup.exe installation was to make software harder to pass around, by requiring the setup binaries.

    Funny how things come around full circle.
  • by bfree ( 113420 ) on Saturday April 03, 2004 @02:08PM (#8756632)
    In the last few months klik [knoppix.net] came into being. klik is a point and click software store for Knoppix which uses AppDir (quoting from the architecture description):
    Mainly a philosophy about making each app package "self contained" (at least relative to some defined base system, Knoppix in our case).
    If you have a recent (say from last November or so) version of Knoppix fire it up and give it a go! You can even install software while running from the liveCD and retain it in a persistent home.
  • by pigpogm ( 70382 ) <michael@pigpog.com> on Saturday April 03, 2004 @02:08PM (#8756638) Homepage
    This sounds to me as though it has some similarities to the way the old Acorn Archimedes used to work (What? Oh, it was quite big over here in the UK ;)

    An 'application' looked like a single file that started with a '!'. It ran as though it was one file, copied and moved as though it was one file. If you used a modifier to open it (Ctrl-click, or something similar), though, it actually opened up as a folder. The app was really made of a number of files - the icon that the application/folder would have, the actual programs, any config files, a script that was run when the program was launched, and another script that would be run as soon as the OS 'saw' the app.

    Part of the config would tell the OS what file types the app could handle, so as long as the app had been 'seen' (i.e., its parent folder had been opened), the filetypes would be recognised until the next reboot.
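
    From memory, such an application directory looked roughly like this (names recalled from RISC OS, so treat the details as approximate):

        !MyApp/
            !Boot        (run when the Filer first 'sees' the app)
            !Run         (run when the app is double-clicked)
            !RunImage    (the actual program)
            !Sprites     (the icon shown for the directory)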
  • by Jameth ( 664111 ) on Saturday April 03, 2004 @02:19PM (#8756712)
    With fully self-contained apps, we could do away with those silly shared libraries, and we could also just pitch reusing simple programs. Maybe, maybe if we ditched the fifo, we would have finally removed all the flaws in UNIX!
  • by tal197 ( 144614 ) on Saturday April 03, 2004 @02:36PM (#8756807) Homepage Journal
    I'm the author of Zero Install (and much of ROX) so I'd better clear up a few points here.

    The main one is that there are actually two installation systems being discussed in the article:

    1. ROX uses application directories (bundles). That means that instead of downloading gimp.tgz and then copying the files inside it all over the place (/usr/bin, /usr/share, etc), they stay in a single directory and you access them from there. That allows drag-and-drop installing, and uninstalling by deleting the directory.
    2. Zero Install is a caching network filesystem, where all software is available at a fixed, globally unique, location (like web pages).

    ROX application directories can be made available via Zero Install. In that case, running the application is a lot like running a program from a network share (but more aggressively cached). Or, you can DnD them onto your local disk manually (without Zero Install).

    You can also use Zero Install for non-ROX type applications.

    Secondly, when we say that application directories are self-contained, we mean that a single .tgz download corresponds to a single installed directory. Application directories can (and do) still depend on shared libraries (possibly other application directories).

    Without Zero Install, after installing an application by drag-and-drop, running it may tell you that you need to install some other library before it will work.

    With Zero Install, the application just tries to access it from its fixed location (URI) and it gets fetched.
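
    To make the fixed-location idea concrete: everything lives under the /uri/0install mount, so a script or a library path can name it directly (the site and app below are made up):

        # first access fetches and caches; later accesses are local
        /uri/0install/example.org/apps/SomeApp/AppRun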

  • by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Saturday April 03, 2004 @02:39PM (#8756835) Homepage Journal
    Like so many package management models, this one has benefits (simplicity, package isolation, multiple package versions coexisting) and drawbacks, the biggest one being package isolation, which is also an obvious benefit.

    The trouble starts if you have some kind of base package which is extensible via some kind of plug-in architecture, traditionally implemented with DLLs under Windows, or shared object library repositories under Unix and variants. Do the plugins form their own "application", or are they part of the application which they extend? What if I want to manage groups of plugins from a common source, independent of the applications extended? Do all applications have to be so isolated that they can only rely on a common base operating system that can't be extended by third parties (which would then be locked into their own application spaces)? What about multiple users sharing the same applications: will their saved files be intermingled?

    Blech. Sounds like the cure is worse than the disease.

    But, nevertheless, the idea of organizing independent applications in a convenient hierarchy is a desirable one. The trouble is that the traditional filesystem only offers a single hierarchy in which to organize them, and so we struggle to determine the best hierarchy to use. We really need to organize sets of files that comprise a related unit ("file set", if you will, and "application file set" for the specific case of end-user applications) in multiple hierarchies: a new one created for the file set being added, and existing ones that the file set affects.

    "Symlinks!"

    What's that?

    "Symlinks!"

    Well, O.K., symlinks kind of solve this problem: pick a canonical location in the file system for your file set and symlink secondary links to the appropriate files. This is a good idea, and has been used for ages to separate the reference to a file in the filesystem from where it is actually stored, but there are drawbacks:

    1. Symlinks are one-way. Typically you'll have an application directory full of files and subdirectories, and a bunch of links into that directory tree. What happens if you move or delete entries? Oh, woe to the one who has broken symlinks.

    2. The context in which the symlink is interpreted may restrict where the target may be. Consider startup scripts added under /etc/rc.d/... They don't do much good if they link to files in filesystems that haven't yet been mounted. Some restriction on where things have to be canonically installed, depending on how and when they will be used, is apparent. Fortunately, we generally don't have complicated hierarchies of what parts of the filesystem are mounted, but rather just a few: boot, locally mounted, remotely mounted. So, this problem is manageable: we can imagine /opt and /usr/opt, the former available on the root filesystem.

    3. Application interaction. The trouble with having one application extend the capabilities of another (and the base O/S can be considered "one application" from the perspective of third-party software providers, other than the O/S provider) is that adding, moving, or removing files can or should affect running applications. Ideally, an action which would leave a symlink dangling should be picked up by any running applications that might care, and either delayed until the application can cope, or vetoed. (And, I suppose, --force and --async are your friends here.) Current practice in most package managers is to have pre-install, post-install, pre-deinstall, and post-deinstall scripts that try to deal with this inter-application issue. The problem is twofold: (1) the things necessary to be communicated to other applications are varied, and (2) the manner in which they are communicated differs between applications (never mind different versions of the same application). Ideally, the inter-application interface that deals with new, removed, or relocated external files should be (a) thin, and (b) supported by t

  • OS X package? (Score:5, Interesting)

    by zpok ( 604055 ) on Saturday April 03, 2004 @03:01PM (#8756975) Homepage
    So if I'm right, part of this article is about something akin to OS X packages? You install an application by dragging and dropping it somewhere (preferably the Applications directory) and uninstall by unceremoniously dropping it in the waste basket.

    And if I get it, just like in OS X, this doesn't mean your application can't use or install other resources in the Library.

    Pretty cool, that's 90% of my Linux gripes gone in one big swipe. I hope this can become mainstream. It also means I can stop posting on the importance of simple installers ;-)
  • by rollingcalf ( 605357 ) on Saturday April 03, 2004 @03:07PM (#8757008)
    I like the concept of keeping all the files for each app fully contained in its own directory, even if it means some libraries will be redundantly duplicated across the disk. Disks nowadays have huge gobs of space and are cheap.

    However, memory isn't so abundant. When loading up an app, is the system intelligent enough to recognize that a given library was already loaded into memory from a different directory, and therefore it won't load another copy of the same library?
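
    On Linux, at least, you can check this directly: mappings are shared per file (per inode), so the same library living at two different paths really is loaded twice (myapp is a stand-in name):

        # see which copies of a library a running process has mapped
        grep libpng /proc/$(pidof myapp)/maps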
    • Probably not, but I recall reading that modern virtual memory systems are so good, they reduce the actual benefits of dynamic libraries down to almost nil.

      I think future versions of Windows will know how to scan the disk periodically, find redundant files, and essentially link them together automatically. That's pretty cool - you deliver your app with FOO.DLL version whatever and drop it in your app's directory. If someone else installs a FOO.DLL in their app's directory that matches the exact same bits, the sy

      • I'd like to see all apps ship with whatever library they came with, located in the apps directory. When an app launches, have the OS check to see if the libraries required are loaded. If they are, don't load the local libraries, just the unique ones. Make it a run-time option for all apps of something like "Always load local libraries" in the off case that there should be a problem.

        This way you could still have systemwide shared libraries that are updatable, but it wouldn't be mandatory to use them if i
  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Saturday April 03, 2004 @06:04PM (#8758070) Journal
    Seriously, this rocks. Yeah, yeah, sure. Other projects have done things like this before. But I love this idea even more than Gentoo's system, which also rocks. So I read some of the site to try to answer some of my own first questions.

    Q. Do I have to add a bunch of crap to my $PATH?
    A. No, you just use a shell that is application directory aware, and it will find the binary just fine if the application directory is in a directory in $PATH.

    Q. Will it let me recompile critical applications, either to patch them or optimize them?
    A. Sure. Keep three different versions of Apache around: one with mod_perl, one with mod_rewrite, another with mod_php. Optimize for your new Sexium X CPU. Turn on full foo support, even though it's not recommended!

    Q. What about apps with hardcoded pathnames?
    A. Edit and recompile. HAND.

    Q. What about libraries?
    A. (From this page [sourceforge.net] on the ROX Application directory system.) Applications link to libraries in /uri/0install. If the required version isn't there, then instead of reporting an error (as traditional applications do), they run 0refresh. Software can be uncached when it hasn't been accessed for a long time (eg, months or years). If it's needed again, it gets refetched.

    Q. What about versioning?
    A. You can keep different versions of an application around in different directories. I couldn't find any information regarding library versioning. Hopefully libraries in /uri/0install have directories by major version number, and ROX applications are linked correctly. Prepare to have much fun with compiler and linker flags finding all your include files and libraries when you convert your application to ROX.

    Q. DND Saving? What's that?
    A. ROX-aware apps support dragging files from a save box to a directory in a file browser to save. Finally, someone does this right.
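
    For reference, a ROX application directory is just a directory the filer knows how to launch; the key piece is an executable AppRun at the top level (layout sketched from the ROX docs, so details may be off):

        Edit/
            AppRun         (executable; run when the directory is 'opened')
            AppIcon.xpm    (icon the filer displays for the directory)
            Help/          (optional documentation)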
  • by alizard ( 107678 ) <alizard&ecis,com> on Saturday April 03, 2004 @07:11PM (#8758446) Homepage
    Combine this with a way to read Windows peripheral drivers and this could solve Linux's worst usability problems. Why not?

    The big problems are to make it possible for an average user to install and deinstall first applications, then peripherals.

    In general, any OS is going to need the same kind of information from any class of peripherals. Why can't someone write software to decode the Windows driver information formats and turn the information into something that can be used to configure Linux to use these peripherals?

    If someone plugs a USB scanner or digital camera or printer in, why shouldn't Linux ask for, first a native Linux driver, and if this isn't available, a Windows driver disk?

    Wouldn't it be nice to be able to buy peripherals based on price and performance and not have to worry if it's usable with Linux or not?

    Wouldn't it be easier to write a translation application or several than for the Open Source community to write thousands of drivers individually and for the rest of us to attempt to find them and then try to figure out if that driver will actually work with the distro one is running?

  • encaps (Score:3, Insightful)

    by menscher ( 597856 ) <menscher+slashdot@u i u c . e du> on Saturday April 03, 2004 @07:47PM (#8758593) Homepage Journal
    Whoever thought this was new has obviously never heard of encaps. Basically the same idea, but it's been around for about 5 years longer. Look at www.encap.org [encap.org] for starters. (I'm not going to write a lot since nobody will read this anyway.)
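
    The encap pattern in a nutshell: install each package into its own tree, then project it into /usr/local with symlinks. GNU Stow works the same way, so a sketch using it (paths are illustrative):

        ./configure --prefix=/usr/local/stow/foo-1.0 && make && make install
        cd /usr/local/stow && stow foo-1.0    # symlinks foo-1.0 into /usr/local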
