Linux Software

Designing Good Linux Applications 209

An Anonymous Coward writes: "A guy from IBM's Linux Impact Team in Brazil has written a guest column on Linux and Main describing how applications should integrate with Linux. It's Red Hat-centric, but there is a lot of material about the FHS and LSB that most users probably don't know."
This discussion has been archived. No new comments can be posted.


  • by WIAKywbfatw ( 307557 ) on Monday March 25, 2002 @05:21AM (#3219989) Journal
    ...'cos the common abbreviation for a Compaq Linux Impact Team would be interesting.

    (Apologies in advance to all /. readers who have no Y chromosome and/or who don't appreciate South Park-style humour.)
  • First of all, (Score:4, Insightful)

    by ultrapenguin ( 2643 ) on Monday March 25, 2002 @05:22AM (#3219994)
    before going off to design NEW Linux applications,
    PLEASE take the time to DEBUG the current ones.
    The collection of half-abandoned software that has tons of bugs and that nobody uses (perhaps because of those bugs) is absolutely huge.
    • Like maybe the HTML used for the article. It renders horribly on IE 5.5.
    • Maybe, if you have time, you can read the article.
    • Re:First of all, (Score:5, Interesting)

      by ianezz ( 31449 ) on Monday March 25, 2002 @06:44AM (#3220216) Homepage
      As much as I share your desire, I think there is something deeper going on:

      IMHO, to attract OSS developers, a piece of software has to be:

      • Useful to the developer, or at least it should offer the prospect of being useful in the near future. Otherwise there is very little motivation to maintain/debug software (sorry, I don't buy at all the idea that an OSS developer puts his time and brain into something exclusively for the good of humanity or of users).
      • Easy to understand, extend and debug: if it isn't easy to grasp the whole picture, or at least the picture of a whole subsystem, OSS developers will leave the project in frustration after a while and start their own. The fact that large (successful) projects like Gnome, KDE, Mozilla and OpenOffice are divided into several smaller components, and that the Linux kernel itself, although monolithic, is divided into several subsystems, should tell us something on the subject.
      • Well documented (for developers): because it's hard to grasp the big picture only by looking at the sources when the codebase is large: you end up seeing a lot of trees but losing yourself in the forest. Sources tell a developer how something is implemented and how it is supposed to be extended, but usually they say very little about why things have been implemented that way. Intelligent comments in the code are good, but when a concept spans several source files, a README on the subject or a tutorial is definitely needed.

      In any case, I don't pretend that these are anything more than a few rules of thumb, but in the end I'm sure that, for OSS software with the characteristics above, developers willing to do maintenance will show up by themselves without anyone needing to preach at them.

      • Re:First of all, (Score:2, Insightful)

        by winchester ( 265873 )
        Well documented (for developers): because it's hard to grasp the big picture only by looking at the sources when the codebase is large: you end up seeing a lot of trees but losing yourself in the forest. Sources tell a developer how something is implemented and how it is supposed to be extended, but usually they say very little about why things have been implemented that way. Intelligent comments in the code are good, but when a concept spans several source files, a README on the subject or a tutorial is definitely needed.

        I always thought design documents were supposed to tell me this. I guess I must have been building too much software in corporate environments.

        On a more serious note, I see a disturbing lack of design documentation in open source software. This is, in my opinion, one area where open source definitely should improve, together with project management. But that would make OSS development a lot more formal, and a lot of people probably do not want that. Choices, choices.

    • Re:First of all, (Score:4, Insightful)

      by CanadaDave ( 544515 ) on Monday March 25, 2002 @06:50AM (#3220226) Homepage
      Programs that were abandoned were abandoned for a reason. Either there were too many bugs, the design was poor, or there is just no demand for them. There's no sense in working on an application just because it doesn't work. It's the natural selection process of Linux programs: the strongest survive. If a program is buggy and people really want to see it work, it will get some attention eventually. Linux is a perfect supply-and-demand scenario in most cases. When developers want Linux to do something, they just write an application. The advantage over Windows, of course, is that other people usually come in to help out, and the code is all over the internet.

      There are more and more stable applications out there now, however. Take Mozilla for example: the long-awaited 1.0.0 should be out in a month or so. Or XMMS, an MP3 player that is as good as they get (thanks of course to huge demand for a good MP3 player), OpenOffice.org, which is slowly creeping towards its 1.0 release and beyond, and KDE3/KOffice (KOffice doesn't have many developers, partly due to low demand, but I think that will change soon). Things have really improved in the last year, and 2002 will be a big year as well.

  • by Bronster ( 13157 ) <slashdot@brong.net> on Monday March 25, 2002 @05:29AM (#3220017) Homepage
    While he doesn't mention Debian [debian.org] at all, it's clear that the article is strong on packaging. I actually prefer Debian's approach, having a list of sources from which you obtain software, and providing search tools for that list.

    The other important thing is that programs often don't work very nicely with each other, or need certain versions to work. This is where having a central system for controlling dependencies is rather important. I don't actually think Debian goes far enough at the moment (not really handling Recommends with apt), but it's getting there.

    The other important part of packaging is handling upgrades automatically. Packages have security problems, they have new features added. If you have to work out (a couple of months later) which --gnu-long-opts-enable --with-features --without-bugs you had to put on the ./configure command line to get a build that did what you wanted, you're likely to put off upgrading.

    # echo "http://debian.brong.net/personal personal main" >> /etc/apt/sources.lists
    # apt-get update
    # apt-get install bron-config

    Whee ;)

    (note - that URL doesn't exist yet, but it's my plan for the future).

    (note:2 - no ssh private keys in that ;)
    • The other important thing is that programs often don't work very nicely with each other, or need certain versions to work. This is where having a central system for controlling dependencies is rather important.

      You mean, something like a Registry? ;)
    • Anyone running Red Hat 7.2 or many other RPM based distributions can easily install apt (or a similar tool, like urpmi, tho I prefer apt) to do the same thing.

      The advantage there is that RPM is a standard - currently the older RPM (version 3) is included in the Linux Standard Base, but once Maximum RPM is updated for RPM 4, it's extremely likely that RPM 4 will become the standard.

      If you're using Red Hat I highly recommend installing it.

      rpm -Uvh http://enigma.freshrpms.net/pub/apt/apt-0.3.19cnc55-fr7.i386.rpm

      apt-get check
      apt-get update
      apt-get install

    • While he doesn't mention Debian at all, it's clear that the article is strong on packaging. I actually prefer Debian's approach, having a list of sources from which you obtain software, and providing search tools for that list.

      The guy who wrote that doc notes that he works for IBM.
      His intention was clearly to make straightforward points about "how to do this", from a basic standpoint. That's why everything is presented in a very Red Hat-like way, probably targeting newbies.
      Linux experts won't have any problem assimilating his guide and adapting it to their needs.
  • by Osty ( 16825 ) on Monday March 25, 2002 @05:35AM (#3220034)

    From the article:

    /usr/local, /opt

    These are obsolete folders. When UNIX didn't have a package system (like RPM), sysadmins needed to separate an optional (or local) application from the main OS. These were the directories used for that.


    I understand that this is directly from the FHS, and not some evil concoction from the mind of the author, but dammit, I think it's wrong. Perhaps /usr/local is obsolete with respect to package managers, and that makes some sense (because the package manager should handle proper management of placed files, though in practice that's not always the case), but as long as open source is around, there will always be software that is compiled rather than installed through a package manager. There will also always be applications that are not distributed in your package format of choice (as long as there is more than one package management system, this will always hold true). In these cases, it's still a good idea to keep around /usr/local and /opt. Personally, I'll have /usr/local on my systems for a long time to come, because I prefer to use the Encap [encap.org] management system.
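
    Just to sketch the Encap idea (the package name and version here are made up, and Encap's own tools can automate the linking step): each package gets its own tree under /usr/local/encap, and only symlinks land in the shared directories.

    ./configure --prefix=/usr/local/encap/fooapp-1.0
    make && make install
    ln -s /usr/local/encap/fooapp-1.0/bin/fooapp /usr/local/bin/fooapp

    Removing the package later is just a matter of deleting the links and the fooapp-1.0 directory.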

    • by Tet ( 2721 ) <.ku.oc.enydartsa. .ta. .todhsals.> on Monday March 25, 2002 @05:58AM (#3220097) Homepage Journal
      I understand that this is directly from the FHS, and not some evil concoction from the mind of the author, but dammit, I think it's wrong.

      Actually, no. It is from the diseased mind of the author of the article. He first cites the FHS, and explains how good it is to have a standard like that, and then proceeds to ignore everything it says. /usr/local is explicitly reserved for local use, and therefore no package should *ever* install itself there (my /usr/local, for example, was NFS mounted, and RPMs that tried to install there would fail because root didn't have write access to it). So far, so good, and we're in agreement with the article. But then he goes on to say that /opt should never be used. What? According to the FHS, /opt is exactly where IBM should be installing stuff. Quite how he's decided that the two directories are obsolete is beyond me. Both have well-defined and useful purposes, both in common usage and in the latest FHS spec (see http://www.pathname.com/fhs/ [pathname.com]). I'm afraid IBM have just lost a lot of respect from me for this...

      • Actually, no. It is from the diseased mind of the author of the article.

        My bad, then. I'm not 100% familiar with the FHS myself, so I made the (poor) assumption that when the author said that's what the FHS defines, he was speaking authoritatively. Apparently not. If slashcode allowed editing of comments, I'd fix this assumption.

      • The latest version of the IBM JRE (v1.3) installs in /opt. So they don't seem to take his ideas too seriously.
    • by ggeens ( 53767 )

      I understand that this is directly from the FHS.

      Not true. This is what the FHS [linuxdoc.org] says about /usr/local:

      The place for locally installed software and other files. Distributions may not install anything in here. It is reserved solely for the use of the local administrator. This way he can be absolutely certain that no updates or upgrades to his distribution will overwrite any extra software he has installed locally.

      /opt is not mentioned as far as I can see. I remember reading that it was deprecated.

      /usr/local is not obsolete, and won't be. The only rule is that a package manager (dpkg, rpm,...) should never touch that directory (beyond creating it on a new install).

      • by cy ( 22200 )
        /opt is not mentioned as far as I can see. I remember reading that it was deprecated.

        /opt is not deprecated. In fact, it is required that LSB-compliant applications be installed under /opt. You can download the latest version of the FHS here [pathname.com] and the LSB specification here [linuxbase.org].

      • /opt is in FHS (Score:5, Informative)

        by Skapare ( 16644 ) on Monday March 25, 2002 @07:35AM (#3220307) Homepage

        /opt is in FHS 2.2 [pathname.com] in section 3.12. It begins:

        3.12.1 Purpose

        /opt is reserved for the installation of add-on application software packages.

        A package to be installed in /opt must locate its static files in a separate /opt/<package> directory tree, where <package> is a name that describes the software package.

        Doesn't look very deprecated to me. I think the problem is your FHS link isn't really the FHS; it is the SAG (Systems Administrator Guide) [linuxdoc.org], which in section 4.1 [linuxdoc.org] clearly says it is loosely based on the FHS.

        As for /usr/local, I do agree it should be off-limits to the distribution (besides setting it up if not already present). And packages in the package format of the distribution (e.g. RPM for Redhat, Mandrake, SuSE, etc ... DEB for Debian and any like it ... TGZ for Slackware ... and so on) really should stay out of /usr/local. What /usr/local should be is whatever local policy says it is (the FHS doesn't put it this way). Packages that the administrator really wants to keep separate from the package management system, stuff compiled from source, stuff locally developed: all of it is eligible to be in /usr/local. My guess is the author of the article has no experience doing system administration combined with a decision-making role where he might have to choose to do something slightly different than what everyone else does.
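
        To make that concrete, an FHS-style layout for a hypothetical add-on package looks roughly like this (the name is made up; host-specific config goes under /etc/opt and variable data under /var/opt per the FHS):

        /opt/fooapp/bin/fooapp -- static binaries
        /opt/fooapp/lib/ -- static libraries and support files
        /etc/opt/fooapp/ -- host-specific configuration
        /var/opt/fooapp/ -- variable data (logs, spool, etc.)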

        • The full quote goes something like this...

          /opt is reserved for the installation of add-on application software packages.
          (snip)
          Distributions may install software in /opt, but must not modify or delete software installed by the local system administrator without the assent of the local system administrator.

          This strongly discourages distribution use of /opt, as they must ask for specific permission of the local sysadmin to install files there.

          Since almost all software could be part of a distribution, and Unix has traditionally sorted its files by type rather than by application, /opt use by packages is logically quite rare (and in my own opinion rightly so).
          • It depends on the package. If you need multiple versions of the same package to be present, then /opt is an advantage. But I do agree the distribution (e.g. Redhat, Debian, whatever) should not put stuff there (setting it up empty is fine). However, a package itself may need to be there for some reason, such as being able to find version-specific resources based on which version was executed. In this case a script in /usr/bin to run the package might be wise. The UNIX tradition of separating files by type and usage works in most cases, and has advantages for the sysadmin (like making common files shared over a network, platform-specific files grouped by platform, and machine-specific configurations distinct for each machine). But that isn't 100%, so flexibility is needed. A package should avoid /opt unless it is really needed.

        • /opt considered evil (Score:2, Interesting)

          by BetaJim ( 140649 )
          :) I really wish that /opt didn't exist.


          When trying to partition across different mount points, /opt prevents using a small / partition. Usually, what I do is postpone installing some software (KDE) until I can ln -s /opt /usr/opt. /opt should really be one more level down from /. At the very least, I think that /usr/opt is a better place for this type of directory. Since /usr is usually allocated a lot of space (and / is kept small), it makes more sense to have opt under /usr.


          Hopefully, the folks in charge of the FHS will consider this.

          • I do "ln -s usr/opt /opt". Maybe that's what you meant. But I also do it before things are installed, so I don't have to skip by package. OTOH, I pre-install Slackware to a single partition under chroot first, to get the file tree as "installed". Then to install a new machine I boot it with my rescue CD, dd the drive to zero (personal preference, but not really needed), partition, format, mount, replicate the file tree, run lilo with -r, remove CDROM, and reboot. It's all scripted and takes about 4 minutes over 100 mbps ethernet for a server (no X) setup, or 9 minutes for a workstation setup (with X, Gnome, KDE, and the works). The tree already includes all my general local changes, and the script also hunts for host specific changes.

    • but as long as open source is around, there will always be software that is compiled rather than installed through a package manager.

      Source code should be installed through a package manager. If you're a systems administrator and don't know how to package applications, you need to learn, because you need it to do your job.

      If you have the brains to compile from source, you have the brains to make a source package. I'm tired of inheriting somebody's backyard Apache install, with a bunch of forced packages and non-packaged apps. I can't repeat that install on other systems (especially annoying when testing), the install options used aren't documented, and as the author didn't include an `uninstall' target in his Makefile, I can't uninstall it properly (unless I use something like stow, but in that case I may as well package the goddamned app).

      Because there are missed dependencies, I find out that something needs something else when it breaks, rather than before I install it. How it breaks is different with each app. The same goes for finding out whether an app is installed at all, and how various files on the system got there. In other words, non-packaged systems are an absolute mess and I have little time for them.

      Learn to package. It's simple, and you and the machines you will manage will be happier for it.
      • dependency hell (Score:3, Interesting)

        by Skapare ( 16644 )

        After using many versions of Slackware, I finally tried Redhat at version 5.1. Actually I had tried it at a way earlier version and it never successfully installed. But 5.1 worked OK. The reason I tried it was I bought a Sun Sparc 5 and wanted to try Linux on it. Redhat seemed to be OK, so I later tried it on a couple other i386 systems, and that was working OK ... for a while. As it turns out, I needed to make upgrades before RPMs became available (see next paragraph). I also needed to make some changes in how things were built. The RPM system started getting out of sync with what was actually installed. The system ran just fine, but soon it got to a point where some packages I was installing with RPM would not install because the RPM database thought things were not installed which actually were (but weren't installed from RPM, so I can understand why it didn't know this). So I ended up having to do forced installs. And that ended up making it more out of sync. By the time I had gotten to Redhat version 6.0, I was getting fed up with it. I switched back to Slackware (and Splack for Sun Sparc eventually came out and I use that, too) and am happy again, with well running systems. And I am now exploring LFS [linuxfromscratch.org].

        You say the system administrator should know how to package applications? Why the system administrator? I'd have thought you'd expect the programmer to do that. If I get some package which is just a TGZ source file tree (because the developer was writing good portable code, but not using Linux to develop on), why should I, in the system administrator role, have to be the one to make a package out of it? I'll agree it doesn't take more brains than needed to properly install the majority of source code, but I won't agree that it is easy (in terms of time spent) to do. At least I have the brains to actually check the requirements of what a given package I'm compiling needs, and make sure it is there by the time it is actually needed. The dependency may not be needed until it is run, so I have the flexibility of installing in whatever order I like. Also, some "dependencies" are optional, and don't need to exist unless a feature is to be used that needs them. For example, if I'm not using LDAP for web site user logins, why would I need to make sure LDAP is installed, if the module that would otherwise use it is smart enough to work right when I'm not using LDAP?

        • Re:dependency hell (Score:5, Interesting)

          by Nailer ( 69468 ) on Monday March 25, 2002 @08:56AM (#3220511)
          The system ran just fine, but soon it got to a point where some packages I was installing with RPM would not install because the RPM database thought things were not installed which actually were (but weren't installed from RPM, so I can understand why it didn't know this). So I ended up having to do forced installs.

          That's not the solution to the problem. Any management system ceases to be effective as soon as it ceases to be ubiquitous. If your Apache is locally built, and you made the mistake of not packaging it, then you've nullified the effectiveness of the package manager for anything which touches Apache.

          You say the system administrator should know how to package applications? Why the system administrator? I'd have thought you'd have expected the programmer to do that.

          Good point - ideally the programmer should, but it's a simple enough thing for sysadmins to learn if they do encounter an unpackaged app.

          Have you tried making RPMs? I'm not a programmer by any means but it's amazingly simple. Check www.freshrpms.net for a few good tutorials.

          Also, some "dependencies" are option, and don't need to exist unless a feature is to be used that needs it. For example, if I'm not using LDAP for web site user logins, why would I need to make sure LDAP is installed if some module that would otherwise use it is smart enough to work right when I'm not using LDAP.

          Another good point. This should be handled by a system similar to deb's excellent required / suggested / recommended dependency scheme, which could fairly easily be ported to RPM, from what I understand of it.

          Finding out a dependency exists when something breaks is no way to manage a system. Knowing what software has been installed on a machine is vital to maintaining the security of your machines, and having proper uninstalls stops your hard disk from filling with crap. And there's a stack of other benefits.

          I find most people who dislike RPM haven't used the system. It's very much like building an app by hand. Inside the RPM itself is the original tarball of the app (plus maybe a couple of patches) and the spec file, which is comprised of:
          • Metadata, like the name of the app, version, package version, app description, group, copyright
          • Instructions on how to patch, configure, compile, install, and uninstall the software (with extra nifty stuff, like triggers for when other software is installed, able to be added at your own discretion).

          It's pretty much the same as if you'd compiled the app without a package manager. RPM just standardizes your build process. You can easily rebuild a source RPM for your local architecture, and RPM will take compiler flags for your own custom configuration options. I like compiling a lot of apps from source too: I just take a few extra moments to do it in a standardized fashion. This pays off repeatedly when I'm administering the machine in future (or if I need to repeat this work on another machine).
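
          Just to illustrate, a bare-bones spec file looks something like this (the package name, version and file list are made up, not taken from any real app):

          Summary: A hypothetical example application
          Name: fooapp
          Version: 1.0
          Release: 1
          License: GPL
          Group: Applications/System
          Source0: fooapp-1.0.tar.gz
          BuildRoot: %{_tmppath}/%{name}-root

          %description
          Example package, only here to show the shape of a spec file.

          %prep
          %setup -q

          %build
          ./configure --prefix=/usr
          make

          %install
          make install DESTDIR=$RPM_BUILD_ROOT

          %files
          /usr/bin/fooapp
          /usr/share/man/man1/fooapp.1*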

          • Re:dependency hell (Score:5, Insightful)

            by Skapare ( 16644 ) on Monday March 25, 2002 @10:21AM (#3220813) Homepage

            It's the creation of the spec file that's a chore. I have to know what dependencies the package has in order to make it. If I know that already, such as by RTFMing the original source package docs, then I know all I need to know to manage it without RPM. I still see making an RPM here as a redundant step.

            I do some programming, but I still don't RPM-ify those programs ... yet. But when someone comes up with an "autospec"/"autorpm" program which figures out everything needed to make the RPM file, so that making it becomes as trivial as installing it, I might be more interested. Right now I'll stick with "./configure" and "make install", which work just fine for me.

            • It's the creation of the spec file that's a chore. I have to know what dependencies the package has in order to make it. If I know that already, such as by RTFMing the original source package docs, then I know all I need to know to manage it without RPM.

              Well, then you're not a system administrator. You're some guy who may (or may not) administer a few Unix boxes.

              For a real sysadmin, most of the work goes into standardization and documentation. She's working for a company that loses valuable time and money when its systems go down, and it loses vital flexibility if it's not able to replace the sysadmin at a moment's notice (like when the sysadmin gets hit by a bus). She recognizes this, and makes every effort to make herself replaceable.

              In the real world, the very worst sysadmins are always the "irreplaceable" ones -- the ones with so much specialized knowledge that only they have it. The horrible sysadmins are the ones who can't be bothered to keep a standardized list of everything installed on the systems, and the prerequisites for each of those installations. That's not administration, it's voodoo. If you work at a company with an admin like that, get him removed from his job, immediately. If you are an admin like that, grow up, immediately.
              • I am a system administrator, and I do keep things standardized and documented. I've been doing it since long before Linux (and therefore long before RPM) even existed. I've been doing it since before SunOS became Solaris. The definition of being a system administrator is not Linux-specific. Although I now do mostly Linux, it's most definitely not RPM-based. Just because I don't do it the way you like it done doesn't mean it doesn't accomplish the task.

            • I still see making an RPM here as a redundant step.

              This is useful because it allows people installing that package to:

              Install in a uniform, non-interactive way. This way you can install your package as part of an automated update or rollout to your machines. At my workplace, `apt-get install cybersource-workstation' pulls down every RPM package needed to do work on a cyber workstation, plus config files for printers and similar items, and installs a couple of hundred pieces of software automatically across each machine. Doing this without packaging is difficult.

              Intelligently deal with configuration files during upgrades

              Install, uninstall, and more importantly be queried using the same mechanism, so other admins know what you've done (this could be achieved with a lot of documentation instead, but you'd spend more time documenting the machine than adminning it).

              Uninstall the package cleanly (make uninstall is unfortunately rare)

              But when someone comes up with an "autospec"/"autorpm" program which figures out everything needed to make the RPM file, so that making it becomes as trivial as installing it, I might be more interested.

              It's nice that you're open-minded. RPM pretty much comes with something like that already, which automatically adds the libraries an application relies on to its dependencies when creating the package. Besides that, most apps generally only have a couple of dependencies anyway, and they're quite simple ("my printing config needs lpr installed; what package owns lpr? add that package to the dependencies list" - it's pretty easy).

              • Install in a uniform, non-interactive way. This way you can install your package as part of an automated update or rollout to your machines. At my workplace, `apt-get install cybersource-workstation' pulls down every RPM package needed to do work on a cyber workstation, plus config files for printers and similar items, and installs a couple of hundred pieces of software automatically across each machine. Doing this without packaging is difficult.

                Having read the Maximum RPM book, I found that the steps involved in building an RPM package out of a source tarball are definitely NOT uniform, and most definitely are very interactive. So doing that means I have to take an interactive approach somewhere. RPM has to build the package from source, as would I.

                I see value in having distributed packages in RPM when those packages are built right, and when they are available when needed. I don't see the value in building them myself, as that appears to take a lot of time. And time is the crucial factor. Every time I did an emergency security upgrade on a Redhat box, there were no RPMs, and I had no time to make one.

                Also, I just don't have dependency problems on my Slackware based systems. Things do work. The rare (maybe 2 or 3 at most) times I've had to download something else in addition to the package I was downloading, it was clearly obvious after RTFMing the README and INSTALL files. In most cases my custom made source installer script for each package just works with the new version already. When it doesn't this is fixable after RTFM and/or one compile.

                The Linux From Scratch project supposedly has someone working on making a setup that builds the whole thing from source and produces a big pile of RPM packages as a result. Maybe that might be something to look into when it becomes ready for prime time.

                Uninstall the package cleanly (make uninstall is unfortunately rare)

                If "make uninstall" is not available, then how is RPM going to figure it out? Is it going to just see what packages are installed by "make install" and list them? What if a file is not actually installed by the Makefile because it's already present (e.g. it hasn't changed since the previous version)? What if a file is merely modified by the Makefile, but previously existed? (This would be considered to be a bad practice, but unfortunately is very real, and has to be dealt with)

                Actually, I developed a set of system administration patterns around the mid-1980s which I still practice. Back then some of these things were hard to do, but they were important. Nowadays they are less difficult. One of them is that packages are simply NOT trivially uninstalled. This means a careful analysis in advance as to what needs to be installed, or else I just live with the wasted space (disk drives these days are unlikely to be filled up by packages left installed that I was previously sure I needed). So basically, I don't uninstall, unless it's a security issue, in which case "rm" is a nice tool.

                RPM pretty much comes with something like that already, which automatically adds the libraries an application relies on to its dependencies when creating the package.

                If I have all the RPM tools installed, and bring in the tarball (not extracted, yet), how many commands are involved in making an RPM package? How many edit sessions? Would this be scriptable (a different script for each package) to make it all happen in a single command? If the answer to that last one is yes, then perhaps there's some value here, such as integrating it with Linux From Scratch.
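
                From what I gathered out of Maximum RPM, the happy path is supposed to be roughly this short, assuming the spec file is already written (the paths are Red Hat's defaults and the package name is made up):

                cp fooapp-1.0.tar.gz /usr/src/redhat/SOURCES/
                cp fooapp.spec /usr/src/redhat/SPECS/
                rpm -ba /usr/src/redhat/SPECS/fooapp.spec   (rpmbuild -ba on newer RPM versions)
                rpm -Uvh /usr/src/redhat/RPMS/i386/fooapp-1.0-1.i386.rpm

                It's writing the spec file, and knowing what dependencies to put in it, that is the real work.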

      • If you're a systems administrator and don't know how to package applications, you need to learn, because you need it to do your job.
        Linux makes every home user a systems administrator. Are you saying every home user needs to learn how to package applications? I personally have no interest in doing that.
    • I'm getting real tired of repeating this every few weeks.

      When I recompile a standard package with different options (e.g., to match my environment, to be more secure by disabling some standard servers, etc.), intending to redistribute the packages to others, where the fsck am I supposed to put the results?

      Hint: put it in the standard places and expect to be burned at the stake. I'll bring the burning torch. Non-standard builds that aren't clearly identified as non-standard tend to waste a *huge* amount of time, because people reasonably, but erroneously, think that the package is the official one.

      To be blunt, the decision of where to put files is simple and well-established:

      1) The standard packages (from Red Hat, Debian, whoever) load into the standard locations.

      2) Any modified packages distributed to others load into /opt. It's worth noting that the configuration and variable data go in /etc/opt and /var/opt, not /opt/etc and /opt/var. A lot of people (including me) tend to get this wrong, but it's in the FHS.

      3) Any modified packages that are not distributed to others load into /usr/local. In this case it's /usr/local/etc, /usr/local/lib....

      4) Any original package not distributed by the OS has historically gone into /opt, but with PM/CM it could load into the standard locations. It should never load into /usr/local.

      5) Finally, "depot" style builds use their own trees, probably following the /opt practices.

      As an aside, I've even been experimenting with a tool that rewrites Debian packages so they load into /opt instead of the standard locations. Relocating the files is trivial - it's a rewrite of the data.tar.gz headers and some standard control.tar.gz files - but automatically fixing the installation scripts is still problematic.
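
      Roughly, the mechanics are as follows (the package name is made up, and as I said, the maintainer scripts and the control files still need fixing by hand):

      ar x fooapp_1.0-1_i386.deb
      mkdir data && tar xzf data.tar.gz -C data
      mkdir -p data/opt/fooapp && mv data/usr/* data/opt/fooapp/
      (cd data && tar czf ../data.tar.gz .)
      ar rc fooapp-opt_1.0-1_i386.deb debian-binary control.tar.gz data.tar.gz

      A .deb is just an ar archive holding debian-binary, control.tar.gz and data.tar.gz, which is why the data half of the rewrite is so easy.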

      The thing about the article that really pisses me off is that *all* of his advice can be applied equally well in all four scenarios. The fact that I can mechanically change a package to use a different installation target really drives this home. Yet out of nowhere he makes an uninformed comment that makes life difficult for those of us distributing modified standard files. (Comment deleted for profanity)
  • by slasho81 ( 455509 ) on Monday March 25, 2002 @05:39AM (#3220045)
    Designing Good Linux Applications

    The 'Linux' word is completely unnecessary - "Designing Good Applications" should suffice.
    Application design couldn't care less about the OS that the application is planned to run on.
    • Not totally true... (Score:4, Interesting)

      by NanoGator ( 522640 ) on Monday March 25, 2002 @06:10AM (#3220133) Homepage Journal
      I do agree that some of what they talk about in this article would apply to most applications, but not everybody uses an OS the same way. Take this excerpt, for example:

      "Everybody loves graphical interfaces. Many times they make our lives easier, and in this way help to popularize software, because the learning curve becomes shallower. But for everyday use, a command at the console prompt, with many options and a good manual, becomes much more practical, making scripts easy, allowing for remote access, etc. So the suggestion is, whenever is possible, to provide both interfaces: graphical for the beginners, and the powerful command line for the expert."

      This is wonderful advice in the Linux world. However, most Windows and Mac users, sadly, don't know what a command prompt is, let alone how to script it. This is a native concept to a Linux user.

      I have no doubt that even in the Windows/Mac world a really powerful command line feature for any given app would be super useful, but it is only so for those who have climbed that learning curve. In that case, it's better to focus on making the app do what it needs to do.

      In any case, I'm sure I'll draw criticism for that comment. I'd prefer you didn't, though. The point I'm making is that slasho81's comment that all software should be the same despite the OS isn't quite so black and white.
      • by Osty ( 16825 )

        I have no doubt that even in the Windows/Mac world a really powerful command line feature for any given app would be super useful, but it is only so for those who have climbed that learning curve. In that case, it's better to focus on making the app do what it needs to do.

        In the Windows world, many applications do have powerful commandline features, as well as GUIs. However, you're trying to impose a unix-style of automation (shell script, tying a bunch of small commands together) on a system with its own methods of automation. Let me first say that there are tools you can install on Windows to do unix-style scripting, like Cygwin. I'm ignoring that for now. Typically, when you want to script something in Windows, you'll end up writing some vbscript or jscript that instantiates a COM object and does what it needs through that rather than running an app with some params and catching/piping stdin/stdout. I won't say which method is better, simply that they're different.


        This is why *nix administration knowledge doesn't translate to NT administration knowledge, and vice versa. Too often people complain about NT admins trying to use linux or some other unix without ever thinking of the reverse scenario. Try writing a script to force a change of password on next login for some number of NT users. Now make sure it works for local users, NT4 domain users, and Win2K AD users. This is quite doable, but most unix admins look for a passwd-like app, find none, and give up, complaining that NT sucks because they have to go through a GUI to modify 50,000 accounts.

  • Looks like they should consider designing good webpages first. The big tables make the text scroll off the right side of the page for me. Really annoying.
  • With any luck... (Score:5, Interesting)

    by Nailer ( 69468 ) on Monday March 25, 2002 @06:27AM (#3220174)
    This will get around to the people making the applications. I'm absolutely fed up with people, especially vendors of proprietary software, making nonstandard software. In my book, standard (LSB) Linux apps are the *only* Linux apps. This means:
    • They are packaged as RPM 3 files, to allow standard installation, deinstallation, auditing, and management of relationships with other necessary software. Not some interactive self-extracting tarball I can only use once unless I do the vendor's job and package it myself (which unfortunately is necessary for modern sysadmins if they want to do their job properly).
    • They use SysV init scripts which live in /etc/init.d (a bare-bones skeleton of what I mean is sketched after this list). Again, I often have to do the vendor's job for them and write the initscript myself. This sucks: I paid my money for a Linux app and I want a Linux app. This means you, Sophos Mailmonitor.
    • General FHS compliance. I should be able to mount / read-only and /var read-write, and your app should work once I have configured it. This is too often not the case. This means you, StarOffice.
    • Man pages should always exist (no, not `Debian tells me I need a man page so this is it, I have no actual useful content, write me!' man pages, but actual, real, no-bullshit man pages). Man pages go in /usr/share/man.

    • Documentation goes in /usr/share/doc. Not in /usr/lib. Yes, the FHS says you can install non-user-executed binaries in /usr/lib, but documentation is not libraries or binaries, never was, and never will be. This means you, Citrix.
    • Die, symlinks, die. Linking correct locations to their incorrect locations should be as short-term as possible. Yes, this means you, Red Hat. Reverse the /etc/init.d -> /etc/rc.d/init.d symlinks now.
    • UNLESS YOU ASK MY EXPLICIT PERMISSION TO INSTALL EACH FILE SUSE OR ANY OTHER DISTRO HAS NO RIGHT TO DO THINGS TO MY /OPT. Aaaarrgggh Suse!

    There will be things you don't like about the LSB and FHS. Personally, I reckon initscripts aren't config files and should live in /sbin. But I put them in /etc/init.d because the FHS says I should. Likewise, if you have a problem with RPM, make it better (apt-get's already a basis for all my Red Hat installs and updates thanks to Freshrpms).
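
    And for the record, here's the sort of bare-bones skeleton I keep having to write when the vendor doesn't ship an initscript (the service and daemon names are made up):

    #!/bin/sh
    # /etc/init.d/fooapp
    case "$1" in
      start)
        echo -n "Starting fooapp: "
        /usr/sbin/fooappd && echo "ok" || echo "failed"
        ;;
      stop)
        echo -n "Stopping fooapp: "
        killall fooappd && echo "ok" || echo "failed"
        ;;
      restart)
        $0 stop
        $0 start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac
    exit 0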
    • I had Red Hat 6.2 loaded with an obsolete RPM version that prevented me from installing several packages.

      It's amazing that Red Hat distributions can be a *bit* like Windows sometimes.

      An app which needs to be updated just to make other apps work.

      Oh, maybe I'm exaggerating too much. Whether you're using LSB or RHS, you have to deal with libc, gcc and glib versions.

      Thank God GIMP works with plug-ins!
      • I had Red Hat 6.2 loaded with an obsolete RPM version that prevented me from installing several packages.

        You can run `up2date -u' to download the newer version of RPM and all necessary security / bugfixes for Red Hat 7.2, plus their dependencies.
        • You can run `up2date -u' to download the newer version of RPM and all necessary security / bugfixes for Red Hat 7.2, plus their dependencies.

          I meant 6.2 (and yes, up2date works in 6.2).
    • > # They are packaged as RPM 3 files, to allow
      > standard installation, deinstallation,
      > auditing, and management of relationships with
      > other necessary software. Not some interactive
      > self extracting tarball I can only use once
      > unless I do the vendors job and package it
      > myself (which unfortunately is necessary for
      > modern sysadmins if they want to do their job
      > properly).

      No *nix is an island. RPM isn't the norm on even all Linux systems, let alone the rest of the *nix world. Don't forget that the x86 BSDs run Linux binaries too. Chaining dependencies like package managers and package management databases makes it more difficult for a lot of people who don't really need the overhead. The point of a distribution is to allow someone to package software for it - so it's really the distributor's job to package for a specific package manager, not the vendor's.

      Paul
      • RPM isn't the norm on even all Linux systems

        Yes it is - it is the standard way of installing applications on Linux according to the LSB, which almost every Linux distribution you've heard of, with the notable exception of Slackware, aims to conform to.
    • Not some interactive self-extracting tarball I can only use once unless I do the vendor's job and package it myself (which unfortunately is necessary for modern sysadmins if they want to do their job properly).

      If you already have the tarball, why are you trying to make a different package out of it? Don't forget that many applications are made for a variety of different systems. Linux isn't everything out there.

      They use SysV init scripts which live in /etc/init.d. Again, I often have to do the vendors job for them and write the initscript myself. This sucks, I paid my money for a Linux app and I want a Linux app. This means you Sophos Mailmonitor.

      I'll agree that it is annoying to have packages making assumptions about where they put boot scripts. But Linux is about choice. There is more than one standard to choose from. Sounds to me like you are trying to make cookie cutters.

      Die symlinks, die. Linking correct locations to their incorrect locations should be as short term as possible. Yes, this means you Red Hat. Reverse the /etc/init.d -> /etc/rc.d/init.d symlinks now.

      I hate symlinks, too. I want to get rid of all the symlinks in the whole /etc/rc.d. And guess what ... I did!

    • So there are no apps for Debian Linux? Maybe that explains why software is so easy to install, uninstall, and update on Debian... it's because none of it really exists.

      Let's face it, the LSB is not an objective standard but a crappy attempt at a standard that has succeeded in nothing more than giving Redhat a supposed stamp of approval as not only the de facto Linux standard, but the dejure Linux standard. Why not just ditch the LSB and replace it with a sentence that says, "Must be Redhat compatible"? At least people wouldn't be kidding themselves.
      • You're missing the point of the LSB.

        Given your Debian comments, I think you are referring to RPM as the default package manager. In all other respects, Debian is perhaps closer to the LSB than Red Hat.

        What you and a lot of other people seem to miss is that there is no requirement to actually use RPM as the system package manager. The LSB requirement is to be able to install RPM packages. If those packages comply with the LSB, then there is no reason why Debian users shouldn't be able to install RPMs using alien. After all, the package itself should already conform to Debian Policy, as Debian Policy is merely an LSB implementation.
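
        For example, something along these lines should work on a Debian box (the package name is made up; alien bumps the release number by default):

        alien --to-deb fooapp-1.0-1.i386.rpm
        dpkg -i fooapp_1.0-2_i386.deb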

        I'm a Debian user myself, but I'm getting a little tired of the Red Hat bashing that goes around.

        Mart
      • RPM is designed to test for specific files and programs on your computer, while .deb is designed to test whether you have specific Debian packages installed. Since the LSB is designed to work across many distros, it makes sense to use RPMs, which (theoretically) install on any distro, rather than something Debian-specific.
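
        To illustrate the difference, the two dependency styles look roughly like this (both fragments are made up, not taken from real packages):

        # RPM spec fragment: depend on a file or capability, whoever provides it
        Requires: /usr/bin/perl

        # Debian control fragment: depend on a named Debian package
        Depends: perl (>= 5.6)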

        The LSB guys are smarter than you'd think. And Debian contributed too btw.

        Of course, it might be nice if people wrote packages for each distribution and release but that doesn't happen in the "Real World." The LSB is a compromise that most people can live with.


      • Lets face it, the LSB is not an objective standard but a crappy attempt at a standard that has succeeded in nothing more than giving Redhat a supposed stamp of approval as not only the defacto Linux standard, but the dejure (sic) Linux standard.


        The standard itself seems to speak otherwise.
        • Red Hat has had to move the location of its initscripts and documentation
        • In the same way, SuSE has also had to move its initscripts and start thinking about not using /opt for distribution packages.
        • Red Hat didn't join the LSB for two years, while most of these decisions were made, precisely to avoid the exact claims you're levelling against it - that its market share is having an undue influence on the LSB. Most of the LSB was built by other distributions, including Debian, who have been doing the LSB for much longer than Red Hat has.
        • Good standards codify existing practice - the decision to use RPM as a packaging standard reflects the popularity of this packaging system, and according to most surveys (e.g. Netcraft's) the vast majority of Linux systems use some kind of RPM-based distribution (and this would still be the case without Red Hat). And there are only minor differences between the packaging systems anyway (apparently RPM's package signing and verification is better than deb's, and deb's required / suggested / recommended dependency scheme is better than Red Hat's, but their basic function is the same, and frontends like apt and urpmi run on either).
        • In fact, the LSB has been quite lenient with Debian, allowing the Debian folk to say that `alien' provides all the RPM support they need to be LSB compliant.

          I don't think you're very aware of the LSB, its content, or its history.
      • Maybe that explains why software is so easy to install, uninstall, and update on Debian... it's because none of it really exists.

        Aside from the lack of logic in your sarcasm there (where did I indicate that there aren't any apps for Debian?), there's very little difference in the ease of installing software on either deb or RPM based distros. Many Debian folk seem unaware that tools like up2date, urpmi and apt exist and come with most RPM-based Linux distributions. Personally, I apt-get update my Red Hat 7.2 machine from Freshrpms each day.
    • RH has always used /etc/rc.d/init.d. At least they now provide links... It's one less thing for me to have to remember. I for one applaud it.
      • RH has always used /etc/rc.d/init.d.

        Yes, but it's not standard (according to the LSB). That's why the links from the correct location to the incorrect one now exist. I agree that links are good, but symlinks don't solve every problem, and RH have indicated they will hopefully move to the correct location in future.
  • My two cents: (Score:2, Informative)

    by colmore ( 56499 )
    So I'm not a developer, and I don't know that much about programming. (Teaching myself QT 3 for the heck of it, but can't really do much worthwhile)

    Anyway here would be my two suggestions:

    1) Quit ripping off Microsoft and Apple, or at least think before you do. Using any Linux GUI you can immediately see the areas where the team said "let's make this more like Windows." On the one hand, this makes things more familiar and easy for new users, but on the other hand, it repeats a bunch of bad and arbitrary GUI conventions that should be re-examined. For instance, Mozilla has by default the same irritating password-remembering feature as IE. This should not be a default-on option; the security risk is huge, and whoever made that mistake at MS ought to be fired. Why do we continue it?

    2) Drop the in-jokes please. Prefixing everything with "GNU", putting funny little things in the help files, etc. etc. etc. We want to convince people that we're making a professional-quality product, and nothing spoils that faster than giving the appearance of a hack.

    and my suggestion to the non-developing members of the community would be:

    spending some of your time filling out bug reports and posting (well thought out, politely worded) suggestions is much more effective than posting "linux roolz" on public news services.

    Here on Slashdot we like to speculate that Microsoft has hired a group of people to spread anti-open-source FUD in our midst. The lamers who do nothing but insult "Micro$oft" all the time are the free equivalent.
    • "Quit ripping off Microsoft and Apple"

      People always talk about this, but I, being a former Windows user, found switching to Linux so much easier because of the similarities. I think KDE, for example, has copied Windows a heck of a lot, but they've also done their own thing in many respects. I think making GUIs even more customizable would solve this problem. But Linux is already very customizable. If you try hard, you can make it look nothing like Windows. Or use one of the older UNIX-style window managers.

      "Drop the in-jokes please"

      I kind of like that kind of thing. It makes me think that real people actually made the stuff. On those same lines, I also like being able to e-mail the developer that made such and such a program. If there is a major bug that I spot, I can let him know, or I can just say what I like and don't like about his program.

      "spending some of your time filling out bug reports and posting"

      Yes! This is super important. I just got involved with doing this, mostly with Mozilla and OpenOffice. It takes a lot of my time away from studying, but it's fun!

      • Certainly *good* ideas from Windows/Apple should be copied. I think the complaint is when they copy bad or arbitrary decisions by those companies in an attempt to make things as comfortable as possible, and defeat any possibility of Linux actually being better.

        Best example is the complete breakage of point-to-type because the window managers default to having this turned off, and don't ever test it. Point to type is obviously superior (try to find anybody who uses it for a week that wants to switch back, and you can try this on Windows as well, it is a registry switch). Yet the things that make point-to-type frustrating on Windows are copied in KDE and Gnome: raising windows on click, the inability to drag a window without raising it, the raising of "parent" windows when you raise a dialog box, and a lot of other little frustrations that were solved in the simpler window managers of ten years ago.

        It is great to see them copying good ideas, but it is really sad that the ease of use of Linux is being thrown away in an attempt to make an exact clone.

  • I agree with the main thesis of the article. I just wish more packages would follow the ideas expounded, especially the FHS.

    For example, gcc when installed from source defaults to putting itself into /usr/local/ which is quite understandable, because it was locally installed. Unfortunately libgcc_s.so should have placed itself in /lib instead of /usr/local/lib because some boot-time binaries need it. (modutils if I recall correctly.) The first time I installed gcc from .tar.gz, my sysinit crashed because /usr wasn't mounted yet.

    Other packages have this problem too: fileutils, bash and modutils come to mind. Their default configuration is to install into /usr/local/ despite the fact that they are needed during boot. (init's message of "rm: no such file" puzzled me the first time I saw it.) Now, I know that ./configure --prefix=/ fixes those things, but my point is, the user shouldn't have to learn from experience how to correctly install those packages. The packages should help him.
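
    What I end up doing for boot-critical packages is something along these lines (the exact directory switches vary from package to package; this is just the general idea):

    # put the binaries in /bin so they are available before /usr is mounted
    ./configure --prefix=/usr --bindir=/bin --sysconfdir=/etc
    make
    make install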

    • I think the reason GNU stuff defaults to /usr/local is because it comes from a background where most people would be installing the GNU utilities on UNIX systems that had vendor supplied utilities like rm, etc.

      john

  • It was GIL AMELIO who nearly killed Apple! It was all Power Computing could do to keep people buying any kind of Mac at all!!

    And Power did their own R&D, thank you. Sure, they ripped off most of the early MLB layouts, but after Alchemy, the boards were all Power's own. And they were adding the features Mac users wanted -- like faster bus speeds and modern RAM. Not to mention decent video performance. Power was doing the Mac community a favor by getting the RAM ceilings out of the double digits.

    If Apple was happy with less than 1% of the total PC market, then fine. Because when it comes right down to it, to hell with Apple. I go to Apple's computers because they're the best, but at that time, they weren't. The best you could get was some 8100 piece of shit and THEN what, you're stuck with Nubus expansion and a lot of proprietary video hardware. Meanwhile, Power was producing cutting-edge machines... some of which had hardware on them that wasn't even available for PCs yet.

    Power gave half a shit about producing USEABLE machines, made the way they were supposed to be made. Meanwhile, Apple was sitting around being weak and spineless. They got scared when the market was getting away from them, and so they yanked the licenses and killed the baby.

    I know a guy who was at the top levels of Power's Technical Response department. (His business card said "Grand Technical Czar.") I know at an intimate level what was going on at Power, and it was not any kind of plotting effort to undermine Apple's success. They just didn't give a *fuck* about all the pissy little things that were wasting Apple's time. Most of the people working for my friend were recruited from Apple, where they were disgruntled and lethargic. But at Power, they found renewed energy, not for Apple, but for the Macintosh platform. And they made it better than any other out there. By the time Power closed, their machines were running not just MacOS, but BeOS and LinuxPPC as well. Would that have happened with Apple getting in the way on things like bus speeds and cache sizes? While Apple was making machines that didn't have caches, Power was redeveloping the whole concept. We have Power to thank for the Backside Level 2 Cache technology, don't forget that.

    The clones were all that kept Apple alive through its darkest time. Thanks to Power in particular, there are now more Mac die-hards than ever, and the Mac has made tremendous progress in its technology and features thanks to people like those who used to work at Power.

    If anyone's to blame for Apple's problems, it's Apple.
    • I would argue that it was STEVE JOBS who nearly killed Apple - Dr. Amelio took the necessary step to get more Macs on desktops by allowing low-cost/reasonable-quality clones of the Mac to be produced.

      Had the cloning efforts gone through, we'd all be bitching about Apple's industry dominance, instead of Microsoft (or at least bitching more about it).
  • This article is actually really good! Read it!

    Strangely enough, all the IBM software that I've had the pleasure to deal with (DB2, IBMHTTP and WebSphere) tries to install itself by default into /opt...
  • /opt vs. RPM (Score:4, Insightful)

    by HalfFlat ( 121672 ) on Monday March 25, 2002 @07:05AM (#3220247)

    The author states that /opt is obsolete, and that everything should use RPM and install in /usr. Maybe this is the ideal in a system where everything is binaries-only, but I firmly believe it is poor administration practice.

    The RPM database is binary and fragile. Once it is corrupted, the data describing what belongs to what goes out the window. RPM-packages have to be trusted not to clobber existing files or make changes to configuration files that one wants left alone. The alternative is per-application directories and symlinks (or a long PATH variable); there are tools which automate this, such as stow. The advantage is that the file system is - or at least should be - the most stable thing in the system. One can just examine a symbolic link to see what package it belongs to. This makes removing and updating applications very easy, and also makes it easy to see if there are any links left around from older installations. Removing an application is typically as simple as removing the corresponding application directory.
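    A rough sketch of that approach with GNU stow (the package name foo-1.2 is purely hypothetical):

        # Build into a self-contained per-application directory
        ./configure --prefix=/usr/local/stow/foo-1.2
        make && make install

        # Symlink the package's bin/, man/, etc. into /usr/local
        cd /usr/local/stow
        stow foo-1.2

        # Removing it later is just as simple
        stow -D foo-1.2
        rm -rf foo-1.2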

    RPMs which install in the /usr tree will require root privileges, whereas applications that can work from a self-contained directory can be installed by a non-privileged user in their own directory. Also, /usr can in principle be mounted read-only. This will certainly slow down any attempts at installing software in it!

    I have had Redhat's installer corrupt the RPM database on multiple occasions, and I've had to override the dependency checking innumerable times in attempts to update packages under both Redhat and SuSE, thus rendering useless the other purported benefit of RPM. New software typically comes in source form before RPMs; and the RPMs that do become available are almost always third-party ones that don't necessarily play well with your system. By the time a vendor-created RPM becomes available, the distribution version you are using is no longer actively supported, and you'll need 300MB of updates to other packages just to satisfy dependencies. I've been there, it's horrid.
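    For what it's worth, the usual escape hatches when this happens (both are standard rpm options, though neither is a real fix):

        # Rebuild a corrupted RPM database
        rpm --rebuilddb

        # Push a package past the dependency checker (use with care)
        rpm -Uvh --nodeps package.rpm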

  • by mmusn ( 567069 ) on Monday March 25, 2002 @07:11AM (#3220259)
    is a command line application.

    Seriously, a lot of Linux applications try to duplicate the Windows world and end up being just as bad. For example, in audio software, a monolithic executable with a GUI is a Windows-style application -- hard to reuse, hard to extend. A bunch of command line applications that can be piped together and come with a simple scripted GUI -- that's a good Linux application, because its bits and pieces can actually be reused.
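    A toy sketch of what that looks like in practice -- small filters on a pipe, with the "GUI" as a thin wrapper (standard tools only; dialog is assumed to be installed, and app.log is just an example file):

        # Reusable pieces: each command does one thing and can be piped
        grep ERROR app.log | sort | uniq -c | sort -rn

        # The scripted GUI is only a front end that builds the same pipeline
        LOG=$(dialog --stdout --fselect /var/log/ 14 60)
        grep ERROR "$LOG" | sort | uniq -c | sort -rn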

    • Some command line apps could use some consistency though. For example, to log in as 'user' to a box it's 'ssh -l user ftp.site.com', but to do the same through ncftp, it's 'ncftp -u user ftp.site.com'.

      I always mix up the switches. I wish the command line tools would get some attention too - unless there are specific reasons why they don't? Some use -f, some use --force; pick one, or better yet, support both.
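      Supporting both forms is cheap; a small sketch using GNU getopt (the -f/--force option is just an example):

          # Accept both -f and --force
          OPTS=$(getopt -o f -l force -- "$@") || exit 1
          eval set -- "$OPTS"
          FORCE=0
          while true; do
            case "$1" in
              -f|--force) FORCE=1; shift ;;
              --) shift; break ;;
            esac
          done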
      • 'ssh -l user ftp.site.com'

        I always run ssh user@ftp.site.com since it's the same as my email address on that box, or better yet, create and use an identical account on the local machine and just run ssh ftp.site.com.

        It's like people running zcat somefile.tar.gz | tar xvf - when you can run tar zxvf somefile.tar.gz. Same for bzcat and what, 'j' instead of 'z'? Maybe it helps some people keep it straight in their heads that they're literally piping the output of a zcat into a tar, like peeling layers of an onion. Me... I just like to save typing.
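        For reference, the equivalents side by side (these assume GNU tar; the filenames are only examples):

            # gzip archives
            zcat somefile.tar.gz | tar xvf -
            tar zxvf somefile.tar.gz

            # bzip2 archives
            bzcat somefile.tar.bz2 | tar xvf -
            tar jxvf somefile.tar.bz2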
        • ... which totally misses the point. That ssh and ncftp (and every other damn command line program) use a different command vocabulary is *lame*. It's a *bad thing*. It's why the Unix CLI is so hard to learn; not because a CLI is more difficult than a GUI, but because the damn Unix CLI is *inconsistent*.

          peace,
          (jfb)

          PS: The reason some people zcat into tar is because not every tar is gnu-tar. Not every Unix user uses Linux, you know.
    • you've clearly never tried to do anything complex in real time in the audio software realm. the model of "all data is a stream of bytes" and "we can hook it all together with pipes" misses one critical dimension: time. the unix model of interacting processes doesn't take time into account in any way, and as a result it fails to be useful for realtime manipulation of large streaming data sets.
  • I have great ambitions for the Linux Quality Database [sunsite.dk] which are so far mostly unfulfilled, but for now I have some articles which you may find worthwhile reading:

    Also if you program in C++, these articles may be useful:

  • by Oink.NET ( 551861 ) on Monday March 25, 2002 @07:19AM (#3220279) Homepage
    This guy's ideas would be way more useful if he could think outside the stereotypical structure of today's Linux apps.

    Ditch the concept of spreading pieces of your app all around the FHS. This is organizationally similar to Microsoft's registry. It becomes a maintenance nightmare. Yes, RPM keeps track of some pesky details that let us get away with a messier install. Yes, the FHS does impose a common structure on what is an otherwise unstructured mess. But programmers are human beings, subject to the whims of ego, ignorance, and yes, even creativity and sheer brilliance. We're going to deviate from the suggested standards if given the opportunity, for one reason or another.

    Give me one main point of access to everything the application does. If you need to use config files, give me the option of manipulating them through the application itself, preferably in the context of my current task. Give me one place to go looking for all the bits and pieces of the app. No, the FHS isn't simple enough. Give me context-sensitive documentation so I don't have to wander outside the app to get my job done. Don't make me wade through a spaghetti-code config file, with the documentation propped open on a separate screen to keep from getting lost.

    Programmers are lazy. I should know, I am one. The last thing I want to do when I'm getting ready to release a program to non-techie users is tie up all the loose ends that seem ok to me, but not to the non-techie user. I'd rather document how to get a tricky task done than write the code that automates the tricky parts. I'd rather tell the user how to go tweak the flaky data in the database by hand than add another error-correcting routine. And it's more work to give the user one simple, full-featured point of entry to each piece of a complex application. But that additional work will make the application more usable, for the expert and the novice alike.

    • You may think it's an organisational nightmare, but I think it allows us to use the system in 'smart' ways.

      First off, I have no problem with _some_ applications being under their own directories, especially if they're pre-designed to run chrooted as services. However, I _will_ demand that I can log my app logs to a partition mounted on /var/log and that my root partition be read-only mountable.

      There are lots of considerations that go into good filesystem use, and several 'problems' we have now can be fixed by some more discourse, not by just taking an option and doing it because that's what you like as a developer -- developers are not the target market; users are (although many users may be developers).
      • Just to expound some concepts:

        My configuration files are (almost) all under /etc and are part of the root file system which is mounted read-only. For programs that have lots of configuration files, I use additional sub-directories like /etc/qmail or /etc/httpd.

        I keep all state information under /var which is mounted separately, on its own disk with sync'd writes and without access time logging.

        Software I manage with RPM is all under /usr in whatever directories it came in. For system software I'm specifically interested in keeping updated, I manage it myself with the source and install it under /usr/local/bin, /usr/local/man, etc. (but with the configuration files still under /etc and state information still under /var).

        Application-specific binaries, however, I sometimes keep under /usr/[local/]libexec/appname.

        Many sysadmins with different needs are prone to NFS-mounting system binaries and/or home directories, which necessitates further classification work.
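        A condensed sketch of that layout (paths and the device name are illustrative, not prescriptive):

            /etc/                  # config files, on the read-only root fs
            /etc/qmail/            # per-application config sub-directories
            /etc/httpd/
            /var/                  # state, on its own disk
            /usr/                  # RPM-managed software
            /usr/local/bin/        # self-built software managed by hand
            /usr/local/man/
            /usr/libexec/appname/  # application-specific binaries

            # /etc/fstab entry for /var as described above
            /dev/hdb1  /var  ext3  sync,noatime  1 2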
  • by Faux_Pseudo ( 141152 ) <Faux.Pseudo@gmail.cFREEBSDom minus bsd> on Monday March 25, 2002 @07:31AM (#3220301)
    That's the thing about people who think they hate computers. What they really hate is lousy programmers. - Larry Niven and Jerry Pournelle in "Oath of Fealty"

    That is what I found in the fortune at the bottom of this thread.

  • Designing good Linux applications is easy as 1, 2, 3. Just follow these simple steps.

    1) Command line only. We all know real users only use command line.
    2) Don't comment your source code. Ever. It just wastes valuable programming time.
    3) No installation/usage documentation. If they deserved to use your app, they can go figure it out themselves. What are you, tech support?

    If you follow these simple instructions, you are guaranteed a rabid cult following, or at the very least a feeling of superiority over your users.
    </sarcasm>

  • The article is still partly in Portuguese:

    sobretudo = above all

    destacado = prominent, outstanding (apparently he means that KDE is the most prominent/developed)

    I emailed the author about the HTML problems.
  • Dear Avi: In your recent essay, "Creating Integrated High Quality Linux Applications" you engage in HTML-composition practices that seriously create a Low Quality Web Application: you format the entire article as a TABLE containing a nested TABLE, in such a fashion as to force the main text display to an average text-line length of 130 characters in a web browser.

    This line-length is quite hostile to the reader; human factors experts say that line-length optimally should be on the order of 60 characters; much longer lines--such as yours--make the text very difficult to read. This principle is even evident in the HTML source for your article, which (one observes) uses indentation for readability, together with an evident right margin column of 75, and a mean line length of 41.0485 characters. You have preserved readability for *yourself* but have seriously compromised it for others.

    Please reconsider!

    Thank you

  • Am I the only one who thinks Unix used to be friendlier than it is now?

    Do you remember...

    • When all commands had a manpage? And it actually described the program! And it would have all the proper sections like "Synopsis" and most often it would include an example or two.
    • When all kinds of other things had manpages too. "man nfs", "man kernel" etc. actually worked?
    • When the "learn" program was around? Remember that? (It was a program that did interactive little courses on newbie topics. You'd do "learn vi" or "learn files". If you stopped at some point it remembered how far you had gotten and would start there the next time. It was simple, but useful.)
    • When vendors actually shipped usable termcaps for common terminals? (OK, the linux vendors are pretty good at this, but some of the "real" unix vendors... Brr.)
    On the positive side, I really do like Debian's manpage policy. Maybe there's hope.

    /August, feeling old today.

  • Great article, but I find it humorous that the article stresses user friendliness while the web page is WAY too wide, forcing me to horizontally scroll almost every line that I read. :0)

    Just struck me as funny.
  • Like making sure that your HTML documentation doesn't cause a horizontal scroll bar to kick in?
