
Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

timothy posted about 3 months ago | from the explain-your-system dept.

Linux 98

New submitter rongten (756490) writes: I am managing a computer lab composed of various kinds of Linux workstations, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they access the machines via console (no user switching allowed), ssh or x2go. In the past, the powerful workstations were reserved for certain power users, but now even "regular" students may need access to high-memory machines for some tasks. Is there a resource management system that would handle the following? Forbid the same user from logging in graphically more than once (like UserLock); limit the number of ssh sessions (i.e. no user spamming the rest of the machines with distcc or, even worse, running jobs everywhere in parallel); give priority to the console user (i.e. automatically renice remote users' jobs and restrict their memory usage); and avoid swapping and waiting (i.e. when all the users try to log into the latest and greatest machine, cap the number of logins in proportion to the capacity of the machine). The system being put in place uses Fedora 20 with LDAP/PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and nx management, and a queuing system, but it is not an elegant solution, and it is hacked together a lot. Since I think these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for it. Do you know of a similar system, preferably open source? A commercial solution could be acceptable as well.


aversion therapy (2)

lyapunov (241045) | about 3 months ago | (#47509143)

I would do it up A Clockwork Orange style.

The original BOFH stories are a good guide: http://bofh.ntk.net/BOFH/ [ntk.net]

ulimit? (1)

William Robinson (875390) | about 3 months ago | (#47509153)

Try ulimit. It helps a lot keeping things under control.
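For instance, a hedged sketch of the per-shell caps ulimit can set (values are illustrative only):

ulimit -v 2097152    # cap virtual memory at 2 GB (value in KB)
ulimit -u 64         # cap the number of user processes

These apply to the current shell and everything it spawns; to enforce them at login, set the equivalents in /etc/security/limits.conf via pam_limits.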

systemd (1)

Anonymous Coward | about 3 months ago | (#47509159)

I believe you can do at least some of that with systemd user sessions and resource restrictions
http://0pointer.de/blog/projects/resources.html [0pointer.de]
User sessions are currently kind of beta-ish but they're getting better / more useful... I already launch emacs and a MIDI synth through it on login, and it works wonderfully (ironically, though, PulseAudio, the other Lennart project that got a lot of flak, doesn't launch through this mechanism yet).
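For the resource-restriction side, something along these lines should work (a sketch only; the uid is made up, and the property names are the ones systemd used around that era):

# cap one user's slice at half the default CPU weight and 2 GB of RAM
systemctl set-property user-1000.slice CPUShares=512 MemoryLimit=2G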

Re:systemd (0)

Anonymous Coward | about 3 months ago | (#47513521)

Can I do this with Sys5 Init? Sorry, I don't trust systemd yet.

Trust your users (5, Funny)

Anonymous Coward | about 3 months ago | (#47509181)

Trust your users.

Re:Trust your users (1)

Culture20 (968837) | about 3 months ago | (#47511201)

This was modded funny, but it *is* a classroom computer lab, not a government installation. At some point, you have to let them learn by stepping on each others' toes. Protect the students' files from the other students. Protect the systems' secrets from the students. Beyond that, just institute a written policy of "don't be a jerk: nice your background processes". If a student uses up too many resources, use it as a teachable moment. Chances are, the students aren't trying to be jerks. They'll lp binary files by accident or forkbomb their machine. But if they really wanted to cause problems, nothing short of locking the door to the lab and only allowing remote access to one machine per student will do.

Good grief (0, Redundant)

sunking2 (521698) | about 3 months ago | (#47509215)

Is this 1988? The easiest/cheapest solution is to spend a couple bucks on decent machines.

Re:Good grief (1)

Nimey (114278) | about 3 months ago | (#47509289)

Around here some of the public schools just got rid of Pentium IIIs. Not everyone can afford something decent.

Re:Good grief (0, Funny)

Anonymous Coward | about 3 months ago | (#47509373)

We know; that's why they are running Linux. However, from all the unexplained terms and acronyms in the post I would imagine the best bet would be to flamboozle the thingamajig and ACFG the OS with a blengle. Or manage via muppet instead of puppet? Perhaps the gizina is stuck in the hypercottle again?

Re:Good grief (2)

aitikin (909209) | about 3 months ago | (#47509743)

The acronyms are fairly common, and the terms make perfect sense if you've ever administered a Linux system (and, for many purposes, even if you haven't).

Re:Good grief (1)

whitroth (9367) | about 3 months ago | (#47516429)

Do you still have the box your computer came in?
Good, please turn off your computer, disconnect it, and ship it back.
Why?
Because you're too fscking stupid and ignorant to use one. And as to why you even thought you should comment on something you have no clue about, other than to display your gross ignorance in public, like a baboon's ass, I have no idea.

                    mark

Re:Good grief (1)

aaronb1138 (2035478) | about 3 months ago | (#47509825)

Old saying goes, "I can't afford to buy cheap crap."

I have yet to see a computing environment where demand for computing power significantly outstripped supply because of antiquated technology, except where the network administrators were practically tenured. In those cases they were gobbling up so much in salary, and blowing so much time fixing equipment that kept breaking down, mostly due to age.

The administrator even seems to hint that he is trying to fix problems that don't fully exist. "...and it is hacked a lot" is one of those telling statements that suggest maybe the problem is the administrator going overboard to justify his job.

Re:Good grief (0)

Anonymous Coward | about 3 months ago | (#47509359)

Great answer. Really. I see you want to donate to the OP. Please do so then.

Re:Good grief (1)

dissy (172727) | about 3 months ago | (#47509479)

Is this 1988? The easiest/cheapest solution is to spend a couple bucks on decent machines.

Sweet, I've been needing an upgrade myself as well, but there seems to be a strange shortage of people insisting we spend more than a couple bucks on the problem who will also pay for the upgrade. I'm glad I found you!

250 workstations upgraded to top tier is roughly $200,000 or so. Better make it $250,000 so we can get new LCDs too; these 10-year-old 19" ones are getting a tiny bit of burn-in.

Just go ahead and paypal it to me, and I'll get right on implementing your suggestion!

Re:Good grief (2)

cdrudge (68377) | about 3 months ago | (#47509491)

Even if all the machines were identical top-of-the-line machines, many of the things listed as requirements would still apply.

"Spend[ing] a couple bucks" isn't always fiscally possible in an educational or non-profit environment, which the computing lab is likely part of.

Finally, given likely limited resources, it probably made a lot more sense to buy more lower-end, less expensive machines that could adequately meet the needs of the majority of users, while keeping just a couple of high-end machines for those who need them. But they need mechanisms in place to prevent abuse between users and sessions.

Re:Good grief (1)

rongten (756490) | about 3 months ago | (#47511059)

Exactly the last point.

  What I dislike the most are users who take advantage of others' lack of knowledge. And this happens, intentionally or unintentionally, when rules are not enforced.

I would like all the students (often coming into contact with Linux, shell programming and clusters for the first time) to have a fair shot at using the available resources, and not to backstab each other.

  Before, everyone could run on the cluster, until I discovered that certain students were giving their logins to others: the first did not really need the access (i.e. theoretical work), and the second would then run twice as many jobs on the cluster as the others.

NFS + SSH is a security hole (0)

Anonymous Coward | about 3 months ago | (#47509235)

You are probably using NFSv3 (instead of NFSv4 or some other protocol), which has neither encryption nor authentication.

Your users' accounts can be hijacked via ssh key login.

See the YouTube video: Hack SSH : Default configuration of NFS server

Re: NFS + SSH is a security hole (0)

Anonymous Coward | about 3 months ago | (#47509869)

If you're stupid enough to export / to the world, writeable, and without root squash, then yes, you're asking for trouble.

Don't bother watching the video, that's a few minutes of my life I'm not getting back.

Re: NFS + SSH is a security hole (1)

goarilla (908067) | about 3 months ago | (#47510371)

I think he means you can spoof the uid of some known user and get at the private keys in his .ssh directory.

Re: NFS + SSH is a security hole (0)

Anonymous Coward | about 3 months ago | (#47511189)

No, he was putting public keys (not private) into a home directory. Specifically, the user was root which was only possible because a) /root was exported (via exporting /), b) root squash wasn't enabled. Yes, nfs3 is fundamentally insecure. Any vaguely competent sysadmin knows this and knows to take appropriate precautions.

This is no different in spirit to giving me physical access to your box and me booting with init=/bin/bash and removing root's password in /etc/passwd. If you give me unfettered access to your filesystem, I could pwn it in countless ways very quickly. This isn't a hack. If you configure your computer insecurely, it will be insecure.

Re: NFS + SSH is a security hole (1)

goarilla (908067) | about 3 months ago | (#47515259)

No, he was putting public keys (not private) into a home directory. Specifically, the user was root which was only possible because a) /root was exported (via exporting /), b) root squash wasn't enabled. Yes, nfs3 is fundamentally insecure. Any vaguely competent sysadmin knows this and knows to take appropriate precautions.

And what's the appropriate precaution besides root_squash and proper host access control (/etc/exports, tcp wrappers, firewall, etc.)?
It still doesn't do any real authentication.
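For reference, that host-based baseline looks something like this in /etc/exports (a sketch; path and subnet invented):

/export/home 192.168.10.0/24(rw,sync,root_squash)

But as you say, that is host access control, not authentication; real per-user authentication over NFS means NFSv4 with Kerberos (sec=krb5).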

Platform LSF (2)

solarium_rider (677164) | about 3 months ago | (#47509247)

Maybe they have an EDU license? http://www-03.ibm.com/systems/... [ibm.com]

Re:Platform LSF (1)

rongten (756490) | about 3 months ago | (#47510997)

Hi,

  another alternative might be sysfera-ds [sysfera.com], but their open source offering seems to lack documentation and features (see here [github.io]).

  Need to investigate. Seems to be something along the lines of what vizstack [sourceforge.net] could have done.

Is this all necessary? (5, Insightful)

Sycraft-fu (314770) | about 3 months ago | (#47509279)

Seems like you are trying to work out a solution to a problem you don't have yet. Maybe first see if users are just willing to play nice. Get a powerful system and let them have at it. That's what we do. I work for an engineering college and we have a fairly large Linux server that is for instructional use. Students can log in and run the provided programs. Our resource management? None, unless the system is getting hit hard, in which case we will see what is happening and maybe manually nice something or talk to a user. We basically never have to. People use it to do their assignments and go about their business.

Hardware is fairly cheap, so you can throw a lot of power at the problem. Get a system with a decent amount of cores and RAM and you'll probably find out that it is fine.

Now, if things become a repeated problem then sure, look at a technical solution. However don't go getting all draconian without a reason. You may just be wasting your time and resources.

Re:Is this all necessary? (3, Informative)

Charliemopps (1157495) | about 3 months ago | (#47509461)

We did it like you describe. We had some problems with people doing dumb stuff and we just stuck post-its on the monitors describing how to use the "top" command.

[you@server1 ~]$ top
PID USER %CPU COMMAND
1960 you 2.3 top
2457 Bob 97.0 bitcoin

[you@server1 ~]$ write Bob DUDE! wtf?!?!

etc...

Re:Is this all necessary? (1)

DroolTwist (1357725) | about 3 months ago | (#47510239)

LOL.

Re:Is this all necessary? (2)

antifoidulus (807088) | about 3 months ago | (#47512169)

-bash: !?!: event not found

Re:Is this all necessary? (1)

Anonymous Coward | about 3 months ago | (#47509471)

Seems like you are trying to work out a solution to a problem you don't have yet. Maybe first see if users are just willing to play nice.

You'll also discover that once in a blue moon, users do have a legitimate reason to briefly consume much more resources than typical.

Spikes happen. It's normal. Monitor the usage, but don't cap it until your problems are more than theoretical.

Re:Is this all necessary? (4, Interesting)

MerlynEmrys67 (583469) | about 3 months ago | (#47509547)

This is hilarious. So, I was in college several decades ago. Large computer labs and lots of SSH/X forwarding to do work. The only time I remember getting in "trouble" was during a LISP module as a freshman. The resource management only allowed a few LISP interpreters on the machine - otherwise it would deny new ones for resource reasons. I quickly got sick of typing $lisp and waiting for my session to actually start - so I created a shell script that ran an infinite loop asking for a lisp interpreter...
15 minutes later, someone tapped on my shoulder and asked me what I was doing - I had taken the full processing capabilities of the machine for a while. I showed my script - gasp, horror - a 1-second pause was added to it and I was good to go. Learned a lesson too.
The year before I got there, enough people were learning how to hack the system to crash it that they were having trouble keeping the system up. Their solution: install a button next to each keyboard that, when pushed, would crash the system. No work was accomplished for a week - then it didn't go down again. We were told about the button, it was rough for a couple days - and then the systems were rock solid.
Kids will be kids - good kids will create a nightmare for you - work to focus that energy in a positive way and good things will result.
Kids will be kids - good kids will create a nightmare for you - work to focus that energy in a positive way and good things will result.

Re:Is this all necessary? (1)

Forever Wondering (2506940) | about 3 months ago | (#47511131)

$lisp?? Was this on McGill's RAX/Music or BU's RAX/VPS system by any chance?

Re:Is this all necessary? (0)

Anonymous Coward | about 3 months ago | (#47511241)

Whoever made the decision in your last paragraph was a brilliant manager and I would love to work for someone like that. Solve human problems with human-centric solutions, rather than trying to make technology deal with psychology.

Re:Is this all necessary? (0)

Anonymous Coward | about 3 months ago | (#47512079)

However don't go getting all draconian without a reason. You may just be wasting your time and resources.

This! 1000x this!!!

When I went to school (on a wooly mammoth) the sysadmin tried all sorts of draconian technical measures to stop students from using the machine for anything more than reading their email. He was so anal about it that you couldn't even change your window manager or run xclock without getting a smackdown. He actually had a script that ran periodically and killed any process that wasn't specifically in his list of acceptable tools. He actively deleted files from users' home directories if he thought they were "unacceptable", even if it was just your CS work that contained a bug and happened to fork-bomb until it crashed. He was real real helpful - NOT!

The first (and biggest) thing anyone ever learned in that school was how to subvert the idiot sysadmin so they could get things done. Mostly, that involved spending a lot of your own money (that you couldn't afford) on a PC to use outside of school and just not using the school lab at all.

Don't go all BOFH on the students or they'll be driven out of the classroom and a lot of the benefit of a school-provided lab goes out the window.

Did you look at the PAM modules on your system? (4, Informative)

Anonymous Coward | about 3 months ago | (#47509285)

Some of what you're asking for are ulimit settings - total number of processes, for example. That's pam_limits. Some could also be handled with pam_tally2. Or, since you're already using LDAP, you could use a simple web-based reservation system which specifies allowed login hosts in the LDAP server for however long someone wants to "check out" a machine; that's how I've done it when I've needed to control access to cluster resources.

When you talk about controlling other resources beyond logins, it's generally better to handle it at the application level rather than the OS level if you can. But using ulimits (and again, this can be integrated into LDAP pretty easily), you can restrict resources and apply process priority (ionice and nice are your friend) based on membership in a specific group or another LDAP attribute.

You could, for example, create a "highpower" group per set of machines / per machine (highpower_serverA) and add users to that group based on a checkout system, then define limits on the number of processes they can use, amount of memory they can use, total CPU time they can use, etc in limits.conf based on being in that group or not being in that group.
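As a concrete sketch of that last idea (group names and values invented for illustration), the limits.conf entries might look like:

# /etc/security/limits.conf
@highpower_serverA  hard  nproc  256       # max processes
@highpower_serverA  hard  as     16777216  # address space, in KB (16 GB)
@students           hard  nproc  64
@students           hard  as     2097152   # 2 GB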

I'll send you my bill tomorrow.

Re:Did you look at the PAM modules on your system? (1)

nthcode (1119827) | about 3 months ago | (#47512737)

I'll send you my bill tomorrow.

Agree. PAM plus some usage guidelines and monitoring should be enough. Stuff like this: http://www.ibm.com/developerwo... [ibm.com] BTW, it feels like being one of the torrent nodes backing up your encrypted files for you.

Re:Did you look at the PAM modules on your system? (0)

Anonymous Coward | about 3 months ago | (#47516885)

There is no one magic thing to do everything you request. There are probably sales guys who will tell you otherwise and offer to sell you solutions for a few thousand dollars per head.

What I am about to say also depends upon the distribution. Look in /etc/security. There are files like limits.conf and access.conf that can be set to block remote logins for certain users or to modify resource limits. Also look at pam_console. It runs stuff at console login time to change permissions, but could theoretically change some nice values (you will need to write a small script to connect that up) or log the user off if it is a second console login. To inhibit logins, you will need a script that monitors load on a machine, plus pam_nologin. When your script creates /etc/nologin, no one but root can log in. Your script will need to clear this out at some point when the load is lower, and you may want a cron job to clear it out in the middle of the night just in case you have a bug in your script.

That leaves SSH. I do not have a good way to limit how many times it can run in a way that a user cannot undo. You can write a script that runs on each machine and checks running processes. You can see how many ssh sessions are present and who owns them. If a user exceeds your threshold for SSH session count at that moment, you can kill them all (pkill).
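A minimal sketch of that monitoring script (thresholds and the per-user ssh cap are arbitrary, and it needs root):

#!/bin/sh
# gate new logins on load average: create /etc/nologin when busy, clear it when idle
load=$(awk '{ print int($1) }' /proc/loadavg)
if [ "$load" -ge 8 ]; then
    echo "Machine overloaded; please use another host." > /etc/nologin
elif [ "$load" -le 2 ]; then
    rm -f /etc/nologin
fi
# kill all sshd session processes of any non-root user holding more than 4 of them
ps -o user= -C sshd | sort | uniq -c | awk '$1 > 4 && $2 != "root" { print $2 }' |
while read user; do
    pkill -u "$user" sshd
done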

Systemd-logind (0)

Anonymous Coward | about 3 months ago | (#47509291)

See the subject, this is the modern Linux component that deals with the area you're trying to set policy around.

Just deal with problem users individually. (3, Insightful)

ZorinLynx (31751) | about 3 months ago | (#47509299)

Have these problems actually been happening a lot?

When I first started to help manage a computer lab, I was concerned users would behave really badly and do horrible things. The truth is, very few users did, and we just talked to those users and told them how to behave.

If you get the occasional repeatedly defiant user, locking out their account can be the final solution. But most people (at least at our site) aren't jerks and listen. Most "bad things" are due more to incompetence than malice, and educating students is easy.

Also, as someone with experience in these matters, allow me to recommend AGAINST Fedora for production systems. I like to call Fedora the self-breaking distro; updates break things CONSTANTLY. You're much better off running Ubuntu (even non-LTS is more stable than Fedora) or the RHEL clones like CentOS or Scientific Linux.

Re:Just deal with problem users individually. (0)

Anonymous Coward | about 3 months ago | (#47509411)

I second the use of CentOS or Scientific Linux! CentOS is perfect for "production" use where RHEL with support is not a requirement, and Scientific Linux is perfect in an educational environment. Both are stable, whereas Fedora tries to stay leading edge, if not bleeding edge.

Re:Just deal with problem users individually. (1)

jabuzz (182671) | about 3 months ago | (#47515119)

Except the long-term existence of Scientific Linux is now in doubt, with CERN jumping ship to CentOS.

To be honest, since it became possible to enable extra repositories at install time, the need for separate CentOS and Scientific Linux distributions has mostly evaporated.

...by hiring them. (1)

oneiros27 (46144) | about 3 months ago | (#47509927)

When I used to work for a university (mid-1990s), our department's sysadmin had gotten in trouble at the engineering school because he had written a script that would log into every machine multiple times until all ttys were exhausted ... so he could run his ray-tracing jobs undisturbed. I heard he got away with it for quite some time before one of their sysadmins came in early and realized something wasn't right.

They told him not to do it, but instead of banning him, they put him to work ... he wrote some pretty impressive software to make it easier for us to manage users, and a menu system for the non-technical users (a gopher-like interface that'd run elm / pine / news / lynx / gopher / etc.)

Re:Just deal with problem users individually. (1)

rongten (756490) | about 3 months ago | (#47510961)

Hi,

  the beowulf clusters we have are based on either CentOS or SLES. For the development workstations, where newer versions of certain software are needed, I install Fedora.

  This means the developers basically run production on the cluster and develop on the workstations.

  Since there is always a gap between the two (i.e. CentOS 5 on the cluster and Fedora 16 on the workstations before; CentOS 6 on the cluster and Fedora 20 on the workstations now), when the cluster is updated there is limited breakage, at least until now.

  I understand those that push a stable distro everywhere, maybe for next cycle I will do the same, who knows.

Re:Just deal with problem users individually. (1)

Fotis Georgatos (3006465) | about 3 months ago | (#47511447)

This!

I've been managing systems with hundreds of well-meaning and not-so-well-meaning scientists for years.
Generally, I subscribe to the school of thought that putting too many fences does more damage than good.

I know for myself that I *can* create trouble on a system in a zillion ways, and fencing against them all is almost pointless:
* fork bombs
* malloc bombs
* /tmp overuse
* /dev/shm overuse
* daemonized processes delivered into the background
The first two you may handle a bit with the PAM limit techniques described by a fellow poster, but not without limiting the capabilities of the system (i.e. you take out useful features to enforce some policy). The rest you can attempt to handle with some other clumsy fencing techniques, but again not without side effects.

In short, do not overengineer, yet be totally reactive: let the rules be relaxed in the beginning, monitor tightly, react quickly, and be sure to often have your users at the other end of the phone line or email justifying their tasks' activity. You'd be surprised how much you'll discover by doing that, and policies will be far more justified.

> One thing that shouldn't be underestimated is the ability of a user (especially a young user) with *lots* of free time on his/her hands to figure out ways to game the system...

Also this!
Young users with lots of free time will give you a headache, one way or another. But you can often stop them by just keeping an eye on them.

You don't have a problem (0)

Anonymous Coward | about 3 months ago | (#47509351)

Don't try to find a solution for a problem that doesn't exist.

If the users need more power, you buy more power, you don't limit them.

Re:You don't have a problem (1)

tiberus (258517) | about 3 months ago | (#47509569)

Truly spoken like a user with no concern for someone operating a lab with little to no budget.

A lot of this seems superfluous ... (4, Interesting)

dougmc (70836) | about 3 months ago | (#47509353)

If you're giving your users access to the machines, they should be able to use them. And if you can't trust them to use them responsibly, don't give them access.

If it were me, I'd secure the boxes normally, set up some resource usage rules (guidelines?) and see what happens. If problems happen often, then maybe look into something automated to enforce the rules, but if not, then you're done.

As for renicing stuff done by remote users, I'm not sure this is a good idea, but if you want to do it you can renice sshd itself, and to be thorough you can also renice crond (if you give them access to cron/at). But do keep in mind that nice (and ionice) can't do magic with an overloaded system -- they help, but they aren't miracle workers.
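A quick sketch of the renice approach (values arbitrary; note that nice values are inherited at fork, so this mainly affects sessions started after you run it):

renice -n 10 -p $(pgrep -x sshd)
ionice -c 3 -p $(pgrep -x sshd)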

As for commercial systems, I haven't really seen this as a big problem outside academia. Multiuser *nix systems where different people compete for resources are kind of rare in the commercial sector, as the trend lately is to have enough hardware, often dedicated, and to enforce limits through voluntary compliance (and have their boss talk to them if it's still a problem).

That "have their boss talk to them" bit may not work so well for students, but still, I would wait for a problem to appear before I put too much effort into solving it.

Instead, put your efforts into proper sysadmin stuff -- stay up to date on patches, look for problems (especially security ones), make sure backups work, help users with problems, etc. If there's any troublemakers, talk to them, and if they don't shape up after a few warnings, kick them out. (And make sure the policies permit that!)

You can enforce limits on specific users through pam and sshd_config and some other mechanisms, but I'd suggest leaving that for later. Anything you do that will limit what people can do will eventually keep them from doing what they legitimately need to be doing.

Create a Windows Domain (-1)

Anonymous Coward | about 3 months ago | (#47509375)

And have your users join it.

/etc/security/limits.conf (0)

Anonymous Coward | about 3 months ago | (#47509377)

$ man limits.conf
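For the OP's login-limiting requirement specifically, pam_limits has a maxlogins item; a sketch (group name invented):

@students  -  maxlogins  2    # at most 2 concurrent logins per member

Note that maxlogins counts all logins, not just graphical ones.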

A lot of complexity, a little gain? (2)

xeos (174989) | about 3 months ago | (#47509393)

That sounds like a lot of overhead for a problem that seems unlikely. I've used lots of multi-user linux boxes over the years and never noticed that a few bad users ruined the experience for everybody else. If it's really an issue, think of it instead as a learning opportunity - post concise instructions on proper lab utilization and how to use top, etc to check if somebody else is the reason why the machine you are using is slow. Then let users police each other.

Re:A lot of complexity, a little gain? (1)

dougmc (70836) | about 3 months ago | (#47509493)

I've used lots of multi-user linux boxes over the years and never noticed that a few bad users ruined the experience for everybody else.

I did ... but this was 25 years ago at college when hardware was scarce (we had 1 MB disk quotas!) and the computer system was used to do all sorts of things that people just couldn't do from their own personal computers (i.e. access mail, news or the Internet.)

Users policed each other back then to a degree, but there wasn't much you could do to make a bad user behave unless the sysadmins backed you, and they'd only back you if the user had explicitly broken the rules set down. And often you didn't even know who a user was -- if they sat at a console you'd know who they were, but if they dialed in you might just know their user name, and often that gave no clue who they really were. (The sysadmins knew, but they wouldn't share.)

But now ... most of the things that caused problems can be done from anybody's own computer, or from a PC down in a lab somewhere. True multiuser systems are kind of rare nowadays, and most users probably don't deal with them where back then we had little choice.

Does it have to be linux? (0)

Anonymous Coward | about 3 months ago | (#47509425)

If you don't strictly need enforcement, and monitoring (with occasional in-person admonishment) is enough, then you can start with system accounting and perhaps user class limiting on single machines, and aggregate the statistics for your perusal. FreeBSD, for one, ships with both as standard.

old's cool (0)

Anonymous Coward | about 3 months ago | (#47509457)

Just go old school. Disable remote access entirely. Only have 1 machine for every 10 users, and make them show up and camp at a desk while they do their work. Nobody EVER wasted compute resources when I was in school, because odds were good you spent 3-6 hours napping on the lab floor while you waited for a machine to free up. If anyone was wasting time/resources, odds are good they'd have been found lynched and dangling from the rafters in the morning.

Also, it was uphill, BOTH ways, barefoot and in the snow.

Technical solution to a social problem. (4, Insightful)

Vellmont (569020) | about 3 months ago | (#47509519)

If your users can't play nice together, the solution isn't to treat the place like a prison with automated systems enforcing a hard and fast set of rules.

The solution is for users to create their own enforcement. If some guy tries to take all the resources across your network with distcc, then the people affected should be able to notice that and tell the guy to knock it the fuck off.

In other words, give the users the freedom to break stuff, but also the knowledge to find out who's breaking their stuff. It'll serve them far better than creating a walled garden where someone else has the responsibility to enforce social rules.

Slashdot and reddit work this way. Neither goes around trying to enforce how people behave; they give the users the power to do that themselves.

Re:Technical solution to a social problem. (0)

Anonymous Coward | about 3 months ago | (#47509653)

Parent should definitely be modded up.

The solution to one or two people being a jerk, is to not treat everyone as a jerk.

Re:Technical solution to a social problem. (0)

Anonymous Coward | about 3 months ago | (#47510149)

The solution is for users to create their own enforcement. If some guy tries to take all the resources across your network with distcc, then the people affected should be able to notice that and tell the guy to knock it the fuck off.

The only problem? Today's entitled crop of whiny college students who are so used to having mom and dad helicopter in to resolve their problems that they'll probably end up shooting up a fucking computer lab out of frustration. These same students are completely unable to share, because they've largely never been denied anything in their lives.

Back in our day, the computer lab users would self-police, and behave like adults, because they were adults. Today? You'll have a bunch of screaming children crying constantly: "Mom? MOM? I don't know what to do... I can't do my work in the lab!" "Hush now, sweetums - I'll write a nasty letter to the Dean and solve that problem for you!"

Re:Technical solution to a social problem. (1)

Vellmont (569020) | about 3 months ago | (#47510795)

Ha. You make me laugh. People such as yourself have bad memories, or lived in some kind of sheltered environment. Every generation is convinced that the generation after them are the spawn of satan, and when THEY were that age they were all just perfect angels, or at the very least a HELL of a lot better than the current lot of miscreants. The attitude you're projecting has been common for at least the last 60 years.

Uhh.. when _I_ was that age about 20 years ago people were hacking into the computer science workstations, sniffing passwords, hacking root, running a bazillion processes on the box, etc. The only thing that's changed is now it's Linux machines, not SunOS machines.

Re:Technical solution to a social problem. (2)

ray-auch (454705) | about 3 months ago | (#47512145)

Seconded. Except >20yrs and HPUX rather than SunOS.

Police ourselves - yeah, sure we did. Act like adults? Er, nope. I figured out several ways to crash machines from the console; if someone logged in remotely and started using all the resources, I'd crash the machine and move to another. X was completely unsecured in those days, but they installed a graphical login. Fake login windows, key loggers, fake error windows (make the guy on the better workstation think it's crashed so he moves off it): check.

The best X trick back then was obviously the ability to put up a window on someone else's screen when the tutor was standing behind them; topless or nude pictures were good (bitmaps - this was before JPEG existed)... I guess the only thing that's changed now is that the available selection and quality of such images has increased a little. Happy days.

Re:Technical solution to a social problem. (0)

Anonymous Coward | about 3 months ago | (#47514915)

Ha. You make me laugh. People such as yourself have bad memories, or lived in some kind of sheltered environment.

Okay, maybe he makes you laugh, but I'm calling you a fucking idiot. This is no joke: at my previous employer (University of Arizona) a couple of junior-level CS students got into a fucking street fight (as in not just punches, but throwing shit, etc.) when one wouldn't give the other a copy of a makefile needed for an assignment. In.. the.. goddamned.. CS building...

Re:Technical solution to a social problem. (0)

Anonymous Coward | about 3 months ago | (#47519845)

And I am sure no one ever went ballistic after being tripped while carrying a stack of punch cards, or any other stupid stunt.

GP and you are deluded if you think this represents either a general trend or a new phenomenon, and neither of you has any personal experience in a classroom of any sort or age setting. The idea of a helicopter parent being able to do their kid's CS homework for them is beyond funny and into some strange realm where you wonder what exactly that person is smoking, but don't want any part of it for yourself.

Re:Technical solution to a social problem. (0)

Anonymous Coward | about 3 months ago | (#47510545)

If your users can't play nice together,

There are those who sometimes fail to play nice even with themselves. For them, an additional remote console access is a must, to stop them annoying tens of people simultaneously. Or at least disconnect the beeper. Thus speaks personal experience.

No point (0)

Anonymous Coward | about 3 months ago | (#47509525)

Giving people limited logins to the powerful stuff means that some who need them will run out while others who have zero reason to be there will hog it. Let them manage themselves on the ordinary machines; the powerful machines should require users to get up and ask you for access. You then put monitors on those machines, and if a less powerful machine would suffice, you boot them off and send them to it. If you really want some chrome, then add a usage histogram maker on a per-user, per-machine, per-program basis, etc. That way you can justify whatever you want with those numbers.

I would write my own with LDAP (1)

jerryjnormandin (1942378) | about 3 months ago | (#47509537)

I would write my own with LDAP and some custom code that will manage ulimit and other tools to manage resources. It's a piece of cake.

Re:I would write my own with LDAP (4, Insightful)

mrvis (462390) | about 3 months ago | (#47509579)

I would be terrified if you were my co-worker.

Re:I would write my own with LDAP (1)

Trogre (513942) | about 3 months ago | (#47520407)

My goodness, what has happened to Slashdot? Have the competent admins been replaced with morons?

NFS homedirs (2)

phorm (591458) | about 3 months ago | (#47509641)

Back when I worked in schools, one of our techs set up LTSP with NFS-mounted home directories.
I mentioned that perhaps IP-based host authorization wasn't exactly a secure way of doing things, especially when it applied to both students and teachers/admin staff.
I was told that it wouldn't be an issue, and that the files were perfectly safe.

So some time goes by and a demo is scheduled for the system. My compatriot logs in and... he gets a hot-pink desktop with a My Little Pony wallpaper theme. Unfortunately that didn't dissuade him from going with NFS, and they rolled it out anyway: "kids will never figure that out".

One thing that shouldn't be underestimated is the ability of a user (especially a young user) with *lots* of free time on his/her hands to figure out ways to game the system...

Server Cluster (2)

SampleFish (2769857) | about 3 months ago | (#47509645)

Easy solution:

Put all of your systems into one big active/active server cluster. Then everyone shares all the resources evenly by default.

Here is a Fedora resource:
http://clusterlabs.org/doc/en-... [clusterlabs.org]

If you really want to have some fun you should try to create a Plan9 cluster. This is a transparent cluster OS that was designed for the purpose of resource sharing.
http://plan9.bell-labs.com/pla... [bell-labs.com]

ldap (0)

Anonymous Coward | about 3 months ago | (#47509683)

Put the user's mobile phone number into an LDAP field that is easily accessible to any user (id username). Then let people sort those organizational things out themselves. If it's public who is causing a problem, people become less egoistic. The distcc guy may nice his processes himself, etc.

The problem is what you are using (1)

BillBrains (1686056) | about 3 months ago | (#47509707)

You write:

In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and nx management

NX? But you are using x2go? THAT is not NX. Contact the experts, i.e. NoMachine http://nomachine.com/ [nomachine.com]. Only the real authors of probably the most amazing remote access and management tool can help you there.

FreeIPA (1)

Baby Duck (176251) | about 3 months ago | (#47509731)

Since you are on Fedora already, I'd recommend FreeIPA. It'll give you more than your LDAP+PAM for centralized authentication and authorization, like Host-based Access Control, centralized sudoers policy, DNS, etc.

However, it wouldn't accomplish any of the tasks you specifically asked for out-of-the-box. I was thinking you could write some of these tasks as FreeIPA plugins.

Re:FreeIPA (0)

Anonymous Coward | about 3 months ago | (#47510271)

Since you are on Fedora already, I'd recommend FreeIPA. It'll give you more than your LDAP+PAM for centralized authentication and authorization, like Host-based Access Control, centralized sudoers policy, DNS, etc.

Why run an active directory clone when you could just run Active Directory? FreeIPA hasn't got anything OpenLDAP doesn't have... except, of course, subverted standards and bug compatibility with AD.

FreeIPA has, in my own experience, served primarily as a "gateway drug" for people who weren't ready to accept the intoxicating experience of full Microsoft lock-in. I've watched three sites go FreeIPA... and three sites move to AD... and now all three sites have decommissioned their Open Source DNS, DHCP, NTP, LDAP, etc. etc. etc. and fully embraced the delicious, delicious planned obsolescence of a pure Microsoft infrastructure.

Just accept the evil. Go AD. It won't hurt you - only your employers. You'll never look back, because it would hurt too much. Accept the evil, skip FreeIPA and go right to the purest form. It's calling you.

Re: FreeIPA (0)

Anonymous Coward | about 3 months ago | (#47512013)

FreeIPA doesn't have compatibility bugs or subverted standards. AD does. A perfect example is the multi-valued attribute "cn"... except in AD, where it's single-valued. The list goes on and on. Another is their ham-fisted use of NIS objects. Want automount map objects in AD? Too bad. Just fight with NIS objects to get the job done.

AD is good for people who just want or just need "good enough".

Re:FreeIPA (1)

Rutulian (171771) | about 3 months ago | (#47515145)

Well, if it's Linux, FreeIPA is better, because then you can take advantage of group policies that are designed to work with Linux. If you use AD, you will get authentication and that's about it. Now if you have Windows+Linux, it's a bigger problem. In our lab we went with AD, forsaking the advantages of FreeIPA for our Linux users, but you could also set up both servers with a shared trust. It's a bit more complicated, but this is something Red Hat is trying to develop into a turnkey solution.

Falls back in seat... (0)

Anonymous Coward | about 3 months ago | (#47509745)

It's as if I hear a thousand university students, seeing this post and the responses, all logging in to run the most resource intensive things they can find just to be pricks. The server was silenced.

Sort of glad my school didn't lock us down (0)

Fallen Kell (165468) | about 3 months ago | (#47509789)

I mean, if I'd had limits on how many systems I could connect to and use at once, I would never have passed two of my courses.

One was a neural networks course which involved programming a computational model, running 100,000 iterations of it, and analyzing the results. We had been given 6 weeks because the run was going to take at least a week or so, but I could not get my model to work for the life of me, and working with the professor I finally got it running the night before the results were due. He looked at me and said I should ask for an extension; I looked at him and said I thought I was fine. He gave me the "are you nuts?" look.

I wrote a script that split up the iterations and output across approximately 350 systems, with the more powerful ones getting higher iteration counts than the older ones. When I handed in my report of the results, the professor looked at them in disbelief that I had managed to do it in time, and asked me to stay when class was done, where I showed him all the code that spread the workload across the systems (it helped that I was working full-time managing a small Beowulf cluster at that point, so I had some experience with distributed computing resources). I got quite a few extra credit points for ingenuity on that...

The Cloud? (1)

gizmo2199 (458329) | about 3 months ago | (#47509809)

What about scalable cloud instances that students pay for out of their tuition fees? That way if they want to use 32GB of ram and 12 cores for their hello world.c program, they can do so without affecting other users, but they have to pay?

Limitations (0)

Dega704 (1454673) | about 3 months ago | (#47509819)

As much as I would like to see Linux displace Windows in these kinds of environments, there really aren't any systems that give you the same kind of management functionality as Userlock, or even Active Directory and Group Policy. It's possible of course, but only if you have the time, skill, and manpower to rig something together yourself. I'm sure I'll get flamed for saying this, but the Linux desktop has a long way to go before it can even hope to be a viable alternative to Windows in the enterprise. Even then it will not be possible unless that particular segment unifies around a specific distro. Not saying I like it; just being realistic. It certainly doesn't stop me from going all Linux at home, but it makes it an unthinkable idea to try and sell to management. Hopefully this changes in the future, but it's a long way down the road.

Responsibility-linked quotas (1)

cloud.pt (3412475) | about 3 months ago | (#47509937)

The only way I see this happening is if you totally migrate your lab to something like Amazon AWS/EC2, and link each user to an individual account with specific bandwidth and storage quotas (gratis, up to a limit).

For one, processing power won't be an issue, since that's on Amazon's side and it's virtually unlimited. Everyone will have a decent amount of the other resources for whatever they need, as long as they stay inside their quotas (which should be well defined).

A user abuses his quota? No problem: students get the overage charged to their tuition fees, or reflected in their grades. Same for employees/researchers, in their salaries or performance reviews. Is it a public community lab, say, a library? Restrict access based on fair usage, maintaining an external log of who is where and when. Hell, if anything like this is politically unfeasible, just warn your users; at least you will know individually who is doing what, without the heavy lifting required to analyze it manually.

Everybody will be self-educated on how to use the system. In the long run, the community will educate itself with no need for personal bad experiences. Much like a printing quota, or the water/electricity bill.

All resources are very similar when it comes to management, so the principle of fair use with retroactive consequences will always be the best bet.

Re:Responsability-linked quotas (1)

tibit (1762298) | about 3 months ago | (#47510491)

I'd just run my own "cloud" instead, using, say KVM. With billing etc. like in the old times.

Virtualization may be your answer (2)

Jailbrekr (73837) | about 3 months ago | (#47509945)

We had a similar issue with our engineers. We had login servers which worked great while they were poorly advertised and woefully underused, but once we had a system in place for engineers to make efficient use of them, they started to crash randomly. Most times it was because someone tried to submit a job to our compute farm and ended up running it on the login servers, but sometimes it was malicious: a deliberate attempt to get a few extra CPU cycles at the expense of others. For us, the solution was rolling our own virtual desktop farm. We used KVM for the hypervisor, Python for the back-end control, and PHP for the front-end web interface. We used Active Directory for authentication and rights management. That way we could control precisely how many resources each engineer had rights to.

As you are working at a school, it is not unreasonable to believe that you can use the students to help develop a system to manage the virtual instances. With a bit of forethought and a limit to the specifications, you can have a simple VDI broker developed and tested in a month. And if you avoid my mistake and use the libvirt API, you will even have the ability to easily expand the system to Linux containers.
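If you go the libvirt route, the per-guest caps are largely one-liners; a hedged sketch (domain name and sizes invented):

virsh setmaxmem studentvm 4194304 --config   # memory ceiling in KiB (= 4 GB), from next boot
virsh setvcpus studentvm 2 --config          # vCPU count, from next boot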

Cgroups might help (0)

Anonymous Coward | about 3 months ago | (#47510009)

http://en.wikipedia.org/wiki/Cgroups
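A minimal cgroup-v1 sketch of the idea (group name and limit invented; needs root, paths as on a typical Fedora-era system):

mkdir /sys/fs/cgroup/memory/students
echo 2G > /sys/fs/cgroup/memory/students/memory.limit_in_bytes   # cap the group at 2 GB
echo $$ > /sys/fs/cgroup/memory/students/tasks                   # move the current shell into it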

let them all use distcc (1)

Gothmolly (148874) | about 3 months ago | (#47510029)

To paraphrase Syndrome: When everyone's impacted by everyone's compile, no-one is.

Also, find me something other than a full kernel compile that takes measurable amounts of time on a real machine.

Re:let them all use distcc (0)

Anonymous Coward | about 3 months ago | (#47510171)

find me something other than a full kernel compile that takes measurable amounts of time on a real machine.

How about any reasonably advanced piece of software available on the market today?

Re:let them all use distcc (0)

Anonymous Coward | about 3 months ago | (#47513241)

find me something other than a full kernel compile that takes measurable amounts of time on a real machine

Compiling anything vaguely complex with GWT. It does rock though.

Why users hate IT (0)

Anonymous Coward | about 3 months ago | (#47510053)

I'm amazed at how much effort is placed on limiting researchers' use of computers.

Re:Why users hate IT (1)

Culture20 (968837) | about 3 months ago | (#47511291)

I'm amazed at how much effort is placed on limiting researchers' misuse of computers at the expense of other researchers

FTFY

Social problem, social solution (2)

Minwee (522556) | about 3 months ago | (#47510069)

Post a short, general list of rules in several obvious places. Make them reasonable enough to cover most possible user needs but flexible enough to cover things that you haven't thought of yet. Any user who is stupid enough to break the rules by running fork bombs, torrents, mining, hiding stashes of lemur porn or anything else which a child of six could tell you was a bad idea, will have their accounts disabled as soon as they are discovered.

If they have a good excuse for abusing the systems then discuss it with them, suggest alternatives to running rendering jobs on the lab servers and keeping passwords on sticky notes or whatever else it is that they are doing wrong and then restore their access, trusting that they will know better. If you do it right, they may even decide that it is better to ask for permission than forgiveness next time.

If they don't, send a memo to their department head briefly outlining what they did, how it was detected, what action you have taken, and that you won't be reversing this decision until you see a presidential pardon come down from an appropriately high authority. It doesn't matter if they have Really Important Work which needs to be done by the end of the week or not, just cut them off until the proper User Apology and Restoration procedure has been completed.

There you go. This solution is licensed under the WTFPL [wtfpl.net] which is compatible with the Open Source Definition and the Debian Free Software Guidelines so you can use it any way you want. You can even supply your own LART and display it prominently by the door of your office if that helps get the message across.

Re:Social problem, social solution (1)

evilviper (135110) | about 3 months ago | (#47513885)

That sounds reasonable only if you have a very small group of users, and loads of time to deal with it.

Everybody runs a fork bomb once in their life. A computer lab should be a safe place to make mistakes, not somewhere that any mistakes will make you a pariah. If you do take that unreasonable attitude, the "presidential pardons" will be coming down on a regular basis, just signed-off as a routine duty without the slightest thought, every time a department head requests it.

Re:Social problem, social solution (1)

Minwee (522556) | about 3 months ago | (#47515649)

If they have a good excuse for abusing the systems then discuss it with them, suggest alternatives to running rendering jobs on the lab servers and keeping passwords on sticky notes or whatever else it is that they are doing wrong and then restore their access, trusting that they will know better.

Everybody runs a fork bomb once in their life. A computer lab should be a safe place to make mistakes, not somewhere that any mistakes will make you a pariah.

It's good that we agree on that.

cgroups FTW (0)

Anonymous Coward | about 3 months ago | (#47510345)

cgroups (abbreviated from control groups) is a Linux kernel feature to limit, account, and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups.

Condor (0)

Anonymous Coward | about 3 months ago | (#47510665)

Use Condor, it will do what you want after you configure it properly:

http://en.wikipedia.org/wiki/Condor_cycle_scavenger

Yeah... (0)

Anonymous Coward | about 3 months ago | (#47510695)

...it's called Windows...and is easy to do there. Should try it sometime.

NComputing Terminals (0)

Anonymous Coward | about 3 months ago | (#47511409)

Buy one beefy pc and a ton of these - http://www.ncomputing.com/products/lseries/overview

Containers (1)

gmuslera (3436) | about 3 months ago | (#47511601)

Run user sessions in Linux containers (Docker is gaining momentum and may be the right option), which you can limit in the resources they use while being far more efficient than VMs. Just a word of caution: they aren't as secure as VMs. There may be present or future vulnerabilities that let hostile students break their limits and/or access the main system, since containers have more surface contact with the machine's kernel than proper virtualization does. Mixing VMs for security with containers for efficiency could be a good compromise.
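As a sketch of what the limiting looks like with Docker-era flags (image name and numbers invented):

# throwaway session container with 2 GB of RAM and a reduced CPU share
docker run -it --rm -m 2g -c 512 fedora:20 /bin/bash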

Here's how I'd do it (0)

Anonymous Coward | about 3 months ago | (#47512347)

1. Create a Linux image (you can use Clonezilla [clonezilla.org], g4u [feyrer.de] or Ghost [symantec.com]) that requires lab users to authenticate against LDAP, AD or something similar, so you have their actual user details for logging and auditing. Alternatively you could boot it from the network [tldp.org] or from CD. Another alternative is to use Deep Freeze [deepfreeze.com.au].
2. Ensure that the system is checked for integrity on startup, and that the latest image is downloaded and applied if it doesn't match the correct version. cron a reboot that forces this if you're worried about users doing stuff and not rebooting.
3. Ensure that logs are written to a syslog server, or that you at least get the auth logs somewhere (who logged in where, on what IP address, when, etc.); see the sketch after this list.
4. Give users as much access as you need to (yes, even root). If they do anything wrong you have the audit logs, and thanks to the imaging, unwanted software and programs will be removed.
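For step 3, the forwarding itself is a one-liner in rsyslog; a sketch (the loghost name is made up):

# /etc/rsyslog.conf on each lab machine: ship auth logs to a central host
auth,authpriv.* @@loghost.example.edu:514

(@@ forwards over TCP; a single @ would use UDP.)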

CGroups (0)

Anonymous Coward | about 3 months ago | (#47512621)

You can use cgroups to limit CPU, memory, etc. This is a relatively new feature of Linux.

Running Fedora - why not FreeIPA? (1)

thatkid_2002 (1529917) | about 3 months ago | (#47512837)

I think FreeIPA can address most of your needs and if you are already running Fedora then adding it to your network should be fairly trivial. FreeIPA is kind-of like an Active Directory type dealie (and it can synchronise against AD) that offers a lot of integration and control.

Don't get too worked up about resource management (1)

nobby (6911) | about 3 months ago | (#47513987)

I did my undergrad degree in a lab not unlike this (actually Sun workstations using NIS/NFS to mount home directories; this was the 1990s). These machines were likely 1-2 orders of magnitude less powerful than even your smallest desktop: desktops with 32MB of RAM and servers with 128-256MB. There was no resource management aside from disk quotas, and the lab worked fine.

Depending on what you mean by high usage, I would have thought even modest desktop systems would be powerful enough for just about anything people get up to in a university lab (unless you mean Z800s with 192GB of RAM and somebody with an application for a machine that big). You could try goosing your smaller desktops by searching for 20-40GB SSDs to use as system disks (this should be plenty for the OS, installed applications and swap) or upgrading the memory; SSDs like that go for peanuts on eBay.

Resources... (1)

whitroth (9367) | about 3 months ago | (#47516513)

I saw someone suggesting that the users should play nice. That'd be great... and maybe they did, 30 years ago. (We'll ignore the late-'80s/early-'90s habit of stealing someone else's xterm in the lab....)

I had a user last year - an intern - with, like everyone, an NFS-mounted home directory. It was, of course, shared with a good number of other users. He ran a job that dumped a logfile in his home directory. MANY gigs of logfile, enough to blow out the filesystem. Users were not amused. *I* was NOT AMUSED, as my home directory was on this filesystem, and my login was screwed up, as well as my firefox bookmarks.....

My question is what order of magnitude of users we're talking about - tens? hundreds? more? If it's small, sometimes human-to-human works.

ulimit might help, too. So might putting the abusers' home directories on the same filesystem and letting them duke it out....

                    mark
