
oVirt 3.4 Means Management, VMs Can Live On the Same Machine

timothy posted about 7 months ago | from the right-there-in-the-open dept.


darthcamaro (735685) writes "Red Hat's open source oVirt project hit a major milestone this week with the release of version 3.4. It's got improved storage handling so users can mix and match different resource types, though the big new feature is one that seems painfully obvious. For the first time, oVirt users can have the oVirt Manager and oVirt VMs on the same physical machine. 'So, typically, customers deployed the oVirt engine on a physical machine or on a virtual machine that wasn't managed or monitored,' Scott Herold, principal product manager for Red Hat Enterprise Virtualization, said. 'The oVirt 3.4 release adds the ability for oVirt to self-host its engine, including monitoring and recovery of the virtual machine.'" (Wikipedia describes oVirt as "a free platform virtualization management web application community project.")
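For readers who want to try the self-hosted engine, the flow is driven by the `hosted-engine` tool that ships with oVirt 3.4; this is a rough sketch of the documented steps (exact prompts and package sets vary by release):

```shell
# Hedged sketch: deploying oVirt's self-hosted engine on a bare host.
yum install -y ovirt-hosted-engine-setup   # pulls in vdsm and dependencies
hosted-engine --deploy                     # interactive: creates the engine VM on
                                           # shared storage, installs the engine
                                           # inside it, and registers the host so
                                           # the engine VM is monitored/restarted
hosted-engine --vm-status                  # check the HA state of the engine VM
```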


51 comments


oReally (0)

Anonymous Coward | about 7 months ago | (#46608487)

n/t

Re:oReally (2)

TWX (665546) | about 7 months ago | (#46608491)

I can assure you, this software is too new for this feature to be documented in any of their tech books...

Still trying to wrap my head... (2, Interesting)

TWX (665546) | about 7 months ago | (#46608489)

...around the supposed benefits of server-side virtual machines.

You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608503)

One point is easy failover. The VM can, if configured, move to another machine in case of hardware failure.

Of course, your software could instead be designed with redundancy in mind.

Re:Still trying to wrap my head... (1)

Gordon Smith (3584195) | about 7 months ago | (#46608507)

Red Hat does that too... OpenShift. It uses Linux containers to slice up a single system.

Re:Still trying to wrap my head... (1)

Lennie (16154) | about 7 months ago | (#46608543)

Actually, OpenShift doesn't use containers, it uses SELinux to do that.

I wouldn't be surprised if they are working on moving to something that does both: containers secured by SELinux.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46609429)

OpenShift uses cgroups and SELinux. Cgroups provide the resource management capability of containers; SELinux provides the security boundary enforcement. In the near future, several namespaces will be introduced to provide isolation and peer invisibility.
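The cgroup half of that isolation is visible from a plain shell; this is an illustrative sketch (group name is hypothetical) of capping an app's memory with cgroup v1, the interface of RHEL 6/7-era kernels:

```shell
# Illustrative only: cap one app's memory via the cgroup v1 sysfs interface.
mkdir /sys/fs/cgroup/memory/app1
echo $((512*1024*1024)) > /sys/fs/cgroup/memory/app1/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/app1/tasks   # move the current shell (and its
                                             # children) into the group
```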

Re:Still trying to wrap my head... (3, Funny)

invictusvoyd (3546069) | about 7 months ago | (#46608509)

but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

Some hosting customers require control of the OS to run their proprietary security and optimization apps. Besides, virtualization allows for efficient utilization of hardware, power and rack space.

Re:Still trying to wrap my head... (4, Interesting)

MightyMartian (840721) | about 7 months ago | (#46608511)

As someone who uses KVM VMs, I see a number of advantages:

1. More efficient use of resources. A dedicated server usually idles a lot, and those cycles do nothing. Running guests allows empty cycles to be put to work.
2. Load balancing and moving resources around is a lot easier. Have a busy host, move a guest to an idle one.
3. Hardware abstraction. This is the big one for me. Guests are no longer tied to specific hardware, and I can build a new VM host and move guests to it a helluva lot more painlessly than I could with an OS installed directly on hardware.
4. Backup options. Coupled with functionality like logical volumes, I can make snapshots for backup or testing purposes with incredible ease.
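Point 4 in practice, as a minimal sketch for a KVM guest whose disk lives on an LVM logical volume (volume group and guest names are hypothetical):

```shell
# Snapshot a running guest's disk, back up from the frozen snapshot, clean up.
lvcreate --snapshot --size 5G --name guest1-snap /dev/vg0/guest1
dd if=/dev/vg0/guest1-snap bs=4M | gzip > /backup/guest1.img.gz
lvremove -f /dev/vg0/guest1-snap
```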

Re:Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608625)

As someone who uses KVM VMs, I see a number of advantages:

1. More efficient use of resources. A dedicated server usually idles a lot, and those cycles do nothing. Running guests allows empty cycles to be put to work.
2. Load balancing and moving resources around is a lot easier. Have a busy host, move a guest to an idle one.
3. Hardware abstraction. This is the big one for me. Guests are no longer tied to specific hardware, and I can build a new VM host and move guests to it a helluva lot more painlessly than I could with an OS installed directly on hardware.
4. Backup options. Coupled with functionality like logical volumes, I can make snapshots for backup or testing purposes with incredible ease.

You just described everything ONE operating system _should_ be able to do. Why do we need to shove one inside another to get ANY of that?

Please accept reality, folks: the operating system(s) in question are outdated. Between your host and your applications should be a thin runtime environment, fully instrumented by the host. A RHEL install doesn't fit that description, sorry.

It's pretty obvious we should be managing 10 hosts and 1000 apps, not 10 hosts, 100 guests, and 1000 apps.

Re:Still trying to wrap my head... (1)

MightyMartian (840721) | about 7 months ago | (#46608669)

You name me any operating system available at a reasonable cost that allows me to take instant snapshots and move to new hardware without downtime, and I'll agree with you. As it is, virtualization HAS been around for decades now.

Re:Still trying to wrap my head... (2)

kthreadd (1558445) | about 7 months ago | (#46608681)

GNU/Linux with LXC and Btrfs will give you more or less the same isolation that you get with virtual machines, but with no overhead. Moving to new hardware is still somewhat lagging behind, but once CRIU becomes useful I see no reason why that shouldn't be possible.
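A rough sketch of that LXC + Btrfs combination (container name and distro release are hypothetical); with the rootfs on a Btrfs subvolume, snapshots are instant and copy-on-write:

```shell
# Create a container backed by a Btrfs subvolume, snapshot it, start it.
lxc-create -n web1 -t download -B btrfs -- -d ubuntu -r trusty -a amd64
btrfs subvolume snapshot /var/lib/lxc/web1/rootfs /snapshots/web1-before-upgrade
lxc-start -n web1 -d
```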

Re:Still trying to wrap my head... (2)

DarwinSurvivor (1752106) | about 7 months ago | (#46608709)

FreeBSD also does something similar with its jail system. It's not quite a full VM, but you can still assign dedicated IP addresses and have a separate filesystem (or null-mount the existing one inside of it).

Re:Still trying to wrap my head... (3, Informative)

Lennie (16154) | about 7 months ago | (#46608907)

And Solaris (and open source forks) with Zones has had this for many, many years too.

Re:Still trying to wrap my head... (1)

Dog-Cow (21281) | about 7 months ago | (#46613347)

That only helps if every single piece of software that you'd ever want to run in a server environment runs on your chosen perfect OS. In the real world, virtualization allows for different OS's in the guests.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608705)

How does an operating system suddenly get you new physical machines?
A machine is limited by definition. You can exhaust the resources.

In this case all you can do is move parts of the consumption to other resources or deny access. The moving exists now and works pretty much fine.

Re:Still trying to wrap my head... (1)

Znork (31774) | about 7 months ago | (#46608847)

This isn't the '90s. Compared to hardware resources today, yes, an operating system is a thin runtime environment. Most of the resources are shared and usually they are abundant.

Using containers simply means you get yet another abstraction layer, that needs to be managed in yet another way, that will eventually evolve into being exactly the same thing as that operating system you tried to get away from.

Frankly, I'd rather be managing 1000 guests with 1000 apps, because once I have enough automation to spawn and manage guests on demand I don't want them being unique snowflakes and get the accompanying maintenance nightmare because they each host so much that they create infrastructure dependencies.

For most small to medium scale operations the main cost will be the personnel needed for management. If you're only running things you can deploy and support on something like containers, or even better OpenShift, that might be the better option. But if you're running a lot of things that you need to dink around with, even to a minor extent, and will run into support issues with, then you're just creating a drain on the one resource that's actually expensive: your admins' time.

Re:Still trying to wrap my head... (1)

Anonymous Coward | about 7 months ago | (#46612413)

>It's pretty obvious we shouldn't be managing 10 hosts, 100 guests, and 1000 apps instead of 10 guests and 1000 apps.
It's obvious you've never worked in a large environment.

Re:Still trying to wrap my head... (1)

Anonymous Coward | about 7 months ago | (#46608513)

...around the supposed benefits of server-side virtual machines.

You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

jeez, it's 2014. i think you've missed something, somewhere.

how about running multiple different OSes on a single piece of hardware?
how about decoupling your workload from hardware?
how about hardware upgrades without downtime (move the VM, put the hypervisor into maintenance, do the firmware upgrade or whatever)?

to name just a few.
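The zero-downtime upgrade workflow mentioned above, sketched with plain libvirt/KVM (hostnames and VM name are hypothetical; oVirt automates the same steps through its engine):

```shell
# Live-migrate the running guest off host1, then service host1 freely.
virsh migrate --live vm1 qemu+ssh://host2/system
# host1 now runs no guests: apply the firmware upgrade and reboot,
# then migrate guests back (or let the scheduler rebalance them).
```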

Re:Still trying to wrap my head... (1, Insightful)

jon3k (691256) | about 7 months ago | (#46609493)

The scary thing is it's modded +5 interesting. Seriously, Slashdot?

Re:Still trying to wrap my head... (1)

camperdave (969942) | about 7 months ago | (#46608523)

What's good for paying customers is also good for you. If you are running a host OS on a single box and you need to expand, then you will need to get another box with the same hardware so you can load the same host OS onto it. On the other hand, if you are virtualizing your own host OS, then you can throw disparate hardware at the problem.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608527)

It makes no sense... except that the people who buy it have a job that depends on it! I've written entire OS + app stacks for embedded devices that consume around 500 KB... It seems that having a self-contained app (no OS at all) is a better answer than this mess. IMHO.

Re:Still trying to wrap my head... (2)

omglolbah (731566) | about 7 months ago | (#46608545)

Well, one reason is when you have a vendor which does not support your system -at all- if you install any unauthorized software packages or even OS updates that have not been cleared.

At that point you want 'clean' VMs that follow the vendor spec exactly.

Re:Still trying to wrap my head... (1)

turbidostato (878842) | about 7 months ago | (#46608917)

"At that point you want 'clean' VMs that follow the vendor spec exactly."

Except, of course, when the vendor insists that their software shouldn't be virtualized at all.

Re:Still trying to wrap my head... (2)

Lehk228 (705449) | about 7 months ago | (#46609073)

then you find a vendor who has not been asleep at the switch for the last decade and a half

Re:Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608591)

...around the supposed benefits of server-side virtual machines.

You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

You are right to be confused. It's like building a giant house around a bunch of play houses because their inner walls were too thin to house more than one occupant comfortably. Obviously we need a bigger house, with good inner walls, and all the little furnishings the little houses had.

We don't need umpteen redundant sets of roofing, siding, and flooring, we need partitioning that makes sense on a larger scale.

But... that requires an OS vendor that isn't asleep at the wheel.

Re:Still trying to wrap my head... (1)

DarwinSurvivor (1752106) | about 7 months ago | (#46608711)

FreeBSD is pretty close with its jail system. You can read-only null-mount directories between guests (or the host) from the host side, share libraries and files while also setting separate IP addresses (makes firewalling easier if nothing else) and setting rigorous hardware access rules for each guest.
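The jail setup described above, sketched as FreeBSD commands (paths, addresses, and hostname are hypothetical):

```shell
# Share host files read-only into the jail, then start it with its own IP.
mount -t nullfs -o ro /usr/local/share /jails/web/usr/local/share
jail -c name=web path=/jails/web ip4.addr=192.168.1.50 \
     host.hostname=web.example.com command=/bin/sh /etc/rc
```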

Re:Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46608647)

This is an awesome idea, because everybody loves having service X compromised because there's a vulnerability in unrelated service Y! Please, can I buy hosting from you?

One Ping to root them all, One Ping to /usr/bin/find them; One Ping to bring them all down and in the darkness bind them.

Re:Still trying to wrap my head... (2)

jimicus (737525) | about 7 months ago | (#46608749)

A couple off the top of my head:

  - You wouldn't believe the number of poorly written applications that will happily bring a server to its knees no matter how powerful. This way you can reset just that application, not the whole business.
  - An application that was never written with any sort of HA in mind can be made highly available without any changes.

Re:Still trying to wrap my head... (1)

Salafrance Underhill (2947653) | about 7 months ago | (#46608871)

Play with one under a decent system at some point. They're useful for all of the reasons people have already given, plus they make fantastic forensic, repair and testing environments.

Re:Still trying to wrap my head... (1)

Anonymous Coward | about 7 months ago | (#46608943)

Every time the topic comes up, someone like you comes along and acts confused.

I really can't be bothered to explain, and other posters have had a stab. However, I will point out the irony given your sig: IBM has been doing virtualization on S/370 (VM/370) since 1972. Yet apparently the concept of virtualising your platform is still new and confusing to some people.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46611189)

It isn't "new or confusing"; it's convoluted, verbose and overly complex. I liken it to a Rube Goldberg machine, in that running an application on an OS in order to run another application inside that "VM application" is nonsensical! It just goes to show how insanely poor the IT implementation is. If it were engineered, it wouldn't be so crazy. What is needed is application executables that run natively on the hardware without specific OS support. Something like a super-BIOS. The system clearly isn't designed properly for this implementation, such that it requires running wrapper after wrapper after wrapper around the executing code. Running an entire OS for a specialized app is bad enough, let alone multiple copies. Just imagine how small most apps will be compared to the system overhead; I'll bet it's 0.0001% or less in 99.999% of cases. There's your inefficiency! If you can't see the silliness in that, well... Don't assume superiority to those that 'don't accept' the general concept. Sorry for the poorly executed English. Thanks.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46611465)

What is needed is application executables that run natively on the hardware without specific OS support. Somthing like a super-BIOS.

What you're referring to is an exo-kernel. The fact that you don't know this implies you don't know what research and implementations exist around exo-kernels, and are therefore woefully poorly qualified to be lecturing anyone else on the subject of them or virtualisation technology.

The system clearly isn't designed properly for this implementation

No, it isn't. Which is why we have virtualisation technology.

Don't assume superiority to those that 'don't accept' the general concept.

When did I assume superiority? Although I will now: your hypothetical land of unicorns and single applications running on top of tiny exo-kernels all silo'd within their own hardware assisted partition is just never going to happen in general use. Ever. Because apart from the fact that it throws away everything we already know about multi-tasking and network technology and security domains, and replaces it with something that doesn't do any of those things, it isn't backwards compatible. And there's your problem: there is a huge investment in traditional UNIX and Windows on top of x86 (both 32 and 64 bit). It isn't going to go away. Which is why we have virtualisation on those platforms, for those platforms: it is the path of least resistance to a more efficient use of resources without the need to discard everything.

So you keep wailing and gnashing your teeth about how amazing utopia could be if we would all just open our eyes, and out here in the real world we'll get on with the job of making incremental improvements.

Re:Still trying to wrap my head... (1)

GreyWolf3000 (468618) | about 7 months ago | (#46609059)

I'd say the killer feature is pure remote management. You don't need to physically manage your systems anymore.

Re:Still trying to wrap my head... (1)

dreamchaser (49529) | about 7 months ago | (#46609085)

...around the supposed benefits of server-side virtual machines.

You're running an operating system, so that you can run a software package, so that you can run another operating system, so that you can run another software package that is then interfaced-to by users or other stations on the network?

I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

There are tons of benefits to virtualization. One is more efficient use of resources on the server (CPU, RAM, I/O, etc.). The other is the ability in some platforms to actually move running VMs from server to server which can be useful for balancing resources and maintenance. You already pointed out the benefit in a multi-tenant environment.

Done properly, where it makes sense (which is for many, many applications) it can save money and provide a more robust environment.

Re:Still trying to wrap my head... (1)

inhuman_4 (1294516) | about 7 months ago | (#46609133)

One big issue is that virtual machines allow for different OSes. So if you provide a variety of services (legacy applications, for example), you can consolidate them all onto one machine.

It also allows for easier testing. Say, for example, you need to stress test your application on some combination of Red Hat, SUSE, Debian, FreeBSD, Windows Server, Mac, and Solaris, or even a variety of different versions of those OSes. Putting them all in virtual machines is much simpler than re-installing or having a dedicated machine for each one. It also makes it easy to call up your test environment if a customer reports a bug.
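One way to script part of that test matrix with libvirt's virt-install (ISO paths and guest names are hypothetical):

```shell
# Spin up one test guest per OS image, headless, from install media.
for os in rhel6 sles11 debian7 freebsd9; do
    virt-install --name "test-$os" --ram 2048 --disk size=20 \
                 --cdrom "/isos/$os.iso" --noautoconsole
done
```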

Re:Still trying to wrap my head... (1)

jon3k (691256) | about 7 months ago | (#46609483)

Uh, it's pretty simple. Take 50 relatively idle servers. Combine them onto two physical servers as VMs. Spend 1/25th the money on servers and have complete hardware redundancy for every host which is now a VM. What is there not to get?

Re:Still trying to wrap my head... (4, Informative)

nine-times (778537) | about 7 months ago | (#46609505)

I may be confused, but... are you questioning the whole idea of hypervisors on servers at all?

There are a lot of reasons for that. One of the simple reasons is that it's cheaper. When you're working in IT, you often have a bare minimum of hardware you have to buy with each server in order to be safe, e.g. dual hot-plug power supplies, hot-plug RAID enclosures and drives, lights-out management, etc. Because of that, each server you buy is going to end up being about $4k minimum, and the price goes up from there. If you have to buy 5 servers, you might be spending $25k even if they aren't powerful servers. However, you may be able to run all of those servers on a single server that costs $10k. In addition to the initial purchase being less, it will also use less power, take up less space, and put out less heat. All of that means it'll be cheaper over the long term. It will also require less administration. For example, if an important firmware update comes out that requires a certain amount of work to schedule and perform, you're doing that update on 1/5 of the servers you would be doing it on. Oh, and warranty renewals and other support will probably be cheaper.

So more directly addressing the question, which I think was, "Why not just buy one big server and install everything on it?" There are lots of reasons. I think the most important reason is to isolate the servers. I'm a big believer in the idea of "1 server does 1 thing", except when there are certain tasks that group well together. For example, I might have one server run the web and database services for multiple web apps, and another run DNS/DHCP/AD, but I don't really want one server to do both of those things.

And there are a few reasons for that. Security is a big one. There are services that need to be exposed to the internet, and then there are services where I don't want the server running them to be internet-accessible. Putting all of those services on the same physical server creates a security problem, unless I virtualize and split the roles into different virtual machines. Or it may be that I need to provide administrative access to the server to different groups of people, but each can't have administrative access to each other's data. Hosting providers are a good example of this: You and I could both be hosting our web application on the same physical machine at the same hosting provider, and we both might need administrative access to the server. However, I don't want you having access to my files and you don't want me having access to yours.

Another big reason you'll want to isolate your servers is to meet software requirements. I might have one application that runs on Windows, but is only supported up to 2008R2. I might have another application or role that needs to run on Linux. I might have a third role where I really want to use Windows 2012R2 to take advantage of a feature that's unavailable in earlier versions of Windows. How would I put those things on the same server without using virtual machines?

Isolating your servers is also good because it tends to improve stability. Many applications are poorly written and can cause crashes or security problems, and keeping them in their own VM prevents those applications from interfering with other applications running on the same physical hardware. I can even decide how to allocate the RAM and CPU across the virtual machines, preventing any one application from slowing down the rest by being a resource hog.

Aside from all that, there are a bunch of other peripheral benefits. For example, with virtual machines, you have more options for snapshotting, backup and replication, restoring to dissimilar hardware, etc. With traditional installs, I need special software to do bare-metal restores in case something goes wrong, and the techniques used in that software often don't work quite right. With virtualized machines, I just need the VM's files copied to a compatible hypervisor, and I can start it up wherever I need to. With the right software, I can even move the whole VM live, without shutting it down, to another physical server.

There are probably a few other benefits that I'm just not thinking of off the top of my head.

Re:Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46609861)

One of the big features comes down to remote management ability. If a virtual machine becomes unresponsive, you can restart it remotely and avoid needing to be onsite.
Having it split up by applications also means there is a much lower chance of a misbehaving application causing impact to the others.
That's what helps in the business IT world at least, though there are other benefits as well like migration, etc.

Re:Still trying to wrap my head... (1)

red crab (1044734) | about 7 months ago | (#46609949)

AIX Workload Partitions and Solaris Zones already implement that concept, but it's more about application mobility than optimal performance. It usually makes sense to have your own box when you need more control of your environment. And anyway, the resources are dynamically allotted, so it's unlikely that your web server box would be holding on to its 32 GB of allocated memory even when it's not heavily loaded.

Re:Still trying to wrap my head... (1)

tji (74570) | about 7 months ago | (#46610407)

Common inexpensive server machines are very powerful today. Many cores, many GB of RAM. It becomes a management and flexibility nightmare to host all the desired servers on a single operating system.

For example, group A needs a web app hosted in a Tomcat environment; B needs a JBoss-based app; C and D need two different Django apps; E and F need Rails apps. All of those apps together still only need 10% of the resources of the server, so you can also host 20 other services on it. Good luck managing the dependencies across all the apps. Try upgrading libraries used by multiple services: you're stuck with the lowest common denominator. Now add in the fact that groups J and K want an app supported on Windows Server 2003, and L and M want Windows Server 2012.

In a VM environment, you can isolate each server into its own OS, with its own minimal set of needed libraries, and you need only manage and test how it works with the single hosted app. You can also bolt on more resources by throwing another server in the cluster and distributing the load.

TL;DR: Servers today are really powerful. You can be very resource inefficient to gain a ton of operational efficiency.

Solaris Zones (0)

Anonymous Coward | about 7 months ago | (#46610611)

I guess that I can see it for boxes that serve multiple, different paying subscribers that each get their own "box", but wouldn't it just make more sense to size the applications to use the host OS on a single box as opposed to running multiple copies of operating systems and services that eat resources when the virtual hosts all belong to a single customer?

Personally this is why I always like Solaris' zone (like FreeBSD Jails++) more than other forms of virtualization (VMware, KVM, Xen, etc.).

The guests are completely isolated from the hosting system, but you only have one kernel, so the overhead is almost nothing. Patching and other system maintenance are also less burdensome because you're not dealing with multiple "full" systems (with libraries, etc.), and you can update many systems from the main host.

There have been guest-to-host privilege escalation exploits for KVM, but I'm not aware of anyone accomplishing the same thing with Solaris Zones (or FreeBSD jails), so the security isolation of containers can be strong. (Linux's containers don't have the same track record sadly.)

Re:Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46610839)

I understand the replies here, but I'm still with the poster of this question.

Perhaps the question should be re-phrased as: where did our operating systems go so badly wrong that we decided we need virtualisation?

After all, it's the job of the OS to arbitrate access to CPUs and hardware devices, right?

At least in Linux, package management and constantly moving dependencies feel like one reason. Inadequate isolation and resource guarantees are another.

Why not go the whole hog and cleanly divide our OSes into two layers: a hardware abstraction layer, and the rest of the OS? At least if we're going to go with this whole virtualisation thing, we may as well take away some of the workload and variation that developers have to deal with.

Re:Still trying to wrap my head... (1)

Dog-Cow (21281) | about 7 months ago | (#46613493)

Multiple, incompatible guest OS's.

I am not really sure what more need be said.

Re: Still trying to wrap my head... (0)

Anonymous Coward | about 7 months ago | (#46615565)

When did the OS go wrong? About the time someone released a new, not-quite-compatible version, and suddenly program A couldn't run on the same system as program B. Sadly, "update everything everywhere all at once" is impractical.

Re:Still trying to wrap my head... (1)

bolsh (135555) | about 7 months ago | (#46619179)

Some benefits:

1. Instead of running 10 services on one physical machine the way we used to, you run one service per VM (one web server, one middleware server, one database server, etc.). You add the overhead of multiple operating system runtimes, but thanks to hypervisor optimizations identical memory pages are merged, so you don't use much more RAM.
2. If one server gets over-subscribed, you just live migrate a running service to a less loaded server. No more building the infrastructure on another server, doing a dump/restore process, updating DNS, etc.
3. If servers are under-subscribed, you can consolidate services onto a smaller number of servers and spin down the under-utilized ones, saving power.
4. High availability: have services monitored, so that if the host they're on goes down they're immediately brought up on another host. You can even have redundant copies of the VM running all the time so that you don't have any downtime if a host goes down.
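The memory-merging claim in point 1 refers to KSM (Kernel Samepage Merging); on a KVM host you can watch it work via sysfs (paths per the kernel's KSM documentation):

```shell
# Enable KSM and inspect how many guest pages have been deduplicated.
echo 1 > /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_sharing   # pages currently merged across guests
```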

As to your question about whether it makes sense to size applications to servers... servers these days are way more than your typical application needs. It would be wasteful to use a 16-core, 500 GB RAM server as a web server. Virtualization gives you a way to run this hardware at capacity. I suppose you could run multiple services on one OS, either directly or in containers (as another commenter suggested). That's certainly a valid approach, especially for scale-out applications like you find at global web app vendors such as Google or Facebook. Virtualization gives you all the benefits of consolidation, with the ability to run multiple guest OSes, provide granular access to people using the infrastructure, and provide a high level of security against jailbreaking applications.

Cheers,
Dave.

shameful post . (0)

Anonymous Coward | about 7 months ago | (#46608727)

i've been doing this for a year and a half.

just yum install ovirt-* and bang! self hosted ovirt 3.... on zfs.

shameful post.

Re:shameful post . (0)

Anonymous Coward | about 7 months ago | (#46608729)

PS: i didn't RTFA

So your typical customer (0)

Anonymous Coward | about 7 months ago | (#46609609)

Is a moron.

All it needs is OCCI (1)

ScienceMan (636648) | about 7 months ago | (#46610213)

Now if we could just get it interfaced to the Open Cloud Computing Interface (https://en.wikipedia.org/wiki/Open_Cloud_Computing_Interface), all would be well.