
Docker Turns 1: What's the Future For Open Source Container Tech?

darthcamaro (735685) writes "Docker has become one of the most hyped open-source projects in recent years, making it hard to believe the project started only one year ago. In that year, Docker has gained the support of Red Hat and other major Linux vendors. What does the future hold for Docker? Will it overtake other forms of virtualization, or will it remain a curiosity?"
  • by Anonymous Coward

    Isn't Docker just a wrapper around the real container tech (union filesystems, cgroups/namespaces, basically LXC) with a cloud-init-style deployment script?
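
    Roughly what I mean, hand-rolled (a loose sketch: the rootfs path is made up, and the real stack also wires up cgroups and a union filesystem):

      # PID + mount isolation plus a chroot ~ a bare-bones "container"
      sudo unshare --pid --fork chroot /srv/rootfs /bin/sh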

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Yes, but it makes it much easier to use. It also adds an API and event model, as well as the ability to push and pull container images to a public or private registry. Add to that a growing ecosystem and you have a very interesting building block.
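
      For instance (image names hypothetical):

        docker pull ubuntu            # fetch a base image from the public registry
        docker push myuser/myapp      # publish your own image to a registry
        docker events                 # stream the event model mentioned above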

    • Yes, it is. I would be more sympathetic to Docker if they presented themselves as such, but even then I think people are better off understanding these tools directly.

  • What? (Score:5, Funny)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday March 21, 2014 @09:53PM (#46548517) Homepage Journal

    Docker has become one of the most hyped open-source projects in recent years

    The pants? Yeah, those are OK. They don't last that well.

    If I've heard of Docker once before, I don't remember it.

    • If I've heard of Docker once before, I don't remember it.

      That's what I'd say if Docker was a moped girl.

      • That's what I'd say if Docker was a moped girl.

        I'd ride a moped and I'd fuck a fat girl, or whatever it is that makes them a moped to you. But in this case, this is what I said because I don't want to make the mistake of claiming I've never heard of it when I may have left a snarky comment in a thread about it here on Slashdot.

        • I'd ride a moped and I'd fuck a fat girl, or whatever it is that makes them a moped to you.

          I see what you did there... now I'm the shallow mother-fucker.

          Well played.

          • I'd ride a moped and I'd fuck a fat girl, or whatever it is that makes them a moped to you.

            I see what you did there... now I'm the shallow mother-fucker.

            Well played.

            It's just that he's fat too, that's all. Probably a pervert as well. A fat pervert with small feet. Who rides around on a moped.

      • That's pretty damn funny

    • Glad I'm not the only one that thought "pants". Having not read the article(s) yet, I still have no idea what we're talking about, though I'm guessing it's not pants.

      Clearly the hype is failing to live up to its hype.

  • by Anonymous Coward

    The idea of Docker is cool, but the implementation needs work. It's pretty complicated to understand compared to, say, VMware or VirtualBox. Especially the versioning stuff; it's really annoying. It's like combining git or svn with virtual machines: you get the obscure, weird architecture of a version control system combined with the configuration complexity of a VM. It's pretty confusing even for seasoned professionals.

    • Well, you can read the help files for lxc-create, lxc-start, lxc-stop, and lxc-console. Zero to a running container should take anyone about an hour, and as a bonus, you'll understand what you're doing. Or use Docker, which makes create, stop, and start really easy to understand.
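
      Something like this gets you there (container name made up; assumes the Ubuntu template is installed):

        sudo lxc-create -t ubuntu -n demo   # build a container from the ubuntu template
        sudo lxc-start -n demo -d           # start it in the background
        sudo lxc-console -n demo            # attach to its console
        sudo lxc-stop -n demo               # shut it down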

    • by Anonymous Coward
      Docker isn't VMware or VirtualBox, and isn't intended to replace or even act like VMware or VirtualBox, so thinking about it in terms of VMware or VirtualBox is probably why you're having such a hard time understanding it.

      The classic use case for Docker is testing: you're writing some code and you need to test it. Static analysis will only get you so far. So you spin up a new lightweight container on your workstation, load the code into it and test it inside that. Docker helps with the "spin up a lightweight container" part.
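
      For example (image name and paths hypothetical):

        # throwaway container: mount your checkout, run the tests, discard it
        docker run -v $(pwd):/code my-test-image sh -c "cd /code && make test"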
      • by Lennie ( 16154 )

        I thought the classic use case is to have the same environment in dev, test, QA, production, wherever. Anywhere you can run a modern Linux kernel.

      • by qpqp ( 1969898 )

        I need to test this on both Ubuntu 12.04-LTS and FreeBSD 9.0

        That's not how containers work. You're bound to the kernel of your host.
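
        Easy to see for yourself (stock ubuntu image assumed):

          uname -r                     # host kernel version, e.g. 3.x
          docker run ubuntu uname -r   # prints the same version: the container shares it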

  • Subjects suck. (Score:5, Informative)

    by aardvarkjoe ( 156801 ) on Friday March 21, 2014 @10:30PM (#46548705)

    Since nobody else is commenting, I guess that I'm not the only one that had never heard of Docker.

    The story doesn't bother to summarize what Docker is. Or even give a link to an explanation. That may not be completely unreasonable, because it's hard to find any understandable information on the main website either. Apparently a "container" is a method of delivering an application that is geared towards VMs and cloud computing, but that's about all I got out of it.

    • But.. but.. aren't you amazed?!? It's only been a year since that thing you never heard of did something you aren't being told?!?!! Who says journalism is dead?
    • The story doesn't bother to summarize what Docker is. Or even give a link to an explanation.

      Hey, it's new within the last year and it's got lots of hype, so obviously it's got a .io domain. Everybody knows that open source projects that aren't .io by now are complete shit. (hey, I'm just trying to get on the hype wagon)

    • Yeah - when I first read the subject line, I thought this was about containers [containertech.com].
    • I haven't used it, but the gist I get is that rather than just outputting an application package that then has to be installed on an OS, your build system outputs a complete container that can be run with zero other dependencies in all your QA/test environments, right up until it's deployed to production.

    • Re:Subjects suck. (Score:5, Informative)

      by subreality ( 157447 ) on Saturday March 22, 2014 @01:28AM (#46549307)

      It's a high-level interface to LXC (similar to Solaris Containers, or FreeBSD Jails). If you're not familiar with those, think of it as a combination of:
        chroot (virtualized filesystem root)
        git (version control where a hash-id guarantees an exact environment)
        virtual machines (virtualized networking, process tables)
        make (you make a config file describing an image to start from, then all the things to do to set up your application / build environment / whatever)

      If you are building a complex product you can write a short Dockerfile which will:
        Start with 8dbd9e392a96 - a bare-bones Ubuntu 12.04 image
        apt-get install git gcc make libc6-dev
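
      Spelled out as an actual Dockerfile, that's something like (untested sketch):

        # bare-bones Ubuntu 12.04 base, referenced by image ID
        FROM 8dbd9e392a96
        RUN apt-get update && apt-get install -y git gcc make libc6-dev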

      You now have a completely reproducible build machine - Docker builds it and gives you back a hashref. You run it with the right arguments (basically: a path to where your source code is, plus a command to run) and it builds your project reliably (you always have a clean container exactly the way it was when you built it) and quickly (unlike a snapshotted VM there's no need to boot it - in a split second the container comes up and it's running your makefile). More importantly, everyone else working on your project can clone that tag and get /exactly/ your environment, and two years from now people won't be scratching their heads trying to reproduce the build server.
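
      Concretely, that's along these lines (tag and paths made up):

        docker build -t mybuilder .                              # returns an image hash
        docker run -v /path/to/src:/src mybuilder make -C /src   # clean, instant build env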

      Now let's say you're shipping your product - you're a web company, so you have to package it up for the operations guys to deploy. It used to be you would give a long list of dependencies (unreliable, and kind of a pain for the user); more recently you'd ship a VM image (big, resource-heavy, but at least it escapes dependency hell); with Docker you build an image, publish it on an internal server and give the hashref to the ops guys. They clone it (moderate-sized, resource-friendly) and they get your app with everything required to run it correctly exactly the way QA was running it.
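
      The hand-off looks roughly like this (registry host and names hypothetical):

        docker tag mybuilder registry.internal:5000/myapp
        docker push registry.internal:5000/myapp
        # ops side:
        docker pull registry.internal:5000/myapp
        docker run -d registry.internal:5000/myapp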

      As it's being run they can periodically checkpoint the filesystem state, much like snapshotting a VM. If something goes wrong it's easy to roll back and start up the previous version.
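
      Checkpointing is a one-liner (container ID and tag made up):

        docker commit c3f279d17e0a myapp:known-good   # snapshot the running container
        # roll back: stop the broken one, restart from the snapshot
        docker run -d myapp:known-good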

      It's a young project and there are still some rough edges, but the benefits are significant. I think in a few years doing builds without a container will be looked at the same way as coding without source control.

      • Thanks for the review and examples. I think, as of writing this, there may be a grand total of 2 relevant posts in this thread of 16... shit's gone downhill around here.
    • by gweihir ( 88907 )

      Never heard of it and I do follow the virtualization market.

    • And let's not forget that it was 'overhyped' so much that nobody has a clue what it is/was/does.

    • by jon3k ( 691256 )
      That's so you'll look it up and then be "in the know" by having "discovered it yourself". It's just a thinly veiled spam story.
  • by gmuslera ( 3436 ) on Friday March 21, 2014 @10:47PM (#46548785) Homepage Journal

    ... but rationalizing it. Sometimes you just need to run a more or less isolated single app, not a full-blown OS. In a lot of usage scenarios it is far more efficient (in disk/memory/CPU usage and in app density) and probably more flexible. In others, full OS virtualization or running on dedicated hardware may be the best option.

    It also brings a virtualization-like approach to apps in the cloud. You can have containerized apps in AWS, Google's cloud and many others, something like having a VM inside a VM.

    It's not the only solution of its kind. Google uses containers heavily in Omega [theregister.co.uk] (you can try their container stack with lmctfy [github.com]), and you can use OpenVZ, LXC, Solaris Zones, or BSD jails. But the way Docker mixes containers (not just LXC as of 0.9) with a union fs, making them portable and giving them inheritance, is a touch of genius.

    The missing pieces are being added by different projects: CoreOS [coreos.com] as a dedicated OS for containers (which, coupled with etcd and fleet, could become a big player in the near future), OpenStack/OpenShift bringing manageability, and maybe someone will bring to the table what Omega does with Google's containers.

    • by davecb ( 6526 ) <davecb@spamcop.net> on Saturday March 22, 2014 @09:50AM (#46550745) Homepage Journal

      Sun, when it still shone, used containers heavily, because they made "dedicate a machine" trivial.

      You could give a product or product suite a dedicated machine, and have netstat or vnstat report on just the behavior of the one program. You could clone a copy of production for the developers to base their next release on, you could hand a release to QA to test and have them hand it back, and finally you could hand a tested machine to production to start exposure testing.

      This allowed a much more agile cycle than having to re-install a product for development, install it again for test, then fail to reproduce a problem and have to reinstall both, and finally reinstall the "fixed" config on prod and have the bug come back! Far better quality, and far less work.

      I'm a capacity planner, so I liked it because I could give a "machine" a minimum guarantee of 20% of a 64-CPU machine, and know that it would give back the capacity it didn't use, something that "hard" LPARs can't do.
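
      From memory it went something like this (zone name made up; treat the syntax as a sketch: with the fair-share scheduler enabled and ~100 shares outstanding, 20 shares is roughly a 20% floor):

        zonecfg -z appzone
        zonecfg:appzone> add rctl
        zonecfg:appzone:rctl> set name=zone.cpu-shares
        zonecfg:appzone:rctl> add value (priv=privileged,limit=20,action=none)
        zonecfg:appzone:rctl> end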

    • by jon3k ( 691256 )
      Your comment was more interesting than the article, can you just write posts instead of timothy posting things?
  • WTF does it do? (Score:3, Insightful)

    by pla ( 258480 ) on Friday March 21, 2014 @11:57PM (#46549015) Journal
    Link 1: Wow, look how many people use Docker!
    Link 2: Okay, docker works as some sort of VMy thing, oh and hype hype hype in case you missed link #1.

    I rarely complain about FPs, even blatant Slashvertisements... But seriously? Yay, something wildly successful (that I've never heard of) has lasted a year. Woo-hoo! Pass me a beer.
  • I had never heard of "Docker" before today, nor heard any hype about it.

  • Run a minimalistic Linux box? Check.
    Put software on a virtual disk so I can chroot with a restriction to the device? Check.
    Build software statically linked to the libraries in the build directory so they don't need access to the rest of the system? Check.
    Know that it would be popular and might make monies? Doh!
