Docker containers are growing up and embracing open standards

In this new world where containers are challenging virtualized environments, Docker has had to grow up fast. And it isn't just growing up; it's opening up to new standards as well.

Container technology has been around for more than 10 years. Over that time, Docker has become one of the clear leaders in the field, alongside CoreOS's rkt (formerly Rocket) and orchestration projects such as Kubernetes. Now, with Docker donating its container format and runtime code to the Open Container Project (OCP), it is likely to become more than a front-runner. Its container format is poised to become a standard setter, capturing more of the hot container market.

Lars Herrmann, Senior Director of Strategy at Red Hat, offered a hopeful view of this future. "There are really two keys. There is the aspect of specification and the aspect of code. The great news is that these have both come together under an open governance structure under the Linux Foundation as a trusted neutral entity. It will take a little bit of time to bring it into complete alignment with the principles laid out in the spec to actually create that flexible architecture in which multiple implementations can thrive. But things move fast in the container community. We've seen that in a couple of months you can drive a lot of change."

Why are containers so popular?

Besides being lightweight and portable, containers have a natural edge in performance over virtualization. According to Jeremy Eder, Principal Software Engineer at Red Hat, "It really comes down to a couple of things. When you use a container, you are on a single kernel and you are sharing a single kernel that gives you direct access to hardware and memory. When you are using virtualization, you have to cross a boundary in the hypervisor. Crossing that boundary is where the performance impact takes place. It doesn't matter what hypervisor you're using, that is always the case."

But hasn't that inherent problem with VMs been addressed? Yes, but optimization still requires some effort, Eder explained. "There are plenty of workarounds for improving performance in virtualization, and every major virt vendor has already published best practices for virtualization performance. It actually takes a little bit of work to close the gap between bare metal and virtualization. You get performance basically 'for free' with containers. Performance is operationalized more easily because there's a lower bar to entry." Removing performance obstacles rather than overcoming them is certainly an attractive idea.
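To make the shared-kernel point concrete, here is a minimal sketch, assuming a Linux Docker host with the Docker Engine running and the docker Python SDK (docker-py) installed; the Alpine image and the use of the Python SDK are illustrative choices, not something either engineer prescribes.

    # Compare the kernel release reported by the host with the one reported
    # inside a throwaway container. Assumes a Linux host, a running Docker
    # Engine, and the `docker` Python SDK (pip install docker).
    import platform

    import docker

    client = docker.from_env()

    # Kernel release as seen on the host.
    host_kernel = platform.release()

    # Run `uname -r` in a short-lived Alpine container and capture its output.
    container_kernel = client.containers.run(
        "alpine:latest", "uname -r", remove=True
    ).decode().strip()

    print("host kernel:     ", host_kernel)
    print("container kernel:", container_kernel)

On a Linux host the two lines print the same kernel release, which is the boundary-free sharing Eder describes; a virtual machine on the same hardware would report its own guest kernel instead.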

Is there still room for specialization in container development?

As containers become more popular and a standardized container format and runtime is made widely available in open source, is there anything left for vendors to offer in this space? According to Herrmann, there is plenty of room for specialization. As long as there are problems to solve, there will be vendors building solutions. Here are three key areas where development is likely to take off:

  • Applications & services: Since the container format itself is just about getting from Point A to Point B, what's inside the container is still up for grabs. The possibilities are endless.
  • Container clustering: Containers have been helpful for packaging on the development side, but orchestration is needed to run them in production. Technologies like Swarm are likely to be developed to the next level, with variations based on use case (a minimal orchestration sketch follows this list).
  • Container management: This third level of abstraction is a must for enterprise-level container use. Management tools like Red Hat's own CloudForms will bring together containerized and VM-based workloads in the same UI for a unified user experience.
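As a rough illustration of what orchestration adds on top of running individual containers, here is a minimal sketch using the Swarm mode now built into the Docker Engine, again through the docker Python SDK; the image, service name, and replica count are arbitrary assumptions, and a real rollout would also define networks, health checks and an update policy.

    # Declare a replicated service on a single-node swarm. Assumes a running
    # Docker Engine and the `docker` Python SDK; all names and counts below
    # are illustrative only.
    import docker

    client = docker.from_env()

    # Turn this engine into a single-node swarm (raises an APIError if the
    # node already belongs to one).
    client.swarm.init()

    # Desired state: three replicas of an nginx-based web service.
    web = client.services.create(
        "nginx:alpine",
        name="web",
        mode=docker.types.ServiceMode("replicated", replicas=3),
    )

    # The swarm scheduler now works to keep three tasks running; if one
    # exits, it is rescheduled automatically.
    print([task["Status"]["State"] for task in web.tasks()])

That reconciliation loop, keeping the running system in line with a declared desired state, is the piece that turns containers from a packaging convenience into something you can run in production.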

A variety of best-of-breed solutions that address specific problems will likely crop up in the next few years, capitalizing on the fact that the platform is now well defined. Overall, standardization is going to propel development rather than dampen the fervor of solution vendors.

What is Red Hat developing in the container space?

For Eder's team, taking a closer look at how to get the most out of container performance offered benefits beyond what they first expected. They set up containers so that precious OS resources would be spent on maintenance activities only on an as-needed basis. "The process of beginning performance investigation on containers led us to develop something called the RHEL tools container, which is a performance analysis and debugging container that we're shipping with RHEL Atomic. It allows us to strip down RHEL Atomic significantly and take all of the normal debug and performance analysis (and even some statistics utilities) out of the base OS and move them to this debug container, which is deployed only during debug scenarios."
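The pattern Eder describes can be sketched roughly as follows, again with the docker Python SDK; the image reference, flags and host mount are illustrative assumptions rather than Red Hat's documented invocation, but they capture the idea of pulling debug tooling onto a stripped-down host only for the length of a debugging session.

    # Launch a separate debug/tools container with broad visibility into the
    # host, then tear it down when the investigation is over. The image name
    # and options are illustrative, not Red Hat's documented command line.
    import docker

    client = docker.from_env()

    debug = client.containers.run(
        "registry.access.redhat.com/rhel7/rhel-tools",  # illustrative image reference
        command="sleep infinity",   # keep the container alive during the session
        privileged=True,            # let profiling and tracing tools reach the host
        pid_mode="host",            # see host processes
        network_mode="host",        # see host network interfaces
        ipc_mode="host",            # see host IPC resources
        volumes={"/": {"bind": "/host", "mode": "ro"}},  # read-only view of the host filesystem
        detach=True,
        name="debug-tools",
    )

    # ...run perf, strace, sosreport and similar tools inside the container...

    # When debugging is done, the tooling disappears and the base OS stays minimal.
    debug.stop()
    debug.remove()

Nothing in the debug container's lifetime touches the minimal host image itself, which is what lets RHEL Atomic ship without the usual debugging and analysis packages.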

In Eder's world, debugging is a constant reality. But, as he pointed out, "In production, you don't necessarily need all this extra stuff." In the end, that's what using containers may boil down to: having a place to put stuff until you need it, and being able to access it with high performance when you do.

How will standardization of the container format and runtime help you? Let us know.

