Here we are with Day 2 at DockerCon 2015! This was an extremely enthusiastic day with a lot of great announcements!
The general session started with a “Happy birthday, Docker Hub” celebration marking its first year. Launched last year at DockerCon in San Francisco, Docker Hub, the Docker image repository, has grown to more than 240,000 users across 13,000 organizations and has served more than 500 million image pulls.
Chris Buckley, Director of DevOps at Business Insider, took the stage to present the Business Insider (BI) case. He walked through BI’s early adoption of Docker and shared some lessons learned along the way:
- Targeting production first seemed grand at the time, but it turned out not to be the right call
- Porting that production setup back into development wasn’t the right approach either
This path led Business Insider to Fig (now Docker Compose), with a consequent drop in the time it took to get a development environment up and running. The combination of Vagrant and Docker allowed BI to reduce that time to two hours.
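To give an idea of what this looked like in practice, here is a minimal Fig-era compose file (v1 syntax, as used at the time); the service names, images, and ports are illustrative, not BI’s actual stack:

```yaml
# Fig / early docker-compose v1 format: services at the top level.
# "links" wires the web container to the db container by hostname.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A single `fig up` (later `docker-compose up`) would then start the whole environment, which is how setup time collapsed from hours of manual provisioning to minutes.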
When BI revisited its production apps, it first switched to Upstart/SysV scripts to manage containers, but again this wasn’t the right fit. BI turned back to Puppet to define containers, their links, options to set the environment, and dependencies on other containers that must start first.
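A Puppet manifest along these lines can express exactly those pieces — image, links, environment, and start-order dependencies. This sketch assumes the community `docker` Puppet module (with its `docker::run` defined type); the image names and variables are hypothetical:

```puppet
# Install and configure the Docker engine via the community docker module.
class { 'docker': }

# Database container must be running before the app container starts.
docker::run { 'db':
  image => 'mysql:5.6',
  env   => ['MYSQL_ROOT_PASSWORD=secret'],
}

# App container: linked to the db, with environment options set,
# and an explicit dependency so 'db' starts first.
docker::run { 'app':
  image   => 'example/app:latest',   # hypothetical image name
  links   => ['db:db'],
  env     => ['APP_ENV=production'],
  require => Docker::Run['db'],
}
```

The appeal for a team already invested in Puppet is that container lifecycle management lands in the same tool and review process as the rest of their infrastructure code.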
Before Docker, the workflow ran from developers to GitHub to Jenkins, which then deployed to AWS for production. Adding Docker to this mix allowed the developer and operations streams to flow together and mix ingredients where needed. In short, Buckley suggested using what you already know and have, because it’s pointless to rediscover it; BI’s continued use of Puppet before and after Docker is an example of this. He also pointed out that there is no wrong way to experiment: you just need to find what fits for you.
He then summed up BI’s next steps with Docker in two points: exploring more orchestration options and going deeper into automated builds.
At this point Buckley handed the stage to Marianna Tessel, VP of Engineering at Docker, who came to talk about Docker Hub. The word she used to describe a better Docker Hub is “quality”.
A lot of improvements have been made to Docker Hub, such as a 2x speedup of the Dashboard and increased reliability, demonstrated by the small number of errors in Docker Hub. Docker is also working to improve Docker Hub’s security and to add new features and functionality.
Tessel then announced a public beta of the new Docker Hub, available from that day.
At this point, Tessel called Scott Johnston, VP of Product Management, onto the stage. Johnston, tying into this year’s DockerCon main theme (running Docker in production), highlighted some features on which he had heard good feedback from Docker users:
- On-premises registry
- Directory integration
He then introduced one of the most popular images on the Docker Hub, with 6.5 million downloads: the open source Docker registry. Johnston officially announced Docker Trusted Registry (DTR), a commercial product that answers customers’ need for an on-premises solution. Docker Trusted Registry is an on-premises registry that integrates with LDAP/Active Directory, offers role-based access control, and provides audit/event logging for compliance. It is easy to deploy, upgrade, and restore.
Johnston previewed the user interface of Docker Trusted Registry and said that more than 800 people, including companies like GE, Capital One, and Disney, participated in the private beta.
At this point, Johnston introduced Nirmal Mehta, a senior lead technologist at Booz Allen Hamilton who works on the US General Services Administration (GSA) project. The US GSA is considered Docker Trusted Registry’s first customer.
Mehta began with an overview of the US GSA project, joking that “Washington DC loves the monoliths” in reference to current GSA applications. These apps, he said, are so rigid that they often implement duplicated services using different technologies and solutions. For this reason, Booz Allen has been engaged to create a common open source platform that will renew these rigid applications, using Docker and other open source technologies. He then ran a demo to show what they have already done with this project.
After the brief demo, Mehta reviewed the benefits the platform will bring to GSA:
- More customer-centric services
- Reduced time-to-market
- Reduced costs
- New business opportunities
- Reduced time for security review
Mehta also talked about Notary and how it can be used for image governance, about using Keywhiz to inject secrets into containers, about ongoing work on container networking, and about using Interlock plugins and APIs for tighter integration.
Mehta handed back to Johnston, who presented Docker’s new commercial support service: a single subscription offering priced at $150 per month for 10 Docker Engines, including support for the Docker Engines, the image registry, and various scaling levels.
The following part of the keynote was a roundtable in which Mark Russinovich (Microsoft), Jason McGee (IBM), and Michael Farber (Booz Allen Hamilton) discussed the future of distributed applications. They analyzed the forces driving the move to microservices-based architectures and how this is bringing about an evolution in development, applications, architectures, organizations, and culture.
After this interesting panel, Johnston asked Russinovich about using both Linux and Windows for Docker containers. Johnston and Russinovich mentioned some of the Docker-Microsoft collaborations that have taken place over the past year, such as Windows Server Containers, and started a demo of a few Microsoft integrations with Docker.
Russinovich also demonstrated Docker integration in Visual Studio, on both Windows and OS X, and showed how Visual Studio can build and deploy Docker containers to remote hosts.
At the end of his talk he showed the result of Microsoft’s participation in a Docker hackathon over the weekend: a Cortana/Docker integration for voice commands.
Johnston returned to the stage to introduce Project Orca, saying that it is not a product but a vision of the future, driven by sysadmins rather than developers. Project Orca aims to provide a top-to-bottom integrated stack that brings together all the tools and plumbing (Docker Engine, Docker Swarm, networking, a GUI, Docker Compose, security, plus tools for installation, deployment, configuration, etc.). Project Orca sits in the final part of Docker’s “build-ship-run” process.
Johnston welcomed Evan Hazlett, “chief hacker” for Project Orca, who started a demo showing Orca’s ability to integrate with directory services and provide role-based access control. He also highlighted the logging/auditing functionality, which lets users see what has happened. Orca provides lists of images, including what’s inside each image, and integrates with Docker Swarm to show and manage the nodes in a Swarm cluster. Orca uses the concept of stacks and can scale the number of containers associated with a stack. Developers can update their application definitions, and Orca will dynamically deploy those changes to production.
Johnston closed the session by thanking Docker partners and, last but not least, Docker users for their great support.
Share this article on social networks… and don’t miss the next article on Kiratech’s journey to Mountain View! We’ll visit some of the most famous companies headquartered in Silicon Valley and share all these unforgettable moments with you. Stay tuned!