Docker Inc. explains Docker as a way to ship an application using a standardized format, thereby enabling 'build once, run everywhere'. The analogy is made with international container transport, where containers with standard dimensions are transported on boats, trains and trucks alike. The contents of the containers are of no (real) concern to the companies that transport them.
First and foremost a developer's tool
It is absolutely true that this concept makes sense and could potentially solve quite a lot of problems when deploying a single application onto today's greatly varying environments (development, staging, production) and infrastructures (local, private clouds, public clouds). However, it should be emphasized that Docker is first and foremost a developer's tool. It promises to solve a major challenge that developers face, namely library dependencies on varying platforms. Encapsulating an application within a container that is static (in relation to those dependencies) would make deployment fully predictable and therefore much easier.
But this methodology fails to reflect the challenges of change. No application lives in a static environment, and all applications need constant refinement to reflect ongoing changes, both from within (security patches of dependencies) and from the outside (changing customer demands). Today's 'deploy fast' mantra enables many startups to roll out the latest and greatest technology at amazing speed. But it can also easily lead to a 'deploy and forget' mentality that turns life for operations into an ever-increasing nightmare: carrying responsibility for a myriad of black-box services.
Extended ownership and responsibility
One way to solve this issue is for developers to take extended ownership over their applications. This implies that developers take responsibility for operations – including monitoring and security patching – for the complete lifetime of the application. This is a radical change from today's project-driven development paradigm and requires a very different developer profile.
Alternatively, operations could provide developers with a secure base on which to build their applications, but this requires developers to properly describe how their applications are built upon that base. This is why a proper Dockerfile is important: it provides a means to recreate the project from scratch, thereby enabling an automatic way to include the latest security patches. The application deployment process also needs to include proper (preferably automated) testing tools – both unit tests and integration tests – to be able to guarantee that a (security) update does not (re)introduce bugs.
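As a minimal sketch of this idea (the application, file names and base image here are hypothetical, not taken from the article), a Dockerfile that rebuilds the project from scratch on top of a maintained base image might look like this:

```dockerfile
# Hypothetical Python web application; all names are illustrative.
# Rebuilding this image pulls in the latest patched base image
# and the latest distribution security updates.
FROM python:3.12-slim

# Apply the current security patches at build time.
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install pinned dependencies before copying the source,
# so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Because the whole build is described declaratively, operations can periodically rerun `docker build --pull --no-cache .` (and the accompanying test suite) to produce an image that includes the latest security patches without manual intervention.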
Whichever scenario is chosen, adopting Docker for deployments requires restructuring and rethinking the development/operations boundary in order to provide the same service-level agreements that customers take for granted from more traditional IT operations.
Photo credit: Rotterdam Express, Hapag-Lloyd via Photopin (license)