Many people use Vagrant to quickly and consistently deploy the infrastructure upon which they want to do their development. Some also use Vagrant to work on the infrastructure components themselves, but we will concentrate on the first case.
Recently, containerization of infrastructure applications has allowed for lighter-weight deployment of that infrastructure [1]; commonly, people have been using Docker to provide the containerization. Vagrant has a provider for Docker called, intuitively enough, the Docker Provider [2], which lets you use containerized infrastructure applications in much the same way as traditional VM-hosted infrastructure. Personally, I found this confusing at first, because I was expecting to use the Vagrant Docker Provider to develop Docker containers. However, once you see it in action and recognize the traditional goals of Vagrant providers, I think it makes perfect sense.
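For comparison, a minimal Vagrantfile using the existing Docker Provider might look like the following sketch (the `mysql` image, port, and environment variable are just illustrative choices):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    # Pull and run an infrastructure container instead of booting a full VM
    d.image = "mysql"
    d.ports = ["3306:3306"]
    d.env   = { "MYSQL_ROOT_PASSWORD" => "secret" }
  end
end
```

Notice that the Vagrantfile describes the infrastructure container you want running, not a container image you are building; that is the source of the confusion mentioned above.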
However, what is inspiring this post is a desire to use Vagrant to develop applications that will ultimately be deployed on a Kubernetes environment. Kubernetes, in short, is a way of connecting containers together in a declarative and architecturally robust way. However, it can be a bit of a bear to set up and use. I know of two projects that provide Kubernetes functionality with Vagrant.
First, the Kubernetes project itself provides a configuration to launch a Kubernetes cluster in Vagrant. However, this isn't quite what I want, as it is really using Kubernetes tools to manage Vagrant, which is not my normal workflow.
Second, the Oh My Vagrant project recently added support for Kubernetes. The project allows you to articulate a multi-node cluster running various Linux distros and Docker containers. However, I find the complexity of that environment is more than I need just to run a web and a database server.
As a result, I am hoping for a Kubernetes Provider similar to the Docker Provider, where I just provide a couple of Kubernetes pod and, perhaps, service files, and Vagrant worries about the details of where and how to launch the cluster. In fact, my preference would be that the "cluster" is launched on a single VM serving as the Kubernetes Master, one Kubernetes Minion^-W Node, and my containers, to minimize overhead. After all, my goal is not to test Kubernetes, just to write an application that will sit on top of it. At some point, I will want to shift my application to a more "real world" scenario with more complexity but, while I am just writing my code, "similar" is probably sufficient. However, it would be complex enough to mirror the communication requirements which, in my experience, is where a lot of the nasty bugs show up.
I would also rather the default be to use a VM, which is different from the Docker Provider's behavior on my Linux machine, to isolate the complexity of the Kubernetes and Docker installations to an environment that can't bleed into my "daily driver" [3].
An example Vagrantfile might look like:
Vagrant.configure("2") do |config|
  config.vm.provider "kubernetes" do |k|
    k.build_dir     = "."  # for the Docker image that has your code
    k.pod_files     = ["./k8s/apache-pod.json"]
    k.service_files = ["./k8s/apache-service.json"]
    k.volumes       = { "apache-pod" => [] }  # per-pod volume mounts
  end
end
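The `apache-pod.json` file referenced above could be an ordinary Kubernetes pod definition; a minimal sketch (the image name and port are illustrative) might be:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "apache-pod",
    "labels": { "app": "apache" }
  },
  "spec": {
    "containers": [
      {
        "name": "apache",
        "image": "httpd",
        "ports": [ { "containerPort": 80 } ]
      }
    ]
  }
}
```

The appeal of such a provider is that this exact file would carry over, unchanged, to the "real world" cluster later on.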
At some point, I would love for it to support a nulecule definition, but that is probably in the future and/or the subject of another post.
I suppose I could just use a set of Docker containers that use "docker link" to talk to each other. However, in my experience the migration from Docker links to something like Kubernetes is not straightforward, and I would rather just iron out the complexity of communication at the outset. I could also use fig^-W docker-compose but, again, it is not quite the same thing and, if I am going to the effort of building a Vagrant Provider, it may as well be one closer to the "real" thing.
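For reference, the fig/docker-compose version of the web-plus-database setup would be something like the following sketch (service names and images are illustrative); it wires the containers together with links rather than Kubernetes services, which is exactly the communication model I would later have to migrate away from:

```yaml
# docker-compose.yml (v1 / fig-style format)
web:
  build: .          # the image that has your code
  ports:
    - "80:80"
  links:
    - db            # injects db's address into web's environment
db:
  image: postgres
```

Links give the `web` container the `db` container's address through environment variables and /etc/hosts entries, whereas Kubernetes resolves this through services, so code written against one does not transparently move to the other.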
In conclusion, I need a Vagrant Kubernetes Provider. Any volunteers?