The short version: Docker acquired a fantastic company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker.
Today Docker is spinning out its core container runtime functionality into a standalone component, incorporating it into a separate project called containerd, and will be donating it to a neutral foundation early next year. This is the latest chapter in a multi-year effort to break up the Docker platform into a more modular architecture of loosely coupled components.
In the first post of this series, we introduced using Kubernetes for deployments. In this post, we'll get started with integrating Codeship into the workflow. Given a functioning Kubernetes Deployment (remember our discussion from the last post about the difference between deployment and Kubernetes' Deployment), how do we integrate it into our Codeship workflow?
Editor's note: Today's post is by Sebastien Goasguen, Founder of Skippbox, showing a new tool to move from 'docker-compose' to Kubernetes. At Skippbox, we developed kompose, a tool to automatically transform your Docker Compose application into Kubernetes manifests, allowing you to start a Compose application on a Kubernetes cluster with a single kompose up command.
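A minimal sketch of that workflow, assuming kompose is installed and a docker-compose.yml sits in the current directory (subcommand names reflect the early kompose releases and may have changed since):

```shell
# Generate Kubernetes manifests (Deployments, Services) from docker-compose.yml
kompose convert

# Or deploy the Compose application straight to the current cluster context
kompose up
```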
The key benefits of behavior-driven development (BDD) practices are enhanced communication and customer satisfaction. You can read more on that from Dan North and Gojko Adzic. Perhaps the biggest practical challenge that stands in the way of reaping those benefits is the burden of provisioning, installing, and maintaining the complex, fussy infrastructure needed for reliable testing.
The RUN bundle install instruction executes smoothly during a Docker build as long as gems are sourced from public HTTPS repositories. If the Gemfile lists URLs of private repositories without their credentials, the Docker build fails.
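One common workaround (a sketch, not necessarily the article's approach) is to pass the credential in as a build argument; Bundler reads per-host credentials from BUNDLE_&lt;HOST&gt;-style environment variables, with dots in the hostname doubled as underscores. The gem host here is hypothetical, and note that build-arg values can leak via docker history, so this is a sketch rather than a hardened solution:

```dockerfile
FROM ruby:2.3

# Hypothetical credential for a private gem host (gem.fury.io);
# supplied at build time: docker build --build-arg BUNDLE_GEM__FURY__IO=<token> .
ARG BUNDLE_GEM__FURY__IO
ENV BUNDLE_GEM__FURY__IO=$BUNDLE_GEM__FURY__IO

WORKDIR /app
COPY Gemfile Gemfile.lock ./

# Bundler picks up the credential from the environment, so the Gemfile
# can reference the private source without embedding a token
RUN bundle install
```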
This is a really quick write-up on how I've been running HTTP/2 on my server for the last 2 months, despite having an OS that doesn't support OpenSSL 1.0.2. It uses a Docker container to run Nginx, built on the latest Alpine Linux distribution. This has a modern OpenSSL built-in without extra work.
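The setup described above can be sketched roughly as follows, assuming the nginx package in the chosen Alpine release is built against an ALPN-capable OpenSSL (the release tag and config path are illustrative):

```dockerfile
FROM alpine:3.4

# Alpine's nginx package links against the distro's modern OpenSSL,
# giving ALPN support (which browsers require for HTTP/2) for free
RUN apk add --no-cache nginx

# The config must enable HTTP/2 on the TLS listener:
#   listen 443 ssl http2;
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
```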
Are you bewildered by the blisteringly fast-paced world of "containers"? Maybe you have no trouble understanding what they are - in fact, you might be familiar with half a dozen orchestration systems and container runtimes already - but are frustrated because this seems like a whole lot of work and you just don't see what the point of it all is?
My name is Jonathan McCaffrey and I work on the infrastructure team here at Riot. This is the first post in a series where we'll go deep on how we deploy and operate backend features around the globe. Before we dive into the technical details, it's important to understand how Rioters think about feature development.
Kubernetes shares the pole position with Docker in the category of orchestration solutions for Raspberry Pi clusters. However, its setup process used to be elaborate - until v1.4, when kubeadm was announced. With that effort, Kubernetes changed the game completely: an official cluster can now be up and running in no time.
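The kubeadm flow boils down to two commands, sketched here with v1.4-era syntax (flags may differ in later releases; the token and address are placeholders printed by the init step):

```shell
# On the master node: bootstrap the control plane and print a join token
kubeadm init

# On each Raspberry Pi worker: join the cluster using that token
kubeadm join --token <token> <master-ip>
```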
Registries are one of the key components that make working with containers, primarily Docker, so appealing to the masses. A registry hosts images that are downloaded and run on hosts in a container engine. A container is simply a running instance of a specific image.
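The relationship between the three pieces can be illustrated with the basic Docker workflow (Docker Hub is the registry here, and the image tag is just an example):

```shell
# The registry hosts the image; pull downloads it to the host
docker pull nginx:alpine

# The container engine starts a running instance of that image - a container
docker run -d --name web -p 8080:80 nginx:alpine
```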
Deploy your container apps in 5 sec!
Learn how to use Packer to build an AWS AMI (Amazon Machine Image), configure it to run a Docker image, and continuously deploy the application using Semaphore.
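A trimmed sketch of what such a Packer template might look like (the AMI ID, region, and image names are placeholders; the tutorial's actual template will differ):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "docker-app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "curl -fsSL https://get.docker.com | sh",
      "sudo docker pull myorg/myapp:latest"
    ]
  }]
}
```

Running packer build on this template bakes Docker and the application image into a new AMI, which the CI pipeline can then roll out.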
One of the principles of Docker containers is that an image is immutable - once built, it's unchangeable, and if you want to make changes, you'll get a new image as a result. In this post, we'll take a deep dive into the immutability of containers, and then we'll look at some of the consequences ...
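That principle is easy to see on the command line: changing a running container never touches its image, and "saving" a change produces a new image (names below are illustrative):

```shell
# Start from the alpine image and modify the container's filesystem
docker run --name demo alpine touch /marker

# The change lives only in the container's writable layer
docker diff demo

# Committing produces a NEW image; the original alpine image is untouched
docker commit demo demo-image:v2
```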
As Kickstarter moves from a monolithic application towards a service oriented architecture, we needed to develop a fast and secure way for a service to programmatically retrieve API tokens, passwords, keys, etc. We are currently managing our containers using Amazon's ECS (Elastic Container Service), which we like for its seamless scalability of Docker containers, and really wanted to secure the application's access to sensitive information.
A new container technology called Hyper.sh or just "Hyper" (formerly HyperHQ, and not to be confused with Microsoft's Hyper-V), could conceivably alter the course of containerization. Like dotCloud, which eventually became Docker, Hyper is a containerized workload deployment and hosting service. It's a PaaS that calls itself a "CaaS" (containers-as-a-service).
The MicroBadger Metadata API is a simple API that lets you query the build-time labels for any Docker Hub public image. A full description of the API is available on our website, but fundamentally, you call our API with the namespace/name of the image you want to query.
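For example, querying the labels of an official image might look like this (the base URL reflects MicroBadger's documented endpoint at the time; check their site for the current form):

```shell
# Official images live under the "library" namespace on Docker Hub
curl https://api.microbadger.com/v1/images/library/nginx
```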
Following on from the post I wrote on Portainer, there was a lot of great feedback, like the following from Aleksandr Blekh, Ph.D.: "Great post, thank you! Could you briefly clarify why/when one would want to prefer Portainer to Rancher and vice versa?"
At the beginning of 2016, Nextdoor production releases took about an hour. That is to say, once all the code in the new release was tested and verified, it would still take an entire hour before our users saw those changes.
For the last couple of months I've been experimenting with Jenkins and how best to integrate it with Docker and Kubernetes. A couple of months ago I even blogged about possible setups involving the use of the Docker Workflow Plugin inside Kubernetes.