Kata Containers: Secure Containers from the OpenStack Community


Kata Containers is a new container technology that combines technology from Intel Clear Containers with runV from Hyper. Managed by an open community, Kata aims to deliver both speed and security.

Ever since Docker popularised Linux containers, containerisation has become a full-fledged domain. Containers are now used in a wide range of applications and, as with all developing technologies, real-life challenges around performance and security have begun to matter for containers as well.

Intel has been working on the Clear Containers project for some time to address security concerns within containers through Intel Virtualization Technology (Intel VT). This essentially offers the capability to launch containers as lightweight virtual machines (VMs), providing an alternative runtime that is interoperable with popular container environments such as Kubernetes and Docker. At the same time, the Hyper community has been working on runV, an alternative OCI-compliant runtime for running containers on hypervisors, with a few limitations caused by the current incompatibility between hypervisors and containers.

It has often been observed that single-vendor open source projects do not attract many contributors because of their inherent vendor-specific policies. There has long been a need for an open source community to take such projects forward, together with their current contributors, and drive further collaborative innovation.

During the last KubeCon, the OpenStack Foundation announced a new initiative aimed at unifying the speed and manageability of containers with the security advantages of virtual machines (VMs). This initiative is called Kata Containers.

What are Kata Containers?

Kata Containers is an open source project and community working to build a standard implementation of lightweight virtual machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. Intel is contributing its Clear Containers technology and Hyper is contributing its runV technology to initiate the project. The Kata Containers community will initially merge the two technologies in their current state to provide light and fast VM-based containers.

An overview of the project

The Kata Containers project will initially comprise six components: the agent, runtime, proxy, shim, kernel and the packaging of QEMU 2.9. This initial set is essentially drawn from the contributing projects, Clear Containers and runV. Kata is designed to be architecture agnostic, run on multiple hypervisors and be compatible with the OCI specification for Docker containers and CRI-O for Kubernetes. For now, Kata will only run on chips based on the x86 architecture and will only support KVM as its hypervisor. The plan is to expand support to other architectures and hypervisors over time.

For users, Kata Containers does not yet provide a direct installation option. Users can install either Clear Containers or runV, since both projects will provide a migration path to Kata Containers at a later date.
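To make the runtime swap concrete: with Docker, an alternative OCI runtime such as Clear Containers is typically registered in the daemon configuration and then selected per container. The sketch below is illustrative only; the runtime name cc-runtime and its binary path are assumptions based on a typical Clear Containers installation and may differ on your distribution.

# Register an alternative OCI runtime with the Docker daemon.
# The runtime name and path are assumptions; adjust for your install.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "runtimes": {
    "cc-runtime": {
      "path": "/usr/bin/cc-runtime"
    }
  }
}
EOF

sudo systemctl restart docker

# Run a container as a lightweight VM via the alternative runtime.
docker run --runtime cc-runtime -it busybox sh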

The community

Kata Containers is hosted on GitHub under the Apache 2 licence. While it will be managed by the OpenStack Foundation, it is an independent project with its own technical governance and contributor base. The Kata Containers project is governed according to the ‘four opens’ — open source, open design, open development and open community. Technical decisions will be made by technical contributors and a representative architecture committee. The community also has a working committee to make non-technical decisions and help influence the project’s overall strategy, including marketing, communications, product management and ecosystem support.

Contributing to Kata Containers

Kata Containers is working to build a global, diverse and collaborative community. If you are interested in supporting the technology, you are welcome to participate. There is a need for contributors with a variety of expertise and skills, spanning development, operations, documentation, marketing, community organisation and product management. You can learn more about the project at katacontainers.io, or view the code repositories on GitHub to contribute to the project. You can also talk to fellow contributors on Freenode IRC (#kata-dev) or the Kata Containers Slack, or subscribe to the kata-dev mailing list.


Originally published at opensourceforu.com on May 11, 2018.


Set up a multi-node Kubernetes cluster with kubeadm and Vagrant

Introduction

With reference to the steps listed in Using kubeadm to Create a Cluster for setting up a Kubernetes cluster with kubeadm, I have been working on automating the cluster setup. The result is kubeadm-vagrant, a GitHub project with simple steps to set up your Kubernetes cluster with more control over the Vagrant-based virtual machines.

Installation

  • Clone the kubeadm-vagrant repo

git clone https://github.com/coolsvap/kubeadm-vagrant

  • Choose your distribution of choice, CentOS or Ubuntu, and move to the corresponding directory.
  • Configure the cluster parameters in the Vagrantfile. Refer below for details of the configuration options.

vi Vagrantfile

  • Spin up the cluster

vagrant up

  • This will spin up the new Kubernetes cluster. You can check the status of the cluster with the following commands:

sudo su

kubectl get pods --all-namespaces
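Assuming provisioning completed without errors, the nodes themselves should also report as Ready. This is a standard kubectl check rather than anything specific to kubeadm-vagrant:

kubectl get nodes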

Cluster Configuration Options

  1. You need to generate a KUBETOKEN of your choice, to be used while creating the cluster. You will need to install the kubeadm package on your host to create the token, with the following command:

# kubeadm token generate

148a37.736fd53655b767b7

  2. BOX_IMAGE defaults to the “coolsvap/centos-k8s” box, a custom box created with the basic dependencies for a Kubernetes node.
  3. Set SETUP_MASTER to true if you want to set up the master node. This is true by default for spawning a new cluster. You can skip it when adding new minions.
  4. Set SETUP_NODES to true/false depending on whether you are setting up minions in the cluster.
  5. Specify NODE_COUNT as the number of minions in the cluster.
  6. Specify MASTER_IP as a static IP that can be referenced in the rest of the cluster configuration.
  7. Specify NODE_IP_NW as the network prefix used for assigning dynamic IPs to cluster nodes from the same network as the master.
  8. Specify a custom POD_NW_CIDR of your choice.
  9. Setting up the Kubernetes dashboard is still a work in progress with the K8S_DASHBOARD option.
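Putting these options together, the configuration block at the top of the Vagrantfile might look roughly like the sketch below. The option names come from the list above; the sample values are placeholders (the token is the one generated earlier), so substitute values appropriate for your own network and cluster.

# Illustrative configuration block for the Vagrantfile; values are placeholders.
BOX_IMAGE = "coolsvap/centos-k8s"      # default box with Kubernetes node dependencies
SETUP_MASTER = true                    # provision the master (default for a new cluster)
SETUP_NODES = true                     # provision minion nodes
NODE_COUNT = 2                         # number of minions
MASTER_IP = "192.168.26.10"            # static IP for the master (placeholder)
NODE_IP_NW = "192.168.26."             # network prefix for minion IPs (placeholder)
POD_NW_CIDR = "10.244.0.0/16"          # pod network CIDR (placeholder)
KUBETOKEN = "148a37.736fd53655b767b7"  # token from `kubeadm token generate`
K8S_DASHBOARD = false                  # dashboard setup is still a WIP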

OpenStack PTG Denver – Day 5

Day 5 of the PTG started as a day of hackathons and general project/cross-project discussions for most teams, with many people having already left the PTG and a few preparing for their travel plans or sightseeing in Colorado. The Kolla team started the day with a review of an alternative Dockerfile build tool. Later in the day came something everyone in the OpenStack and containers communities was looking forward to: the OpenStack-Kubernetes SIG, with Chris Hodge leading the effort to get everyone interested into the same room. Some key targets for the release were identified, along with the contributors interested in them. We then took up the most pressing issue for all container-based deployment projects, the build and publishing pipeline for Kolla images, with the openstack-infra team. Most of the current requirements, needs and blocking points for rolling out this feature were identified. The Kolla team and the openstack-infra team will work together to get this rolling in the early phase of this cycle, once the Zuul v3 rollout stabilizes. The Kolla team ended the day early for some much needed buzz at Station 26 after the whole week’s work.

This is all from this edition of the PTG; see you next in Dublin.


OpenStack Queens PTG – Day 4

Day 4 of the PTG started with further Kolla discussions, this time related to kolla-ansible. Discussion began with the kolla dev-mode effort started by pbourke, covering the pieces currently missing from dev_mode, such as installing clients and libraries and the virtualenv bindmount. The goal for the cycle is to fill in the missing pieces, verify options for a multinode dev_mode, investigate options for remote debugging and also consider using PyCharm.

One of the important topics in Kolla is gating. Kolla currently has around 14 different gates for deployment testing, and this has to be improved by sanity-testing the deployments with Tempest, which will help validate the entire deployment in the gates. Upgrade testing is also a key requirement; the Kolla team will model something like Grenade testing for it. The key is to maximise testing of the scenarios Kolla supports in the gate, but we are restricted by OpenStack infra resources as well as the time each test takes to validate. It was agreed that team members will create a list of scenarios and assign them to everyone to verify, recording the results in a central location such as a Google sheet. This will also help evaluate the stability of Kolla deployments in each release.

Skip-level upgrades were one of the major talking points at this PTG. The Kolla team will evaluate fast-forward upgrades for each service deployed with Kolla to decide on skip-level upgrade support in Kolla. This would be a PoC in the current cycle.

The second half of the discussion was around kolla-kubernetes, where the team discussed the roadmap for the current cycle. That will include upgrade prototyping for z-stream and x-stream services, validating the logging solution with fluent-bit, automated deployment, removing deprecated components and improving documentation.

Most of the teams have wrapped up their design discussions on Thursday and will be holding hackathons on the last day.