Kata Containers: Secure Containers from the OpenStack Community


Kata Containers is a new container technology that combines technology from Intel Clear Containers with runV from Hyper. Managed by an open community, the objective of Kata is to deliver both speed and security.

Ever since Linux containers were popularised by Docker, containerisation has grown into a full-fledged domain. Containers are now used in a wide range of applications and, as with all developing technologies, real-life challenges around performance and security have begun to matter for containers as well.

Intel has been working on the Clear Containers project for some time to address security concerns within containers through Intel Virtualization Technology (Intel VT). It essentially offers the capability to launch containers as lightweight virtual machines (VMs), providing an alternative runtime that is interoperable with popular container environments such as Kubernetes and Docker. At the same time, the Hyper community has been working on runV, an alternative OCI-compliant runtime for running containers on hypervisors, with a few limitations caused by the current incompatibility between hypervisors and containers.

In recent times, it’s been noticed that single vendor open source projects and communities do not attract many contributors due to their inherent vendor-specific policies. There has always been a need for an open source community to build these projects up, along with the current set of contributors, and drive further collaborative innovation.

During the last KubeCon, the OpenStack Foundation announced a new initiative aimed at unifying the speed and manageability of containers with the security advantages of virtual machines (VMs). This initiative is called Kata Containers.

What are Kata Containers?

Kata Containers is an open source project and community working to build a standard implementation of lightweight virtual machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. Intel is contributing its Clear Containers technology and Hyper is contributing the runV technology to initiate the project. The Kata Containers community will initially merge both technologies in their current state to provide light and fast VM-based containers.

An overview of the project

The Kata Containers project will initially comprise six components: the agent, runtime, proxy, shim, kernel and the packaging of QEMU 2.9. The initial set of components is based on the corresponding pieces of the contributing projects, Clear Containers and runV. Kata is designed to be architecture agnostic, to run on multiple hypervisors and to be compatible with the OCI specification for Docker containers and CRI-O for Kubernetes. For now, Kata will only run on chips based on the x86 architecture and will only support KVM as its hypervisor. The plan is to expand support to other architectures and hypervisors over time.
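To make that OCI compatibility concrete, here is a hedged sketch rather than an official installation procedure: once a Kata-style runtime such as kata-runtime is installed and registered with the Docker daemon, a workload can be launched under it with an ordinary Docker invocation, simply by selecting the runtime.

docker run --rm --runtime=kata-runtime busybox uname -r

The only visible difference from a regular container is that the workload runs inside a lightweight VM with its own guest kernel, so the kernel version reported inside the container will typically differ from the host's.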

For users, Kata Containers does not yet provide an installation option directly. Users can either install Clear Containers or runV, since both projects will provide a migration path to Kata Containers at a later date.

The community

Kata Containers is hosted on GitHub under the Apache 2 licence. While it will be managed by the OpenStack Foundation, it is an independent project with its own technical governance and contributor base. The Kata Containers project is governed according to the ‘four opens’ — open source, open design, open development and open community. Technical decisions will be made by technical contributors and a representative architecture committee. The community also has a working committee to make non-technical decisions and help influence the project’s overall strategy, including marketing, communications, product management and ecosystem support.

Contributing to Kata Containers

Kata Containers is working to build a global, diverse and collaborative community. If you are interested in supporting the technology, you are welcome to participate. The project needs contributors with a range of expertise and skills, covering development, operations, documentation, marketing, community organisation and product management. You can learn more about the project at katacontainers.io, or view the code repositories on GitHub to contribute to the project. You can also talk to fellow contributors on Freenode IRC (#kata-dev) or the Kata Containers Slack, or subscribe to the kata-dev mailing list.


Originally published at opensourceforu.com on May 11, 2018.


Set up a multi-node Kubernetes cluster with kubeadm and Vagrant

Introduction

Using the steps listed at Using kubeadm to Create a Cluster as a reference, I have been working on automating the setup of a Kubernetes cluster with kubeadm. The result is kubeadm-vagrant, a GitHub project with simple steps to set up your Kubernetes cluster with more control over the Vagrant-based virtual machines.

Installation

  • Clone the kubeadm-vagrant repo

git clone https://github.com/coolsvap/kubeadm-vagrant

  • Choose your distribution (CentOS or Ubuntu) and move to the corresponding directory.
  • Configure the cluster parameters in the Vagrantfile. Refer to the configuration options described below for details.

vi Vagrantfile

  • Spin up the cluster

vagrant up

  • This will spin up a new Kubernetes cluster. You can check the status of the cluster with the following command:

sudo su

kubectl get pods --all-namespaces

Cluster Configuration Options

  1. You need to generate a KUBETOKEN, which is used while creating the cluster. Install the kubeadm package on your host and create the token with the following command:

# kubeadm token generate

148a37.736fd53655b767b7

  2. BOX_IMAGE defaults to the “coolsvap/centos-k8s” box, a custom box created with the basic dependencies for a Kubernetes node.
  3. Set SETUP_MASTER to true if you want to set up the master node. This is true by default for spawning a new cluster. You can skip it when only adding new minions.
  4. Set SETUP_NODES to true/false depending on whether you are setting up minions in the cluster.
  5. Specify NODE_COUNT as the number of minions in the cluster.
  6. Specify MASTER_IP as a static IP which can be referenced in the other cluster configuration options.
  7. Specify NODE_IP_NW as the network prefix used to assign dynamic IPs to the cluster nodes, from the same network as the master.
  8. Specify a custom POD_NW_CIDR of your choice.
  9. Setting up the Kubernetes dashboard is still a work in progress, controlled by the K8S_DASHBOARD option.
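Once vagrant up completes with these options, you can cross-check NODE_COUNT against the cluster itself. From the same root shell on the master as above (a quick sanity check, not part of the repo's documented steps):

kubectl get nodes

The master plus NODE_COUNT minions should be listed, each reaching the Ready state once the pod network is up.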

Kata Containers Dev environment setup with Vagrant

With reference to the Kata Containers Developers Guide steps, I set up the development environment. At the same time, I went ahead and created a little automation to recreate the environment with Vagrant.

The primary code to create the environment is pushed at vagrant-kata-dev.

To set it up, you will need:

  • VirtualBox (currently the only tested provider)
  • Vagrant with the following plugins:
    • vagrant-vbguest
    • vagrant-hostmanager
    • vagrant-share

To install the plugins, use the following command:

$ vagrant plugin install <plugin-name>
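For example, to install the three plugins listed above:

$ vagrant plugin install vagrant-vbguest
$ vagrant plugin install vagrant-hostmanager
$ vagrant plugin install vagrant-share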

The setup instructions are simple. Once you have installed the prerequisites, clone the repo:

$ git clone https://github.com/coolsvap/vagrant-kata-dev

Edit the Vagrantfile to update the following details:

  1. Update the bridge interface so the box gets an IP address from your local network using DHCP. If you do not update it, Vagrant will ask for the interface name when you start the machine.
  2. Update the golang version; currently it is 1.9.3.

Create the vagrant box with the following command:

$ vagrant up

Once the box is started, log in to it using the following command:

$ vagrant ssh

Switch to the root user, move to the vagrant shared directory and run the setup script:

$ sudo su

# cd /vagrant

# ./setup-kata-dev.sh

It will perform the steps required to set up the dev environment. Verify that the setup completed correctly with the following steps:

# docker info | grep Runtime
WARNING: No swap limit support
Runtimes: kata-runtime runc
Default Runtime: runc
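As a further check (a hedged suggestion, assuming the environment can pull images), start a throwaway container with the Kata runtime selected explicitly and compare its kernel with the host's; with a VM-based runtime the two typically differ:

# uname -r
# docker run --rm --runtime=kata-runtime busybox uname -r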

Hope this helps new developers get started with Kata development. This is just the first version of the automation, so please help me improve it with your inputs.

 

Event Report – Expert Talks 2017

Expert Talks 2017 was my first participation in the Expert Talks conference held in Pune. The conference started a couple of years ago as an elevated form of the Expert Talks meetup series by Equal Experts, and this year's edition had a very good mix of content. It included talks on a variety of topics, including blockchain, containers, IoT and security, to name a few. This was the first edition of the conference with a formal CFP, which received 50+ submissions from different parts of the country, out of which 9 talks were selected. This year the conference was held at the Novotel Hotel, Pune.

 

The conference started at the registration desk, which was well organised so that everyone registered could pick up their kit. Even for a conference scheduled on a Saturday, the attendance was quite noticeable. The event opened with a welcome speech to all participants and speakers.

 

The first session, delivered by Dr. Pandurang Kamat on demystifying blockchain, was a very good start to the event, covering one of the most anticipated and talked-about topics at the moment. He covered the ecosystem around blockchain in enough detail for everyone to understand the most popular blockchain application, Bitcoin. He also gave an overview of open source frameworks for blockchain implementations, such as Project Hyperledger.

 

The following session, Doveryai, no proveryai – an introduction to TLA+, delivered by Sandeep Joshi, was well received by the audience, as the topic was quite unique in both name and content. The session started a bit slowly, with the audience absorbing the details of TLA+ and PlusCal. It was well scoped, with some basic details and a hands-on demo. The model checker use case resonated once real-world applications were shown, and we had the first coffee break of the day after it.

 

Mr. Lalit Bhatt followed with his session, Data Science – An Engineering Implementation Perspective, which discussed the mathematical models used to build real-world data science applications and explained the use cases he currently has in his organisation.

 

Swapnil Dubey and Sunil Manikani from Schlumberger gave a good insight into their microservices strategy built on containers, with building blocks like Kubernetes, Docker and GKE. They also presented how they are using GCE capabilities to effectively reduce operational expenses.

 

Alicja Gilderdale from Equal Experts presented some history of container technologies and how they evaluated different container technologies for one of their projects. She also shared some insights into the challenges and lessons learned throughout their journey. This session was followed by the lunch break.

 

Neha Datt, from Equal Experts, showcased the importance of the product owner in the overall business cycle in today's changing infrastructure world. She provided some critical points for bridging the gap between the business, architecture and development teams, and showed how a product manager can be the glue between them.

Piyush Verma took the Data Science – An Engineering Implementation Perspective discussion forward with his thoughts on distributed data processing. He showcased typical architectures and deployments in distributed data processing by splitting the system into layers and defining the relevance, need and behaviour of each. One of the highlights of the session was the hand-drawn diagrams in his presentation, which he had prepared as homework for the talk.

 

After the second official coffee break of the day, Akash Mahajan enlightened everyone on the most crucial requirement for today's distributed workloads living on public clouds: security. He walked everyone through different requirements for managing secrets using a HashiCorp Vault example, while explaining the advantages and caveats of the approach.

 

The discussion on IoT, smart cities and digital manufacturing was well placed, applying most of the concepts learned throughout the day to real-world problems. Subodh Gajare provided details of the IoT architecture and its foundations, with requirements related to mobility, analytics, big data, cloud and security. He provided very useful insights into upcoming protocol advances and the use of fog and edge computing in the smart city application of IoT.

It was a day well spent with some known faces and an opportunity to connect with many enthusiastic IT professionals in Pune.

 

TC Candidacy – Swapnil Kulkarni (coolsvap)

OpenStackers,

I am Swapnil Kulkarni (coolsvap). I have been an ATC since Icehouse and I wish
to take this opportunity to throw my hat in for election to the OpenStack
Technical Committee this election cycle. I started contributing to OpenStack
after an introduction at a community event, and since then I have utilized
every opportunity I had to contribute to OpenStack. I am a core reviewer in
the kolla and requirements groups. I have also been active in efforts to
improve the overall participation in OpenStack, through meetups, mentorship
and outreach to educational institutions, to name a few.

My focus of work during my TC term would be to make it easier for people to
get involved in, participate in, and contribute to OpenStack, to build the
community. I have had a few hiccups in the recent past with community
engagement and contribution activities, but my current employment gives me
the flexibility every ATC needs, and I would like to take full advantage of
it and increase my level of contribution.

Please consider this my application and thank you for your consideration.

[1] https://www.openstack.org/community/members/profile/7314/swapnil-kulkarni
[2] http://stackalytics.com/report/users/coolsvap
[3] https://review.openstack.org/510402

OpenStack PTG Denver – Day 5

Day 5 of the PTG was a day for hackathons and general project/cross-project discussions for most project teams, with many people already gone from the PTG and a few preparing for their travel plans or sightseeing in Colorado. The kolla team started the day with a review of alternate Dockerfile build tools. Later in the day came something everyone in the OpenStack and containers communities was looking forward to: the OpenStack – Kubernetes SIG, with Chris Hodge leading the effort to get everyone interested into the same room. Some key targets for the release were identified, along with the contributors interested in them. We then took up the most pressing issue for all container-based deployment projects, the build and publishing pipeline for kolla images, with the openstack-infra team. Most of the current requirements, needs and blocking points for rolling out this feature were identified. The kolla team and the openstack-infra team will work together to get this rolling in the early phase of this cycle, once the Zuul v3 rollout stabilizes. The kolla team ended the day early for some much needed buzz, after a whole week's work, at Station 26.

 

This is all from this edition of PTG see you next at Dublin.


OpenStack Queens PTG – Day 3

Day 3 of the Queens PTG started with project-specific design discussions. I joined the Kolla team, where we started with a topic that has come up at every design summit we have had and is very important for the community: documentation. We broke the discussion down into quick-start, contributor, operator and reference documentation for kolla. The documentation currently available is scattered across projects after the project split, and it is essential that it has a common landing page under the OpenStack deployment guides where everyone can find it. We had representatives from the documentation team, Alex, Doug and Petr, who are working on improving the docs experience by migrating the docs to a common format across the community. They understood the problem kolla is facing, and we had a working discussion in which we created the table of contents for all the available and required documentation for kolla.

The Kolla team then joined the TripleO team, which consumes the kolla images for OpenStack deployment, to discuss collaboration. The teams will work together to improve the build and publish pipeline for kolla images, to improve and add more CI jobs for kolla/kolla-ansible/kolla-kubernetes, and on configuration management after the containers are deployed. The TripleO team has come up with basic health checks for containerized deployment; the kolla team will help get those checks into the current kolla images and improve on them to better monitor a containerized OpenStack deployment. The teams will also collaborate on improving the orchestration steps, container testing, upgrades and creating metrics for OpenStack deployment.

During lunch we extended the discussion with Lars and kfox around monitoring for OpenStack, Prometheus and other monitoring tools.

After lunch, the kolla team took up a discussion close to the heart of operators: deploying OpenStack plugins with kolla. There are multiple open issues around plugins, such as when is the ideal time to make them available, at build time or at deployment time? Plugins might also have dependencies that do not match those of the OpenStack components, and so on. The team came up with multiple permutations of the available options, which will need to be proved out during the release.

Since the inception of the loci project there has been discussion around the size of kolla images, and the team had an interesting discussion on how to reduce it. The important part is to remove things like the apt/yum cache and the fat base image, and so on. The team also discussed options ranging from utilizing alternate container build tooling to writing its own image build tool. The team will hack on Friday on removing the fat base images and see whether that improves the image size.

External tools like Ceph are a common pain point in OpenStack deployments. When the kolla community evaluated the options for Ceph as a storage backend for containerized OpenStack deployment, there was no such thing as containerized Ceph, so the team built it from scratch and got it working. The Ceph team has since come up with ceph-docker and ceph-ansible. It would be useful for operators if kolla used the tools available directly from the vendors. We had a discussion with a representative from Ceph to initiate collaboration on deprecating the current Ceph deployment in kolla and using the combination of ceph-docker and ceph-ansible. Both communities will benefit by exchanging the things done better at each end.

I got a surprise gift of vintage OpenStack swag from the PTG team


and I had another photo with the marketing team along with the TSP members.

The day ended with hanging out with kolla teammates at Famous Dave's.

Are you using PyCharm for your OpenStack development?

It's been a long time since I have been maintaining the JetBrains community support with PyCharm licences for OpenStack developers, and I thought it might be time to understand how PyCharm actually helps developers with the ease of OpenStack development. If you are using PyCharm for your development work, please take a few minutes to provide your valuable inputs in the following survey [1].

If you are an active contributor and need a community edition licence for using PyCharm, please refer to [2].

Thank you in advance for your inputs.

[1] https://goo.gl/forms/pQGdFfUYzmgMt8iG2

[2] https://wiki.openstack.org/wiki/Pycharm