Setting up the development environment for Kata Containers – proxy

Summarizing the information for setting up the development environment for my first project in Kata Containers. I have set up the dev environment for the proxy project.

First Things First

Install Golang as a prerequisite to development. Ensure you follow the complete steps to create the required directory structure and to test the installation.
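
For reference, a minimal sketch of such a setup on Linux (the Go version, download URL, and workspace location below are illustrative assumptions, not project requirements):

$ # Download and unpack a Go release (version shown is only an example)
$ curl -LO https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
$ # Create the workspace directory structure and point GOPATH at it
$ mkdir -p $HOME/go/{src,bin,pkg}
$ export GOPATH=$HOME/go
$ export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
$ # Verify the installation
$ go version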

Get The Source

This guide assumes you have already forked the proxy project. If not, please fork the repo. Once you have successfully forked the repo, clone it to your computer:

$ git clone https://github.com/<your-username>/proxy.git $GOPATH/src/github.com/<your-username>/proxy

Add the upstream proxy project as a remote to the local clone so you can fetch updates:

$ cd proxy
$ git remote add upstream https://github.com/kata-containers/proxy.git
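
Later, to pull in upstream changes, a minimal sketch (assuming the project's default branch is master):

$ git fetch upstream
$ git rebase upstream/master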

The proxy project requires the following dependencies to be installed prior to building. Use the following commands to install them:

$ go get github.com/hashicorp/yamux
$ go get github.com/sirupsen/logrus

Do the first build. This creates the executable file kata-proxy in the proxy directory:

$ make
go build -o kata-proxy -ldflags "-X main.version=0.0.1-02a5863f1165b1ee474b41151189c2e1b66f1c40"

To run the unit tests, run:

$ make test
go test -v -race -coverprofile=coverage.txt -covermode=atomic
=== RUN TestUnixAddrParsing
--- PASS: TestUnixAddrParsing (0.00s)
=== RUN TestProxy
--- PASS: TestProxy (0.05s)
PASS
coverage: 44.6% of statements
ok github.com/coolsvap/proxy 1.064s

To remove all generated output files, run:

$ make clean
rm -f kata-proxy

That is it for this time. I am working on setting up the development environment with the GoLand IDE. I will keep you posted.

Event Report – Expert Talks 2017

Expert Talks 2017 was my first participation in the Expert Talks conference held in Pune. The conference started a couple of years ago as an elevated form of the Expert Talks meetup series by Equal Experts, and this year's edition had a very good mix of content. It included talks on a variety of topics, including blockchain, containers, IoT, and security, to name a few. This was the first edition of the conference with a formal CFP, which attracted 50+ submissions from different parts of the country, out of which 9 talks were selected. This year the conference was held at the Novotel Hotel, Pune.

The conference started at the registration desk, which was well organized so everyone registered could pick up their kit. Even for a conference scheduled on a Saturday, the attendance was quite noticeable. The event started with a welcome speech for all participants and speakers.

The first session, delivered by Dr. Pandurang Kamat on demystifying blockchain, was a very good start to the event, covering a much-anticipated and buzzed-about topic of the moment. He covered the ecosystem around blockchain in precise detail, enough for everyone to understand the most popular blockchain application, Bitcoin. He also gave an overview of open source frameworks for blockchain implementations, like Project Hyperledger.

The following session, Doveryai, no proveryai – an introduction to TLA+, delivered by Sandeep Joshi, was well received by the audience, as the topic was pretty unique in both name and content. The session started a bit slowly, with the audience absorbing the details of TLA+ and PlusCal. It was well scoped, with some basic details and a hands-on demo. The model checker use case was well received after a look at real-world applications, and we had the first coffee break of the day after it.

Mr. Lalit Bhatt started well with his session on Data Science – An Engineering Implementation Perspective, which discussed the mathematical models used for building real-world data science applications and explained the current use cases he has in his organization.

Swapnil Dubey and Sunil Manikani from Schlumberger gave good insight into their microservices strategy with containers, with building blocks like Kubernetes, Docker, and GKE. They also presented how they are using GCE capabilities to effectively reduce operational expenses.

Alicja Gilderdale from Equal Experts presented some history of container technologies and how they validated different container technologies for one of their projects. She also provided some insights into the challenges and lessons learned throughout their journey. The session ended to enthusiastic applause, after which we had the lunch break.

Neha Datt, from Equal Experts, showcased the importance of the product owner in the overall business cycle in today's changing infrastructure world. She provided some critical thinking points for bridging the gap between the business, architecture, and development teams, and showed how a product manager can be the glue between them.

Piyush Verma took the Data Science – An Engineering Implementation Perspective discussion forward with his thoughts on distributed data processing. He showcased typical architectures and deployments in distributed data processing by splitting the system into layers and defining the relevance, need, and behavior of each. One of the core attractions of the session was the hand-drawn diagrams incorporated into his presentation, which he had prepared as homework for the talk.

After the second official coffee break of the day, Akash Mahajan enlightened everyone on the most crucial requirement of today's distributed workloads living on public clouds: security. He walked everyone through the different requirements for managing secrets, using a HashiCorp Vault example, while explaining the advantages and caveats of the approach.

The IoT, smart cities, and digital manufacturing discussion was well placed, applying most of the concepts learned throughout the day to real-world problems. Subodh Gajare provided details on the IoT architecture and its foundations, with requirements related to mobility, analytics, big data, cloud, and security. He provided very useful insights into upcoming protocol advances and the usage of fog and edge computing in the smart city application of IoT.

It was a day well spent with some known faces and an opportunity to connect with many enthusiastic IT professionals in Pune.

TC Candidacy – Swapnil Kulkarni (coolsvap)

OpenStackers,

I am Swapnil Kulkarni (coolsvap). I have been an ATC since Icehouse, and I wish
to take this opportunity to throw my hat in for election to the OpenStack
Technical Committee this election cycle. I started contributing to OpenStack
after an introduction at a community event, and since then I have utilized
every opportunity I had to contribute to OpenStack. I am a core reviewer in
the kolla and requirements groups. I have also been active in efforts to
improve overall participation in OpenStack, through meetups, mentorship, and
outreach to educational institutions, to name a few.

My focus of work on the TC would be to make it easier for people to get
involved in, participate in, and contribute to OpenStack, to build the
community. I have had a few hiccups in the recent past with community
engagement and contribution activities, but my current employment gives me
the flexibility every ATC needs, and I would like to take full advantage of
it and increase my level of contribution.

Please consider this my application and thank you for your consideration.

[1] https://www.openstack.org/community/members/profile/7314/swapnil-kulkarni
[2] http://stackalytics.com/report/users/coolsvap
[3] https://review.openstack.org/510402

OpenStack PTG Denver – Day 5

Day 5 of the PTG started as a day for hackathons and general project/cross-project discussions for most project teams, with many people having already left the PTG and a few preparing for their travel plans or sightseeing in Colorado. The kolla team started the day with a review of alternate Dockerfile build tools. Later in the day came something everyone in the OpenStack and containers community was looking forward to: the OpenStack–Kubernetes SIG, with Chris Hodge leading the effort to get everyone interested into the same room. Some key targets for the release were identified, along with the contributors interested in them. We then took up the most pressing issue for all container-based deployment projects, the build and publishing pipeline for kolla images, with the openstack-infra team. Most of the current requirements, needs, and blocking points for rolling out this feature were identified. The kolla team and the openstack-infra team will work together to get this rolling in the starting phase of this cycle, once the zuul v3 rollout stabilizes. The kolla team ended the day early for some much-needed buzz after the whole week's work, at Station 26.

This is all from this edition of the PTG. See you next in Dublin.

OpenStack Queens PTG – Day 4

Day 4 of the PTG started with further Kolla discussions, related to kolla-ansible. Discussion began with the kolla dev_mode effort started by pbourke, covering the pieces currently missing in dev_mode, like installing clients and libs and the virtualenv bindmount. The goal for this cycle is to fill in the missing pieces, verify options for multinode dev_mode, investigate options for remote debugging, and also consider using PyCharm.

One of the important topics in kolla is gating. Currently kolla has around 14 different gates for deployment testing, and this has to be improved by testing the deployment for sanity with Tempest. This will help validate the entire deployment in the gates. Upgrade testing is also a key requirement; the kolla team will model something like grenade testing for it. The key is to maximize the testing of scenarios that kolla supports in the gate, but we are restricted by openstack-infra resources as well as the time each test takes to validate. It was agreed that team members will create a list of scenarios and assign them out to everyone, verifying and recording the results in a central location like a Google sheet. This will also help evaluate the stability of kolla deployment in each release.

Skip-level upgrades are one of the major talking points at the current PTG. The Kolla team will evaluate fast-forward upgrades for each service deployed with kolla to decide on skip-level upgrade support in kolla. This will be a PoC in the current cycle.

The second half of the discussion was around kolla-kubernetes, where the team discussed the roadmap for the current cycle. It will include upgrade prototyping for z-stream and x-stream services, validating the logging solution with fluent-bit, automating deployment, removing deprecated components, and improving documentation.

Most of the teams wrapped up their design discussions on Thursday and will be holding hackathons on the last day.

OpenStack Queens PTG – Day 3

Day 3 of the Queens PTG started with project-specific design discussions. I joined the Kolla team, where we began with a topic that has come up first at every design summit we have had and is very important to the community: documentation. We broke the discussion down into quick-start, contributor, operator, and reference documentation for kolla. The documentation currently available is scattered across projects after the project split, and it is essential that it get a common landing page in the OpenStack Deployment Guides that everyone can refer to. We had representatives from the documentation team, Alex, Doug, and Petr, who are working on improving the docs experience by migrating the docs to a common format across the community. They understood the problem kolla is facing, and we had a working discussion in which we created the table of contents for all the available and required documentation for kolla.

The Kolla team then joined the TripleO team, which consumes the kolla images for OpenStack deployment, for a discussion about collaboration. The teams will work together on improving the build and publish pipeline for kolla images, improving and adding more CI jobs for kolla/kolla-ansible/kolla-kubernetes, and configuration management after the deployment of containers. The TripleO team has come up with basic healthchecks for containerized deployments; the kolla team will help get those checks into the current kolla images and improve on them to better monitor containerized OpenStack deployments.

During lunch we extended the discussion with Lars and kfox around monitoring for OpenStack, Prometheus, and other monitoring tools.

Post lunch, the kolla team started with a discussion key to the hearts of operators: OpenStack plugin deployment with kolla. There are currently multiple issues related to plugins, such as when the ideal time is to make them available, during build or during deployment? Plugins might have dependencies that do not match the OpenStack components, and so forth. The team came up with multiple permutations of the available options, which will need to be PoCed during the release.

Since the inception of project loci there has been discussion around kolla image sizes, and the team had an interesting discussion on how to reduce them. The important parts are removing things like the apt/yum cache and removing the fat base image, and so forth. The team also discussed options ranging from utilizing alternate container build tooling to writing its own image build tool. The team will hack on Friday on removing the fat base images and see if that improves the image size.

External tools like Ceph are common pain points in OpenStack deployments. When the kolla community evaluated the options for Ceph as a storage backend for containerized OpenStack deployments, there was no such thing as containerized Ceph. The team built it from scratch and got it working. The Ceph team has since come up with ceph-docker and ceph-ansible. It would be useful for operators if kolla used the tools directly available from the vendors. We had a discussion with a representative from Ceph to initiate collaboration on deprecating the current Ceph deployment in kolla and using the combination of ceph-docker and ceph-ansible. Both communities will benefit from exchanging the things done better at each end.

I got a surprise gift of vintage OpenStack swag from the PTG team.

I also had another photo taken with the marketing team, along with the TSP members.

The day ended with hanging out with kolla teammates at Famous Dave's.

Hyperledger Fabric – Introduction

The Fabric Model

1. Assets

Assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property). Fabric supports the ability to exchange assets using unspent transaction outputs as the inputs for subsequent transactions. Assets (and asset registries) live in Fabric as a collection of key-value pairs, with state changes recorded as transactions on a ledger. Fabric allows for any asset to be represented in binary or JSON format.

2. Chaincode

Chaincode is software defining an asset or assets, and the transaction instructions for modifying the asset(s). In other words, it's the business logic. Chaincode enforces the rules for reading or altering key-value pairs or other state database information. Chaincode functions execute against the ledger's current state database and are initiated through a transaction proposal. Chaincode execution results in a set of key-value writes (the write set) that can be submitted to the network and applied to the ledger on all peers.
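
As a rough sketch of how chaincode reaches the ledger through the peer CLI (the chaincode name, version, path, orderer address, and constructor arguments below are all illustrative assumptions):

$ # Install the chaincode package on an endorsing peer
$ peer chaincode install -n mycc -v 1.0 -p github.com/example/mychaincode
$ # Instantiate it on a channel; later invocations run as transaction proposals
$ peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100"]}'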

3. Ledger Features

The ledger is the sequenced, tamper-resistant record of all state transitions in the fabric. State transitions are a result of chaincode invocations (‘transactions’) submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes.

The ledger is composed of a blockchain ('chain') to store the immutable, sequenced record in blocks, as well as a state database to maintain the current fabric state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which it is a member.

– Query and update ledger using key-based lookups, range queries, and composite key queries
– Read-only queries using a rich query language (if using CouchDB as state database)
– Read-only history queries — Query ledger history for a key, enabling data provenance scenarios
– Transactions consist of the versions of keys/values that were read in chaincode (read set) and keys/values that were written in chaincode (write set)
– Transactions contain signatures of every endorsing peer and are submitted to ordering service
– Transactions are ordered into blocks and are “delivered” from an ordering service to peers on a channel
– Peers validate transactions against endorsement policies and enforce the policies
– Prior to appending a block, a versioning check is performed to ensure that states for assets that were read have not changed since chaincode execution time
– There is immutability once a transaction is validated and committed
– A channel’s ledger contains a configuration block defining policies, access control lists, and other pertinent information
– Channels contain MSPs, allowing crypto materials to be derived from different certificate authorities
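
As a rough illustration of the query and update features above, assuming a channel mychannel and a chaincode mycc exposing query and invoke functions (all names here are hypothetical):

$ # Key-based read against the current state database
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","asset1"]}'
$ # Submit a state change; the resulting write set goes to the ordering service
$ peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","asset1","asset2","10"]}'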

4. Privacy through Channels

Fabric employs an immutable ledger on a per-channel basis, as well as chaincodes that can manipulate and modify the current state of assets (i.e. update key value pairs). A ledger exists in the scope of a channel — it can be shared across the entire network (assuming every participant is operating on one common channel) — or it can be privatized to only include a specific set of participants.

In the latter scenario, these participants would create a separate channel and thereby isolate/segregate their transactions and ledger. Fabric even solves scenarios that want to bridge the gap between total transparency and privacy. Chaincode gets installed only on peers that need to access the asset states to perform reads and writes (in other words, if a chaincode is not installed on a peer, it will not be able to properly interface with the ledger). To further obfuscate the data, values within chaincode can be encrypted (in part or in total) using common cryptographic algorithms such as SHA-256 before appending to the ledger.
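
As a minimal sketch of creating such a private channel with the peer CLI (the orderer address, channel name, and configuration-transaction file are illustrative assumptions):

$ # Create the channel from a channel-configuration transaction
$ peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel.tx
$ # Each participating peer joins the channel and keeps its own copy of the ledger
$ peer channel join -b mychannel.block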

5. Security & Membership Services

Hyperledger Fabric underpins a transactional network where all participants have known identities. Public Key Infrastructure (PKI) is used to generate cryptographic certificates which are tied to organizations, network components, and end users or client applications. As a result, data access control can be manipulated and governed on the broader network and on channel levels. This “permissioned” notion of Fabric, coupled with the existence and capabilities of channels, helps address scenarios where privacy and confidentiality are paramount concerns.
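
As a hedged example, Fabric's cryptogen tool can generate this kind of PKI material for development networks from a configuration file (the file name below follows the common sample layout and is an assumption):

$ # Generate certificates and keys for the organizations defined in the config
$ cryptogen generate --config=./crypto-config.yaml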

6. Consensus

In distributed ledger technology, consensus has recently become synonymous with a specific algorithm, within a single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and this differentiation is
highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow, from proposal and endorsement, to ordering, validation and commitment. In a nutshell, consensus is defined as the full-circle verification of the correctness of a set of transactions comprising a block.

Consensus is ultimately achieved when the order and results of a block’s transactions have met the explicit policy criteria checks. These checks and balances take place during the lifecycle of a transaction, and include the usage of endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers will employ these system chaincodes to make sure that enough endorsements are present, and that they were derived from the appropriate entities. Moreover, a versioning check will take place during which the current state of the ledger is agreed or consented upon, before any blocks containing transactions are appended to the ledger. This final check provides protection against double spend operations and other threats that might compromise data integrity, and allows for functions to be executed against non-static variables.

In addition to the multitude of endorsement, validity and versioning checks that take place, there are also ongoing identity verifications happening in all directions of the transaction flow. Access control lists are implemented on hierarchical layers of the network (ordering service down to channels), and payloads are repeatedly signed, verified and authenticated as a transaction proposal passes through the different architectural components. To conclude, consensus is not merely limited to the agreed upon order of a batch of transactions, but rather, it is an overarching characterization that is achieved as a byproduct of the ongoing verifications that take place during a transaction’s journey from proposal to commitment.

OpenStack Queens PTG, Denver – Day 2

Sep 12, 2017

The day started with meeting people in the registration desk hallway. I joined the TC discussion around Q&A for new projects in OpenStack. The discussion included emerging projects like Blazar, Glare, Gluon, Masakari, Stackube, Cyborg, and Mogan. Each project representative presented the TC members with a general project overview, objectives, current status, and any queries related to the requirements for new OpenStack project applications. TC members also asked for details related to the maturity of the projects with regard to contributor diversity, the potential to increase collaboration, and any overlap with current/existing projects in OpenStack. We also had a discussion around potential project removals from the OpenStack official projects, due to very low contributor activity during the cycle, no project goals achieved, or the changed focus of the participating organisations. The particular projects highlighted in this area are Searchlight, Solum, Designate, and CloudKitty.

inc0 and I were chasing the infra/TC members for a resolution related to publishing kolla images on a public registry like quay.io under an openstack-kolla namespace. We did not get any direct resolution; we need to start a thread on the openstack-dev mailing list for further discussion.

I met with Saad Zaher, PTL of Freezer, regarding the current status of the project. I had a very informative discussion and got a clear path for the things to do ahead.

We had a few hallway discussions related to kolla-kubernetes, openstack-helm, kolla with Ironic, and kolla with Neutron.

In the meantime we also had team photos taken for the kolla and requirements teams. I am looking forward to getting them from the OpenStack marketing team.

This ends the first section of the schedule, the inter-project discussions. The day ended with an IBM-sponsored happy hour in the hotel, where random discussions happened regarding projects, OpenStack, and more.

OpenStack Queens PTG, Denver – Day 1

It is my first time joining the Project Teams Gathering since it started in Atlanta. The location of the event is pretty unique in its own way; I had never been to this part of the USA, and you can feel the difference. The event is held at the Renaissance Denver for 5 days, Sep 11 to Sep 15.

I arrived here on Sep 10, and I could already see active contributors in the hotel lobby discussing something or the other. I met some friends and rested for the remainder of the day.

On Sep 11, the day started with registration for the event. I joined the #openstack-ptg channel to get updates about the day, and there I got introduced to ptgbot. Initially many people, including me, were a bit confused about how it works, but as we got familiar with it, we relied on it more and more for tracking the events.

As per schedule, the first two days of the event are dedicated to inter-project discussions.

I headed directly to the infra/stable/release/requirements room for the requirements team discussions. We had a discussion around topics to be worked on in Queens, which include per-project/independent/divergent requirements, OpenStack client testing, and Python 3. The discussion was pretty good, with insights provided by tonyb, prometheanfire, dirk, mordred, and notmyname.

Post lunch I joined the Kolla team for discussions around collaboration across the different deployment tooling in OpenStack. We had discussions around architecture, health monitoring, the role of containers, kubernetes, and security.

I also attended the TC meeting on rebooting the Stewardship WG and onboarding new community members.

The day ended with unofficial PTG happy hour at the elevated lounge in Renaissance Denver.

Introduction to Docker Security Hands-On

Docker recently made an announcement related to Docker security that will help enhance container security while abstracting it from the infrastructure. The three key components of Docker security are

  • Usable Security
  • Trusted Delivery
  • Infrastructure Independent

which will eventually result in safer apps.

In Docker, a secret is any blob of data, such as a password, SSH private key, TLS certificate, or any other piece of data that is sensitive in nature. docker secret is the docker command for managing secrets. It uses the built-in certificate authority that gets automatically created when bootstrapping a new swarm.
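
Secrets require swarm mode. If you do not have a swarm yet, a minimal sketch to bootstrap one (the advertise address below is an illustrative assumption):

docker@manager1:~$ # Initialize a new swarm; this also creates the built-in CA
docker@manager1:~$ docker swarm init --advertise-addr 192.168.99.100
docker@worker1:~$ # On each worker, join using the token printed by swarm init
docker@worker1:~$ docker swarm join --token <worker-token> 192.168.99.100:2377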

docker@manager1:~$ docker secret

Usage: docker secret COMMAND

Manage Docker secrets

Options:
 --help Print usage

Commands:
 create Create a secret from a file or STDIN as content
 inspect Display detailed information on one or more secrets
 ls List secrets
 rm Remove one or more secrets

For evaluating Docker secrets, I reviewed the article https://blog.docker.com/2017/02/docker-secrets-management. I found that we need some more steps to evaluate secrets properly.

You can create a secret with a few very simple steps:

docker@manager1:~$ echo "This is a secret" | docker secret create my_secret_data -
e0krhfllujxsnz6dunhrwpu2o

docker@manager1:~$ docker secret ls
ID NAME CREATED UPDATED
e0krhfllujxsnz6dunhrwpu2o my_secret_data 15 seconds ago 15 seconds ago

The detailed secret information can be obtained with:

docker@manager1:~$ docker secret inspect my_secret_data
[
    {
        "ID": "e0krhfllujxsnz6dunhrwpu2o",
        "Version": {
            "Index": 64
        },
        "CreatedAt": "2017-02-14T08:37:07.556279987Z",
        "UpdatedAt": "2017-02-14T08:37:07.556279987Z",
        "Spec": {
            "Name": "my_secret_data"
        }
    }
]

Now let's use the secret with a service:

docker@manager1:~$ docker service create --name="nginx" --secret="my_secret_data" nginx
tppk0d5azzxljeqe874m72sbt

docker@manager1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
tppk0d5azzxl nginx replicated 1/1 nginx:latest

Let's see which secret is actually allocated to the instance:

docker@manager1:~$ docker service inspect nginx | grep -i secret
 "Secrets": [
 "Name": "my_secret_data",
 "SecretID": "e0krhfllujxsnz6dunhrwpu2o",
 "SecretName": "my_secret_data"

docker@manager1:~$ docker service ps nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
q1dwnk2bv63t nginx.1 nginx:latest worker1 Running Running 2 minutes ago

Go to worker1 and execute the following:

docker@worker1:~$ docker exec $(docker ps --filter name=nginx -q) ls -l /run/secrets
total 4
-r--r--r-- 1 root root 17 Feb 14 08:43 my_secret_data

Now I will scale the service to 3 replicas:

docker@manager1:~$ docker service scale nginx=3
nginx scaled to 3
docker@manager1:~$ docker service ps nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
q1dwnk2bv63t nginx.1 nginx:latest worker1 Running Running 10 minutes ago
qqdawh6ko0dm nginx.2 nginx:latest worker2 Running Running 1 second ago
xbac8ucqju3s nginx.3 nginx:latest manager1 Running Running 1 second ago

Now we can see the service is running on the swarm manager node as well. Let's see if it has the same secret. Execute the same command on the manager node:

docker@manager1:~$ docker exec $(docker ps --filter name=nginx -q) ls -l /run/secrets
total 4
-r--r--r-- 1 root root 17 Feb 14 08:53 my_secret_data
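
Since the secret is mounted as a plain file under /run/secrets, the container can read it directly. This quick check, using the same service as above, should print the secret value:

docker@manager1:~$ docker exec $(docker ps --filter name=nginx -q) cat /run/secrets/my_secret_data
This is a secret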

Let's try to remove the secret from the service:

docker@manager1:~$ docker service update --secret-rm="my_secret_data" nginx
nginx

docker@manager1:~$ docker exec $(docker ps --filter name=nginx -q) ls -l /run/secrets
ls: cannot access /run/secrets: No such file or directory

docker@worker1:~$ docker exec $(docker ps --filter name=nginx -q) ls -l /run/secrets
ls: cannot access /run/secrets: No such file or directory

Let's now remove the service and the secret we created for the evaluation:

docker@manager1:~$ docker service rm nginx
nginx

docker@manager1:~$ docker secret rm my_secret_data
e0krhfllujxsnz6dunhrwpu2o