The Beauty of the Blockchain



The meteoric rise in the value of bitcoins has put a spotlight on the blockchain, which is the primary public, digital ledger for bitcoin transactions. A blockchain allows digital transactions to be transparent and distributed, but not copied. It is thought to be the brainchild of an anonymous person or group operating under the pseudonym Satoshi Nakamoto.

The bitcoin network has attracted attention from almost every industry, due in part to its volatile market value, and captains of industry and experts alike are trying to figure out how the technology can be adapted to and integrated with their work. The dictionary definition of blockchain is, “A digital ledger in which transactions made in bitcoin or another cryptocurrency are recorded chronologically and publicly.” This definition is derived from the most popular implementation of blockchain technology, bitcoin. But blockchain is not bitcoin. Let’s have a look at blockchain technology in general.

Distributed ledger technology (DLT)

Distributed ledger technology includes blockchain technologies and smart contracts. While DLT existed prior to bitcoin or blockchain, it marks the convergence of a host of technologies, including the time-stamping of transactions, peer-to-peer (P2P) networks, cryptography, shared computational power and new consensus algorithms. In short, distributed ledger technology is generally made up of three basic components, illustrated by the sketch after this list:

  • A data model that captures the current state of the ledger.
  • A language of transactions that changes the ledger state.
  • A protocol used to build consensus among participants around which transactions will be accepted by the ledger and in what order.
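
To make these three components concrete, here is a minimal Go sketch of a toy ledger; the names (Ledger, Transfer, applyIfValid) are invented for this example and not taken from any real DLT. The map is the data model, Transfer is the transaction language, and applyIfValid marks the point where a real system’s consensus protocol would decide which transactions are accepted and in what order.

package main

import "fmt"

// Data model: the current state of the ledger (account -> balance).
type Ledger map[string]int

// Transaction language: one operation that changes the ledger state.
type Transfer struct {
    From, To string
    Amount   int
}

// applyIfValid changes the state only for valid transactions. In a real
// DLT, a consensus protocol would first agree on acceptance and ordering.
func (l Ledger) applyIfValid(tx Transfer) error {
    if l[tx.From] < tx.Amount {
        return fmt.Errorf("insufficient funds in %s", tx.From)
    }
    l[tx.From] -= tx.Amount
    l[tx.To] += tx.Amount
    return nil
}

func main() {
    ledger := Ledger{"alice": 100, "bob": 50}
    if err := ledger.applyIfValid(Transfer{From: "alice", To: "bob", Amount: 30}); err != nil {
        fmt.Println("rejected:", err)
    }
    fmt.Println(ledger) // map[alice:70 bob:80]
}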

Figure 1: Structure of a block in the chain

What is blockchain technology?

Blockchain is a specific form or subset of distributed ledger technologies, which constructs a chronological chain of blocks; hence the name ‘blockchain’. A block refers to a set of transactions that are bundled together and added to the chain at the same time. A blockchain is a peer-to-peer distributed ledger, forged by consensus, combined with a system for smart contracts and other assistive technologies. Together, these can be used to build a new generation of transactional applications that establish trust, accountability and transparency at their core, while streamlining business processes and legal constraints. The blockchain tracks various assets; the transactions are grouped into blocks, and there can be any number of transactions per block. A block commonly consists of the following four pieces of metadata (see the sketch after this list):

  • The reference to the previous block
  • The proof of work, also known as a nonce
  • The time-stamp
  • The Merkle tree root for the transactions included in this block
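
As a rough illustration, the following Go sketch models a block carrying these four pieces of metadata; the field and function names are made up for this example rather than drawn from any particular blockchain implementation.

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "time"
)

// Block carries the four pieces of metadata described above, plus the
// transactions themselves (simplified to plain strings here).
type Block struct {
    PrevHash   string    // reference to the previous block
    Nonce      uint64    // the proof-of-work value
    Timestamp  time.Time // when the block was created
    MerkleRoot string    // root of the Merkle tree of the transactions
    Txs        []string
}

// Hash is what the next block will store in its PrevHash field.
func (b Block) Hash() string {
    sum := sha256.Sum256([]byte(fmt.Sprintf("%s|%d|%d|%s",
        b.PrevHash, b.Nonce, b.Timestamp.Unix(), b.MerkleRoot)))
    return hex.EncodeToString(sum[:])
}

func main() {
    genesis := Block{Timestamp: time.Now(), MerkleRoot: "txroot", Txs: []string{"coinbase"}}
    fmt.Println("genesis hash:", genesis.Hash())
}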

Is a blockchain similar to a database?

Blockchain technology is different from databases in some key aspects. In a relational database, data can be easily modified or deleted; typically, there are database administrators who may make changes to any part of the data or its structure. A blockchain, however, is an append-only data structure, where new entries get added onto the end of the ledger. There are no administrator permissions within a blockchain that allow the editing or deleting of data. Also, relational databases were originally designed for centralised applications, where a single entity controls the data. In contrast, blockchains were specifically designed for decentralised applications.

Types of blockchains

A blockchain can be either permissionless (e.g., bitcoin and Ethereum) or permissioned, like the different Hyperledger blockchain frameworks. The choice between a permissionless and a permissioned blockchain is driven by the particular use case.

A permissionless blockchain is also known as a public blockchain, because anyone can join the network. A permissioned blockchain, or private blockchain, requires pre-verification of the participants within the network, who are usually known to each other.

Characteristics of blockchains

Here is a list of some of the well-known properties of blockchains.

The immutability of the data which sits on the blockchain is perhaps the most powerful and convincing reason to deploy blockchain-based solutions for a variety of socio-economic processes that are currently recorded on centralised servers. This ‘unchanging over time’ feature makes the blockchain useful for accounting and financial transactions, in identity management and in asset ownership, management and transfer, just to name a few examples. Once a transaction is written onto the blockchain, no one can change it or, at least, it would be extremely difficult to do so.
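
A minimal Go sketch shows why tampering is detectable, under the simplifying assumption that each block stores only the hash of its predecessor plus some data: altering any historical block changes its hash and breaks every link that follows it.

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

type Block struct {
    PrevHash string // hash of the previous block
    Data     string // stand-in for the block's transactions
}

func (b Block) Hash() string {
    sum := sha256.Sum256([]byte(b.PrevHash + "|" + b.Data))
    return hex.EncodeToString(sum[:])
}

// verify recomputes each predecessor's hash and compares it with the
// stored link; any edit to history makes the comparison fail.
func verify(chain []Block) bool {
    for i := 1; i < len(chain); i++ {
        if chain[i].PrevHash != chain[i-1].Hash() {
            return false
        }
    }
    return true
}

func main() {
    b0 := Block{Data: "genesis"}
    b1 := Block{PrevHash: b0.Hash(), Data: "tx set 1"}
    chain := []Block{b0, b1}
    fmt.Println(verify(chain)) // true

    chain[0].Data = "tampered" // rewrite history...
    fmt.Println(verify(chain)) // false: the change is detected
}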

Transparency of data is embedded within the network as a whole. The blockchain network exists in a state of consensus, one that automatically checks in with itself. Due to the structure of a block, altering any unit of information on the blockchain is almost impossible: in theory, it could be done by using a huge amount of computing power to override the entire network, but in practice this is infeasible.

By design, the blockchain is a decentralised technology. Anything that happens on it is a function of the network as a whole. A global network of computers uses blockchain technology to jointly manage the database that records transactions. The consensus mechanism, discussed below, ensures the correctness of the data stored on the blockchain.

By storing data across its network, the blockchain eliminates the risks that come with data being held centrally, and the network lacks centralised points of vulnerability that are prone to being exploited. The blockchain ensures all participants in the network use encryption technologies for the security of the data. Primarily, it uses PKI (public key infrastructure), and it is up to the participants to select other encryption technologies as per their preference.
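
As an illustration of PKI in this context, here is a small Go sketch using the standard crypto/ecdsa package. It uses the P-256 curve purely for convenience, which is an assumption of this example rather than what any given blockchain actually uses (bitcoin, for instance, uses secp256k1): the sender signs a transaction digest with a private key, and any peer can verify it with the matching public key.

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
)

func main() {
    // Each participant holds a private key and publishes the public key.
    priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }

    // Sign the digest of a transaction with the private key.
    digest := sha256.Sum256([]byte("alice pays bob 10"))
    sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
    if err != nil {
        panic(err)
    }

    // Any peer can verify the signature with the public key alone.
    fmt.Println(ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig)) // true
}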

What are consensus mechanisms and the types of consensus algorithms?

Consensus is an agreement among the network peers; it refers to a system of ensuring that participants agree to a certain state of the system as the true state. It is a process whereby the peers synchronise the data on the blockchain. There are a number of consensus mechanisms or algorithms. One is Proof of Work. Others include Proof of Stake, Proof of Elapsed Time and Simplified Byzantine Fault Tolerance. Bitcoin and Ethereum use Proof of Work, though Ethereum is moving towards Proof of Stake.
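
To give a feel for Proof of Work, the following Go sketch searches for a nonce that makes a block’s hash start with a required number of zero hex digits; the difficulty scheme and string encoding are simplifications assumed for this example. The asymmetry is the point: finding the nonce is expensive, but checking it takes a single hash.

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "strings"
)

// mine brute-forces a nonce whose hash meets the difficulty target.
func mine(blockData string, difficulty int) (uint64, string) {
    target := strings.Repeat("0", difficulty)
    for nonce := uint64(0); ; nonce++ {
        sum := sha256.Sum256([]byte(fmt.Sprintf("%s|%d", blockData, nonce)))
        hash := hex.EncodeToString(sum[:])
        if strings.HasPrefix(hash, target) {
            return nonce, hash
        }
    }
}

func main() {
    nonce, hash := mine("block-42-transactions", 4)
    fmt.Printf("nonce=%d hash=%s\n", nonce, hash)
}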

What are smart contracts?

Back in 1996, a man named Nick Szabo coined the term ‘smart contract’. You can think of a smart contract as a computer protocol used to facilitate, verify or enforce the negotiation of a legal contract. In practice, smart contracts are simply computer programs that execute predefined actions when certain conditions within the system are met. They provide the language of transactions that allows the ledger state to be modified, and they can facilitate the exchange and transfer of anything of value (e.g., shares, money, content and property).
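
The Go sketch below illustrates the idea with a toy escrow; the Escrow type and its rules are invented for this example and do not correspond to any real contract platform. The predefined action (releasing funds) executes only once the condition (delivery) is met.

package main

import (
    "errors"
    "fmt"
)

// Escrow is a toy smart contract: code that enforces an agreement.
type Escrow struct {
    Amount    int
    Delivered bool
    Released  bool
}

// ConfirmDelivery records that the agreed condition has been met.
func (e *Escrow) ConfirmDelivery() { e.Delivered = true }

// Release executes the predefined action only if the condition holds.
func (e *Escrow) Release() error {
    if !e.Delivered {
        return errors.New("condition not met: goods not delivered")
    }
    e.Released = true
    return nil
}

func main() {
    contract := &Escrow{Amount: 100}
    fmt.Println(contract.Release()) // condition not met: goods not delivered
    contract.ConfirmDelivery()
    fmt.Println(contract.Release(), contract.Released) // <nil> true
}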

Open source blockchain frameworks, projects and communities

Looking at the current state of research and some of the implementations of blockchain technologies, we can certainly say that most enterprise blockchain initiatives are backed by open source projects. Here’s a list of some of the popular open source blockchain projects.

  • Hyperledger is an open source effort created to advance cross-industry blockchain technologies. Hosted by the Linux Foundation, it is a global collaboration of members from various industries and organisations.
  • Quorum is a permissioned implementation of Ethereum, which supports data privacy. Quorum achieves data privacy by allowing data visibility on a need-to-know basis, using a voting-based consensus algorithm. Interestingly, Quorum was created and open sourced by J.P. Morgan.
  • Chain Core, created by chain.com, was initially designed for financial services institutions and for things like securities, bonds and currencies.
  • Corda is a distributed ledger platform designed to record, manage and automate legal agreements between businesses. It was created by the R3 Company, which is a consortium of over a hundred global financial institutions.

Originally published at opensourceforu.com on June 2, 2018.


Building the Community Around Containerised OpenStack Deployment


Kolla provides production-ready containers and deployment tools for operating OpenStack clouds that are scalable, fast, reliable and upgradable — using community best practices.

I still remember the applause I got from my colleagues when I got my first OpenStack deployment working. Most of the work then revolved around DevStack, the development environment of OpenStack, and vendors were still trying to find the right mix of DevOps tools to deploy OpenStack. The evolution process involved working with tools like Chef, Puppet and Ansible, with vendors creating deployment models around them. Then came Docker, creating a buzz around a long-overlooked Linux feature called containers. A group of community members came together to see if they could containerise the OpenStack services and deploy them. The project started with the sole mission of containerising OpenStack into microservices and providing additional tooling to simplify management.

The entry barrier for Kolla

The arrival of Kolla was not particularly welcomed by the community, which was busy with already proven technologies to automate the OpenStack deployment process. At the same time, the Kolla team had its own set of challenges to cope with. Docker was itself very new at this stage, with very limited automation frameworks available. The project went through repeated cycles of rediscovering the architectural changes and tools to be used. The initial implementation focused on creating individual Docker images for the core services, for the CentOS and Ubuntu operating system flavours, and deploying them with Kubernetes, which was just picking up. The configuration management part was still a half-solved puzzle, and networking fell far short of the requirements of both the OpenStack deployment and its users. The deployment part was then rewritten in docker-compose, at that time the most popular framework for deploying multiple containers, adding one more rewrite to the project's history.

Deploying containerised OpenStack with Kolla and Ansible

Having spent time fiddling around with the OpenStack Docker images, the Kolla team decided to reinvent the architecture again and look for alternatives that could help deploy the containerised services with better control. This time, the team turned towards Ansible, at that time one of the newest configuration management tools on the horizon; it was still awaiting wider community adoption, so there were few preconceived notions about containers. The Kolla team used the same container images and a few known bits of the Ansible framework to get the first successful deployment. The configuration management part was well handled by Ansible. During the process, the team made a lot of changes to the Ansible Docker module, which were then contributed back to the Ansible project.

Making baby steps towards wider community adoption

The initial deployment of Kolla with Ansible was an achievement that attracted a lot of attention towards Kolla as a deployment framework for OpenStack, resulting in a number of contributors and innovative ideas from users who did different PoCs (proofs-of-concept). The team's diversity increased and it was accepted as a part of the Big Tent governance model of the OpenStack Foundation. The Docker images were central to the adoption of Kolla, and the image creation process got a major revamp: Jinja-based image templates and a configuration-based build framework for building specific images took the place of individual Dockerfiles. The same framework was later extended so that vendors could create custom images with minimal changes. As the deployment matured, and in response to demand from the community, the team took steps to create reusable independent modules. The project was split into two major reusable components, the image build part and the deployment part. This gave vendors better control over the consumption of Kolla based container images in their existing frameworks, as an alternative to bare-metal/VM based deployment.

Deployment with Kubernetes

As Kubernetes matured, and in response to requirements from major vendors who wanted to use Kubernetes as their orchestration platform, Kubernetes based deployment again came into the picture for Kolla images. This development is still in its nascent stage, with the 1.0 milestone, covering core services deployment, in sight. It is still being rewritten with different existing Ansible automation features as well as native Kubernetes tools like Helm.

Contributing to Kolla

The Kolla project is now one of the popular repositories in the OpenStack community, with containerised images of almost all Big Tent projects. The deployment automation for most of the projects is also complete, with the remaining projects undergoing review. There are three major deliverables:

  • kolla, the image build repository
  • kolla-ansible, the Ansible based deployment automation
  • kolla-kubernetes, the Kubernetes based deployment automation

The Kolla repositories are hosted under the OpenStack GitHub organisation, under the Apache 2 licence. Have a look at the project documentation, connect with the community on Freenode in the #openstack-kolla channel or subscribe to the openstack-dev mailing list. The Docker images are also available on Docker Hub. You can have a look at project milestones, features and bug reports on Launchpad.


Originally published at opensourceforu.com on May 8, 2018.

Kata Containers: Secure Containers from the OpenStack Community


Kata Containers is a new container technology that combines technology from Intel Clear Containers with runV from Hyper. Managed by a community, the objective of Kata is to deliver speed and security.

Ever since Linux containers shot to prominence with Docker, containerisation has become a full-fledged domain. Containers are now used in a number of applications and, as with all developing technologies, real-life challenges linked to performance and security have begun to matter for containers as well.

Intel has been working on the Clear Containers project for some time to address security concerns with containers, through Intel Virtualization Technology (Intel VT). This essentially offers the capability to launch containers as lightweight virtual machines (VMs), providing an alternative runtime that is interoperable with popular container environments such as Kubernetes and Docker. At the same time, the Hyper community has been working on providing an alternative OCI-compliant runtime to run containers on hypervisors, with a few limitations caused by the current incompatibility between hypervisors and containers.

In recent times, it’s been noticed that single vendor open source projects and communities do not attract many contributors due to their inherent vendor-specific policies. There has always been a need for an open source community to build these projects up, along with the current set of contributors, and drive further collaborative innovation.

During the last KubeCon, the OpenStack Foundation announced a new initiative aimed at unifying the speed and manageability of containers with the security advantages of virtual machines (VMs). This was called Kata Containers.

What are Kata Containers?

Kata Containers is an open source project and community, working to build a standard implementation of lightweight virtual machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. Intel is contributing its Clear Containers technology and Hyper is contributing the runV technology to initiate the project. The Kata Containers community will initially merge the two technologies in their current state to provide light and fast VM based containers.

An overview of the project

The Kata Containers project will initially comprise six components: the agent, runtime, proxy, shim, kernel and the packaging of QEMU 2.9. The initial set of components is essentially based on the contributing projects, Clear Containers and runV. Kata is designed to be architecture agnostic, run on multiple hypervisors and be compatible with the OCI specification for Docker containers and CRI-O for Kubernetes. For now, Kata will only run on chips based on the x86 architecture and will only support KVM as its hypervisor. The plan is to expand support to other architectures and hypervisors over time.

For users, Kata Containers does not yet provide a direct installation option. Users can install either Clear Containers or runV, since both projects will provide a migration path to Kata Containers at a later date.

The community

Kata Containers is hosted on GitHub under the Apache 2 licence. While it will be managed by the OpenStack Foundation, it is an independent project with its own technical governance and contributor base. The Kata Containers project is governed according to the ‘four opens’ — open source, open design, open development and open community. Technical decisions will be made by technical contributors and a representative architecture committee. The community also has a working committee to make non-technical decisions and help influence the project’s overall strategy, including marketing, communications, product management and ecosystem support.

Contributing to Kata Containers

Kata Containers is working to build a global, diverse and collaborative community. If you are interested in supporting the technology, you are welcome to participate. Contributors with different expertise and skills are needed, ranging from development and operations to documentation, marketing, community organisation and product management. You can learn more about the project at katacontainers.io, or view the code repositories on GitHub to contribute to the project. You can also talk to fellow contributors on Freenode IRC (#kata-dev) or Kata Containers Slack, or subscribe to the kata-dev mailing list.


Originally published at opensourceforu.com on May 11, 2018.

Set Up a Multi-node Kubernetes Cluster with kubeadm and Vagrant

Introduction

Following the steps listed at Using kubeadm to Create a Cluster for setting up a Kubernetes cluster with kubeadm, I have been working on automating the cluster setup. The result is kubeadm-vagrant, a GitHub project with simple steps to set up your Kubernetes cluster with more control over the Vagrant-based virtual machines.

Installation

  • Clone the kubeadm-vagrant repo

git clone https://github.com/coolsvap/kubeadm-vagrant

  • Choose your distribution (CentOS or Ubuntu) and move to the specific directory.
  • Configure the cluster parameters in the Vagrantfile. Refer to the configuration options below for details.

vi Vagrantfile

  • Spin up the cluster

vagrant up

  • This will spin up a new Kubernetes cluster. You can check the status of the cluster with the following command:

sudo su

kubectl get pods --all-namespaces

Cluster Configuration Options

  1. You need to generate a KUBETOKEN of your choice, to be used while creating the cluster. You will need to install the kubeadm package on your host to create the token, with the following command:

# kubeadm token generate

148a37.736fd53655b767b7

  2. BOX_IMAGE defaults to the “coolsvap/centos-k8s” box, a custom box created with the basic dependencies for a Kubernetes node, which can be used for setting up the cluster.
  3. Set SETUP_MASTER to true if you want to set up the master node. This is true by default for spawning a new cluster. You can skip it when adding new minions.
  4. Set SETUP_NODES to true/false depending on whether you are setting up minions in the cluster.
  5. Specify NODE_COUNT as the number of minions in the cluster.
  6. Specify MASTER_IP as a static IP that can be referenced in the rest of the cluster configuration.
  7. Specify NODE_IP_NW as the network prefix used to assign dynamic IPs to cluster nodes, from the same network as the master.
  8. Specify a custom POD_NW_CIDR of your choice.
  9. Setting up the Kubernetes dashboard, via the K8S_DASHBOARD option, is still a work in progress.

Setting up the development environment for Kata Containers – proxy

Summarizing the information for setting up the development environment for my first project in Kata Containers: I have set up the dev environment for the proxy project.

First Things First

Install Golang as a prerequisite to development. Ensure you follow the complete steps to create the required directory structure and to test the installation.

Get The Source

This guide assumes you have already forked the proxy project. If not, please fork the repo. Once you have successfully forked the repo, clone it on your computer:

git clone https://github.com/<your-username>/proxy.git $GOPATH/src/github.com/<your-username>/proxy

Add the upstream proxy project as a remote to the local clone, to fetch updates:

$ cd proxy
$ git remote add upstream https://github.com/kata-containers/proxy.git

The proxy project requires the following dependencies to be installed prior to the build. Use the following commands to install them:

$ go get github.com/hashicorp/yamux
$ go get github.com/sirupsen/logrus

Do the first build. This will create the executable file kata-proxy in the proxy directory.

$ make
go build -o kata-proxy -ldflags "-X main.version=0.0.1-02a5863f1165b1ee474b41151189c2e1b66f1c40"

To run the unit tests, run

$ make test
go test -v -race -coverprofile=coverage.txt -covermode=atomic
=== RUN TestUnixAddrParsing
--- PASS: TestUnixAddrParsing (0.00s)
=== RUN TestProxy
--- PASS: TestProxy (0.05s)
PASS
coverage: 44.6% of statements
ok github.com/coolsvap/proxy 1.064s

To remove all generated output files, run

$ make clean
rm -f kata-proxy

That is it for this time. I am working on setting up the development environment with the GoLand IDE. I will keep you posted.

Event Report – Expert Talks 2017

Expert Talks 2017 was my first time participating in the Expert Talks conference held in Pune. The conference started a couple of years ago as an elevated form of the Expert Talks meetup series by Equal Experts, and this year's edition had a very good mix of content. It included talks on a variety of topics, including blockchain, containers, IoT and security, to name a few. This was the first edition of the conference with a formal CFP, which received 50+ submissions from different parts of the country, out of which nine talks were selected. This year the conference was held at the Novotel Hotel, Pune.


The conference started at the registration desk, which was well organized so that everyone registered could pick up their kit. Even for a conference scheduled on a Saturday, the attendance was quite impressive. The event started with a welcome speech for all participants and speakers.


The first session, delivered by Dr. Pandurang Kamat on demystifying blockchain, was a very good start to the event, covering a much anticipated and buzzed-about topic of the moment. He covered the ecosystem around blockchain in precise detail, helping everyone understand the most popular blockchain application, bitcoin. He also gave an overview of open source frameworks for blockchain implementations, like Hyperledger.


The following session, “Doveryai, no proveryai – an introduction to TLA+”, delivered by Sandeep Joshi, was well received by the audience, as the topic was pretty unique in terms of both its name and its content. The session started a bit slowly, with the audience absorbing the details of TLA+ and PlusCal. It was well scoped, with some basic details and a hands-on demo. The model checker use case was well received after looking at real-world applications, and we had the first coffee break of the day after it.


Mr. Lalit Bhatt started well with his session, “Data Science – An Engineering Implementation Perspective”, which discussed the mathematical models used to build real-world data science applications and explained the current use cases in his organization.


Swapnil Dubey and Sunil Manikani from Schlumberger gave good insight into their microservices strategy with containers, with building blocks like Kubernetes, Docker and GKE. They also presented how they are using GCE capabilities to effectively reduce operational expenses.


Alicja Gilderdale from Equal Experts presented some history of container technologies and how they evaluated different container technologies for one of their projects. She also provided some insights into the challenges and lessons learned throughout their journey. The end of this session led into the lunch break.


Neha Datt, from Equal Experts, showcased the importance of the product owner in the overall business cycle in today's changing infrastructure world. She provided some critical thinking points for bridging the gap between the business, architecture and development teams, and showed how a product manager can be the glue between them.

Piyush Verma took the “Data Science – An Engineering Implementation Perspective” discussion forward with his thoughts on distributed data processing. He showcased typical architectures and deployments in distributed data processing by splitting the system into layers and defining the relevance, need and behavior of each. One of the core attractions of the session was the hand-drawn diagrams incorporated in his presentation, which he had prepared as homework for the talk.


After the second official coffee break of the day, Akash Mahajan enlightened everyone about the most crucial requirement for today's distributed workloads living on public clouds: security. He walked everyone through the different requirements for managing secrets, using a HashiCorp Vault example, while explaining the advantages and caveats of the approach.


The IoT, smart cities and digital manufacturing discussion was well placed, applying most of the concepts learned throughout the day to real-world problems. Subodh Gajare provided details on the IoT architecture and its foundation, with requirements related to mobility, analytics, big data, cloud and security. He provided very useful insights into upcoming protocol advances and the usage of fog and edge computing in the smart city application of IoT.

It was a day well spent with some known faces and an opportunity to connect with many enthusiastic IT professionals in Pune.


TC Candidacy – Swapnil Kulkarni (coolsvap)

OpenStackers,

I am Swapnil Kulkarni (coolsvap). I have been an ATC since Icehouse and I
wish to take this opportunity to throw my hat in the ring for election to
the OpenStack Technical Committee this election cycle. I started
contributing to OpenStack after an introduction at a community event, and
since then I have always utilized every opportunity I had to contribute to
OpenStack. I am a core reviewer in the kolla and requirements groups. I
have also been active in efforts to improve overall participation in
OpenStack, through meetups, mentorship and outreach to educational
institutions, to name a few.

My focus of work during my TC term would be to make it easier for people to
get involved in, participate in and contribute to OpenStack, to build the
community. I have had a few hiccups in the recent past with community
engagement and contribution activities, but my current employment gives me
the flexibility every ATC needs, and I would like to take full advantage of
it and increase my level of contribution.

Please consider this my application and thank you for your consideration.

[1] https://www.openstack.org/community/members/profile/7314/swapnil-kulkarni
[2] http://stackalytics.com/report/users/coolsvap
[3] https://review.openstack.org/510402

OpenStack PTG Denver – Day 5

Day 5 of the PTG was a day for hackathons and general project/cross-project discussions for most project teams, with many people having already left the PTG and a few preparing for their travel plans or sightseeing in Colorado. The kolla team started the day with a review of an alternate Dockerfile build tool. Later in the day came something everyone in the OpenStack and containers community was looking forward to: the OpenStack-Kubernetes SIG, with Chris Hodge leading the effort to get everyone interested into the same room. Some key targets for the release were identified, including the contributors interested in them. We then discussed the most pressing issue for all container-based deployment projects, the build and publishing pipeline for kolla images, with the openstack-infra team. Most of the current requirements, needs and blocking points for rolling out this feature were identified. The kolla team and the openstack-infra team will work together to get this rolling in the starting phase of this cycle, once the zuul v3 rollout stabilizes. The kolla team ended the day early for some much-needed refreshment after the whole week's work, at Station 26.


That is all from this edition of the PTG; see you next in Dublin.


OpenStack Queens PTG – Day 4

Day 4 of the PTG started with further Kolla discussions, related to kolla-ansible. Discussion began with the kolla dev-mode effort started by pbourke, and covered the pieces currently missing in dev_mode, like installing clients and libs, and the virtualenv bindmount. The goal for the cycle is to fill in the missing pieces, verify options for multinode dev_mode, investigate options for remote debugging and also consider using PyCharm.

One of the important topics in kolla is gating. Currently kolla has around 14 different gates for deployment testing, and this has to be improved by testing the deployment for sanity with Tempest, which will help validate the entire deployment in the gates. Upgrade testing is also one key requirement; the kolla team will model something like grenade testing for it. The key is to maximize the testing of the scenarios that kolla supports in the gate, but we are restricted by openstack-infra resources as well as the time each test takes to validate. It was agreed that team members will create a list of scenarios and assign them to everyone to verify, recording the results in a central location like a Google sheet. This will also help evaluate the stability of the kolla deployment in each release.

Skip-level upgrades were one of the major talking points at the current PTG. The kolla team will evaluate fast-forward upgrades for each service deployed with kolla, to decide on skip-level upgrade support in kolla. This will be a PoC in the current cycle.

The second half of the discussion was around kolla-kubernetes, where the team discussed the roadmap for the current cycle. That will include upgrade prototyping for z-stream and x-stream services, validating the logging solution with fluent-bit, automated deployment, removing deprecated components and improving the documentation.

Most of the teams wrapped up their design discussions on Thursday and will be holding hackathons on the last day.