Enterprise Networks, with an Example

Kubernetes and YAML Files

From Assemblage to Machine

 

kubeinstallation

Figure 1

The following is an example .yaml file for composing a single Kubernetes cluster on the single Master Node of a Minikube installation.

YAML originally stood for “Yet Another Markup Language”; it now officially stands for “YAML Ain’t Markup Language”.
This example – together with your own database, and your own Elastos/Trinity-Browser/Ionic front-end DApps integrated with that database – will give you a development system only. There are serious security issues with this arrangement if it were exposed to the internet, so you should research the networking requirements for securing a production system. We also highly recommend you read and understand the following article before proceeding to production: Running Postgresql on Kubernetes. The name of the yaml file itself is arbitrary.

[Interestingly, the Elastos Foundation is helping to save the future of Ethereum and its revolutionary smart contract blockchain, since the volume of data stored on Ethereum’s single mainchain has been threatening to choke the Ethereum system.

Elastos (along with other blockchain providers in their own systems) has opted to provide an ‘out’ for Ethereum by setting up Elastos Sidechains to handle Ethereum Smart Contracts on one of the Elastos nets. The Elastos system has virtually unlimited scalability due to the non-mainchain design, choosing to arrange the chains as branches which may be multiplied indefinitely. Beside the Ethereum Sidechain is a NEO smart contract Sidechain, now also a sustainable option for Smart Contracts Programmers on Elastos. Our own DApps will be using the Ethereum Smart Contract system. See item 8 in the above diagram.]

One runs “kubectl apply -f assemblage.yaml” to instantiate the machine from an assemblage similar to (though somewhat simpler than) Figure 2, below.

The above diagram, Figure 1, shows a non-inter-networked, single-member-class system. The machine of Figure 2 involves internetworking and multiple member classes.

Some Views of a More General System

You could generalise this system to a case where, say, there were 3 ‘member-classes’ (see The General page). All member classes belong to the same Business Network; however, there are essentially 3 different, but as yet unspecified, roles or ‘missions’ amongst the participants in the Network. We split the system into 3 deployment groups – one for each member-class (role or mission) – plus the blockchain deployment.

The main difference between the 3 auth-db-app-memclass-x groups lies in the structure of tables and columns in the different database schemas these member classes require. The database is almost everything – it is where your work begins if you follow this mode of development. In this case we use postgres schema-level classification (default “public”) to multiply the schemas so that there is one per member class, plus one IoT schema per member class and an Oseer schema, all on a single database.
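
As a minimal sketch (the schema names are placeholders, assuming 3 member classes), the layout on the single database might be created like this:

-- One schema per member class, one IoT schema per member class,
-- and one overseer ("oseer") schema, all on the same database.
CREATE SCHEMA memclass_1;
CREATE SCHEMA memclass_2;
CREATE SCHEMA memclass_3;
CREATE SCHEMA iot_memclass_1;
CREATE SCHEMA iot_memclass_2;
CREATE SCHEMA iot_memclass_3;
CREATE SCHEMA oseer;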

You should initially be developing the database in docker (look up docker postgres images), not kubernetes. One links a PgAdmin4 container (in a “sudo docker run ..” statement) to a running docker postgres container’s network, then logs in to PgAdmin4 in a web browser to view and work with the actual postgres database on the other container. As noted on the previous page, the central trigger function for updating your master (general) ledger is not a minor task to complete correctly: a transaction may be entered at any real time and date (upon actual entry), yet carry a Transaction Date that is unrelated to the entry time. Every previously recorded transaction (in the ledger accounts affected) whose Transaction Date falls after the newly inserted record’s Transaction Date must be updated with the new amount credited or debited to the relevant accounts, and this process must ripple up through the transactions on the accounts affected until the most recent transaction in each account is reached and updated.
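
One possible arrangement for the docker side (a sketch only – image tags, names and credentials are placeholders) is to put both containers on the same user-defined docker network:

# Create a shared network, then start postgres and PgAdmin4 on it.
docker network create pgnet

docker run -d --name dev-postgres --network pgnet \
  -e POSTGRES_PASSWORD=devpassword \
  postgres:15

docker run -d --name dev-pgadmin --network pgnet \
  -e PGADMIN_DEFAULT_EMAIL=dev@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=devpassword \
  -p 5050:80 \
  dpage/pgadmin4

# Browse to http://localhost:5050 and register a server with
# host "dev-postgres", port 5432, user "postgres".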

You add tables, columns and primary keys initially. The PL/pgSQL extension is used to code procedural functions as trigger functions on some tables (the insertion, update or deletion of a record in the table fires a trigger on the database, which runs the associated function). This is useful for automating sequences of data processing, and effectively opens the entire database to your coding and procedural needs; it is much more powerful than plain SQL. An understanding of COBOL or a similar business procedural language (with ‘complete’ language capacity) helps here.
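
As a minimal sketch of the mechanism (not the full ledger logic described above – the table and column names are hypothetical), a trigger function that ripples a newly inserted amount through later entries on the same account might look like this:

-- Hypothetical ledger_entry table with account_id, amount,
-- transaction_date and running_balance columns.
CREATE OR REPLACE FUNCTION ripple_running_balance() RETURNS trigger AS $$
BEGIN
  -- Add the new entry's amount to every later entry in the same account.
  UPDATE ledger_entry
     SET running_balance = running_balance + NEW.amount
   WHERE account_id = NEW.account_id
     AND transaction_date > NEW.transaction_date;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- PostgreSQL 11+; use EXECUTE PROCEDURE on older versions.
CREATE TRIGGER ledger_entry_ripple
AFTER INSERT ON ledger_entry
FOR EACH ROW EXECUTE FUNCTION ripple_running_balance();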

You also need a complete map of Associations between Tables: a “foreign key” field in one Table links to the primary key of a record (whose id equals the field contents in the calling Table) in an associated Table, so that data in the associated Table’s record becomes available to a DApp request that hits the calling Table first. Be aware that the process of mapping Associations is not trivial either, and needs to be completed as thoroughly and carefully as possible. As your project progresses you will add to the map by defining foreign key fields in tables and pointing them, in the definition, to primary keys (in ONE-ONE, ONE-MANY and MANY-MANY relationships; the MANY-MANY case requires a join table). Note that each primary key in the database should be composed of a single field, except in join tables, which have a composite primary key made from the two primary key fields of the tables being joined.
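
A minimal sketch of a MANY-MANY association (the table names are hypothetical):

-- Two ordinary tables, each with a single-field primary key.
CREATE TABLE producer (
  id   bigserial PRIMARY KEY,
  name text NOT NULL
);

CREATE TABLE product (
  id    bigserial PRIMARY KEY,
  label text NOT NULL
);

-- Join table: two foreign key fields forming a composite primary key.
CREATE TABLE producer_product (
  producer_id bigint NOT NULL REFERENCES producer (id),
  product_id  bigint NOT NULL REFERENCES product (id),
  PRIMARY KEY (producer_id, product_id)
);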

As mentioned elsewhere, first-, second- and third-level database normalisation is recommended. It may be a headache; however, without performing normalisation it is virtually guaranteed that your database will not function as you want.

Extending this simple case, it should be fairly straightforward to work out how to set up a kubernetes installation for any number of member classes.

Make sure you list each Service spec first, before its Deployment or StatefulSet spec, as pods require their own Service to be in place in order to become contactable. (See the “kind” fields in the yaml file.)
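
For example, a fragment of the assemblage might be ordered like this (a sketch only – the names, port and image are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: memclass-1-app
spec:
  selector:
    app: memclass-1-app
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memclass-1-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: memclass-1-app
  template:
    metadata:
      labels:
        app: memclass-1-app
    spec:
      containers:
        - name: app
          image: memclass-1-app:latest
          ports:
            - containerPort: 3000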

Note that every container and volume in the diagram is actually replicated: in the “deployments” (most of the installation) the replicas form Replica Sets, and with the databases they form Stateful Sets. Keeping the volumes synchronised does not happen automatically in high-load environments, and requires specialised skills to prepare for production in such a situation. Please refer to the link at the top of this page!
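
A minimal StatefulSet sketch for a database pod is shown below (names, image and storage size are placeholders); each replica receives its own PersistentVolumeClaim from the template, and the headless Service named by serviceName is assumed to be defined earlier in the file:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: devpassword   # use a Secret in production
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi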

The Haskell server may be configured for multiple database schemas. Note: a 4th member-class, plus IoT and Overseer/Administration schemas and DApps, are shown in the diagram and explained below. We require one replicated, configured and programmed Redis Pod per schema (nine here: 4 member schemas, 1 oseer schema, and one IoT schema per member-class schema).

kubegeneral

Figure 2

In the above machine, every transaction will originate from a DApp session (possibly an IoT DApp) and will hit the Blockchain first, to ensure the integrity of the system. Subsequently the databases (the Redis in-memory datastore and the Postgres structured, persistent RDBMS) are updated with the bulk data from the transaction; not much data is actually stored on the blockchain itself. The connections between the device running the DApp in the field and the Blockchain/Database cloud installation are enclosed in Elastos’ P2P Carrier, ensuring security and neither requiring nor allowing the use of web-sockets to connect. The P2P Carrier system relies on an encrypted translation of traditional web addresses to node ids in Carrier; in actual operation there are no “web-sockets” of the traditional insecure variety, as used everywhere else (outside of Elastos) on the internet. Encrypted node ids are securely recorded on a blockchain, making them tamper-proof. A translation of a traditional web address to a node id is not permanent; it is created afresh at each connection request.

A more concrete example to flesh out this scheme: imagine a supermarket supply-chain system for fresh food which values the reliability, traceability and convenience of blockchain transactions.
The supermarket company would constitute one member class by itself. The many-membered distribution and transport companies would constitute a second member class, and the very many-membered food producers – farms, gardens, orchards, hothouses, smallgoods makers, abattoirs, seafood and poultry producers, etc – would constitute the third.

The farmers and primary producers would require their own standardised DApp – “memclass-3-dapp” – (general enough to accommodate all primary producers’ needs as well as the requirements of the supermarket and distribution networks for that DApp – especially regarding id, quality, origin and timing evidence).

The transport and distribution networks would require their own broad DApp – “memclass-1-dapp” – to cover the scheduling and tracking as well as quality assurance of fresh goods. It could also cover maintenance of vehicles, communication, driving regulations and reporting, and most things required by a transport company. The supermarket and primary producers would have an interest in the workings of this DApp to ensure and protect their own interests.

Finally, the supermarket would have a top-level retailer’s DApp – “memclass-0-dapp” – to handle shipping and all supply and quality problems for fresh produce. This DApp would need to be comprehensive enough to deal with all supply issues for or from any branch or store site, yet remain centralised in the cloud installation database. While communications over phone or text are obviously available, in a system such as this the details that are often requested between and within companies are largely available, securely and automatically, to all concerned parties. The system does require adequate input of data to function properly.

To match these different DApps, we create schemas on the common database: one for each member class, one IoT schema per member class, and an Oseer schema. The tables and other database objects for the member class owning a schema are contained within that schema (similar to a directory containing other directories and files).

Naturally payments for goods and services would occur on the blockchain and on the databases (and especially in the real-world bank accounts) of the respective members (companies).

Economist Sir Donald Trudge warns that the World, let alone the US, can never repay its debt. There are flaws in so-called Modern Monetary Theory which call its future sustainability into question (see Sustainability of MMT). In the eventuality of a catastrophic global fiat-currency crash, the Bitcoin/Elastos/Ethereum/Neo token systems could easily and conveniently replace a fiat currency system and bypass the banking system; there is a well-established market in these electronic coins. At such a catastrophic juncture, any suppliers who were not already set up to accept Bitcoin payments would waste no time in changing over, so one could envisage a converted economic payments system in as quick a time as necessary. Convincing Governments, and some Employers, Workers and Consumers, to convert wages, benefits and other payments to Bitcoin may be more of a problem. It would be in the interests of each of these groups to do so, however, and this would become increasingly apparent in such times.

The payments system can nevertheless be linked to an existing software installation and leased as a service for members with existing Enterprise IT Systems, or the system (with something like the chubba Block ‘n’ Tackle operating) could be leased as a standalone, catastrophe-proof and comprehensive multi-Enterprise Accounting Package as well.

Within the global database, each member would have their own unique business channel number: an id field identifying the company/member uniquely. Each record in all of their schemas, including the IoT and Oseer schemas, has that business channel id field attached to it, in order to separate the members’ data securely. The need to provision a global properties and control schema, in addition to the others, is satisfied by the Oseer class of Schema and DApp.
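
A minimal sketch of the idea (the table, schema and column names are hypothetical):

-- Every table in every schema carries the owning member's channel id.
CREATE TABLE memclass_1.delivery (
  id                  bigserial PRIMARY KEY,
  business_channel_id bigint NOT NULL,  -- uniquely identifies the member/company
  despatched_at       timestamptz,
  details             jsonb
);

-- A member's DApp session then always filters on its own channel, e.g.
-- SELECT * FROM memclass_1.delivery WHERE business_channel_id = 42;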

In a real scheme there may be a need for a 4th tier/member class (mem-class-2 fitting into the arrangement so far) for the fruit, vegetable, meat, fish and poultry processing markets and plant, such as abattoirs, smallgoods factories, fish markets, poultry processors, fruit and vegetable markets, etc.

Because electronic sensors, recording devices and actuation devices will be used, it would also be wise to introduce a set of IoT DApp and schema layers (as shown), one per member class. This layer can process and filter lower-level enterprise IoT data, read directly from the incoming (centralised on the cloud server) data stream at the technical level of IoT information. Correct functioning, quality and regulatory compliance are the main concerns here.

There is also a need for an Administration and Overseer DApp and Schema layer which can perform customer administration and control tasks such as database registration, customer onboarding tasks, general higher level admin, automated business process master-control, etc.

If a national network were involved, one may have to copy the structure on separate clusters across the country, integrating centrally by message queuing to the headquarters cluster continuously. The queue is enclosed by Elastos Carrier, the system guaranteeing the security of all Elastos communications on the web.

A unitised installation is better than a monolithic one, so whereas Minikube allows only a single node on the cluster, a real installation would be spread across as many clusters as there were separate (say, national) sites, so that, at least, taking down the entire system is never necessary. There are also benefits derived from redundancy when multiple nodes can be employed. In such a case a developer might choose to develop directly with kubeadm, kubectl and kubelet (which work together naturally – however also check out microk8s with multipass) instead of Minikube, so that multiple nodes and clusters may be created.

The assemblage.yaml file on this page would be suitable to apply to a development node for the purposes of working on a more general business-networked case (in the Elastos system), such as shown in Figure 2. You need as many schema and DApps as there are member classes plus an IoT layer (Schema and DApps – one per member class) and an Overseer layer. The only additional requirement in architecture is the need to create and develop multiple schema within the database, and to configure the webserver to handle this arrangement. The Elastos DApps are coded in the Elastos/Trinity/Ionic development system (see previous page).

You can copy a basic set of Enterprise financial and accounting tables and functions from a base schema to every schema you need by keeping them in the default ‘public’ schema in postgres, dumping that schema, and editing the dump to search and replace “public” with the new schema name. You then restore the database (using psql as usual) and the new schema will be added alongside the unaltered public schema. Each new schema then needs its journal/ledger and other fields refactored to suit the Member Class (or IoT DApp) concerned. Note that an IoT report of an event becomes a transaction at the cloud level when the incoming centralised data is forwarded to a blockchain transaction and then into the main system, although it involves no consideration other than data and trust. One would probably use a multi-party, multi-ledger system to record IoT transactions corresponding to the stakeholders in an event.
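
A minimal sketch of the dump-and-rename step (the database and schema names are placeholders; check the edited dump by eye, since a blind search-and-replace of “public” can also touch grants and comments):

# Dump just the public schema's definitions.
pg_dump --schema=public --schema-only enterprise > public_schema.sql

# Rewrite references to "public" to the new schema name.
sed 's/\bpublic\b/memclass_1/g' public_schema.sql > memclass_1_schema.sql

# Create the new schema if the dump does not, then restore into it.
psql enterprise -c 'CREATE SCHEMA IF NOT EXISTS memclass_1;'
psql enterprise -f memclass_1_schema.sql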

You could also consider structuring the public schema to handle any erroneous requests to that schema (none is expected).

Redis will help speed transactions if set up and configured well. This requires one replicated instance of Redis per member-class schema (here, 4 replicated Redis Pods), plus as many again for the IoT networks, and one further for the Oseer set. The method used to differentiate between the schemas at runtime involves postgres ‘search paths’ associated with each user’s home schema. A user must be unique on the entire Redis/Postgres system and able to access only one Redis server; create other users for other schemas if necessary. It is probably not a good idea to connect to your redis system as root or postgres except when configuring and developing – we are taking bets on whether you would end up in the default public schema for every root or postgres connection from a front-end GUI with this redis system operating. The method used to route user requests to the correct Redis server for their schema is described at the bottom of this page, before the sample manifest.json file. A considerable effort would need to be spent on programming the Redis key-value cache datastores, but that is beyond the scope of this article. For a high-load environment, the effort will be worth the returns in performance improvements.
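
A minimal sketch of the per-schema user and search-path arrangement on the postgres side (role, schema and password are placeholders):

-- One login role per member-class schema, with its search_path
-- pointing at that schema rather than public.
CREATE ROLE memclass_1_user LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA memclass_1 TO memclass_1_user;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON ALL TABLES IN SCHEMA memclass_1 TO memclass_1_user;
ALTER ROLE memclass_1_user SET search_path = memclass_1;

-- Unqualified table names in this user's sessions now resolve to
-- memclass_1.<table> instead of public.<table>.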

Note: For a more unitised set-up, the IoT functions might be separated from the Non-IoT functions, to create a second IoT-only node with its own postgres database, alongside a simplified main node. You would need to develop using kubectl, kubeadm, and kubelet (or microk8s with multipass) and not Minikube. Here are the schematics for the 2 nodes shown as separate installations. One would only require a single Master Node and a single Cluster.

 

Non-IoT Worker Node

kubegeneral-non-iot

Figure 3

 

IoT Worker Node

kubegeneral-iot

Figure 4

ITCSA’s Working Non-IoT Node

Including the Postgres main Volume (but not the file-holding volume for the database backup/restore copy), 13 Volumes are shown. The Head Oseer Schema and DApp are designed to take care of our own top-level accounting, administration and control requirements.

kubegeneral-ITCSA

Figure 5

ITCSA’s Working IoT Node

kubegeneral-ITCSA-iot

Figure 6

ITCSA’s Planned Working Non-IoT “Node” – Multiple General Networks

There are N+1 independent General Networks, labeled 0,1,..,n,n+1,..,N. The nth network has M(n)+1 member-classes labeled (n,0),(n,1),..,(n,m),(n,m+1),..,(n,M(n)). There are 3 “extra” member-classes/schema covering the non-inter-networked, Real Estate Property-Based DApps/Schema (a-CHEIRRS,b-ChubbaMorris & c-convey-IT). There are f(i) (=F(i)/2 – see fig.) future non-internetworked Schema allowed for (ie one overseer and one main schema each). The Redis servers for these overseers and mains are hidden. There would be j(i) members in each of these single member-class DApps, possibly with distinct (tailored) DApps. Within the CHEIRRS member-class, and possibly within future DApps, each member has their own tailored DApp despite the existence of only the single Schema per member-class globally. In real production and development, each network occupies its own node-pair (non-iot and iot). In addition the Head Overseer system is on the master node. The labeling and numbering systems here and below represent the view of operations of the Head Overseer.

kubegeneral-ITCSA-multi

Figure 7

 

ITCSA’s Planned Working IoT “Node” – Multiple General Networks

This figure represents the N+2 independent General IoT Networks, labeled 0,1,..,n,n+1,..,N and A. The nth network (corresponding to the Non-IoT nth network) has M(n)+1 member-classes labeled (n,0),(n,1),..,(n,m),(n,m+1),..,(n,M(n)), as for the Non-IoT node. Network A belongs to the CHEIRRS Schema and DApps, and foresees a need for IoT device networking, recording and control in Social and Affordable Housing. This network has 0 -> M(A) members (ie M(A)+1 “sub”-classes, since there is only one Schema A, and only a single member-class A, but M(A)+1 DApps). The members are labeled (A,0), (A,1), ..,(A,m), (A,m+1), ..,(A,M(A)). We have allowed for g (= Σₙ G(n)/2 – see fig.) necessary future iot network pairs for G future non-internetworked Systems including overseers, whose DApp numbers depend on the numbers of distinct member DApps (l) served by each network (k) in these single-member-class Networks. As above, there would actually be a separate “iot” node for each network (where required).

kubegeneral-ITCSA-multi-iot

Figure 8

With the memclass-x DApps, development occurs on the host, following the Elastos Developer Documentation and the Elastos Development Tools and Environment. You will need node/npm (and possibly yarn for dependency debugging as well – do NOT run ‘sudo apt install yarn’, which installs the wrong version; download the latest version instead), ionic, and the trinity-cli from Elastos, plus an understanding of Ionic development.

You should understand that the devices running the mem-class-x and iot-class-x DApps also run the blockchains themselves. Although other distributed and centralised blockchain technologies exist (eg Hyperledger), it is the security advantage of Elastos that attracts us. Please refer to BlockChains.

By the way, if you are interested in following a more up-to-date set of instructions using Multipass/Microk8s and a replicated set of database servers, with blockchains running and connected, please visit https://github.com/john-itcsolutions/smart-web-postgresql-grpc. You are welcome to clone the repository, although we do not provide any database schema besides the ‘public’ schema used by the blockchains.

A further and easier approach (once you’ve understood the Juju/Charm technology) to setting up a Kubernetes Back-End, can be found at (to start) Juju, Charms and Kubernetes.

We have a github site, CHEIRRS, which includes the path to follow to develop this way.

The following image is a representation of the recent state of our platform. The above “cheirrs” repository reflects this layout in code.
kubeeverything

IT Cloud Solutions Australia would like to thank Susan Dart, formerly of the Melbourne Blockchain Centre, for introducing us to Elastos. We also wish to express gratitude to Carsten Eckelmann and 2pi Software for hosting Susan’s event, and for pointing out the inherent dangers of a Centralised Blockchain. “Centralised” is not really in the “blockchain spirit of Trust Guarantee”.

Final advice: Download Microsoft’s Visual Studio Code Editor for Ubuntu/Debian. Cheirrs!