- We are “Mobile First” developers .. we rove free today
- We believe in and use Open Source Software
“Mobile First” means your distributed application will work largely on iOS & Android tablets, and completely on all desktops & laptops. Large-screen-dependent sections of the DApp will be difficult to operate on mobile phones, less so on tablets, yet the rest of your DApp will work naturally, with contacts/messages/Elastos mail/tasks/calendar integration and ‘Push’ notifications. Our services are hosted on the Elastos Smart Web.
For example, we integrate individual ‘Push’ Notifications with our mobile apps, enabling your database to remind you instantly of the details of contacts (details which you select when registering them) via a visible ‘Push’ notification on your phone as they ring. This service can be disabled for regular contacts.
You are welcome to refer to our page Computers As Machines for a potted history of computing, with some references. It begins with the Polish machine, the unsuccessful Bomba of 1938 (designed in an attempt to decrypt German messages encoded with the Enigma Machine), followed by the successful Bombe (1940) co-designed by Alan Turing, whose earlier theoretical Turing Machine (1936) led, through the British GPO’s Colossus (1943), ENIAC (delivered in 1945) and EDVAC (delivered in 1949), to its practical outcome, “Sequential Machine Architecture” (developed in consultation with John von Neumann between 1943 and 1945), followed by the first operating system, developed for the second generation of those machines (General Motors, 1956, as a customer of IBM). The page also covers how the development, in 1947 and 1948 under William Shockley, of the first two of the various types of solid-state transistor, and the accompanying advances in miniaturisation (culminating in the ‘Computer on a Chip’), began the long haul towards the first useful personal computers between 1970 & 1980, and away from the need for rooms full of thermionic valves. Then come the web, ‘Network-As-Machine’ and Artificial Intelligence, ending with the first commercial Quantum Computer in 2019 (IBM).
The following traces a more modern line of ancestry of the software in use on computers today (starting around 1970, but still on traditional sequential devices, the forerunners of today’s sequential personal computers, Macs, etc.).
Dennis Ritchie designed and created the ‘C’ programming language at AT&T’s Bell Labs between 1972 and 1973, initially for writing UNIX utilities. The UNIX operating system was being developed by Ken Thompson at the same time. The pair had previously worked together on the creation of C’s predecessor, the ‘B’ programming language. Evidence of Ken’s and Dennis’s compatibility as programmers and language designers was reinforced, according to Brian Kernighan, when they compared their independent solutions to a problem and found the code identical. Ken Thompson explained that Unix was the first incarnation of a system where everything happened “in the first person”: there was no need for a third-party view, as Thompson allowed the operating system to do whatever it needed, as if it were the sole controller of its processes. He is a believer in using brute force (in coding terms) when necessary.
From the same Labs, from 1979 onwards, Bjarne Stroustrup developed C++ as an efficient extension of C, allowing the creation and use of a class of things new to C (but studied since the 1950s) called software “Objects”. Objects are data structures extended to include the methods (functions) necessary to deal with the data the object carries, enabling safer communication between objects.
In order to get from this point (i.e. C and C++) to the present state of personal computers, you may refer to the Security page on this site, which explains the history of Linux, inspired by MINIX. MINIX was designed and sold by Prof. Andrew Tanenbaum as a teaching aid (with accompanying software on “floppy disk”), conforming to the UNIX standards yet able to be compiled on a PC, turning a personal computer into a secure multi-user machine, suitable as a web server (for example). At the same time there were other branches of Open Source activity, such as the BSD family (including OpenBSD). Apple’s systems are now based on a BSD-derived operating system and favour the Objective-C language, while nevertheless allowing some programming in C++. The differences between the Objective-C approach and the C++ approach to Object Oriented Programming result in certain trade-offs when deciding between the languages. Your Mac can also be employed as a multi-user web server, as can a Linux box.
One of the major components of any modern UNIX/Linux desktop system (and, until 20/11/2020, also of Apple Mac OS – now “install on demand” only) is the X11 Window System. (A window system has the same importance for any desktop operating system, though it is never installed on a production web server.) This, together with the user’s choice of “Window Manager”, provides the graphics capability and thus much of the user-friendliness and convenience of a modern computer. The window system integrates and gives access to the keyboard, the mouse/pointing device and the screen. Without a system like X11, a personal computer would remain a command-line-based machine with very limited graphical capability and usefulness.
All of these software systems, which run on top of further software layers based around the operating-system kernel, rely on the compilation of C and C++ source code into machine code for execution on demand (subject, however, to a strict permission system!).
However, after further investigation of the Haskell language (of which it is said, “Haskell servers do one thing well”), we are testing web servers delivering REST API endpoints for our databases as Haskell ‘black boxes’, with a further factor-of-8 saving in memory demands as well as improvements in response times. Haskell is a functional programming language which you might liken to a spreadsheet: a spreadsheet essentially re-evaluates a function on each entry occasion. Haskell is an example of a very powerful language built on evaluating so-named “lambda functions”.
Nevertheless, since the most promising prospect for us in the Elastos environment uses Python as the “middleware” language (in a so-called gRPC system, not REST APIs), with NGINX as the cloud-installed web server/load balancer, we are unable to use Haskell.
Front End Graphical User Interfaces
Data Processing on Database
In a similar way to shipping containers for goods, software “containers” make life a lot easier for developers: Docker helps in moving software around the world, and in integrating development elements into a single environment.
Going further than Docker containers, the Kubernetes container-cluster management system, based on Docker (if so chosen), assists in integrating development elements, with even more adaptability than Docker alone, into a single, cloud-ready development and production environment. In Greek, kubernetes is the helmsman or “captain” of a vessel.
Desktop development environment, viewed from the ceiling – mirrored in production (viewed from a roof) on at least 2 replicated nodes in the cloud. This single development node (node shown; cluster not shown) actually runs inside the desktop computer host (192.168.x.z), on a virtual machine (refer to Install Minikube). Your modem would be at 192.168.x.y on the internal LAN. Minimum requirements for the host hardware are 32GB RAM and a 250GB SSD (Solid State Drive).
Across the entire intercommunication system involved above – between the DApps on desktops, laptops and mobile devices (as well as IoT-device DApps) and the Kubernetes installations in the cloud, and between the containers and pods within Kubernetes – the Elastos P2P Carrier network, working with the Kubernetes engineering, guarantees security.
If you wished to follow this line of development, the multiplexed update function for the master ledger (also known as the general ledger) would need to be encoded in PL/pgSQL on the Postgres database(s) which you build for your target enterprise(s) (see, for example, our own Block ‘n’ Tackle). This update function (to be run as a trigger function after the entry of data into your financial transaction journal) is not trivial: it must allow for full “ripple-up” updating as transactions are added out of chronological order, and it must follow accounting principles, keeping credits and debits balanced structurally and ensuring financial and accounting integrity & consistency on the database in global terms (programmatically).
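As a minimal sketch of the shape such a trigger could take (all table and column names here are hypothetical, invented purely for illustration; the real ripple-up logic is considerably more involved):

```shell
# Hypothetical skeleton only: a real ripple-up accounting function is far
# more involved than this single balance adjustment.
psql -U postgres -d your_db <<'SQL'
CREATE OR REPLACE FUNCTION ripple_up_master_ledger() RETURNS trigger AS $fn$
BEGIN
  -- Re-post the new journal row into every master-ledger period at or after
  -- its entry date, so that out-of-order entries "ripple up" the periods.
  UPDATE master_ledger ml
     SET balance = ml.balance + NEW.debit - NEW.credit
   WHERE ml.account_id = NEW.account_id
     AND ml.period_end >= NEW.entry_date;
  RETURN NEW;
END;
$fn$ LANGUAGE plpgsql;

-- Fire after each insertion into the transaction journal, as described above.
CREATE TRIGGER trg_journal_ripple
AFTER INSERT ON transaction_journal
FOR EACH ROW EXECUTE FUNCTION ripple_up_master_ledger();
SQL
```

(`EXECUTE FUNCTION` requires Postgres 11 or later; older versions use `EXECUTE PROCEDURE`.)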
Study of a fundamental bookkeeping course is strongly recommended here, as is any experience you can gain as a voluntary Treasurer, for example of a local club, society or association. You should be trying to ‘reverse engineer’ any bookkeeping software you use: how does it work? The traditional way, for a larger organisation, was to store data entered via a system such as IBM’s CICS, networked to a database, with data ‘sliced and diced’ (using SQL – Structured Query Language) and presented in spreadsheets for reporting and planning purposes. Still a very powerful approach .. yet you do need to decide upon, and iteratively design, the structure of the databases you want to employ. Having designed one schema, we copied it 9 further times on our test development database system, and ended up with well over 6,000 tables in the 10 schemas, as an estimate of those required by a more general full enterprise-networked system. The system is currently working for us. You can see the details by clicking here or on the above diagram. Remember: ‘The Database is Everything’. It has taken our 2-man partnership nearly 10 years to get this far, so the task is big; however, there are other sections of the project to develop alongside the database, simultaneously.
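As a toy illustration of the structural invariant behind all of this (the sample journal rows are invented for the example): across any journal, total debits must equal total credits.

```shell
# Toy journal in account,debit,credit form: posting a $100 cash sale.
# The books balance only if total debits equal total credits.
printf 'cash,100,0\nsales,0,100\n' |
  awk -F, '{d += $2; c += $3} END {print (d == c ? "balanced" : "unbalanced")}'
# prints "balanced"
```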
To commence, you would need a table: a financial transaction journal, followed by tables for customers and for suppliers, followed by your general/master ledger (table). How must the master ledger be designed? What fields would you need in the transaction journal? The master ledger? You would also need a Chart of Accounts in some format (some ingenuity required here) which is extensible, and maybe a system of account ‘classes’. How will you handle invoicing & charging, bills, inventory, production, production scheduling & quality assurance, repairs and maintenance scheduling and response, payroll, taxation, superannuation, human resources, orders issued and received, and shipments out and in? Do you see why you need to know about accounting and bookkeeping, at the very least? There is much more, as this merely scratches the surface, hoping to give leads.
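A minimal starting sketch of those first tables might look as follows (all names, types and column choices are hypothetical, purely to suggest a shape; a real schema needs far more):

```shell
psql -U postgres -d your_db <<'SQL'
-- Hypothetical sketch of the first tables described above.
CREATE TABLE chart_of_accounts (
  account_id    serial PRIMARY KEY,
  account_code  text UNIQUE NOT NULL,  -- extensible coding scheme
  account_name  text NOT NULL,
  account_class text                   -- e.g. asset, liability, income, expense
);

CREATE TABLE transaction_journal (
  entry_id    bigserial PRIMARY KEY,
  entry_date  date NOT NULL,
  account_id  integer NOT NULL REFERENCES chart_of_accounts,
  debit       numeric(18,2) NOT NULL DEFAULT 0,
  credit      numeric(18,2) NOT NULL DEFAULT 0,
  description text,
  CHECK (debit = 0 OR credit = 0)      -- each row posts to one side only
);

CREATE TABLE master_ledger (
  account_id integer NOT NULL REFERENCES chart_of_accounts,
  period_end date NOT NULL,
  balance    numeric(18,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (account_id, period_end)
);
SQL
```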
You do need to be able to emulate the strategic thinking of a CEO and Board (or manager and owner) about the status and direction of an organisation (and not just financially). There is much to be guided and managed. You are at business nuts-and-bolts level here. Together with the more top-level interests of owners and managers, you are simultaneously caring for the software interests of each of the Board’s (owner’s) and CEO’s (general manager’s) staff members. The risks & responsibilities are further extended by entering the multiplexed smart-contract market (assuming you do). All participating organisations on a Business Network require guaranteed individual protection of interests at every stage of every process.
In order to be able easily (in a browser-based GUI) to add tables, columns, primary keys, triggers and trigger functions to the database, consider PgAdmin4. You would be running PgAdmin4 within its own container, separate from the other containers on your desktop. You have to link it (in the “docker run” statement) to the Postgres database-server container’s network on Docker so that it finds the server. All this database development is better done outside, and without, Kubernetes.
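One way such “docker run” statements could look, assuming a user-defined Docker network named pg-net (the container names, passwords and network name are placeholders; the image and its environment variables are those published for dpage/pgadmin4):

```shell
# Create a shared network and attach the database server to it (once).
sudo docker network create pg-net
sudo docker run -d --name pg-server --network pg-net \
  -e POSTGRES_PASSWORD=changeme postgres:13

# Run PgAdmin4 in its own container on the same network, so that the
# hostname "pg-server" resolves to the database container.
sudo docker run -d --name pgadmin --network pg-net -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=you@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  dpage/pgadmin4
# Then browse to http://localhost:8080 and register the server "pg-server".
```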
You would need to become conversant with the Docker system and its commands. You can keep your database structure safe by regularly performing a pg_dump of the schema. We have found that running “sudo docker push” to send your images to the cloud regularly is expensive in data, and that it is more reliable and effective to rely on pg_dumps (restoring the SQL dump with psql) in case of system crashes, or just to update, say, a Kubernetes set-up. You “sudo docker pull” to retrieve the bare database image, or let the yml/yaml file do the work. You then enter the container after copying the backup.sql into it from the host, and restore from within the container (see “docker exec”, “kubectl exec”, “docker cp” and “kubectl cp”).
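The dump-and-restore cycle just described might look like this (container, pod, database and schema names are placeholders, not our production names):

```shell
# Take a regular SQL dump of your schema from the running container.
sudo docker exec pg-server pg_dump -U postgres -n my_schema my_db > backup.sql

# Docker: copy the dump into the (freshly pulled) database container
# and restore from within it.
sudo docker cp backup.sql pg-server:/tmp/backup.sql
sudo docker exec pg-server psql -U postgres -d my_db -f /tmp/backup.sql

# Kubernetes: the same pattern using kubectl.
kubectl cp backup.sql my-db-pod-0:/tmp/backup.sql
kubectl exec my-db-pod-0 -- psql -U postgres -d my_db -f /tmp/backup.sql
```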
Run “minikube start”, then “kubectl create secret docker-registry --help”, and issue a command following the pattern at the bottom of the resulting help page. Later, in order to give minikube access to what it considers a secure private repository, you issue ‘minikube cache add your_repo_name/your_image_name:your_tag’, then ‘minikube cache reload’. You can list your cached repos with ‘minikube cache list’. We have found that attempting to rely on the database image as cached may be disappointing, and that the restore-from-backup procedure above is necessary on the cached images, after they have been built and are running as containers.
“minikube start” needs to run with no error or warning messages at all. If there is a message, you need to attend to the errors noted (particularly enabling non-root usage of Docker, exactly as recommended in the message). Then “minikube stop” and “minikube delete”. Then “minikube start”, followed (because of the delete) by your “kubectl create secret docker-registry, etc” command again (and so after each delete/start).
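Putting the two paragraphs above together, the sequence is roughly as follows (the secret name, registry credentials and image name are placeholders):

```shell
minikube start                  # must finish with no errors or warnings
# If it complained: fix the cause (e.g. non-root Docker usage), then reset:
minikube stop && minikube delete && minikube start

# Re-issue the registry secret after every delete/start, following the
# pattern shown at the bottom of this command's help page:
kubectl create secret docker-registry --help
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=your_user \
  --docker-password=your_password \
  --docker-email=you@example.com

# Give minikube local access to your private image, then verify:
minikube cache add your_repo_name/your_image_name:your_tag
minikube cache reload
minikube cache list
```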
The Kubernetes system builds a node from pod and pod-service specifications in a .yaml file (similar in kind to a docker-compose.yml file), which is run with “kubectl apply -f path/to/filename.yaml”. The pod and service specs contain the specs for each container in that pod, and the entire yaml file may contain several pod specs, each with its own containers specified.
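A minimal sketch of such a .yaml specification, here with a single web-server deployment and its prepended service, applied from a heredoc (the names, port and image are placeholders, not our production spec):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    app: webserver
  ports:
    - name: webserve-https   # port referenced by name, not number
      port: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2                # two replicated pods, as in our set-up
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: your_repo_name/your_image_name:your_tag
          ports:
            - containerPort: 443
EOF
```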
In our Kubernetes system modelled above, there are 3 ‘deployment’ specs involved, each consisting of 2 replicated pods. One pod houses the Elastos blockchain set of 4 containers; the others house the in-memory cache and the web server. The database consists of 2 replicated pods in a Stateful Set. Each deployment or stateful-set spec has its own service specification prepended; thus there are 4 services defined, containing the specs of the port numbers exposed on each container in each pod. For the stateful set, its node service is a ‘Headless Service’, which is never exposed externally.
If you intend to develop a secure system, consider utilising the Elastos development tools and environment, which provide a way to develop Ionic front-end GUIs as well as giving access to the necessary blockchains. The key to accessing your database securely from the Elastos DApps is to specify the URL of your database’s web server as service-name:port/* in src/assets/manifest.json (in your Elastos/Ionic DApp’s root folder), together with all other necessary URLs to be accessed. In our case, for the database web server the URL is haskell:webserve-https/*. (The port name, here ‘webserve-https’, should be used rather than the port number, and similarly the service name in place of the IP address.)
We develop code and collaborate on GitHub, which is built around Git – more of the work of Linus Torvalds! (Though GitHub itself is now owned by Microsoft.)
A short word about our apps….
As server and general system response times are partially dependent on the relative locations of the Data Centre (the Cloud Centre) and the client, at this stage we are planning to use Sydney and London as our Data Centres.
Immutable audit trails and multi-party transactions on Elastos Blockchain; mass Relational Data storage on Postgres
Databases: Elastos P2P Carrier network to connect them (web-socket-safe).
by John L. Olsen, Edward B. Whittle
using the Elastos Component Assembly Runtime in C++ : On the SideChains, and connecting to Databases, WebServices and the HIVE file storage system via Carrier.
Our MultiPlexed Double-Entry Accounting System
Master Ledger::Transaction Journal catering for Multi-Party Transactions on the Elastos BlockChain::SideChains
ITCSA’s Accounting Solution,
the ‘Block ‘n’ Tackle’™
.. incorporating a convenient Business Process Design Interface ..
[© IT Cloud Solutions Australia, 2011-2021]
Broad Elastos Application Concept
- Elastos Blockchains are based on modern technology developed under the auspices of the Elastos Foundation.
- Our databases are built for predictability and reliability.
- The majority of your Business Transaction Data is stored on a Relational Database off the Chain, and certain Business Process Data is copied amongst neighbouring nodes (devices) on-chain across the globe for safety, security and redundancy.
- The Elastos Project is an Open Project involving many corporate and individual participants, built on open-source code and deriving strength from its open-source nature.
- In part, we use Elastos BlockChains as Enterprise Accounting Audit-Trail Journals (blockchains are actually Journals, more than their name “Distributed Ledger Technology” suggests), in connection with Postgres databases for mass relational data storage.
- Yours would be a Business Channel on a SideChain sharing a database system securely connected to that SideChain (enclosed in Elastos Carrier) with other Business Channel owners in related (networked or non-networked) businesses.
- Accordingly, unlike in non-blockchain systems (where superusers may change records on the database), the transactions recorded on blockchains cannot be changed by anyone at all, ever. Each device keeps the others honest.
- Unlike the blockchains underpinning Bitcoin and others, the Elastos Blockchain is Permissioned, not Anonymous, so the identities of the users & entities involved in each transaction are recorded.
- With our blockchains, “coin-mining” is involved, ensuring transactions are validated and sealed.
- Blockchains provide other data-processing advantages, including Automatic, Real-time, Multi-Party-Validated Transactions, a.k.a. ‘Smart Contracts’.
Specific Elastos Distributed Application Concept
[Diagram] The Elastos BlockChain (main chain) sits above the Elastos SideChains, anchored by proof of locked assets on BitCoin on one side and on Elastos on the other. DApps access the SideChains via Carrier, from any registered user’s device or any Elastos Smart-Web Server (which also holds Raw Documents etc.).
- Component Assembly Runtime in C++ by Elastos developers
- Our method of development ensures 100% functionality on all iOS and Android tablet devices, plus integration of communication, contacts, tasks, calendar etc. (and ‘Push’ notifications) for mobile phones.
- This functionality is easy to generalise to desktop/laptop computers.
- It all just works. And fast.