Mobile First Development

  •  We are “Mobile First” developers .. we rove free today

  •  We believe in and use Open Source Software

“Mobile First” means your distributed application will work largely on iOS and Android tablets, and completely on all desktops and laptops. Large-screen-dependent sections of the DApp will be difficult to operate on mobile phones (less so on tablets), yet the rest of your DApp – contacts, messages, Elastos mail, tasks and calendar integration, and ‘Push’ notifications – will work naturally. Our services are hosted on the Elastos Smart Web.

For example, we integrate individual Push notifications with our mobile apps, so that your database can instantly remind you of details about a contact (details you select when registering them) via a visible ‘Push’ notification on your phone as that contact rings. This service can be disabled for regular contacts.

 
You are welcome to refer to our page Computers As Machines for a potted history of computing, with some references. It begins with the Polish machine, the unsuccessful Bomba of 1938 (designed in an attempt to decrypt German messages encoded with the Enigma Machine), and moves to the successful Bombe (1940) of Alan Turing, who went on to design the theoretical Turing Machine. That work led, through the British GPO’s Colossus (1943), ENIAC (delivered in 1945) and EDVAC (delivered in 1949), to its practical outcome, “Sequential Machine Architecture” (developed in consultation with John von Neumann between 1943 and 1945), followed by the first Operating System for the second generation of those machines (General Motors, 1956, as a customer of IBM). The page also covers how the development in 1947 and 1948, under William Shockley, of the first two of the various types of solid-state transistor, together with the accompanying advances in miniaturisation (culminating in the ‘Computer on a Chip’), began the long haul towards the first useful personal computers between 1970 and 1980, and away from the need for rooms full of thermionic valves. It ends with the web, ‘Network-As-Machine’ and Artificial Intelligence, and the first quantum computer in 2019 (IBM).
 
The following traces a more modern line of ancestry for the software in use on computers today (starting around 1970, but still on traditional sequential devices, the forerunners of today’s sequential personal computers, Macs, and so on).
 
Dennis Ritchie designed and created the ‘C’ programming language at AT&T’s Bell Labs between 1972 and 1973. Initially it was for writing UNIX utilities; the UNIX operating system was being developed by Ken Thompson at the same time. The pair had previously worked together on the creation of C’s predecessor, the ‘B’ programming language. According to Brian Kernighan, evidence of Ken’s and Dennis’s compatibility as programmers and language designers was reinforced when they compared independent solutions to a problem and found they had written identical code. Ken Thompson explained that UNIX was the first incarnation of a system where everything happened “in the first person”: there was no need for a third-eye view, and Thompson allowed the operating system to do whatever it needed, as if it were the sole controller of its processes. He is a believer in using brute force when necessary (in coding terms).
From the same labs, from 1979 onwards, Bjarne Stroustrup developed C++ as an efficient extension of C, allowing the creation and use in C of a class of things called software “Objects” (a concept studied since the late 1950s). Objects are data structures extended to include the methods (functions) needed to deal with the data the object carries, enabling safer communication between objects.
To get from this point (i.e. C and C++) to the present state of personal computers, you may refer to the Security page on this site, which explains the history of Linux. Linux grew out of MINIX, which was designed and sold by Prof. Andrew Tanenbaum as a teaching aid (with accompanying software on “floppy disk”) conforming to the UNIX standards, yet able to be compiled on a PC, turning a personal computer into a secure multi-user machine, suitable for example as a web server. At the same time there were other branches of open-source activity, such as the BSD family (OpenBSD among them), written in C. Apple’s systems are now based on a descendant of the BSD line (via NeXTSTEP), with native frameworks written in Objective-C; the differences between the Objective-C approach and the C++ approach to Object Oriented Programming result in certain trade-offs when deciding between languages, though Apple’s systems do nevertheless allow some programming in C++. Your Mac can also be employed as a multi-user web server, as can a Linux box.
One of the major components in any modern Linux desktop system is the X11 Window System (a window system has the same importance for any desktop operating system, though one is normally never installed on a production web server). It provides the graphics capabilities and hence much of the user-friendliness and convenience of a modern computer. Without a system like X11, a personal computer would remain a command-line-based machine with very limited graphical capabilities and usefulness.
All of these software systems, which run on top of other software layers based around the operating system kernel, rely on the compilation of C and C++ source code into machine code for execution on demand (under a strict permission system, however!).
During the spread of the World Wide Web in 1995, Netscape Communications recruited Brendan Eich with the goal of embedding the Scheme programming language into its Netscape Navigator.[15] Before he could get started, Netscape Communications collaborated with Sun Microsystems to include in Netscape Navigator Sun’s more static programming language Java (created by James Gosling), in order to compete with Microsoft for user adoption of Web technologies and platforms.[16] Netscape Communications then decided that the scripting language they wanted to create would complement Java and should have a similar syntax, which excluded adopting other languages such as Perl, Python, TCL, or Scheme. To defend the idea of JavaScript against competing proposals, the company needed a prototype. Eich wrote one in 10 days, in May 1995.
More recently (2009), Ken Thompson (see above) et al., working for Google, developed the GO language. GO simplifies objects so that they contain only what C calls “struct”s and other data collections such as arrays (so separating functions from data structures), and it removes inheritance, in order to achieve very economical thread-generation overheads. This addresses a modern problem: we are limited by processor performance in the number of concurrent threads we can safely run. These moves also help keep code easily maintainable and sharable among and between teams. Although inheritance is no longer possible, it is still achievable in practice by copying and pasting code, and making alterations, within a code versioning and sharing system such as GitHub. The GO language is used for much of the Elastos “ChainCode”. A GO thread takes a minute fraction of the memory overhead of a JavaScript thread, and on a web server connected to a database there needs to be one thread per database table, so with hundreds of tables there are many savings to be made. ITCSA migrated our apps to GO; at that stage we no longer required 8GB ‘node.js’ servers for our enterprise databases, and memory demand had been cut by a factor of 10.
However, after further investigation of the Haskell language (of which it is said, “Haskell servers do one thing well”), we are testing web servers delivering REST API endpoints for our databases as Haskell ‘black boxes’, with a further saving by a factor of 8 in memory demands, as well as improvements in response times. Haskell is a functional programming language which you might liken to a spreadsheet: a spreadsheet essentially re-evaluates a function on each entry. Haskell is an example of a very powerful language built around the evaluation of so-called “lambda” functions.
[Architecture diagrams: Front End Graphical User Interfaces (Ionic, Docker); BlockChain & Component Assembly Runtime (Elastos blockchain); BlockChain & Carrier (GO); Haskell Web Servers; Redis in-memory cache server; PLpgSQL database server – Data Processing on the Database]
 
In a similar way to shipping containers for goods, software “Containers” make life a lot easier for developers: Docker helps in moving software around the world and in integrating development elements into a single environment.
Going further than Docker containers, the Kubernetes container-cluster management system, based on Docker (if so chosen), assists in integrating development elements, with even more adaptability than Docker alone, into a single, cloud-ready development and production environment. In Greek, kubernetes is the helmsman or “captain” of a vessel.
Desktop development environment, mirrored in production on at least 2 replicated nodes in the cloud. This single development node (node shown; cluster not shown) actually sits inside the desktop computer host (192.168.x.z), on a virtual machine (refer to Install Minikube). Your modem would be at 192.168.x.y on the internal LAN. Minimum requirements for the host hardware are 32GB RAM and a 250GB SSD (Solid State Drive).

[Diagram: Kubernetes/Minikube development installation on the desktop host]

Across the entire intercommunication system described above – between the DApps on desktops, laptops and mobile devices (as well as DApps on IoT devices) and the Kubernetes installations in the cloud, and between the containers and pods within Kubernetes – the Elastos P2P Carrier network, working with the Kubernetes engineering, guarantees security.
If you wished to follow this line of development, the multiplexed update function for the master ledger (also known as the general ledger) would need to be encoded in PLpgSQL on the Postgres database(s) which you build for your target enterprise(s) (see for example our own Block ’n’ Tackle). This update function (run as a trigger function after data is entered into your financial transaction journal) is not trivial: it must allow for full “ripple-up” updating, since transactions are added out of chronological order, and it must follow accounting principles, keeping credits and debits balanced structurally and ensuring financial and accounting integrity and consistency on the database in global terms (programmatically).
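To make the shape of that concrete, here is a minimal sketch of such a trigger function, assuming journal and ledger tables along the lines of those sketched in the table-design paragraph below (monthly periods, one debit and one credit account per journal row). Every name here is an illustrative assumption, not the Block ’n’ Tackle implementation.

```sql
-- Minimal sketch only: table and column names are illustrative assumptions.
-- Assumes debit_account <> credit_account (enforced in the journal sketch below).
CREATE OR REPLACE FUNCTION ripple_up_master_ledger() RETURNS trigger AS $$
BEGIN
    -- Post the debit and credit legs of the new journal row into the
    -- master (general) ledger totals for the month the transaction falls in.
    INSERT INTO master_ledger (account_id, period, debits, credits)
    VALUES (NEW.debit_account,  date_trunc('month', NEW.tx_date)::date, NEW.amount, 0),
           (NEW.credit_account, date_trunc('month', NEW.tx_date)::date, 0, NEW.amount)
    ON CONFLICT (account_id, period)
    DO UPDATE SET debits  = master_ledger.debits  + EXCLUDED.debits,
                  credits = master_ledger.credits + EXCLUDED.credits;

    -- "Ripple-up": rows can arrive out of chronological order, so re-derive
    -- the running balance for the two affected accounts across all periods,
    -- which refreshes every later period as well.
    UPDATE master_ledger ml
    SET running_balance = sub.cum_balance
    FROM (
        SELECT account_id, period,
               SUM(debits - credits)
                   OVER (PARTITION BY account_id ORDER BY period) AS cum_balance
        FROM master_ledger
        WHERE account_id IN (NEW.debit_account, NEW.credit_account)
    ) sub
    WHERE ml.account_id = sub.account_id
      AND ml.period     = sub.period;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire after each row is entered into the financial transaction journal.
-- (PostgreSQL 11+; older versions use EXECUTE PROCEDURE.)
CREATE TRIGGER trg_journal_ripple_up
AFTER INSERT ON transaction_journal
FOR EACH ROW EXECUTE FUNCTION ripple_up_master_ledger();
```

A real implementation would, of course, also handle updates and deletions of journal rows, multi-leg transactions and period locking.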

Studying a fundamental bookkeeping course is strongly recommended here, as is any experience you can gain as a voluntary treasurer, for example of a local club, society or association. You should be trying to ‘reverse engineer’ any bookkeeping software you use: how does it work? The traditional way for a larger organisation was to store data entered via a system such as IBM’s CICS, networked to a database, with data ‘sliced and diced’ (using SQL, Structured Query Language) and presented in spreadsheets for reporting and planning purposes. This is still a very powerful approach .. yet you do need to decide upon, and iteratively design, the structure of the databases you want to employ. Having designed one schema, we copied it a further 9 times on our test development database system, and ended up with well over 6,000 tables in the 10 schemas, as an estimate of those required by a more general full enterprise-networked system. The system is currently working for us. You can see the details by clicking here or on the above diagram. Remember: ‘The Database is Everything’. It has taken our 2-man partnership nearly 10 years to get this far, so the task is big; however, there are other sections of the project to develop alongside the database, simultaneously.

You would need a table to commence with: a financial transaction journal, followed by tables for customers and for suppliers, followed by your general/master ledger (table). How must the master ledger be designed? What fields would you need in the transaction journal? In the master ledger? You would also need a Chart of Accounts in some format (some ingenuity required here) which is extensible, and maybe a system of account ‘classes’. How will you handle Invoicing & Charging, Bills, Inventory, Production, Production Scheduling & Quality Assurance, Repairs and Maintenance Scheduling and Response, Payroll, Taxation, Superannuation, Human Resources, Orders Issued and Received, and shipments out and in? Do you see why you need to know about Accounting and Bookkeeping, at the very least? There is much more; this merely scratches the surface, hoping to give leads.
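As a purely illustrative answer to those questions (a sketch under assumed names and a monthly period granularity, not the Block ’n’ Tackle schema), a starting point might look like this:

```sql
-- Illustrative starting point only; every name, column and type is an assumption.
CREATE TABLE chart_of_accounts (
    account_id    serial PRIMARY KEY,
    account_code  text NOT NULL UNIQUE,   -- extensible account code, e.g. '1-1200'
    account_name  text NOT NULL,
    account_class text NOT NULL           -- e.g. asset, liability, equity, income, expense
);

CREATE TABLE transaction_journal (
    tx_id          bigserial PRIMARY KEY,
    tx_date        date NOT NULL,
    description    text,
    debit_account  integer NOT NULL REFERENCES chart_of_accounts(account_id),
    credit_account integer NOT NULL REFERENCES chart_of_accounts(account_id),
    amount         numeric(18,2) NOT NULL CHECK (amount > 0),
    CHECK (debit_account <> credit_account)   -- double entry: two distinct accounts
);

CREATE TABLE master_ledger (
    account_id      integer NOT NULL REFERENCES chart_of_accounts(account_id),
    period          date NOT NULL,             -- e.g. first day of the month
    debits          numeric(18,2) NOT NULL DEFAULT 0,
    credits         numeric(18,2) NOT NULL DEFAULT 0,
    running_balance numeric(18,2) NOT NULL DEFAULT 0,
    PRIMARY KEY (account_id, period)           -- the ON CONFLICT target used above
);
```

Customers, suppliers, invoicing, inventory, payroll and the rest would each add their own tables (and schemas), referencing the chart of accounts and feeding the journal.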

You do need to be able to emulate the strategic thinking of a CEO and Board (or manager and owner) about the status and direction of an organisation (and not just financially). There is much to be guided and managed, and you are at the business nuts-and-bolts level here. Together with the more top-level interests of Owners and Managers, you are simultaneously caring for the software interests of each of the Board’s (Owner’s) and CEO’s (General Manager’s) staff members. The risks and responsibilities are further extended by entering the multiplexed smart contract market (assuming you do). All participating organisations on a Business Network require guaranteed individual protection of interests at every stage in every process.

In order to be able to easily add tables, columns, primary keys, triggers and trigger functions to the database in a browser-based GUI, consider pgAdmin4. You would run pgAdmin4 within its own container, separate from the other containers on your desktop, and you have to link it (in the “docker run” statement) to the Postgres database server container’s network on Docker so that it finds the server. All of this database development is better done outside of, and without, Kubernetes.
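A minimal sketch of that linkage, assuming a user-defined Docker network and placeholder container names, image tags and credentials (adjust everything to your own set-up):

```bash
# Illustrative only: network, container names, image tags and credentials are placeholders.
docker network create pgnet

docker run -d --name postgres-db --network pgnet \
    -e POSTGRES_PASSWORD=change_me \
    postgres:12

docker run -d --name pgadmin4 --network pgnet -p 8080:80 \
    -e PGADMIN_DEFAULT_EMAIL=you@example.com \
    -e PGADMIN_DEFAULT_PASSWORD=change_me \
    dpage/pgadmin4

# In the pgAdmin4 browser GUI (http://localhost:8080), register the server
# using host name "postgres-db" and port 5432, resolved over the shared network.
```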

You would need to become conversant with the Docker system and commands. You can capture your database structure by regularly performing a pg_dump of the schema. We have found that doing a “sudo docker push” of your images to the cloud regularly is expensive in data, and that it is more reliable and effective to rely on pg_dumps (restoring the SQL dump with psql) in case of system crashes, or simply to update, say, a Kubernetes set-up. You “sudo docker pull” to retrieve the bare database image, or let the yml/yaml file do the work. You then enter the container after copying the backup.sql into it from the host, and restore from within the container (see “docker exec”, “kubectl exec”, “docker cp” and “kubectl cp”).
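The dump-and-restore cycle described above looks roughly like this; container, pod, database and file names are placeholders:

```bash
# Illustrative workflow only; names are placeholders.
# 1. Capture the schema (and data) from the running database container.
docker exec postgres-db pg_dump -U postgres -d enterprise_db > backup.sql

# 2. On a freshly pulled/started container, copy the dump in and restore it
#    from inside the container.
docker cp backup.sql postgres-db:/tmp/backup.sql
docker exec postgres-db psql -U postgres -d enterprise_db -f /tmp/backup.sql

# 3. The Kubernetes equivalents:
kubectl cp backup.sql postgres-0:/tmp/backup.sql
kubectl exec postgres-0 -- psql -U postgres -d enterprise_db -f /tmp/backup.sql
```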

Run “minikube start”, then “kubectl create secret docker-registry --help”, and issue a command following the pattern at the bottom of the resulting help page. Later, in order to give Minikube access to what it considers a secure private repository, you issue ‘minikube cache add your_repo_name/your_image_name:your_tag’ and then ‘minikube cache reload’. You can list your cached repos with ‘minikube cache list’. We have found that attempting to rely on the database image as cached may be disappointing, and that the restore-from-backup procedure above is necessary on the cached images after they have been built and are running as containers.
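Put together, the sequence is along these lines; the registry address, credentials and image names are placeholders, and the secret command follows the pattern shown by the --help output:

```bash
# Illustrative only: registry address, credentials and image names are placeholders.
minikube start

kubectl create secret docker-registry --help   # pattern is at the bottom of this output
kubectl create secret docker-registry my-registry-secret \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=your_user \
    --docker-password=your_password \
    --docker-email=you@example.com

# Give Minikube local access to the private image.
minikube cache add your_repo_name/your_image_name:your_tag
minikube cache reload
minikube cache list
```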

“minikube start” needs to run with no error or warning messages at all. If there is a message, attend to the errors noted (in particular, enable non-root usage of Docker exactly as recommended in the message). Then “minikube stop” and “minikube delete”; then “minikube start” again, followed (because of the delete) by your “kubectl create secret docker-registry …” command once more (and after each delete/start).

The Kubernetes system builds a node from pod and pod-service specifications in a .yaml file (somewhat similar to a docker-compose.yml file), which is applied with “kubectl apply -f path/to/filename.yaml”. The pod and service specs contain the specs for each container in that pod, and the entire yaml file may contain several pod specs, each with its own containers specified.

In our Kubernetes system modelled above, there are 3 ‘Deployment’ specs involved, each consisting of 2 replicated pods: one houses the Elastos blockchain set of 4 containers, and the others house the in-memory cache and the web server. The database consists of 2 replicated pods in a StatefulSet. Each Deployment or StatefulSet spec has its own Service specification prepended, so there are 4 Services defined, containing the specs of the port numbers exposed on each container in each pod. The StatefulSet’s Service is a ‘Headless Service’, which is never exposed externally.
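As a shape-only sketch of such a file, here are two of the four workloads: the Haskell web server Deployment with its Service, and the Postgres StatefulSet with its headless Service. The service and port names mirror those quoted in the manifest.json note below, but the images, ports and replica counts are placeholders rather than our production manifests.

```yaml
# Illustrative shape only; images, ports and replica counts are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: haskell
spec:
  selector:
    app: haskell
  ports:
    - name: webserve-https
      port: 443
      targetPort: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haskell
spec:
  replicas: 2
  selector:
    matchLabels:
      app: haskell
  template:
    metadata:
      labels:
        app: haskell
    spec:
      containers:
        - name: webserver
          image: your_repo_name/haskell-webserver:your_tag
          ports:
            - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None            # headless Service; never exposed externally
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          ports:
            - containerPort: 5432
```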

If you intend to develop a secure system, consider utilising the Elastos development tools and environment. These provide a way to develop Ionic front-end GUIs as well as giving access to the necessary blockchains. The key to accessing your database securely from the Elastos DApps is to specify the URL of your database’s web server as service-name:port-name/* in src/assets/manifest.json (in your Elastos/Ionic DApp’s root folder), together with all other necessary URLs to be accessed. In our case, the database web server URL is haskell:webserve-https/*. (The port name, here ‘webserve-https’, should be used rather than the port number, and similarly the service name in place of the IP address.)

We develop code and collaborate on GitHub, using Git – more of the work of Linus Torvalds!
~

A short word about our apps….

~

As server and general system response times are partially dependent on the relative locations of the Data Centre (the Cloud Centre) and the client, at this stage we are planning to use Sydney and London as our Data Centres.

~

Immutable audit trails and multi-party transactions on the Elastos Blockchain; mass relational data storage on secure Postgres databases; the Elastos P2P Carrier network to connect them (web-socket-safe).

~

by John L. Olsen, Edward B. Whittle

using the Elastos Component Assembly Runtime in C++: on the SideChains, and connecting to Databases, WebServices and the HIVE file storage system via Carrier.

~

Our MultiPlexed Double-Entry Accounting System
Master Ledger::Transaction Journal catering for Multi-Party Transactions on the Elastos BlockChain::SideChains

ITCSA’s Accounting Solution,

the ‘Block ‘n’ Tackle’™

.. incorporating a convenient Business Process Design Interface ..

Written in JavaScript, C++, GO, PHP and PLpgSQL on Elastos BlockChains and Postgres Databases
[© IT Cloud Solutions Australia, 2011-2020]


Trade Mark Registered

 

Broad Elastos Application Concept

Your same business procedures

  • Elastos Blockchains are based on modern technology developed under the auspices of the Elastos Foundation.
  • Our databases are built for predictability and reliability.
  • The majority of your Business Transaction Data is stored on a Relational Database off the chain, and certain Business Process Data is copied amongst neighbouring on-chain nodes (devices) across the globe for safety, security and redundancy.
  • The Elastos Project is an Open Project involving many corporate and individual participants, with its strength deriving from its open-source code.
  • We use Elastos BlockChains as our Accounting Journal/Ledger System, in connection with Postgres Databases.
  • Yours would be a Business Channel on a SideChain sharing a database system on that SideChain with other Business Channel owners in related (networked or non-networked) businesses.

 

 

  • Accordingly, unlike non-blockchain systems (where Superusers may change records on the database), the transactions recorded on BlockChains are not able to be changed by anyone at all, ever. Each device keeps the others honest.
  • Unlike the blockchains underpinning Bitcoin and others, the Elastos Blockchain is Permissioned, not Anonymous, so the identities of the users and entities involved in each transaction are recorded.
  • With our Blockchains, “coin-mining” is involved, to ensure valid and sealed transactions.
  • Blockchains provide other data processing advantages, including Automatic, Real-time, Multi-Party-Validated Transactions, a.k.a. ‘Smart Contracts’.

 

 

Specific Elastos Distributed Application Concept

[Diagram: Application frameworks (= DApps) on tablets, desktops and other registered devices. DApps access the SideChains via Carrier, from any registered user’s device or any registered thing/system or machine. Elastos BlockChain <> BitCoin MainChain (the Ultimate Trust Provider): proof of locked assets on BitCoin and proof of locked assets on Elastos, via BitCoin BlockChain miners, who must “solve” a network-related cryptographic problem in order to lock down a block of transactions, be rewarded in BitCoin (convertible to cash), and provide proof of work via returned transaction hashes.]
~

[Diagram: Haskell Web Server <> Carrier <> Postgres Server (Relational DataBase), with communication via 200,000+ worldwide nodes to WebServices and Users; raw documents etc. stored on the IPFS Web Server (Elastos Hive).]

  • Component Assembly Runtime in C++ by Elastos developers
  • Core Accounting Functions via the ‘Block ‘n’ Tackle’™ in C++, GO, Haskell, JavaScript & PLpgSQL © by IT Cloud Solutions Australia 2011-2020



 

 


The frontend Apps are encoded in HTML (structure), JavaScript (function) and CSS (style), and are simultaneously packaged as native iOS and Android mobile apps, from Use Cases defined by you and us. We use the Elastos Trinity Browser and the Ionic Framework in connection with the Elastos System to target all your platforms: smartphone, tablet, laptop and desktop.

The use of JavaScript and its style of coding – a relatively recent ‘paradigm’ in enterprise application programming (made practical by the performance improvements introduced in the 2008 release of Google’s V8 JavaScript engine) – ensures maximum speed and availability of running applications due to their “Non-Blocking Input/Output” design.

Input and output channels for the devices and servers, and for the databases, are fully occupied, never blocked or waiting for one or another user’s slower process to complete. Slower processes are resumed as soon as their callbacks fire (“callback” functions are an important feature of JavaScript). This means bandwidth is used very efficiently and processes are lightning fast.
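A tiny sketch of what “non-blocking” with a callback means in practice (Node.js-style JavaScript; the file name and messages are illustrative only):

```javascript
// Minimal sketch of non-blocking I/O with a callback; names are illustrative.
const fs = require("fs");

console.log("Request received; starting a slow file read...");

// The read is handed off to the operating system; JavaScript does not
// block here waiting for the disk.
fs.readFile("ledger-report.csv", "utf8", (err, data) => {
  // This callback runs only once the slow I/O has completed.
  if (err) {
    console.error("Read failed:", err.message);
    return;
  }
  console.log(`Report ready: ${data.length} characters`);
});

// Executes immediately, while the file is still being read,
// leaving the thread free to serve the next user.
console.log("Free to handle other requests in the meantime.");
```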

  • Our method of development ensures 100% functionality on all iOS and Android tablet devices, plus integration of communication, contacts, tasks, calendar and so on (as well as ‘Push’ notifications) for mobile phones.
  • This functionality is easy to generalise to desktop/laptop computers.
  • It all just works. And fast.

 

[ Also note that “JavaScript” is not “Java”; they are owned and licensed by two very different realms amongst software corporations. Strictly speaking, Java™ came first historically, created by James Gosling at the now defunct Sun Microsystems in 1995 (Sun was bought by Oracle in 2010). JavaScript was created by Brendan Eich, also in 1995, to complement Java, for Netscape Communications (a company likewise now gone), in competition with other companies.


The Mozilla project (later the Mozilla Foundation) took over stewardship of JavaScript in the early 2000s. However, with the release of Google’s (enterprise-capable) V8 JavaScript engine in 2008, the two languages are now “complementary competitors” in the web software language market. The languages can and do work together in many systems, including our own.
JavaScript’s particular strength is its “asynchronous” operation; Java traditionally operates “synchronously” and is thus “I/O blocking”. Although its kernel is a Linux kernel (in C/C++), much of the Android™ operating system, for example, is written in Java™ or relies on access to Java libraries. This means our own systems contain machine code (it is all machine code in the finish – imagined as 1s and 0s in memory registers) compiled from these “higher level languages”, Java and JavaScript, as well as from our web servers in the PHP language. There is also machine code from the compilation of lower-level software code in Assembler, C, C++ (non-Apple devices) and Objective-C (Apple devices).

On a typical webpage, JavaScript takes care of the “behaviour” of the page (it is said to be the ‘glue’ of the web, otherwise thought of as the ‘functional’ part), while structure and content are given in the HTML, with style and formatting defined in Cascading Style Sheets (CSS).

It all starts in various constrained forms of English, written in a very conformant fashion by many people; it is compiled and built on some computers; and it ends by being “deployed” onto these and other computers, and onto networks of other devices, to begin working: making decisions, accepting inputs, transferring data, producing outputs, recording information, and using digital arithmetic and Boolean algebra – the hardwired “machine intelligence” of a computer – to perform logical and numerical computations. ]