A long word about security….

Our “virtual” application servers in the cloud, and our virtual Database and BlockChain servers, run on commercial versions of Linux, a popular and secure open-source operating system that loosely conforms to Unix specifications and is at least as secure as any commercial Unix offering. Multiple layers of security ensure that all files are locked and encrypted and that authentication systems cannot be broken, so although a sheer volume of requests in a short period can cause downtime in response to an attack, no data is put at risk. There is, however, more than Denial of Service to defend against. We operate sophisticated virus and other malware protection, including on the file and email servers that face our Windows and Mac clients, protecting those clients against Windows and Mac viruses and malware. We also operate protection that defends our own servers directly against viruses, malware and other exploits. In addition, all our data traffic on the web is encrypted in both directions (TLS/SSL).

At worst, in the very unlikely event of an extreme disruption (in physical, electrical/electronic, or software-code form) penetrating our defences, a recovery can always be staged: restore the last known ‘clean’ tarball backup, then rewrite the database forward from that point in time using the sequence of Write Ahead Logs, with the specific bit/byte pattern of any malware code filtered out and removed at low level from the “write ahead log” files. At most, the final half hour of work before the disruption is lost.

In other words, any virus or malware elements that have been discovered can be removed before restoration with WAL files: commence with a database restored from the last clean tarball backup (as above), then apply the cleaned (low-level digitally filtered) WAL files progressively forward from that point in time, re-writing transactions and re-filling the database with clean data, to within at most half an hour of the final disruption.

Our systems are fast, secure and completely scalable – on the spot and automatically. Entire systems can be redeployed quickly after being taken down. The databases are automatically and regularly backed up to other sites as tarball archives stored in the cloud, and the running databases use WAL (Write Ahead Logs) to ensure continuous short-term data safety (in the form of sequences of WAL files over time) in all but the most horrendous global scenarios. Our log files are written to, and housed at, several different sites around the world simultaneously.

how the Write Ahead Logs work….


The first thing to happen is that the Write Ahead Log entry is written, before the actual database transaction is attempted. A transaction is either logged successfully, in which case the database transaction is recorded and then validated against the log entry by rechecking that the database state reflects the transaction as successfully recorded; or else the log is written but, upon rechecking the state of the database, the database does not reflect the transaction as successfully recorded. In this latter case the log is corrected, the database is rolled back to reverse any effects, and an error message is sent to notify ‘transaction unsuccessful’. The client must re-enter the transaction, or, for an automatic process, a human is notified of the error. Needless to say, this all happens very quickly.

These are the only two logical possibilities at a point of rupture: the transaction was successfully recorded, or it was not; and the Postgres WAL system covers both. If a log entry is not successfully written in the first instance, the transaction does not proceed and a similar error is generated. So all is safe, guaranteed by the Postgres system of Write Ahead Logs (WAL).
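The write-then-verify ordering described above can be sketched in a few lines. This is a simplified illustration of the write-ahead-logging idea only, not the actual Postgres implementation; the class and method names are hypothetical:

```python
# Simplified sketch of write-ahead logging: the log entry is written
# FIRST, then the database change is attempted and verified against it.
# Illustration only -- not Postgres internals.

class SimpleWAL:
    def __init__(self):
        self.log = []   # the write-ahead log (a list of entries)
        self.db = {}    # the "database" (a key-value store)

    def commit(self, key, value):
        # 1. Write the intent to the log BEFORE touching the database.
        entry = {"key": key, "value": value, "status": "pending"}
        self.log.append(entry)
        # 2. Attempt the database transaction.
        try:
            self.db[key] = value
        except Exception:
            pass  # fall through to the verification step below
        # 3. Re-check the database state against the log entry.
        if self.db.get(key) == value:
            entry["status"] = "committed"
            return True
        # 4. On mismatch: correct the log, roll back, report the error.
        entry["status"] = "failed"
        self.db.pop(key, None)
        raise RuntimeError("transaction unsuccessful - please re-enter")

wal = SimpleWAL()
wal.commit("invoice-1001", 250.00)
```

Either outcome leaves the log and the database consistent with each other, which is the property the recovery procedure relies on.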


Sequences of log archives and database “tarball” backups are stored at several offsite locations until the maximum allowed storage space is reached, at which point only the oldest archives are deleted, keeping the queues moving along. From these logs and tarballs the entire historically recorded database can be resurrected in the unlikely, but possible, event that every running copy of the database and the current backup copies are lost, such as through a virus or malware infection, a crash, or another exploit.
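The rolling retention described above (delete only the oldest archives once the storage cap is reached) might be sketched as follows; the cap and the file-naming scheme are illustrative assumptions, not our actual configuration:

```python
# Sketch: keep a rolling queue of backup archives, pruning only the
# oldest when the cap is exceeded. Cap and names are illustrative.
from collections import deque

MAX_ARCHIVES = 5  # stands in for "maximum allowed storage space"

def prune_archives(archives, max_archives=MAX_ARCHIVES):
    """archives: a deque of archive names, oldest first.
    Removes and returns the oldest entries beyond the cap."""
    removed = []
    while len(archives) > max_archives:
        removed.append(archives.popleft())  # delete the oldest only
    return removed

backups = deque(f"db-backup-{n:03d}.tar.gz" for n in range(8))
deleted = prune_archives(backups)
```

After pruning, the five most recent archives remain queued, so the recoverable history always extends as far back as the storage cap allows.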

In the case of a virus (or other malware or exploit) in the Linux hosting system, as referred to above, the backup ‘tarball’ archives are examined to discover the last clean backup copy. That copy becomes the basis for a slower reconstruction, completing the intervening transactions from the Write Ahead Logs after they are low-level digitally filtered to remove the known malware. The base system itself (i.e. the actual running Linux-based web server and database server) can be redeployed as a clean system instantly, or in the time it takes for the database to be safely and cleanly restored. The servers are very conveniently operated, and fresh, clean deployments are easy to perform, especially on modern containerised clouds.

Nevertheless, we would make as clear as possible the precise time from which records were lost. Records previously entered successfully could be completely lost for a window running from up to half an hour before the take-down right up to the rupture, depending on how close in time to the disruption the last WAL file backup was made. There would still be the all-but-entire history of the database in filterable Write Ahead Log archives, as well as tarball archives at several sites, together with relatively straightforward open-source-based programs for restoring a Postgres database (itself open source) from WAL archives and/or backup “tarballs”, and for filtering out malware. This includes all recorded historical data and structures.
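The low-level filtering step mentioned above can be illustrated as removing a known-bad byte pattern from a WAL byte stream before it is replayed against the restored database. This is a minimal sketch of the idea only: the sample pattern and data are invented, and a real filter would also have to repair record lengths and checksums:

```python
# Sketch: strip a known malware byte pattern from WAL data before
# replay. Illustrative only; pattern and data are invented examples.

def filter_wal_bytes(wal_data: bytes, malware_pattern: bytes) -> bytes:
    """Return the WAL byte stream with every occurrence of the
    known-bad bit/byte pattern removed."""
    return wal_data.replace(malware_pattern, b"")

# A toy "log" with a known-bad four-byte pattern embedded in it.
raw = b"BEGIN;INSERT...;" + b"\xde\xad\xbe\xef" + b"COMMIT;"
clean = filter_wal_bytes(raw, b"\xde\xad\xbe\xef")
```

The cleaned files are then applied in sequence to the database restored from the last clean tarball, rewriting transactions forward in time.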

Please note that such a sudden disruption (where up to half an hour’s work is lost completely) was envisaged in older times as an event such as “a truck crashing through the walls of the room housing your servers”, or worse, which appears less likely today with the clouds and multiple, secure, centralised data centres located across the world.


Having said all that, it is worth bearing in mind a quote from Wikipedia:

“There has not yet been a single widespread Linux virus or malware infection of the type that is common on Microsoft Windows; this is attributable generally to the malware’s lack of root access and fast updates to most Linux vulnerabilities”
Yeargin, Ray (July 2005). “The short life and hard times of a linux virus”

This is as true now as in 2005.

All Linux machines participate in networked update systems that protect against problems and known exploits. In addition, the whole basis of early Unix systems (from which Linux is descended as an operating system) was to work among students on a university network and still survive – and consider what some of those early “hackers” tried to do. The design they arrived at involves the so-called ‘root’ user, or superuser, who has total access and control over the entire system, together with a permissions system covering every file on the computer (a system is, in some senses, just a collection of files – some of them executable). Each file has allocated permissions: root has access to everything, while all other users are restricted to their own files.

Foreign code cannot access or alter protected files on a Unix system, because doing so requires ‘root’ permissions, which means logging in as root, which in turn requires that the current root password be known. In short, it cannot be done – the opposite of the case with Microsoft Windows, where foreign code has historically been able to obtain permission to run independently. The owners of root privileges in modern Linux systems, called “sudoers” (with their own actual user passwords), need only remember never to send an unencrypted password in any communication over the web – in a chat system, an unencrypted email, anywhere except at login – in order to keep their system safe. (If you want your users to run as safely, make the same “no passwords over the web” rule. Use a voice call, not SMS.) Because root owns all system files, including those essential for general running, and each system user owns their own files and settings, the computer is safe.
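The per-file permission model described above can be inspected directly from any file’s mode bits. A quick illustration using Python’s standard library (the file and mode chosen here are examples only):

```python
# Inspect Unix permission bits: every file has an owner plus
# read/write/execute permissions for owner, group, and everyone else.
import os
import stat
import tempfile

def describe(path):
    """Return the symbolic mode string (e.g. '-rw-r--r--')
    and the numeric user id of the file's owner."""
    st = os.stat(path)
    return stat.filemode(st.st_mode), st.st_uid

# Create a throwaway file and restrict it: owner may read/write,
# group may read, everyone else gets nothing.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o640)
mode, uid = describe(path)
os.unlink(path)
```

Here `mode` comes back as `-rw-r-----`: a process running as any other non-root user simply has no bit granting it access, which is the mechanism the paragraph above relies on.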


There remains, however, a threat that plagues enterprises: the internal threat posed by fraud. The criminal actions of a person or people otherwise authorised to use an ordering and financial system can be detected, and virtually eliminated, by imposing certain restrictions, checks and balances in a system. Thus organisations use internal auditing to verify transactions, in particular their appropriateness and authenticity. Even so, it has traditionally been the domain of certain technical employees and system administrators (superusers) to have access to the entire database. The use of BlockChains, an idea originating with Bitcoin, the “crypto-currency”, removes the possibility of anyone at all editing the database of transactions on the “blocks”. A financial transaction journal (for example) running on a BlockChain is thus completely immutable: corrections must be performed via the normal (internally and externally auditable) accounting processes.

Please note that these new enterprise-ready versions of BlockChain technology do not require the intensive hashing schemes used to “pay off” the effort of creating trusted block records, as is the case with BitCoin networks. This is because all our BlockChain members have verified, authenticated real identities, whereas anonymity is preserved in BitCoin transactions – hence the name “crypto-currency”. There are also other benefits of BlockChains, relating to ‘Smart Contracts’ and the capacity for automated multi-party validation of transactions. IT Cloud Solutions Australia uses IBM BlockChains.
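The immutability property comes from chaining each block to a cryptographic hash of its predecessor, so editing any past record invalidates every later link. A minimal hash-chain sketch of the idea (not IBM’s implementation, and the transactions are invented examples):

```python
# Minimal hash chain: each block records the hash of its predecessor,
# so altering any earlier transaction breaks every later link.
# A sketch of the principle only -- not an enterprise blockchain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})
    return chain

def verify(chain):
    """Re-hash every block and check each successor's back-link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, "pay supplier $500")
append_block(ledger, "receive $1200")
ok_before = verify(ledger)        # True: the chain is intact
ledger[0]["tx"] = "pay supplier $5"   # tampering with history...
ok_after = verify(ledger)         # ...is immediately detectable
```

Because no superuser can rewrite a block without breaking the chain, corrections have to be appended as new, auditable transactions, exactly as described above.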

IT Cloud Solutions Australia also employs other standard methods such as ensuring separation of Order and Requisitions functions from Accounts Payable functions, as well as other safeguards against internal fraud.

Mobile Device Security:
The security status of every mobile device linked to our systems (usually clients’ mobile phones and tablets) is monitored in real time by IBM’s MaaS360 security system.
This system manages mobile device security seamlessly, with software that includes its own advanced cognitive capacities, including the ability to separate users’ private data and software from their work data and software. Privacy is always respected.

Therefore we remain vigilant but confident.

In general, offsite data storage and cloud operations are neither particularly restrictive in terms of (authorised) accessibility nor expensive, and when done properly they are very safe and secure.
“One of the keys to Computer Security is to assume that every connecting computer is potentially hostile to your server.”

Our standard meets or exceeds ISO 27001 and ISO 27002 (Information Security Management Systems, including Best Practice Recommendations).


Thanks to:
  • Big Blue, (since the 1880’s)
  • the Unix Operating System (since it began to escape from AT&T’s Bell Laboratories in the early 1970’s)
  • and the Open Software Foundation (1988 – 1996), whose members helped set it free

  • the Free Software Foundation (since 1985)
  • and of course Linus Torvalds, who originally licensed and studied an educational version of the Unix operating system for PCs (or “microcomputers”) called “Minix”, obtained from Andrew S Tanenbaum in the form of a book with included source code – on floppy disks – published by Prentice Hall for US$69, and targeting the 1980’s IBM/Intel-XT Personal Computer architecture. (Unix was written originally for minicomputers and mainframes in networked multi-user environments.)
    On January 5, 1991 he purchased an Intel 80386-based (“80386” CPU or processor) IBM PC XT/AT “clone” computer, then obtained his MINIX copy, which in turn enabled him to begin work on Linux.
    He commenced work on Linux in mid-March, 1991 (see below “Tanenbaum” link).
  • MINIX:

    Relationship with Linux


    Early influence
    “…The design principles Tanenbaum applied to MINIX greatly influenced the design decisions Linus Torvalds applied in the creation of the Linux kernel…. Torvalds used and appreciated MINIX, but his design deviated from the MINIX architecture in significant ways, most notably by employing a monolithic kernel instead of a microkernel. This was disapproved of by Tanenbaum in the Tanenbaum–Torvalds debate. Tanenbaum explained again his rationale for using a microkernel in May 2006…”

    [Nevertheless Tanenbaum (see the above link) admits that the demand for performance from users of Linux outweighed the capacity of a microkernel system and militated in favour of developing a monolithic kernel, for practical reasons. The reasons for Tanenbaum’s preference for a “microkernel” lie in its security advantages. -Ed.]

    “..Early Linux kernel development was done on a MINIX host system, which led to early Linux inheriting various features from MINIX, such as the MINIX file system.

    Samizdat claims
    In May 2004, Kenneth Brown of the Alexis de Tocqueville Institution made the accusation that major parts of the Linux kernel had been copied from the MINIX codebase, in a book called Samizdat. These accusations were rebutted universally—most prominently by Andrew Tanenbaum himself, who strongly criticised Kenneth Brown and published a long rebuttal on his own personal Web site, also pointing out that Brown was funded by Microsoft.

    At the time of its original development, the license for MINIX was considered to be rather liberal. Its licensing fee was very small ($69) compared to those of other operating systems. Although Tanenbaum wished for MINIX to be as accessible as possible to students, his publisher was not prepared to offer material (such as the source code) that could be copied freely, so a restrictive license requiring a nominal fee (included in the price of Tanenbaum’s book) was applied as a compromise. This prevented the use of MINIX as the basis for a freely distributed software system.

    When free and open-source Unix-like operating systems such as Linux and 386BSD (386BSD is an ancestor of Apple’s MacOSX -Ed.) became available in the early 1990s, many volunteer software developers abandoned MINIX in favor of these. In April 2000, MINIX became free/open source software under a permissive free software license, but by this time other operating systems had surpassed its capabilities, and it remained primarily an operating system for students and hobbyists….” Wikipedia (see “MINIX” link above).

  • the Unix/Linux open-source ecosystem (since 1991),
  • all contributors, under the various open source based licences, past and present.

    and our own Risk Management Practices

    @IT CloudSolutions