A long word about security….

Multiple layers of security ensure that all files are locked and encrypted and that authentication systems cannot be broken. A sheer volume of requests in a short period can cause downtime during an attack, but no data is ever at risk. Denial of Service is not the only threat we defend against, however. We run sophisticated virus and malware protection on our file and email servers, which face our Windows and Mac clients, protecting those clients against Windows and Mac viruses and malware. We also run protection that defends our own servers directly against viruses, malware and other exploits. In addition, all our data traffic on the web is encrypted in both directions (TLS/SSL).

In the very unlikely event of a major disruption (in physical, electrical/electronic, or software-code form) penetrating our defences, a recovery can always be staged: we restore the last known ‘clean’ tarball backup, then rewrite the database forward from that point in time using the sequence of Write Ahead Logs, recovering all data up to, at worst, within half an hour of the disruption. The specific bit/byte pattern of any malware code is filtered out and removed at a low level from the Write Ahead Log files before they are replayed.

Virus and malware elements that have been discovered are removed in this way before restoration: we start from the database restored from the last clean tarball backup (as above), then apply the cleaned (low-level digitally filtered) WAL files progressively from that point in time, re-writing transactions and re-filling the database with clean data to within, at most, half an hour before the final disruption.
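The restore-and-replay idea can be sketched in miniature. This is a toy Python model, not the actual PostgreSQL tooling (real point-in-time recovery uses pg_basebackup and archived WAL segments); all names and data here are illustrative:

```python
# Toy point-in-time recovery: restore the last clean snapshot, then
# replay the logged transactions recorded after that snapshot.
# Illustrative only -- real PostgreSQL PITR operates on WAL segment files.

def recover(snapshot, snapshot_time, wal_records):
    """Rebuild database state from a clean backup plus later WAL records."""
    db = dict(snapshot)  # start from the last known clean tarball backup
    for record in wal_records:
        if record["time"] <= snapshot_time:
            continue  # already contained in the snapshot
        db[record["key"]] = record["value"]  # re-apply the transaction
    return db

snapshot = {"acct_1": 100}  # state captured by the last clean backup
wal = [
    {"time": 1, "key": "acct_1", "value": 100},  # before the snapshot
    {"time": 2, "key": "acct_2", "value": 50},   # replayed
    {"time": 3, "key": "acct_1", "value": 75},   # replayed
]
restored = recover(snapshot, snapshot_time=1, wal_records=wal)
print(restored)  # {'acct_1': 75, 'acct_2': 50}
```

Everything logged after the backup is re-applied in order, which is why only the interval after the last archived WAL file (at most half an hour) can be lost.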

Our systems are fast, secure and completely scalable – on the spot and automatically. Entire systems can be redeployed quickly after being taken down. The databases are automatically and regularly backed up to other sites as tarball archives stored in the cloud, and the running databases use Write Ahead Logs (WAL) to ensure continuous short-term data safety, in the form of sequences of WAL files over time, in all but the most horrendous global scenarios. Our log files are written to, and housed at, several different sites around the world simultaneously.

How the Write Ahead Logs work….


A transaction is either logged successfully, in which case the database transaction is recorded and then validated against the log entry by rechecking that the database state reflects the transaction; or the log is written but, on rechecking, the database does not reflect the transaction as successfully recorded. In the latter case the log is corrected, the database is rolled back to reverse any effects, and an error message is sent to notify ‘transaction unsuccessful’. The client must re-enter the transaction, or, if it was an automatic process, a human is notified of the error. Needless to say, all of this happens very quickly. Notice that the first thing to happen is that the Write Ahead Log is written, before the actual database transaction is attempted.

These are the only two logical possibilities at a point of rupture – the transaction was successfully recorded, or it was not – and the Postgres WAL system covers both. If a log entry is not successfully written in the first instance, the transaction does not proceed and a similar error is generated. So all is safe, guaranteed by the Postgres system of Write Ahead Logs (WAL).
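The log-first, apply, recheck, roll-back sequence described above can be sketched as toy Python. PostgreSQL implements this internally; the function and field names here are purely illustrative:

```python
# Toy write-ahead log: the log entry is written BEFORE the database
# change is attempted, and the database state is then rechecked
# against it. On a mismatch the effects are reversed and the log
# corrected, mirroring the two outcomes described in the text.

def apply_with_wal(db, wal, key, value, fail=False):
    """Log first, then apply; roll back and mark the log on failure."""
    entry = {"key": key, "value": value, "status": "pending"}
    wal.append(entry)                 # 1. write the log entry first
    had_key, before = key in db, db.get(key)
    if not fail:                      # 'fail' simulates a rupture
        db[key] = value               # 2. attempt the database transaction
    if db.get(key) == value:          # 3. recheck the database state
        entry["status"] = "committed"
        return True
    if had_key:                       # roll back to reverse any effects
        db[key] = before
    else:
        db.pop(key, None)
    entry["status"] = "failed"        # correct the log
    return False                      # caller notifies 'transaction unsuccessful'

db, wal = {}, []
ok = apply_with_wal(db, wal, "acct_1", 100)
bad = apply_with_wal(db, wal, "acct_2", 50, fail=True)
print(ok, bad, db)  # True False {'acct_1': 100}
```

Because the log entry always exists before the database is touched, a recovery process can compare log and database after any rupture and know exactly which of the two outcomes occurred.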


We store sequences of log archives and database tarball backups at several offsite locations until the maximum allowed storage space is reached, at which point only the oldest archives are deleted, keeping the queues moving along. From these logs and tarballs the entire historically recorded database can be resurrected in the unlikely, but possible, event that every running copy of the database and every current backup copy is lost – for example through a virus or malware infection, a crash, or another exploit.
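The rolling retention queue at each offsite location can be sketched as follows; the capacity constant and archive names are illustrative stand-ins for a site's real storage limit:

```python
# Toy offsite archive retention: each site keeps a bounded queue of
# archives; once the maximum allowed storage is reached, only the
# oldest archive is deleted to make room for the newest.
from collections import deque

MAX_ARCHIVES = 3  # stand-in for a site's maximum allowed storage

def store_archive(site, archive):
    """Append the newest archive, evicting the oldest when full."""
    if len(site) == MAX_ARCHIVES:
        site.popleft()  # delete the oldest archive only
    site.append(archive)

site = deque()
for name in ["db-mon.tar", "db-tue.tar", "db-wed.tar", "db-thu.tar"]:
    store_archive(site, name)
print(list(site))  # ['db-tue.tar', 'db-wed.tar', 'db-thu.tar']
```

Running the same queue independently at several sites is what keeps the full recoverable history available even if one location is lost.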

In the case of a virus (or other malware or exploit) in the Linux hosting system, as referred to above, the backup tarball archives are examined to discover the last clean backup copy. That copy becomes the basis for a slower reconstruction, completing the intervening transactions from the Write Ahead Logs after they have been low-level digitally filtered to remove the known malware. The base system itself (ie the actual running Linux-based webserver and database server) can be redeployed as clean systems instantly, or in the time it takes for the database to be safely and cleanly restored. The servers are very conveniently operated, and fresh clean deployments are easy to perform, especially on modern containerised clouds.
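The low-level digital filter amounts to removing a known-bad byte signature from the archived log data before replay. This is a deliberately simplified sketch: the signature below is invented, and a real filter would also have to respect WAL record framing and checksums rather than edit raw bytes blindly:

```python
# Toy low-level filter: strip a known malware byte signature from a
# WAL archive before it is replayed. The signature here is purely
# illustrative; real signatures come from malware analysis.

SIGNATURE = b"\xde\xad\xbe\xef"  # hypothetical known-bad byte pattern

def clean_wal_bytes(data: bytes) -> bytes:
    """Remove every occurrence of the known signature from raw bytes."""
    return data.replace(SIGNATURE, b"")

infected = b"BEGIN;" + SIGNATURE + b"INSERT ...;COMMIT;"
cleaned = clean_wal_bytes(infected)
print(cleaned)  # b'BEGIN;INSERT ...;COMMIT;'
```

Filtering the archives first means the replay step only ever re-applies clean transactions to the restored database.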

Nevertheless, we would make the precise time from which records were lost as clear as possible. There could be a window in which records previously entered successfully are completely lost: records entered from up to half an hour before the take-down, right up to the rupture (depending on how close in time to the disruption the last WAL file backup was made). There would still be the all-but-entire history of the database in filterable Write Ahead Log archives and tarball archives at several sites, together with relatively straightforward open-source-based programs for restoring a Postgres database (itself open source) from WAL archives and/or backup tarballs, and for filtering out malware. This includes all recorded historical data and structures.

Please note that such a sudden disruption (where up to half an hour’s work is lost completely) was envisaged in earlier times as an event such as “a truck crashing through the walls of the room housing your servers”, or worse – an event that appears less likely today with clouds and multiple, secure, centralised data centres located across the world.

There remains, however, a threat which plagues enterprises: the internal threat posed by fraud. The criminal actions of a person or people otherwise authorised to use an ordering and financial system can be detected and virtually eliminated by imposing certain restrictions, checks and balances in a system, so organisations use internal auditing to verify transactions – in particular their appropriateness and authenticity. It has nonetheless remained the domain of certain technical employees and system administrators (superusers) to have access to the entire database. The use of blockchains, an idea originating with Bitcoin, the “crypto-currency”, removes the possibility of anyone at all editing the database of transactions on the blocks. A financial transaction journal running on a blockchain is thus completely immutable: corrections must be performed via the normal (internally and externally auditable) accounting processes. IT Cloud Solutions Australia employs IBM Blockchain.
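The immutability property comes from hash-chaining: each block commits to the hash of the previous one, so editing any past entry is immediately detectable. A minimal Python sketch of the idea (illustrative of the general blockchain concept, not of IBM's actual product):

```python
# Toy hash-chained journal: every block records the previous block's
# hash, so any edit to history breaks the chain on verification.
import hashlib
import json

def make_block(prev_hash, entry):
    """Create a block whose hash covers the entry and the previous hash."""
    body = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    return {"prev": prev_hash, "entry": entry,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and link; any edited entry is detected."""
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "entry": block["entry"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False  # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # chain linkage was broken
    return True

chain = [make_block("genesis", "debit $100")]
chain.append(make_block(chain[-1]["hash"], "credit $100"))
print(verify(chain))             # True  -- untampered journal
chain[0]["entry"] = "debit $1"   # a superuser attempts to edit history
print(verify(chain))             # False -- the edit is detected
```

No one, superuser or otherwise, can alter a past entry without the verification failing, which is why corrections must instead go through normal, auditable accounting entries.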



Having said all that, it is worth bearing in mind a quote from Wikipedia:

“There has not yet been a single widespread Linux virus or malware infection of the type that is common on Microsoft Windows; this is attributable generally to the malware’s lack of root access and fast updates to most Linux vulnerabilities”
Yeargin, Ray (July 2005). “The short life and hard times of a linux virus”

This is as true now as in 2005.

All Linux machines participate in networked update systems to protect against problems and known exploits. Moreover, the whole basis of the early Unix systems (from which Linux is descended as an operating system) was the ability to survive shared use by students on a university network – and consider what some of those early “hackers” tried to do. The design they arrived at involves the so-called ‘root’ user, or superuser, who has total access and control over the entire system, together with a permissions system covering every file on the computer (a system is, in some senses, just a collection of files – some of them executable). Each file has allocated permissions: root has access to everything, while all other users are restricted to their own files.

Foreign code therefore cannot simply access or alter the files on a Unix system: it would need ‘root’ permissions, which means logging in as root, which requires knowing the current root password. Foreign code cannot obtain permission independently to run on a Unix system – the opposite of the situation with Microsoft Windows. The owners of root privileges on modern Linux systems, called “sudoers” (with their own actual user passwords), need only remember never to send an unencrypted password in any communication over the web – in a chat system, an unencrypted email, really anywhere except at login – in order to keep their system safe. (If you want your users to run as safely, make the same “no passwords over the web” rule. Use a voice call, not SMS.) Because root owns all system files, including those essential for general running, and each user owns their own files and settings, the computer is safe.
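The access rule described above can be reduced to a small sketch. This is a simplification of the kernel's actual check (group and ‘other’ permission bits are omitted), with made-up user IDs for illustration:

```python
# Toy Unix permission check mirroring the rule in the text: root
# (uid 0) may access anything; otherwise the requesting uid must own
# the file and the owner's write bit must be set. Group and 'other'
# bits are omitted to keep the sketch short.

def may_write(uid, file_owner, owner_write_bit):
    """Return True if this uid may write the file under the toy rule."""
    if uid == 0:  # root has total access and control
        return True
    return uid == file_owner and owner_write_bit

print(may_write(0, 1000, False))    # True  -- root, regardless of bits
print(may_write(1000, 1000, True))  # True  -- owner with write permission
print(may_write(2000, 1000, True))  # False -- another user (or foreign code)
```

Foreign code runs, at best, as some non-root user, so under this rule it can never touch root-owned system files or other users' data.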

Mobile Device Security: the security status of every mobile device linked to our systems – usually clients’ mobile phones and tablets – is monitored in real time by IBM’s MaaS360 security system. MaaS360 manages mobile device security seamlessly, with software that includes its own advanced cognitive capacities, including the ability to separate users’ private data and software from their work data and software. Privacy is always respected.

Therefore we remain vigilant but confident.

In general, offsite data storage and cloud operations are neither particularly restrictive in terms of (authorised) accessibility, nor expensive – and they are very safe and secure when done properly.
“One of the keys to Computer Security is to assume that every connecting computer is potentially hostile.”

Our standard meets or exceeds ISO 27001 and ISO 27002 (Information Security Management Systems, including Best Practice Recommendations).


Thanks to: Big Blue (since the 1880s),

the Unix Operating System (since it began to escape from AT&T’s Bell Laboratories in the early 1970s),

and the Open Software Foundation (1988 – 1996), whose members helped set it free


the Unix/Linux ecosystem (since 1991),

and of course Linus Torvalds,

and all contributors, past and present.


and our own Risk Management Practices

@IT CloudSolutions