Computers as Machines

.. and the People who made them possible
George Boole (1815-1864)

The English mathematician George Boole invented mathematical, or symbolic, logic and uncovered the algebraic structure of deductive logic, thereby reducing it to a branch of mathematics.
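Boole’s algebra survives directly in every modern programming language as the Boolean operators. As a purely illustrative sketch, the snippet below checks one classic law of that algebra (De Morgan’s law) by exhaustive enumeration of the truth table:

    # Verify De Morgan's laws over every truth assignment. Boole's algebra
    # of logic is exactly what Python exposes as `and`, `or` and `not`.
    for p in (False, True):
        for q in (False, True):
            assert (not (p and q)) == ((not p) or (not q))
            assert (not (p or q)) == ((not p) and (not q))
    print("De Morgan's laws hold for all truth assignments")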


Alan Turing (1912-1954)

Alan Turing is often called the father of modern computing. He was a brilliant mathematician and logician who developed the idea of the modern computer and of artificial intelligence. During the Second World War he worked for the Allies breaking the enemies’ codes, and it has been estimated that this work shortened the war by two, and perhaps as many as four, years. (See Bletchley Park)

Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.[10] Despite these accomplishments, he was never fully recognised in his home country during his lifetime, due to his homosexuality, which was then a crime in the UK.
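To make the formalism concrete, here is a minimal sketch of a Turing machine in Python: a finite rule table drives a read/write head over an unbounded tape. The example machine (a unary incrementer) and its rule table are invented for illustration, not any particular machine of Turing’s:

    # A minimal Turing machine: a finite rule table drives a read/write
    # head over an unbounded tape. This example appends one '1' to a
    # unary number; it is illustrative, not a machine of Turing's.
    from collections import defaultdict

    # rules: (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
        ("scan", "_"): ("1", +1, "halt"),   # write one more 1, then halt
    }

    def run(tape_str, state="scan"):
        tape = defaultdict(lambda: "_", enumerate(tape_str))
        head = 0
        while state != "halt":
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).strip("_")

    print(run("111"))   # '1111': the unary number 3 incremented to 4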

Sadly, Turing was prosecuted in 1952 for homosexual acts, the Labouchere Amendment having mandated that “gross indecency” was a criminal offence in the UK. In what must have been a devastating ordeal, he accepted chemical castration ‘treatment’, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.

John von Neumann (1903 – 1957)

In 1945, the mathematician John von Neumann undertook a study of computation demonstrating that a computer could have a simple, fixed structure, yet be able to execute any kind of computation, given properly programmed control, without the need for hardware modification. Von Neumann contributed a new understanding of how practical, fast computers should be organised and built; these ideas, often referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted. The primary advance was twofold: a special type of machine instruction called the conditional control transfer, which permitted the program sequence to be interrupted and reinitiated at any point (similar to the system Babbage suggested for his Analytical Engine); and the storing of all program instructions together with data in the same memory unit, so that, when desired, instructions could be arithmetically modified in the same way as data. Program, in other words, took the same form as data.

The von Neumann architecture is a design model for a stored-program digital computer that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. Such computers implement a universal Turing machine and have a Sequential Architecture. The terms “von Neumann architecture” and “stored-program computer” are generally also used interchangeably.

A stored-program digital computer is one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers (e.g. von Neumann’s work on EDVAC, which prompted Turing’s proposal to develop the Automatic Computing Engine, ACE) were an advance over the program-controlled computers of the 1940s, such as Colossus and ENIAC, which were programmed by setting switches and inserting patch leads to route data and control signals between the various functional units. In the vast majority of modern computers, the same memory is used for both data and program instructions.
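As a minimal sketch of the stored-program idea, and assuming a made-up toy instruction set, the following puts instructions and data in one memory and uses a conditional control transfer to loop. Every opcode and address here is invented for illustration:

    # A toy stored-program machine: instructions and data share one
    # memory, and a conditional control transfer redirects the program
    # sequence. The instruction set is invented purely for illustration.
    memory = [
        ("load",  9),       # 0: acc = total
        ("add",   8),       # 1: acc = acc + value
        ("store", 9),       # 2: total = acc
        ("load", 10),       # 3: acc = counter
        ("add",  11),       # 4: acc = acc - 1 (adds the constant -1)
        ("store", 10),      # 5: counter = acc
        ("jump_pos", 0),    # 6: if acc > 0, jump back to address 0
        ("halt",  0),       # 7: stop
        5,                  # 8: data: the value being accumulated
        0,                  # 9: data: running total
        3,                  # 10: data: loop counter
        -1,                 # 11: data: the constant -1
    ]

    pc, acc = 0, 0                    # program counter and accumulator
    while True:
        op, addr = memory[pc]         # fetch from the same memory as data
        pc += 1
        if op == "halt":
            break
        elif op == "load":
            acc = memory[addr]
        elif op == "add":
            acc += memory[addr]
        elif op == "store":
            memory[addr] = acc
        elif op == "jump_pos" and acc > 0:
            pc = addr                 # the conditional control transfer

    print("5 x 3 by repeated addition:", memory[9])   # prints 15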

System Design:

Sequential Machine Architecture

A fundamental type of computing device is called a “Combinational” Machine: one whose outputs are a fixed function of its current inputs, with no memory of previous cycles. In the form described here, a set of bits or bytes presented as input generates an output set of the same “length”, reordered according to the current instruction. This was basically the first incarnation of a code-cracking machine at Bletchley Park during the Second World War: it could generate the full set of permutations of a coded (German) message, one per “instruction” (i.e. one per permutation of symbols).

Coding by the Germans basically relied on a scrambling of the letters of a message according to some key, and the key had to be known to decode the transmission. Initially the only way the Allies had of unscrambling these codes was to work progressively through each permutation of the possibilities (the various mappings of the full set of symbols in German coded messages onto itself) until the output from the “Combinational” Machine gave a message which made sense in German. The key for the period (which applied to many messages) was then recognised from the hard evidence of a single decoded message which could be read and made sense in German: the key was simply given by the ‘instruction’ being executed at the moment the readable message appeared.
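As a minimal sketch of that key search, substituting a simple shift cipher for the real Enigma mechanics, the snippet below tries every candidate key until the output contains a probable word (a ‘crib’). The cipher, message and test are all illustrative:

    # Brute-force key search, with a simple shift cipher standing in for
    # Enigma. Each candidate key plays the role of one "instruction", and
    # the "makes sense in German" test is reduced to spotting a crib word.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def decode(ciphertext, key):
        """Shift every letter back by `key` places (one candidate mapping)."""
        return "".join(
            ALPHABET[(ALPHABET.index(c) - key) % 26] if c in ALPHABET else c
            for c in ciphertext
        )

    # "ANGRIFF AM MORGEN" ("attack in the morning") shifted by 3:
    ciphertext = "DQJULII DP PRUJHQ"
    for key in range(26):             # work through every candidate key
        candidate = decode(ciphertext, key)
        if "ANGRIFF" in candidate:    # the probable-word ("crib") test
            print(f"key = {key}: {candidate}")
            break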

To work through these permutations of the letters in a message took a long time and was labour intensive, because in its first incarnation the machinery was “programmed” by flicking switches by hand to generate the different combinations. (A note on attribution: the electromechanical Bombe used against Enigma was designed by Turing, while Colossus, also so influential in shortening the war, was designed and built by the engineer Tommy Flowers; and Turing’s theoretical “Turing Machine” in fact predates both, having been published in 1936.)

After a centuries-long history of mechanical computing, the essence of the work on taking a “Turing Machine” from theory into practice was embodied in ENIAC, completed in 1945 and built by J. Presper Eckert and John W. Mauchly, and improved upon by von Neumann, Eckert and Mauchly in EDVAC. This demonstrated the method by which a machine could, at the end of the Second World War, store its own set of instructions electrically and step forward sequentially in time from one instruction to the next (while allowing jumps to different positions in the instruction ‘stack’ on the fly), and, amazingly at the time, load its own “next” instruction from storage (memory) into a register and execute it.

The “inputs” here can be taken to be those from the outside world plus the machine’s own program instructions, and the outputs are a recombination of these inputs, but with output-to-input feedback included, such that outputs from previous cycles can be fed forward alongside future inputs to influence future cycles. The result is a sort of Combinational Machine that can step forward in time, with filterable feedback of selected outputs put back into future steps alongside the other inputs: a “Programmable Sequential Machine”.
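A minimal sketch of that idea, assuming nothing beyond plain Python: a purely combinational function (with no memory of its own) is iterated, with a state ‘register’ feeding its output back in as an input on the next cycle. The 2-bit counter logic is illustrative:

    # A sequential machine built from a combinational core plus feedback:
    # the function below has no state of its own; the loop's "register"
    # feeds each output back in as the next cycle's input. The 2-bit
    # counter logic is purely illustrative.
    def combinational(state_bits, enable):
        """Pure function of current inputs: next state of a 2-bit counter."""
        b1, b0 = state_bits
        if not enable:
            return (b1, b0)              # hold the current state
        return (b1 ^ b0, not b0)         # binary increment, wrapping at 4

    state = (False, False)               # the "register": feedback memory
    for cycle in range(6):
        print(f"cycle {cycle}: state = {int(state[0])}{int(state[1])}")
        state = combinational(state, enable=True)   # output fed back in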


From Hardware to Software:
Although ENIAC was a revolutionary machine, as one of the “girls” themselves observed, “it was like the situation with jet airplanes. The new machine was one thing as hardware, but was nothing without newly developing pilots to fly it.” The “pilots” referred to were a group of six elite women, so-called “computers”. It was these women who, in a sink-or-swim effort towards the end of the war, developed the first ever connections and processes necessary to actually solve computable problems on ENIAC. They were the first programmers. The first ‘sort’ algorithm, for example (and so much more), was developed by them without any tools besides screwdrivers and such, and especially no software tools, for “software” did not exist. They used logic diagrams and machine specifications, and developed everything else themselves, including process specifications, such that the engineers found they could leave the “debugging” of ENIAC to them.

The women produced the solution for a ballistics trajectory, involving the complete solution of a specific differential equation, which ENIAC ran in only seconds; with a desktop calculator of the time the same work would have taken some 40 hours of human labour and been prone to errors. Although it took time to develop the program, once developed it could solve for any particular ballistics trajectory simply by changing the initial and boundary conditions (target range, atmospheric conditions, drag characteristics, missile tare weight, etc.). The logic diagrams given to the programmers were based on a ‘finite difference’ method. This made it possible to digitise (or ‘discretise’) the otherwise naturally continuous differential equation, making it possible for a computer to closely approximate the solution for the trajectory, in time and space, including the impulse required and fuel load, using only logical and arithmetic operations on a fine ‘mesh’ of space and time co-ordinates between source and target.
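As a minimal sketch of the finite-difference idea, not the historical ENIAC procedure, the snippet below steps a projectile forward through small discrete time intervals, so that the whole trajectory needs only arithmetic. The drag model and every parameter value are illustrative:

    # Finite-difference (Euler) integration of a trajectory: the
    # continuous equations of motion are replaced by small discrete time
    # steps, reducing the solution to pure arithmetic. The drag model and
    # all parameter values are illustrative, not the historical ones.
    import math

    def trajectory(v0, angle_deg, k=0.00005, g=9.81, dt=0.01):
        """Step a projectile forward in time until it hits the ground."""
        angle = math.radians(angle_deg)
        x, y = 0.0, 0.0
        vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
        t = 0.0
        while y >= 0.0:
            speed = math.hypot(vx, vy)
            # Discretised motion: gravity plus a simple quadratic
            # air-drag term opposing the velocity.
            vx -= k * speed * vx * dt
            vy -= (g + k * speed * vy) * dt
            x += vx * dt
            y += vy * dt
            t += dt
        return x, t

    range_m, flight_s = trajectory(v0=250.0, angle_deg=45.0)
    print(f"range ~ {range_m:.0f} m after ~ {flight_s:.1f} s")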

For some time in computing there was no such thing as an Operating System. A program was a single process which entirely controlled (and had to control) everything necessary to get consistent outputs from the entire machine. A lot of behaviour was “hard-wired” into a computer, but everything not hard-wired had to be coped with specifically in the single-process machine code of the computer program (imaginable as 1s and 0s in storage addresses, just like the “data” the machine code dealt with). Eventually and progressively there arose the need to bind together all the memory elements and storage, and all the electronic, electromagnetic, electro-mechanical, optical and audio elements of a system, so that the elements could work properly together, taking the housekeeping duties away from users’ programs and allowing humans to make better use of computers. This, at the most basic level, is the role of an Operating System, and at this juncture the distinction between Systems Programmers and Applications Programmers was created.

Although the computer as a device relied historically on the development of the concepts around a Central Processing Unit based on the requirements of Sequential Architecture, there are many “peripheral” devices that need to be included, to which some repetitive or separable activities can be delegated, e.g. in a “driver”: a keyboard, the modern mouse, speakers, a screen, random access memory, the hard disk drive controller, a graphics processor, a sound processor, serial ports, and so on. The Operating System “glues” these components together and presents an interface or interfaces to the outside world. It also generates and runs processes on behalf of users and keeps track of them all. Processes can be bound to memory, devices, ports, pipes, queues, semaphores and sockets, for example.
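As a toy illustration of the ‘driver’ idea, the sketch below shows an operating system addressing very different devices through one uniform interface, so that user programs never touch the hardware directly. All class and method names are invented; real driver models are far richer:

    # A toy "driver" model: the OS talks to very different devices
    # through one uniform interface. All names here are invented for
    # illustration only.
    from abc import ABC, abstractmethod

    class Driver(ABC):
        """Interface the OS expects every device driver to provide."""
        @abstractmethod
        def read(self) -> bytes: ...
        @abstractmethod
        def write(self, data: bytes) -> None: ...

    class KeyboardDriver(Driver):
        def read(self) -> bytes:
            return b"key press"       # stand-in for a hardware scan code
        def write(self, data: bytes) -> None:
            raise IOError("keyboard is input-only")

    class DiskDriver(Driver):
        def __init__(self):
            self.blocks = {}
        def read(self) -> bytes:
            return self.blocks.get(0, b"")
        def write(self, data: bytes) -> None:
            self.blocks[0] = data     # write to block 0 of the "disk"

    # The OS "glues" the devices together behind a device table:
    devices = {"keyboard": KeyboardDriver(), "disk": DiskDriver()}
    devices["disk"].write(devices["keyboard"].read())
    print(devices["disk"].read())     # b'key press'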

On top of the operating system are further ‘layers’ of software which, together with the operating system, cover everything a good housekeeping system has to do in a computer, from keeping track of the internal processes that are running and communicating safely with the outside world, to performing useful computations and communicating with the user. It is worth noting that Sequential Machines (computers) can only perform one complete instruction cycle at a time per processor, albeit very quickly, allowing rapid “interleaving” of multiple processes and giving the impression of total continuity (usually!). The operating system swaps between executing different processes so that all active processes are covered. This behaviour is called multi-tasking. Linux, macOS, Windows, iOS and Android are operating systems.
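A minimal sketch of that interleaving, with Python generators standing in for processes and a loop standing in for the scheduler; a real operating system is driven by hardware timer interrupts, not a polite loop:

    # Round-robin multi-tasking in miniature: each generator stands in
    # for a process, and the loop plays the scheduler, giving each
    # process one time slice in turn. Illustrative only.
    from collections import deque

    def process(name, steps):
        for i in range(steps):
            yield f"{name} step {i}"     # one "time slice" of work

    ready_queue = deque([process("A", 3), process("B", 2), process("C", 3)])

    while ready_queue:                   # cycle over all active processes
        proc = ready_queue.popleft()
        try:
            print(next(proc))            # run one slice
            ready_queue.append(proc)     # still runnable: back of the queue
        except StopIteration:
            pass                         # process finished: drop it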

More recently has come the idea that ‘THE NETWORK’ is THE COMPUTER (for example the ‘Elastos’ system), recognising the modern state of connectivity and its limitations, since it is far slower and less safe to communicate between devices than within one. It is within this market that we operate as SaaS Providers, concerned with Networked High Level (Application) Software as opposed to System Level development. (SaaS: a method of software delivery and licensing in which software is accessed online via a subscription, although parts of SaaS systems may be installed on customers’ devices.)

Enter Quantum Computing:
And we have now finally arrived at the commercial birth of the entirely different type of computing opened up by Quantum Computers!

The usual 2-state system in normal computer registers or memory locations, where each bit is either “1” or “0”, is augmented in Quantum Computing: a quantum bit (‘qubit’) can exist in a superposition of both states at once until it is measured. Very fast computations may be performed, making formerly intractable problems solvable, but only for certain classes of computational situations.
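As a minimal sketch of the qubit idea, using nothing but plain complex arithmetic: a qubit’s state is a pair of amplitudes for the ‘0’ and ‘1’ outcomes, and a Hadamard gate puts it into an equal superposition. This is a classical simulation for illustration only, not quantum hardware:

    # A qubit's state as a pair of complex amplitudes (alpha, beta) for
    # |0> and |1>; the squared magnitudes give measurement probabilities.
    # A classical simulation for illustration, not quantum hardware.
    import math

    def hadamard(alpha, beta):
        """Apply a Hadamard gate: maps |0> to an equal superposition."""
        s = 1 / math.sqrt(2)
        return s * (alpha + beta), s * (alpha - beta)

    alpha, beta = 1 + 0j, 0 + 0j         # start in the classical state |0>
    alpha, beta = hadamard(alpha, beta)  # now a superposition of |0> and |1>

    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    print(f"P(measure 0) = {p0:.2f}, P(measure 1) = {p1:.2f}")  # 0.50 each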

So, some seventy years after IBM’s first commercial Sequential Architecture mainframes of the early 1950s, comes the unveiling of IBM’s, and the world’s, first commercial Quantum Computing offering, the IBM Q System One, in 2019.

The Future, Artificial Intelligence & Quantum Computing:

There is much consternation from all quarters about the threats posed to humanity by AI. The only thing we can observe with certainty is that in any programming project the Governance of Coding is crucial. It will be even more crucial with AI, and with the laws, checks and balances that we will need to bring to bear on machines which will eventually outsmart us in intelligence terms. That does not have to mean they will outsmart us strategically, or from the point of view of a power advantage, unless we allow it. We should be alert to the dangers of potential or existing rogue “centres of power” in control of AI systems, which increases our responsibilities in Oversight and Governance. We need to be planning ahead, since AI can drive weapons. Then consider the unknown capacities opened up by linking Quantum Computers with AI Systems, in terms of the “services” a quantum computer might be able to offer a related AI system.