Computers as Machines

.. and the People who made them possible

George Boole (1815-1864)

The English mathematician George Boole invented mathematical, or symbolic, logic and uncovered the algebraic structure of deductive logic, thereby reducing it to a branch of mathematics.

Alan Turing (1912-1954)

Turing is often called the father of modern computing. He was a brilliant mathematician and logician who developed the idea of the modern computer and of artificial intelligence. During the Second World War he worked for the Allies breaking the enemies’ codes, and Churchill said he shortened the war by two (up to four) years.

Although he did not manage all this alone (see Bomba, Bletchley Park, Bombe, Colossus), Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.[10] Despite these accomplishments, he was never fully recognised in his home country during his lifetime, due to his homosexuality, which was then a crime in the UK.

Sadly, in actions which must have been devastating to him, Alan was prosecuted in 1952 for homosexual acts; the Labouchere Amendment had made “gross indecency” a criminal offence in the UK. He accepted chemical castration ‘treatment’, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death to have been suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.

John von Neumann (1903 – 1957)

In 1945, the mathematician John von Neumann undertook a study of computation that demonstrated that a computer could have a simple, fixed structure yet be able to execute any kind of computation, given properly programmed control, without the need for hardware modification. Von Neumann contributed a new understanding of how practical, fast computers should be organised and built; these ideas, often referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted. The primary advances were twofold: a special type of machine instruction called a conditional control transfer, which permitted the program sequence to be interrupted and restarted at any point (similar to the system Babbage suggested for his Analytical Engine), and the storing of all program instructions together with data in the same memory unit, so that, when desired, instructions could be arithmetically modified in the same way as data. Thus, program was the same kind of thing as data.

The von Neumann architecture is a design model for a stored-program digital computer that uses a processing unit and a single, separate storage structure to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. Such computers implement a universal Turing machine and have a Sequential Architecture. The terms “von Neumann architecture” and “stored-program computer” are often used interchangeably.

A stored-program digital computer is one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers (eg von Neumann’s work on EDVAC, which led to a proposal by Turing to develop the Automatic Computing Engine, ACE) were an advance over the earliest computers of the 1940s, such as Colossus and ENIAC, which were programmed by setting switches and inserting patch leads to route data and control signals between the various functional units; their ‘programs’ were thus held separately from their data. In the vast majority of modern computers, the same memory is used for both data and program instructions.
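
To make the idea concrete, here is a minimal sketch (in Python) of a stored-program machine. The instruction set, the program and the memory layout are invented for illustration and do not correspond to EDVAC or any historical machine; the point is that program and data sit in one memory array, that the ‘JUMPZ’ instruction provides a conditional control transfer, and that a STORE could overwrite an instruction just as easily as a data word.

```python
# A toy stored-program machine, sketched for illustration only.
# The instruction set is invented; it is not EDVAC's or any real machine's.

def run(memory):
    pc = 0                       # program counter
    acc = 0                      # accumulator register
    while True:
        op, arg = memory[pc]     # fetch: instructions live in the same memory as data
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc    # a STORE could just as easily overwrite an instruction
        elif op == "JUMP":
            pc = arg             # unconditional control transfer
        elif op == "JUMPZ":
            if acc == 0:
                pc = arg         # conditional control transfer (the "primary advance" above)
        elif op == "HALT":
            return memory

# One memory holds both the program (cells 0-8) and the data (cells 9-12).
memory = [
    ("LOAD", 9),    # 0: acc <- counter
    ("JUMPZ", 8),   # 1: if counter == 0, jump to HALT
    ("ADD", 10),    # 2: acc <- counter - 1
    ("STORE", 9),   # 3: counter <- acc
    ("LOAD", 11),   # 4: acc <- total
    ("ADD", 12),    # 5: acc <- total + 5
    ("STORE", 11),  # 6: total <- acc
    ("JUMP", 0),    # 7: back to the top of the loop
    ("HALT", 0),    # 8: stop
    3,              # 9: counter (data)
    -1,             # 10: step (data)
    0,              # 11: total (data)
    5,              # 12: value added each pass (data)
]

print(run(memory)[11])   # prints 15 (5 added 3 times)
```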

System Design:

Sequential Machine Architecture

A fundamental type of computing device is the “Combinational” Machine. Basically, such a machine can only be programmed to output one permutation of its inputs per machine cycle: a set of bits or bytes as input will generate a set of bits or bytes of the same “length” in the output, just reordered according to the current instruction. This was essentially the first incarnation of a code-cracking machine at Bletchley Park during World War 2 (though the work began earlier, in Poland, with the Bomba in 1938). It could generate a full set of permutations of a coded message (in German), one per “instruction” (ie permutation of symbols), detecting and discarding along the way a great many keys that were a priori ‘impossible’. Effort also had to be made to reverse engineer the German encoding machines used on the various communication media of the time, by analysing their behaviour from the outside as ‘Black Boxes’, in an electro-mechanical strategic war of encryption and decryption.

Coding by the Germans basically relied on a scrambling of the letters of the message according to some key, and the key had to be known to decode the transmission. Initially, the only way the Allies had of unscrambling these codes was to work progressively through each permutation of the possibilities (the various mappings of the full set of symbols in German coded messages onto itself) until the output from the “Combinational” Machine gave a message which made sense in German. The key for the period was then confirmed by the hard evidence of a single decoded message that could be read and made sense in German: the key was simply the ‘instruction’ being executed at the moment the readable message was output.
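
As a greatly simplified sketch of this “work through each permutation until the output makes sense” idea (ignoring the real Enigma mechanism entirely), the following Python fragment tries every key of a toy shift cipher and keeps the candidate whose output contains a known ‘crib’ word. The cipher, the message and the crib are invented for illustration.

```python
# A greatly simplified sketch of key search by exhaustion.
# A Caesar-style shift stands in for the real cipher; message and crib are invented.
import string

ALPHABET = string.ascii_uppercase

def decode(ciphertext, key):
    """Undo a shift-by-key substitution (a stand-in for the real machine's permutation)."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) - key) % 26] if c in ALPHABET else c
        for c in ciphertext
    )

def crack(ciphertext, crib):
    """Try every key; accept the first output containing the expected 'crib' word."""
    for key in range(26):                      # one candidate "instruction" (permutation) per pass
        candidate = decode(ciphertext, key)
        if crib in candidate:                  # does the output "make sense"?
            return key, candidate
    return None, None

ciphertext = "DQJULII DP PRUJHQ"               # "ANGRIFF AM MORGEN" (German: "attack in the morning"), shifted by 3
key, plaintext = crack(ciphertext, crib="ANGRIFF")
print(key, plaintext)                          # 3 ANGRIFF AM MORGEN
```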

Working through these permutations of the letters in a message took a long time and was labour-intensive, because in its first incarnation (as Turing’s Bombe, first installed in 1940) the machine was “programmed” by flicking switches and inserting cables via plugs, by hand, to generate the different combinations; to say nothing of the multitude of human operators and translators (Bletchley Park employed up to 10,000 staff by war’s end) needed to pick out the successfully decoded messages.

After a centuries-long history of mechanical computing, the work of taking the “Turing Machine” from theory into practice was embodied in the next generation of machines: the British GPO’s Colossus in 1943, and ENIAC, completed in 1945 and built by J. Presper Eckert and John W. Mauchly. The design was improved upon by von Neumann, Eckert and Mauchly in EDVAC, which demonstrated, at the end of World War 2, the method by which a machine could store its own set of instructions electrically and step forward sequentially in time from one instruction to the next (while still allowing jumps to different positions in the instruction ‘stack’ on the fly), and which was, amazingly for the time, able to load its own “next” instruction from storage (memory) into a register and execute it.

The “inputs” here can be taken as those from the outside world plus the machine’s own program instructions. From the machine’s point of view, the outputs are a re-combination of these inputs, but with output-to-input feedforward included, such that outputs from previous cycles can be fed forward alongside future inputs to influence the next cycle. A sort of Combinational Machine that can step forward in time, with filterable feedforward of outputs into future steps alongside other inputs (especially the next instruction): a “Programmable Sequential Machine”.
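
Read abstractly, this “Programmable Sequential Machine” is just the familiar state-machine relation: the next state and the output are functions of the current state and the current inputs (the current instruction included), and the stored state is what carries previous outputs forward into later cycles. A sketch, with invented instruction names, in Python:

```python
# The "Programmable Sequential Machine" relation, sketched abstractly.
# The state, inputs and step function are invented placeholders.

def step(state, inputs):
    """One machine cycle: (state, inputs) -> (next_state, output)."""
    instruction = inputs["instruction"]        # the current instruction is just another input
    if instruction == "ACCUMULATE":
        next_state = state + inputs["value"]   # previous results carried forward via the state
    elif instruction == "RESET":
        next_state = 0
    else:
        next_state = state
    output = next_state                        # the output re-combines state and inputs
    return next_state, output

state = 0
for inputs in [{"instruction": "ACCUMULATE", "value": 2},
               {"instruction": "ACCUMULATE", "value": 5},
               {"instruction": "RESET", "value": 0}]:
    state, output = step(state, inputs)
    print(output)                              # 2, then 7, then 0: each cycle moves one state to the next
```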

From Hardware to Software:
Although ENIAC was a revolutionary machine, as one of the “girls” herself observed, “it was like the situation with Jet airplanes. The new machine was one thing as Hardware, but was nothing without newly developing pilots to fly it.”

The “pilots” referred to were a group of six elite women, the so-called “computers”. It was these girls who developed the first ever connections and processes necessary to actually solve computable problems on ENIAC, in a sink-or-swim effort towards the end of the war. They were the first programmers. The first ‘sort’ algorithm, for example (and so much more), was developed by these girls without any tools besides screwdrivers and the like, and especially without software tools, for “software” did not yet exist. They worked from logic diagrams and machine specifications, set switches and developed everything else themselves, including process specifications, to the point that the engineers found they could leave the “debugging” of ENIAC to the “girls”. The girls produced the solution for a ballistics trajectory, involving the complete solution of a specific differential equation, which ENIAC ran in only seconds; with a desktop calculator of the time it would have taken 40 hours of human labour and been prone to errors. Although it took time to develop the program, once developed it could solve for any particular ballistics trajectory simply by changing the initial and boundary conditions (target range, atmospheric conditions, drag characteristics, missile tare weight, etc).

The logic diagrams given to the girls were based on a ‘finite difference’ method. This made it possible to digitise (or ‘discretise’) the otherwise naturally continuous differential equation, so that a computer could closely approximate the solution for the trajectory, in time and space, including the impulse required and fuel load, using only logical and arithmetic operations on a fine ‘mesh’ of space and time co-ordinates between source and target.
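
The finite-difference idea can be sketched in a few lines: replace the continuous equations of motion with small discrete time steps and update position and velocity step by step using only arithmetic. The constants, the drag model and the step size below are invented for illustration and are far simpler than the firing-table calculations the ENIAC team actually ran.

```python
# A minimal finite-difference (forward Euler) sketch of a drag-affected trajectory.
# Constants, drag model and step size are illustrative, not the ENIAC firing-table model.
import math

g = 9.81          # gravity, m/s^2
k = 0.0005        # invented drag coefficient per unit mass, 1/m
dt = 0.01         # time step, s

def trajectory(speed, angle_deg):
    """Step the equations of motion on a discrete time mesh until the shell lands."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        # finite-difference update: new value = old value + rate * dt
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

rng, flight_time = trajectory(speed=300.0, angle_deg=45.0)
print(f"range ~ {rng:.0f} m, time of flight ~ {flight_time:.1f} s")
```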

For some time in computing there was no such thing as an Operating System. A program was a single process which entirely controlled (and had to control) everything necessary to get consistent outputs from the entire machine. A lot of behaviour was “hard-wired” into a computer, but everything not hard-wired had to be dealt with explicitly in the single-process machine code of the program (imaginable as 1s and 0s in storage addresses, just like the “data” the machine code dealt with).

Eventually (first in 1956, with General Motors as a customer running IBM hardware), and increasingly, there was the need to bind together all the memory elements and storage, and all the electronic, electromagnetic, electro-mechanical, optical and audio elements of a system, so that these elements could work properly together, taking the housekeeping duties away from users’ programs and allowing humans to better utilise computers. This, at the most basic level, is the role of an Operating System. At this juncture the distinction between Systems Programmers and Applications Programmers was created. Although the computer as a device relied historically on the development of the concepts around a Central Processing Unit, based on the requirements of Sequential Architecture, there are many “peripheral” devices that need to be included, to which some repetitive or separable activities can be delegated, eg in a “driver”: a keyboard, the modern mouse, speakers, a screen, random access memory, a hard disk drive controller, a graphics processor, a sound processor, serial ports, and so on. The Operating System “glues” these components together and presents an interface or interfaces to the outside world. It also generates and runs processes on behalf of users and keeps track of them all. In Unix and similar systems, processes can be bound to memory, devices, ports, pipes, queues, semaphores and sockets, for example.
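
As a small illustration of the operating system doing this housekeeping on a program’s behalf, the following Python fragment asks the OS to create a child process and connect to it through a pipe; the program never touches the scheduler, the disk controller or the terminal driver directly. The ‘echo’ command is an ordinary Unix utility, chosen here only for illustration.

```python
# The OS as "glue": creating a process and talking to it over a pipe.
# Uses only the Python standard library; "echo" is an ordinary Unix command.
import subprocess

# Ask the operating system to create a new process and wire up a pipe to it.
result = subprocess.run(
    ["echo", "hello from a child process"],
    capture_output=True,      # the OS routes the child's stdout back to us through a pipe
    text=True,
)
print(result.stdout.strip())  # the kernel scheduled the child, ran it, and delivered its output
```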

On top of the operating system are further ‘layers’ of software which, together with the operating system, cover everything a good housekeeping system has to do in a computer, from keeping track of the internal processes which are running, and communicating with the outside world safely, to performing useful computations and communicating with the user. It is worth noting that Sequential Machines (computers) can only perform one complete instruction cycle at a time per processor core, albeit very quickly, allowing rapid “interleaving” of multiple processes. Each instruction nevertheless brings the machine from a single previous state to a single next state, only giving the impression of total continuity (usually!). The operating system swaps between executing different processes so that all active processes are covered; this behaviour is called multi-tasking. Linux, macOS, Windows, iOS and Android are operating systems. At any one time a processor is either in one state or transitioning from its previous state to its next state. This is the situation that hardware and its resident operating system deal with continually, at ferocious speed.
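
The “interleaving” can be sketched with a toy round-robin scheduler: each task runs for one slice, then the next task gets the processor. Real operating systems pre-empt tasks on timer interrupts rather than waiting for them to yield; the tasks and the scheduler below are invented for illustration.

```python
# A toy cooperative round-robin scheduler: one instruction stream runs at a time,
# and rapid switching gives the impression of simultaneity.
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # give the processor back to the scheduler

ready = deque([task("A", 3), task("B", 2), task("C", 3)])
while ready:
    current = ready.popleft()      # pick the next ready task
    try:
        next(current)              # run it for one time slice
        ready.append(current)      # still has work: back of the queue
    except StopIteration:
        pass                       # task finished; drop it
```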

From Big to Small:
The Austro-Hungarian physicist Julius Edgar Lilienfeld first filed a patent on a solid-state amplification device in 1925 which, although it could not be built at the time, has since been shown to be a workable design. Apparently independently of this work, in 1947 a team under William Shockley at Bell Labs developed the first of two types of transistor (the Point Contact Transistor, followed by the Bipolar Junction Transistor in 1948). The now ubiquitous MOSFET (Metal Oxide Semiconductor Field Effect Transistor), which has become the most common electronic device in the world, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs, who fabricated the device in November 1959. These MOSFETs became the basis for “chips” of logic gate systems, providing in transistor technology the equivalent of the valve-based circuits of the earliest computers and underpinning the second and third generations of machines. Even a single transistor (before Integrated Circuits) was already much smaller and consumed far less power than a valve.
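
To connect the transistors to the “logic gate systems” mentioned above: a handful of MOSFETs can implement a NAND gate, and NAND gates alone are enough to compose every other Boolean function. The sketch below works at the level of truth values rather than voltages, so it is an abstraction of the logic, not a circuit description.

```python
# NAND as a universal gate: every other Boolean gate can be composed from it.
# This models truth values, not transistor voltages.

def NAND(a, b):
    return not (a and b)

def NOT(a):        return NAND(a, a)
def AND(a, b):     return NOT(NAND(a, b))
def OR(a, b):      return NAND(NOT(a), NOT(b))
def XOR(a, b):     return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """One-bit addition built from gates: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", tuple(int(x) for x in half_adder(a, b)))
```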

With the processes discovered for embedding connected parts (micro-transistors) into silicon wafers, and the relentless miniaturisation of those parts, the development of minicomputers, followed by the hobby microcomputers of the 1970s (with “Computers on a Chip”, and sometimes even audio cassette tapes acting as program and data storage), had begun.

The personal computer industry truly began in 1977, with the introduction of three preassembled mass-produced personal computers: Apple Computer, Inc.’s (now Apple Inc.) Apple II, the Tandy Radio Shack TRS-80, and the Commodore Business Machines Personal Electronic Transactor (PET).

This was made possible by major advances in semiconductor technology. In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor,[11] and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs.[12] The MOS integrated circuit was commercialized by RCA in 1964,[13] and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968.[14] Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971.[15] The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own.

In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.

Early personal computers‍—‌generally called microcomputers‍—‌were often sold in a kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers.

The Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use. The CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture[16] used in the original IBM PC and its descendants.[17] (Wikipedia)

More recently, with the development of the World Wide Web in 1990 (Tim Berners-Lee), has come the idea that “The Network IS the Computer” (for example the ‘Elastos’ system), recognising the modern state of connectivity and its limitations: it is far slower and less safe to communicate between devices than within one. It is within this market that we operate as SaaS (Software as a Service) providers, concerned with Networked High Level (Application) Software as opposed to System Level development. SaaS is a method of software delivery and licensing in which software is accessed online via a subscription, although parts of SaaS systems may be installed on customers’ devices.

Enter Quantum Computing:
In 2019 we finally arrived at the commercial birth of an entirely different type of computing, opened up by Quantum Computers.

The usual two-state system in normal computer registers or memory locations, each bit represented by a “1” or a “0”, is augmented in Quantum Computing: a quantum bit (qubit) can be placed in a superposition of both states at once. Very fast computations may be performed, making formerly intractable problems solvable, but only for certain classes of computational problem.
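
To give a flavour of that augmentation (via a classical simulation, not real quantum hardware): a qubit’s state is a vector of two complex amplitudes rather than a single 0 or 1, and gates are matrices acting on that vector. The sketch below, which assumes numpy is available, puts one qubit into an equal superposition with a Hadamard gate and samples measurement outcomes.

```python
# A one-qubit state-vector simulation: amplitudes instead of a plain 0 or 1.
# This is a classical simulation for illustration, not real quantum hardware.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the classical bit "0" as a quantum state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                               # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2             # Born rule: |amplitude|^2

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities, samples.mean())           # ~[0.5 0.5], sample mean near 0.5
```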

So, 70 years after IBM’s first Sequential Architecture mainframe offering in 1949 came the unveiling, in 2019, of IBM’s (and the world’s) first commercial Quantum Computer offering.

The Future, Artificial Intelligence & Quantum Computing:

There is much consternation from all quarters about the threats posed to humanity by AI. The only thing we can observe with certainty is that in any programming project the governance of coding is crucial. It will be even more crucial with AI, as will the laws, checks and balances that we will need to bring to bear on machines which will eventually outsmart us in raw intelligence. That does not have to mean they will outsmart us strategically, or from the point of view of a power advantage, unless we allow it. We should be alert to the dangers of potential or existing “centres of power” in control of AI systems, which only increases our responsibilities in Oversight and Governance. We need to be planning ahead .. since AI can drive weapons. Then consider the unknown capacities opened up by linking Quantum Computers with AI systems, in terms of the “services” a quantum computer might be able to offer a related AI system.