Networks .. and those who made them possible

On Computer Network Communication
There were Networks before the World Wide Web! (Please refer to History Of Computer Communication Networks (wiki).)

The following link is also a Wikipedia article: History Of The Internet. (wiki)


I should have just called it “web”.

Packets of Hypertext

transmitted across a web

ordered by the TCP protocol

with destinations mapped onto

the Domain Name System.

Tim Berners-Lee,
:- inventor of the World Wide Web -:

“HyperText is a way to link and access
information of various kinds as a web
of nodes in which the user can browse at
will. It provides a single user-interface
to large classes of information (reports,
notes, data-bases, computer documentation
and on-line help). We propose a simple
scheme incorporating servers already
available at CERN… A program which
provides access to the hypertext world
we call a browser… ”

Tim Berners-Lee, R. Cailliau.
12 November 1990, CERN

“I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and—ta-da!—the World Wide Web” [33]

However, it should be understood that, by the nature of Internet operations, unencrypted data packets may be “sniffed” in transit by sufficiently interested hackers, operators or authorities, and that modern JavaScript allows the collection of many kinds of network-behavioural and other information about people and devices. “Cookies”, the small records a web server asks our browser to store and send back, are themselves very powerful: the cookies we implicitly accept onto our devices when we connect to one webserver can communicate information to other, third-party servers. Despite the use of secure protocols such as the Secure Sockets Layer (SSL, and its successor TLS, the basis of “https://”), and a multitude of other defensive practices such as anti-virus and anti-malware software, firewalls and “tripwires”, the modern World Wide Web remains a very wild and dangerous place, brimming with schemes, strategies and tactics for taking advantage of unsuspecting users. For example, “IP spoofing” exploits the fact that the source address in the header section of any data packet may be altered, enabling Denial of Service attacks as well as device masquerading.

See Security for an explanation of the (entirely open-sourced) method Rong Chen has used to replace the Domain Name System with his own secured node-numbering system for intercommunication, backed by a blockchain for user and micro-component identity generation, storage and process recording. Rong’s system enables secure defence against external and internal threats, while the transactions on the blockchain (in personal or commercial systems) are kept honest automatically by all participants. You could refer to our Mobile First Development page or the BlockChains page for details of how ‘Trust’ is guaranteed in the system via the Bitcoin “miners”. Rong Chen’s Elastos Smart Web constitutes a safe world-wide “network operating system” of which devices (with their own operating systems and hardware components) are merely a part. Owners of devices retain all the usual capacities and software, but would largely restrict “work” to the Elastos system.
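To make the spoofing point concrete, here is a minimal Python sketch (an illustration only, using reserved documentation addresses rather than real hosts) of why forging is possible: the source address in an IPv4 header is just four bytes that the sender writes, and nothing in the packet itself proves they are honest.

```python
# Minimal sketch: the IPv4 source address is merely a field the sender fills in.
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header and report source/destination."""
    (version_ihl, _tos, _total_len, _ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,
        "src": socket.inet_ntoa(src),   # nothing here verifies honesty
        "dst": socket.inet_ntoa(dst),
    }

# Build a header claiming an arbitrary ("spoofed") source address.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5, 0, 20, 0, 0, 64, 6, 0,
    socket.inet_aton("203.0.113.7"),    # forged source (a documentation address)
    socket.inet_aton("198.51.100.1"),   # destination (also a documentation address)
)
print(parse_ipv4_header(header))
```

Only measures outside the packet, such as ingress filtering by network operators, can catch a forged source field.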

The history of communication on Earth is extensive. Every living thing has means of communication. Trees, insects, molluscs, echinoderms, flowers: the entirety of the plant and animal kingdoms communicates between and within individuals and communities of organisms, as they, and the cells which compose them, organise to feed, reproduce and defend themselves.

Some communication in nature involves the transfer of matter, some involves heat, some optical and auditory events, and some touch; a great deal is chemically based (genetic codes, hormonal codes, pheromone codes) or electro-chemically based (brain, nerves, sensory organs and muscle). Humans distinguish themselves by having developed telecommunication systems: from simple messages in the form of smoke signals, ambulatory verbal messages and message sticks, semaphores and written messages, to modern electronic systems such as telegraph, telephone, radio, radar, sonar, television, facsimile and now the internet.
Our capacity to utilise electronic networks for communication commences in Europe, at Königsberg, in the 18th century.
There is much barely-penetrable mathematical network theory, based around the foundations of network analysis as embodied in the Seven Bridges of Königsberg problem, resolved (in the negative) by Leonhard Euler in 1735 (Euler Circuits and Walks).
Euler paths and cycles are concerned with traversing every edge in a “graph” exactly once, without repeating. The vertices may be visited more than once (thus modelling the problem of crossing every bridge in Königsberg exactly once on a single ‘walk’).
By contrast, a Hamilton path is concerned with visiting every vertex in a “graph” exactly once, without repeating. Not every edge need be used (in this link both Euler and Hamilton approaches to graphs are explained).

In an Euler cycle, the walk ends where it began. In a Hamilton cycle, the path likewise ends where it began, such that the initial and final vertices are identical (the only repeated vertex allowed in a Hamilton cycle; a Hamilton path repeats none).

Notice that in Euler Paths we are looking at crossing (following) edges. In Hamilton Paths it’s vertices we are looking at.
The so-called Hamilton cycles are clearly complementary to the Bridges of Königsberg problem in network theory (see how the edge-based approach would suit an analysis of routes for a postal network, where each street must be walked once, whereas a vertex-based approach suits a travelling-salesman problem, where only individual addresses need to be visited).
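To illustrate that contrast in code, here is a small Python sketch (a toy model, not a routing algorithm): the degree-parity test for an Euler walk, applied to the Königsberg multigraph, beside a brute-force search for a Hamilton path.

```python
# Euler: a simple local test on vertex degrees decides existence (edge-focused).
# Hamilton: no such simple test is known; we brute-force orderings (vertex-focused).
from itertools import permutations

# The Königsberg multigraph: 4 land masses (A-D), 7 bridges (repeated edges allowed).
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]

def euler_walk_exists(edges) -> bool:
    """A connected graph has an Euler walk iff it has 0 or 2 odd-degree vertices."""
    degree: dict = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sum(d % 2 for d in degree.values()) in (0, 2)

def hamilton_path_exists(vertices, edges) -> bool:
    """Try every vertex ordering (fine for toy graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset(pair) in edge_set for pair in zip(order, order[1:]))
               for order in permutations(vertices))

print(euler_walk_exists(konigsberg))             # False: all four degrees are odd
print(hamilton_path_exists("ABCD", konigsberg))  # True: for example B-A-C-D
```

The asymmetry is the point: Euler’s question has a cheap local test (count the odd-degree vertices), whereas no efficient general test for Hamilton paths is known.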
Although often attributed to William Rowan Hamilton, Hamiltonian cycles in polyhedra had been studied a year earlier by Thomas Kirkman, who, in particular, gave an example of a polyhedron without Hamiltonian cycles.[1] Even earlier, Hamiltonian cycles and paths in the knight’s graph of the chessboard, the knight’s tour, had been studied in the 9th century in Indian mathematics by Rudrata, and around the same time in Islamic mathematics by al-Adli ar-Rumi. In 18th century Europe, knight’s tours were published by Abraham de Moivre and Leonhard Euler.[2] (See Hamilton Cycles and Paths)
So, since all human and computer-based communication involves the sharing of coded messages between many individuals and devices, which might be connected in a polyhedron-shaped electrical network surrounding the Earth, modern computerised networks have had to evolve from simple point-to-point connections which ran “bitstreams” between two points into today’s “packet-switching” networks. These convey packets of signals, in coded forms, according to a protocol (TCP/IP) expected, or “understood”, by every node (a “vertex”, if mapped as a graph) through which a packet passes as it undertakes its own journey to its own destination. The protocol ensures that each packet can be assembled in the correct order with the other, asynchronously arriving packets of the full transmission, and that the contents of the transmission can be decoded (HTML/JavaScript/CSS) and “read”, displayed or understood at the receiving end (identified as an address on the Domain Name System map).
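The ordering idea can be sketched in a few lines of Python (a toy model only: real TCP also handles loss, duplication, retransmission and flow control).

```python
# Toy sketch of packet reassembly: sequence numbers restore the original order.
import random

message = b"Packets of Hypertext, transmitted across a web."
CHUNK = 8

# "Packetise": tag each chunk with a sequence number before transmission.
packets = [(seq, message[i:i + CHUNK])
           for seq, i in enumerate(range(0, len(message), CHUNK))]

random.shuffle(packets)   # the network may deliver packets in any order

# Receiver: buffer by sequence number, then reassemble in order.
received = dict(packets)
reassembled = b"".join(received[seq] for seq in sorted(received))

assert reassembled == message
print(reassembled.decode())
```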
It is obviously a far cry from an analysis of “walks” (Euler walks and cycles) in a city divided by a river, with seven bridges and two islands in the river (Königsberg), through the converse properties of graphs (Hamilton paths and cycles), to end with a worldwide web of optoelectronic, electronic and wireless “packet-switching” networks, interlinked to successfully support the modern “hypertext”-based HTML browsers, and so much more.
A history of Computers as Machines needs to be considered alongside the history of Electrical and Electronic Networks themselves.
On Electricity
Some of the key people (Morse, Bell, Maxwell, Hertz, Marconi, Wiener and Shannon) involved in the development of electrical communications are mentioned here: Electrical Communication Networks. For those who followed the Maxwell link, it may be interesting to know that when you involve an understanding of Einstein’s theory of Special Relativity (1905) in an analysis of an oscillating electron (such as you get in a vertical “rod” antenna driven by a radio transmitter, for example), Maxwell’s equations may be deduced, but with the revelation of a specific relation between electricity and magnetism.
[The value of mathematical equations in physics is that they can be used to predict otherwise unforeseeable phenomena yet to be confirmed, and they also help encapsulate systems succinctly. Words, diagrams and pictures alone are inadequate to the tasks before physicists and engineers: they may help assemble and operate an electrical machine, for example, but they are insufficient to design one. Physics (and science generally) is phenomenology, and electricity is a phenomenon. Nevertheless, equations are not Reality itself; they merely represent the current state of scientific hypotheses in a field of enquiry. “All science depends on codes of many types. Reality can always outflank codes. We are only human. Ask any doctor or engineer.” 🙂 Ed.]
Electricity “is” all the things we know in the formats of words, diagrams, pictures, experiments, experiences and equations about the phenomenon called ‘Electricity’.
The underlying relativistic relationship means it is only necessary to specify the behaviour of an electric field (E) to completely determine the behaviour of what Maxwell had to treat as a separate force, magnetism (though related to electricity by Maxwell’s equations). The related magnetic field is usually denoted H (the field strength) or B (the flux density). That’s correct: electricity and magnetism are both parts of a single physical phenomenon, when understood in the light of relativity.
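For readers comfortable with the mathematics, the standard special-relativistic field transformations make this concrete. For an observer moving with velocity v, the components of E and B parallel and perpendicular to v transform as follows, so a purely electric field in one frame appears partly magnetic in another (a standard textbook result, stated here in SI units):

```latex
\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad
\mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},
\qquad
\mathbf{E}'_{\perp} = \gamma\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})_{\perp}, \qquad
\mathbf{B}'_{\perp} = \gamma\,\Big(\mathbf{B} - \frac{1}{c^{2}}\,\mathbf{v}\times\mathbf{E}\Big)_{\perp},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```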

Following the “Wu Experiment” (1956), Glashow, Salam and Weinberg later went further, helping physicists to understand that while Weak Nuclear interactions are distinct from Electromagnetic interactions at current (everyday) temperatures, at around 10¹⁵ K (conditions very close to the Big Bang) the Electromagnetic and Weak Nuclear forces become unified, in the so-called ‘Electroweak Force’.
The remaining Universal Forces are Gravitation and the Strong Nuclear Force.
Heat energy and mechanical forces (including sound, pressure, stress and the work done by a heat engine) are actually enabled only by the interaction of these fundamental forces (really four, at our temperatures: the Strong Nuclear Force, the Weak Nuclear Force, the Electromagnetic Force and Gravity), which support the atoms, crystals and molecules of matter, giving the appearance of large-scale surfaces and bodies of varying strengths and characteristics, or relying on the electrochemical properties of substances in reactions.
If we leave out Gravity, then we have the Strong Force which is confined to an electrically “positively charged” nucleus, binding neutral & positively charged particles very tightly (compared to the “negative” electron clouds, which are light and relatively distant from the nucleus). Outside of the Strong Force, whose region of influence is restricted to the tiny nuclei at the centre of atoms, all chemical, electrical, electro-chemical and mechanical properties of materials (atoms, ions, molecules and crystals) and inter-material forces are fundamentally electrical in nature. The polar nature of electricity determines a lot when it comes to the form of the equations governing the orbits of the negatively charged electrons around the positive nuclei, and the forces between molecules, atoms and crystals. The Gravitational force is comparatively weak with respect to Electricity on Earth, yet near a Black Hole, Gravity wins .. for a while ..
Astronomical history began with the formation of electrons and quarks during the Big Bang (about 13.8 billion years ago). There was then a phase (mysteriously) where the ‘Higgs Field’ came into effect, conferring mass on quarks and electrons (but not photons). Next came the formation of atoms and ions in galactic clouds, and then in stars, with quarks forming protons and neutrons in atomic nuclei, their electrons orbiting in strict patterns first uncovered by chemists. Also see the Quantum Chromodynamics of the Strong Nuclear Force, quantifying the way those galactic forces have managed to bring neutrons and protons together in the nuclei of atoms, with attracted electrons bound to them, so that most of the universe’s MASS has “condensed” from the ENERGY involved in the stellar-forced binding of quarks together to form protons and neutrons (ie where stellar forces bring matter into close enough proximity for the attractive Strong Nuclear Force to come into effect). The fundamental particles associated with the Strong Force are called gluons; the particle associated with conferring the masses of quarks and electrons is the Higgs boson. Gravitational waves have now been directly detected, but the “graviton”, the conjectured fundamental particle associated with gravity, remains unconfirmed.
Einstein’s scientific authority for bringing the new so-called “transformation metric” of a non-Euclidean space (a foundation of electricity as much as of relativity) into his theory of (universal-scale) physical reality, the famous 1/√(1 − v²/c²) factor, came only after Hertz’s confirmation of the existence of the radio waves that had been predicted originally by Maxwell himself. It also required the putting to rest of the concept of an ‘Aether’ as the medium which transmits light in the universe. Einstein was able to confidently assume that light (electromagnetic radiation) is conveyed directly along rays embedded (as it were) in space-time itself; there is no medium besides. This was conclusively demonstrated in the Michelson-Morley experiment.
“In physics, Lorentz transformations became [well] known [by Physicists] at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell’s equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant (as demonstrated in the Michelson-Morley experiment) between different inertial frames. They (the Lorentz Transformation Equations) relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x,y,z and time t, while in the other frame the same event has coordinates x′,y′,z′ and t′.” (See link above). The work done on what came to be known as Lorentz Transforms was crucial to Einstein’s ideas.
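For reference, the transformation quoted above, in the standard case of relative motion along the shared x-axis, reads:

```latex
x' = \gamma\,(x - vt), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma\Big(t - \frac{v\,x}{c^{2}}\Big), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

A light signal x = ct in one frame maps to x′ = ct′ in the other, which is exactly the invariance of the speed of light the quotation describes.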
Architects, (non-Electrical) Engineers and Builders work in Euclidean Space. Einstein, Lorentz (et al) broke that mould.
Light and Radio Waves, Photons, Black Bodies, Planck’s Experiment, Einstein’s ‘thought experiments’, Electrons, Quantum Mechanics, Phonons, Mass, Energy, Bosons and Gravitation
All of this opened the door to non-Euclidean spaces, previously mathematical curiosities only, when the discrepancies between Newton’s Euclidean foundations and scientific reality began appearing, at first here in electromagnetism (but also in astronomy and cosmology). From the early 1900s there were no “right angles”, and the parallel lines that never crossed according to Euclid’s Fifth Postulate of geometry now crossed, more like meridians of longitude on the Earth’s surface than parallels of latitude, except immersed in the cosmos rather than confined to the Earth’s surface. By the way, Einstein simply needed to apply two traditional conservation principles (although with a relativistic flavour) to a thought experiment in which a single photon collides with a mass at rest. By equating the energy before the collision (with the mass considered at rest, all the kinetic energy of the system under consideration is the photon’s) to the system’s energy after the collision, requiring the mass to be a perfect black body that absorbs the photon entirely, while also utilising the relativistic conservation of momentum principle, he is able to easily demonstrate that the total system energy (of the black body plus absorbed photon, now slightly “boosted”, moving, because of conservation of momentum and the fact that the photon has some momentum) is E = mc². The result sort of ‘drops out’ of the relativistic conservation principles when you solve the equations representing the two principles simultaneously. The same principles (conservation of momentum and energy) are applied in modern particle colliders, and in Newton’s mechanics there is an ancestral set of the same principles. The main point is to remark at the way relativity has unified electricity and magnetism (and, to a certain extent, the Weak Nuclear Force), as well as unifying our concepts of mass and energy. Euclid’s Fifth had to fall to make way for this new knowledge.
E & H are examples of dependent “vector fields”: at every point in a vector field there is a vector, a magnitude and a direction (here, the force on a suitable test body). Another example of a vector field is Earth’s familiar gravitational force field (pointing approximately to the centre of our planet). Fields of temperature, mass or energy values are, on the other hand, examples of “scalar” fields. Multi-dimensional “tensor fields” also exist in physics (eg mechanical stress in solids and viscous fluids, and the space-time curvature tensors of General Relativity).
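A tiny NumPy sketch (purely illustrative) of the distinction: a scalar field assigns one number to each point; taking its gradient produces a vector field, one magnitude-and-direction per point.

```python
# Scalar field (e.g. temperature) on a grid, and its gradient as a vector field.
import numpy as np

x = np.linspace(-1.0, 1.0, 5)
y = np.linspace(-1.0, 1.0, 5)
X, Y = np.meshgrid(x, y, indexing="ij")

T = np.exp(-(X**2 + Y**2))           # scalar field: one value per point

dT_dx, dT_dy = np.gradient(T, x, y)  # vector field: two components per point

# At each grid point we now have a vector (dT_dx, dT_dy) pointing "uphill" in T.
print(T.shape, dT_dx.shape, dT_dy.shape)   # (5, 5) for the field and each component
```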

.. Just for clarity, Special Relativity (1905) is concerned with reconciling physics to the space-time transformation ‘metric’ revealed by Hertz’s experimental verification of the existence of radio waves, predictable from Maxwell’s equations before him. A ‘metric’ here refers to finding a formulaic way we may consistently model relative velocities, and other physical properties, between two observers travelling separately in space-time, in “light” of the fact that there is really nowhere to be taken as a zero-velocity point, and that we may only consistently measure velocities relative to the local velocity of light (radio waves equally). Einstein discovered the metric inherent in Maxwell’s equations of electromagnetism (building on earlier work completed by Hendrik Lorentz) and argued that there can be only one metric in this universe, and that therefore Newton, a man who is rumoured to have walked out of his first opera, was wrong (in 1687) to assume a Euclidean ‘orthogonal’ space for physical reality, which naturally seemed to separate time from space. The metric of Maxwell’s equations revealed skewed axes in four dimensions where the upper limit to universal velocities is the speed of light. By progressing from this point, Einstein was able to unite our concepts of electricity and magnetism as well as uniting energy and mass. Our concepts of energy and mass were unified by Einstein applying the Conservation of Energy Principle and the Conservation of Momentum Principle (in special-relativistic form) to a collision (in a thought experiment) where a so-called black body absorbs a colliding photon fully. By equating the total energy and momentum prior to the collision with the total energy and momentum after, and utilising the Planck relation for the energy of a photon, E = hf, Albert was able to show with simple algebra that the total energy of that system post-collision is E = mc², where m is the mass of the black body (the photon has zero rest mass itself) and c is the velocity of light.
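That argument can be compressed into a few lines using the relativistic invariant m²c⁴ = E² − p²c² (a sketch of the absorption version described above, not Einstein’s original 1905 presentation, which considered emission):

```latex
% Photon (energy E = hf, momentum E/c) absorbed by a black body of mass M at rest.
% Conservation of total energy and momentum:
E_{\text{tot}} = Mc^{2} + E, \qquad p_{\text{tot}} = \frac{E}{c}.
% The invariant mass M' of the (now slightly moving) body afterwards satisfies
M'^{2}c^{4} = E_{\text{tot}}^{2} - p_{\text{tot}}^{2}c^{2} = M^{2}c^{4} + 2EMc^{2},
% so, for E \ll Mc^{2},
M' \approx M + \frac{E}{c^{2}}
\quad\Longrightarrow\quad
\Delta m = \frac{E}{c^{2}}, \qquad E = \Delta m\,c^{2}.
```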

Planck’s experiment, a few years after Hertz’s, revealed to Einstein an interpretation of results stating that light (and thus radio waves) is actually composed of light “particles”, or quanta, called photons. Albert deduced this publicly in a paper in 1905, giving birth to a field of research still continuing today, called quantum mechanics. As noted above, Planck’s experiment (1900) helped Einstein deduce the relationship between the energy and mass of a body by giving him the concept of a “photon”, which he invented himself to account for the results of Max Planck’s experiment. Due to the incompatibility of the classical wave-based theory of light with a particle-based theory at the time, the effort to find the linking equations between a new ‘quantum mechanics’ and the classical theory (adequate until Planck’s experiment) led scientists to a probabilistic theory which Einstein always disowned. Incidentally, Erwin Schrödinger, one of the inventors of quantum mechanics, also believed that a deterministic underlying continuous theory is possible in physics. The possibility that an event could happen simultaneously in many spaces is required for the theory to work. “Strings”?

The answer in any case seems to lie in the success Schrödinger had in 1926 (and Werner Heisenberg at much the same time) with an approach that replaced the classical value for total field energy (E) with the quantum value hf in the theoretical “work function” of a single-frequency (laser) light field. The basic experimental relation E = hf (energy of a photon = its frequency f multiplied by Planck’s constant h) is reliable and verifiable by anyone who wants to repeat Planck’s experiment. The result of Schrödinger’s substitution in the classical field equation gave only the quantum field equations for the particular case of a “geometrical optics” light field. Schrödinger had to uncover a partial differential equation (which would be the general quantum-mechanical wave equation) whose solution space allowed these “quantum-mechanical geometrical-optics” field equations as solutions, even though they were at that stage particular to a generalised (massless and unbound) photon field, not to other particles such as electrons. The solution had to approach the behaviour of the classical wave equation as the value of Planck’s constant is made to approach zero (this requirement is a way of mimicking the idea that energy in a classical light field is taken as independent of frequency). The equations required also had to provide solutions which closely match observed experimental results for other particles when applied as theoretical models of those particles in experiments.
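For reference, the equation Schrödinger arrived at (for a particle of mass m in a potential V), together with the classical “geometrical optics” limit recovered as Planck’s constant is taken to zero, is:

```latex
% Time-dependent Schr\"odinger equation, with E = hf = \hbar\omega:
i\hbar\,\frac{\partial \psi}{\partial t}
   = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi.
% Writing \psi = A\,e^{iS/\hbar} and letting \hbar \to 0 leaves the classical
% Hamilton--Jacobi equation, the "geometrical optics" limit described above:
\frac{\partial S}{\partial t} + \frac{(\nabla S)^{2}}{2m} + V = 0.
```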

In 1925-26, Heisenberg and then Schrödinger, independently, succeeded in finding different-looking formulations of the same theory (matrix mechanics and wave mechanics, later shown to be equivalent). The results showed the required tendency towards classical behaviour (say, at more human-sized scales) as Planck’s constant was forced towards zero (in the theory).

The theoretical Schrödinger and Heisenberg treatments also matched actual results from practical energy-absorption spectral experiments with hydrogen: in a model of the hydrogen atom, its single electron, as a wave-particle obeying the new wave equation, absorbs energy in stable quantum stages as predicted. This meant the quantum wave equation applied to electrons as well as photons (theoretically).

Then the wavelike nature of electron beams themselves was experimentally established when electron diffraction was observed (1927) by C.J. Davisson and L.H. Germer in New York and by G.P. Thomson in Aberdeen, Scotland, thus supporting an underlying principle of quantum mechanics, “wave-particle duality”.

The quantum-mechanical wave equation also applies, in the limit, to sound/pressure/stress waves, and thus there exist “phonons”, or stress/pressure particles, since the principles of energy quantisation apply equally to sound/pressure/stress energy. This includes both types of seismic wave: the direct compressional wave (the primary, or P-wave) and the shear waveform (the secondary, or S-wave, which arrives later), similar to the sonic boom and trailing shock occurring with supersonic objects (where the shear wave also arrives last). These waves (all sound/pressure and stress) are carried in the final result by phonons in rays spreading out from the source(s) at the speed of sound in the medium. (Incidentally, heat energy is also, at the submicroscopic level, a quantised phenomenon, being stored and transferred in the form of phonons no different to sound/pressure/stress waves. It’s all about matter vibrating with phonons.) The full classical treatment of mechanical waves involves the three-dimensional stress tensor T in a (Euclidean) space and time continuum. A tensor field in space has three perpendicular “normal” or “principal” stress values at a point and three corresponding “shear” stress values at the same point; there are thus six independent stress values per point in space in the tensor field. Shear corresponds to rotation or torsion (the S-wave); normal refers to tensile or compressive forces (the primary wave). The elements of the stress tensor at each point in the spatial field vary (“vibrate”) in time and space. A “solution” to a particular tensor wave-field differential equation (the particular classical wave equation in the medium or continuum) is required as a “function” specifying the six values of stress at every point in the spatial and temporal field. The treatment needed to involve and connect the molecular, atomic and sub-atomic levels (quantum vibrations and phonons) to the higher-level continuum-mechanics treatment (classical stress waves: think dynamic structural or fluid loading and forcing) is more complicated.
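A minimal Python sketch of the classical side of this story: the one-dimensional wave equation u_tt = c²·u_xx, stepped with centred finite differences (an illustrative toy, not a seismic or phonon model).

```python
# 1-D classical wave equation, leapfrog finite differences, fixed (reflecting) ends.
import numpy as np

L, N, c = 1.0, 200, 1.0              # domain length, grid points, wave speed
dx = L / (N - 1)
dt = 0.5 * dx / c                    # Courant number 0.5 < 1 keeps the scheme stable

x = np.linspace(0.0, L, N)
u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse in the middle
u = u_prev.copy()                          # zero initial velocity

r2 = (c * dt / dx) ** 2
for _ in range(500):
    u_next = np.zeros_like(u)              # ends stay 0: fixed boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(f"max amplitude after 500 steps: {u.max():.3f}")
```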

In the early days of quantum mechanics, everything was done in pseudo-Euclidean spaces (although involving “imaginary” numbers and “complex planes”); however, Paul Dirac was influential in pushing the boundaries towards reconciling General Relativity with quantum mechanics. Albert’s theory of General Relativity (1915) had gone further than his Special Theory: Special Relativity still rested upon a ‘flat’ or ‘inertial’ (non-accelerating) space-time cosmology, whereas the General Theory concerned itself with further revelations, now about gravitation: specifically, that it is related to the local magnitude-and-direction of curvature of ‘our’ space-time (pictured as a vector perpendicular, in space-time, to the tangent hyperplane on the curved surface of our universe at the local point to be measured) inside a “hyper-volume” (possibly a multiverse) of a larger number of dimensions (larger than four, but otherwise unspecified) which is outside our universe. Much of the reasoning about the nature of gravity and acceleration came down to the question of why a body’s inertial mass should be identical to its gravitational mass. In Newton’s terms: why should the ‘m’ in F = ma (where ‘a’ represents a real temporal rate of change of velocity) be the same as the ‘m’ in F = mg (where g represents the local gravitational field strength, giving a body’s “weight” in a field of gravity)? There is no doubt that the identity of the two “m”s is a reliable fact, so backtracking from truth to cause was what was called for. Another little human feat accomplished.

As a result of Einstein’s deliberations and reasoning, he was able to develop equations which accurately accounted for the anomalous precession of the planet Mercury’s orbit (unexplainable in a Euclidean space with Newtonian gravity alone), and to predict the amount by which light from a body passing behind the Sun (starlight, in the famous 1919 eclipse test) appears shifted, relative to what astronomers expected with no gravitational influence on the path of light rays, due to the curvature of space-time, and of the light rays in it, caused by the Sun. See Geodesics in General Relativity.
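For reference, the general-relativistic perihelion advance per orbit (for a planet of semi-major axis a and eccentricity e around a mass M☉) is the standard result:

```latex
\Delta\varphi = \frac{6\pi G M_{\odot}}{a\,(1 - e^{2})\,c^{2}}
% For Mercury this accumulates to roughly 43 arc-seconds per century,
% the residual that Newtonian perturbation theory could not explain.
```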

This actual curvature of space-time is caused by the presence of matter (such as the Earth, or the Sun, or a pencil, or a galaxy), and Einstein gave equations which accurately predict the behaviour of our solar system as well as of real galaxies, contrary to Newton’s inconsistent predictions. (Although Einstein was never good with pencils: they never weigh enough and they move too slowly. Moreover, it was Newton’s mechanics that enabled the development of the fundamental impulse/momentum equation of rocketry (refer to our page Computers as Machines regarding “the girls” who programmed the first computer, not using software, as it did not exist, and the solution they produced) and the now ubiquitous Finite Element Method of stress and strain analysis, for the Apollo rocketships that took men to the Moon successfully. In addition, any navigator since well before Newton would have been happy to plot the course to the Moon given the technology available in 1969, as it was done in Euclidean space, using regular timekeeping devices and astronomical maps based on observations little different from those of prior centuries.) Incidentally, there was a similar “zeitgeist” moment between Isaac Newton in England and Gottfried Leibniz in Germany (as, for example, between Schrödinger and Heisenberg), where both men appear to have invented the same ‘Calculus’ ideas at similar times, but developed them slightly differently. Actually, Leibniz’s formulation lends itself more readily than Newton’s to Finite Element Analysis, and to many other areas of physics and engineering, as it employs generalised co-ordinates from the outset.

However, much has happened in physics since publication of the General Theory of Relativity in 1915 .. starting with quantum mechanics, Schrödinger, Heisenberg, Dirac, and Stephen Hawking’s life devoted to reaching past Einstein (predicting the behaviour of Black Holes, whose existence is now confirmed, and even their “evaporation”, with the return of matter and “information” to this universe, a prediction still awaiting experimental confirmation!) .. go search .. and remember that although God may not play dice, people may be required to, in physics, because the human intellect needs a way to comprehend wave-particle duality and many other probabilistic phenomena. For example: how is the Higgs boson, the particle thought to be responsible for conferring “mass” on some sub-atomic objects (such as electrons and quarks), related to the universal curvature-of-space-time tensor in a generally-relativistic quantum mechanics? And how would a Higgs boson be related to gravitons, the conjectured (and still unconfirmed) quantum particles associated with gravitational disturbances, or gravity waves, such as those emitted by the action of pulsars?