There’s a new strain of old thinking going around in the transhumanist/quantum-computing world called “reservoir computing.” Moore’s Law of shrinking transistors and growing computing power has lately smashed headlong into a brick wall as scientists work at the nanometer scale. Say goodbye to electrons bouncing around inside silicon chips, and say hello to a bucket of water:
(A team of German scientists) demonstrated that, after stimulating the water with mechanical probes, they could train a camera watching the water’s surface to read the distinctive ripple patterns that formed. They then worked out the calculation that linked the probe movements with the ripple pattern, and then used it to perform some simple logical operations. Fundamentally, the water itself was transforming the input from the probes into a useful output—and that is the great insight.[1]
Reservoir computing is based on the idea that stimulating a material and reading out its natural dynamical response can make the material itself do computational work, without carving it up into tiny purpose-built computing units:
Reservoir computers exploit the physical properties of a material in its natural state to do part of a computation. This contrasts with the current digital computing model of changing a material’s properties to perform computations. For example, to create modern microchips we alter the crystal structure of silicon. A reservoir computer could, in principle, be made from a piece of silicon (or any number of other materials) without these design modifications.[2] (emphasis added)
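To make the recipe concrete, here is a minimal, purely illustrative sketch (in Python/NumPy, not taken from the cited papers) of what the bucket experiment exemplifies: a fixed, untrained nonlinear system stands in for the physical medium, and only a simple linear readout is trained. All sizes, seeds, and variable names here are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs and target for a simple logical operation (XOR); a linear readout
# acting on the raw inputs alone cannot compute this.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# The "reservoir": a fixed random nonlinear mapping, standing in for the
# untouched physical medium (ripples in water, a slab of silicon, etc.).
# It is never trained or modified.
W_in = rng.normal(size=(2, 50))
b = rng.normal(size=50)
states = np.tanh(X @ W_in + b)   # high-dimensional "ripple pattern"

# Only the linear readout is trained, here by ordinary least squares.
W_out, *_ = np.linalg.lstsq(states, y, rcond=None)

print(np.round(states @ W_out))  # should print approximately [0. 1. 1. 0.]
```

The point mirrors the quote above: nothing inside the “reservoir” is designed or altered; the useful output is simply read off its natural response.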
Some abracadabra flim-flam is possibly going on here. In the world of quantum computing news there is always a problem with the noise-to-signal ratio: media hype (generated for funding purposes) versus what pans out as an actual breakthrough. Things like reservoir computing get boosted, but more often than not they turn out to be dead ends. The water experiment cited above dates from 2003. What has happened in the interim? Matthew Dale, the author of the article quoting it, cites his own January 2017 paper on the latest RC work. The truth is that most real technological advances of this kind occur in the dark, in a military program or a military-funded university lab far from the media spotlight, with an average lag time of ten years from the initial breakthrough to the public revelation (if it gets revealed at all). Years can go by before such discoveries find some kind of societal application (if they ever do). Only then do the military-corporate patent holders allow the breakthrough articles to hit the presses. Several years later, we begin to see their widespread application in the civil domain (the internet, of course, is the primo example).
Anyway, Dale discusses how reservoir computing parallels discoveries in current “global computation” models of the brain. The “wet” aspect (no pun intended) comes into the picture in another paper he cites:
The “input layer” couples the input signal into a non-linear dynamical system (for example, water or the kinetic movement of gases) that constitutes the “reservoir layer”. The internal variables of the dynamical system, also called “reservoir states”, provide a nonlinear mapping of the input into a high dimensional space. Finally the time-dependent output of the reservoir is computed in the “output layer” as a linear combination of the internal variables. The readout weights used to compute this linear combination are optimized so as to minimize the mean square error between the target and the output signal, leading to a simple and easy training process.[3] (clarification added)
What this amounts to is using the nonlinear movements of an analog phenomenon (a large-scale Newtonian nonlinear system such as the ripples in disturbed water) to perform computational work. Dale draws parallels with the brain’s “wet” environment and how it helps process the perception of, say, a light. Specific areas of the brain have been shown to process incoming visual signals, but they receive “computing” help from the entire “wet global workspace” of the brain's neurochemical soup.
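For the time-dependent version described in the quote from [3], a hedged toy sketch of an echo-state-style reservoir might look like the following. The reservoir size, driving signal, demo task, and scaling constants are all illustrative assumptions of mine, not anything drawn from Dale’s papers.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 2000                  # reservoir size and number of time steps (arbitrary)

u = np.sin(0.2 * np.arange(T))    # "input layer": a simple driving signal
target = u ** 3                   # a nonlinear function the readout should recover

# Fixed random couplings; the recurrent matrix is scaled down so the
# reservoir dynamics stay stable (a common echo-state heuristic).
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# "Reservoir layer": run the fixed nonlinear dynamical system and record
# its internal states -- these are never trained.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# "Output layer": linear readout weights fitted by least squares, i.e.
# minimizing the mean square error between output and target.
washout = 100                     # discard the initial transient
W_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)

prediction = states @ W_out
mse = np.mean((prediction[washout:] - target[washout:]) ** 2)
print(f"readout mean square error: {mse:.2e}")
```

The training step is exactly the “simple and easy” part the quote emphasizes: only the readout weights are fitted, while the dynamical system in the middle, whether water, gas, or a random matrix, is left as found.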
Parallel to this research, Randal A. Koene proposes “substrate-independent” pattern-copying of neural networks and, ostensibly, whole biological entities. This means that one could retain the core relationships of a pattern in space-time, such as a brain, but embody it in something other than a carbon-based form. With respect to the word “independent” in substrate independence, journalist Mark O’Connell writes:
This latter term, I read, was the ‘objective to be able to sustain person-specific functions of mind and experience in many different operational substrates besides the biological brain.’ And this, I further learned, was a process ‘analogous to that by which platform independent code can be compiled and run on many different computing platforms.’[4]
Koene’s work is funded by Russian millionaire Dmitri Itskov, who founded the 2045 Initiative, whose goal is “to create technologies enabling the transfer of an individual’s personality to a more advanced nonbiological carrier, and extending life, including to the point of immortality.”
If these avenues prove successful (and that’s a big if), couldn’t some other civilization that long ago discovered matter's computable properties already have “hacked” the space (the zero-point field and/or the dust and gases) between stars and planets to transmit information over vast distances at the speed of light? Could they send their own DNA (or “substrate-independent” copies of it) as coherent pulses of light across those distances? Or even “instructions” to build/grow vehicles from the elements found in the “dust” and gases in transit along the way, or at the beams' destination solar system? The idea of sending information via photon streams is not far-fetched, and has recently been developed theoretically to the point of being testable: https://www.livescience.com/61993-quantum-message-double-speed.html
Their first problem would be overcoming the entropy that would degrade the traveling luminal signal containing the “shipbuilding” information. Suppose an ET civilization around Alpha Centauri shot a massive series of photon beams (lasers) from its home planet, encoded with information, instructions folded within instructions, using DNA or its “photonic substrate equivalent” (I choose this uninhabited system simply because it's closest to us). Primary among its instructions is the maintenance of the microscopic nano-assembling units it will create upon reaching its destination. The beam is structurally designed to draw energy from the photon/electron streams emitted by the gas clouds it passes, to counteract its own tendency toward disintegration. Or perhaps it draws energy directly from the “quantum foam” of Planck-scale space, essentially recreating itself continuously as it moves along, like the cells that continuously replicate within a biological body.
As it nears its target, say a billion miles out from our sun, it begins to accumulate particles of interstellar and interplanetary dust and, as its mass increases, slows significantly. Upon arrival at our solar system, it would interact with the sun’s magnetic field. It would “stop.” Let’s say by this time it is the size of a baseball. Its first programmed task would be to gather enough stray material (gases, dust, particles) that its form (a large “dot” at this point) attains significant mass, much as planets are supposedly formed. This means it would need to induce an “eddy” of centripetal motion in the magnetic field to form a “core.” Simultaneous with this self-creation is the manufacture, as it grows in size, of nano-assembler units that function like microscopic “bees.” Over time the dot becomes a sphere, say a quarter the size of the Moon, and the “bees” grow into forms ranging from the microscopic to the size of a VW. Its instructions continue to unfold, the bees working away, differentiating the parts of the sphere’s chemical-metallic form just like the ontogeny of a living creature in the womb. The parts begin to function and interact like a large-scale machine. It creates for itself a power plant that works either like its transit method (drawing energy inherent in Planck space) or on solar or nuclear power, or all of these combined.
When its self-assembly is complete, it contains an “incubatorium.” Using records of its “parent” race’s DNA, it begins to fashion, from the atoms and molecules up, replicas of its biological parents. These beings are not alive in the sense we normally think of; they are essentially cyborg copies of their parents.
Or maybe the parents have decided to dispense with their biological form altogether and to create their surrogates as inchoate energy patterns capable of taking on any form, like the Organians on Star Trek. The biologically modeled “ship” and its “crew” now begin to investigate our solar system, continuously sending information back to Alpha Centauri at light speed, an operation that took a mere five-plus years, not the hundreds of thousands of years ET debunkers always say it would take for another civilization to reach us at subluminal speeds.
Now suppose this ET civilization shot millions of these beam-clusters into space, in all directions, towards every star system containing "M-class" planets. It would have AI outposts all over the galaxy.
This is one far-out-there hypothetical scenario. But doesn’t the information-only “trans-life” substrate hypothesis that scientists like Koene are working on imply that signs of, or representatives of, extraterrestrial intelligences could be around us in a myriad of camouflaged forms and we wouldn’t know it? An aggressively symmetrical tree? A strange meteor? A weird patch of fog? A quivering blade of grass? The octopus?
[1] https://theconversation.com/theres-a-way-to-turn-almost-any-object-into-... citing Fernando C., Sojakka S. (2003) Pattern Recognition in a Bucket. In: Banzhaf W., Ziegler J., Christaller T., Dittrich P., Kim J.T. (eds) Advances in Artificial Life. ECAL 2003. Lecture Notes in Computer Science, vol 2801. Springer, Berlin, Heidelberg
[2] Ibid.