[written 1998-06-04] Vernor Vinge has talked about a "Technological Singularity" as a threshold in our technological advancement beyond which the world gets too plain weird for us to comprehend. After discussing it with other people, it became clear to me that what Vinge was talking about was actually two different, albeit related, ideas: the epistemological Technological Singularity and the physical Technological Catastrophe. The difference is that between not knowing what is going to happen, and something really bad happening. He sort of wavers from one to the other in his treatise. The focus of my paper is on briefly outlining the forms the Technological Catastrophe might take, independent of a Technological Singularity.

The Technological Catastrophe can generally be described as a technological innovation which either deprives humanity of control of its environment, or at least exerts an irresistible force on humanity (for good or for ill), removing humanity from the position of universal domination for the first time in many thousands of years. Some examples of scenarios which could constitute the Catastrophe:

* The grey goo scenario -- microtechnology run amok, microscopic von Neumann machines pulling everything apart to make more von Neumann machines until the entire Earth's surface (and everyone and everything on it) is converted.

* The outbreak scenario -- genetically engineered bacteria or viruses getting loose into the environment and killing everyone.

* The Homo Superior scenario -- creating genetically engineered human beings with superior abilities against whom "normal" human kind cannot compete (qv the FAQ: "Would a superior being want to kill us?" The short answer is "Not necessarily, but that's beside the point").

* The Borg scenario -- mating human beings with cybernetic systems, creating an elite class of humanity against whom "normal" human kind cannot compete (note -- it can be argued that this has already happened to an extent; a college student without a home computer is at a disadvantage when competing for limited resources against a college student with one, and an engineer with access to a well-equipped workstation can max out any intelligence test).

* The Frankenstein scenario -- creating a superintelligent AI entity whose cognitive capabilities are as far beyond ours as ours are beyond an animal's; this scenario usually assumes that the AI is capable of self-direction.

In other words, the TC is presumably apocalyptic, either destroying humanity or placing it under the domination of a higher power (maybe benign, maybe not, either way taking humanity off the top of the food chain).

Even if one doesn't believe in cataclysms on the scale Vinge does, it is irrefutable that the advancement of technology will have a big impact on our society. Most of us are familiar with Moore's "Law" (more of a rule of thumb than a law) as it applies to computer technology, but less familiar with the similarly huge strides made in recent years in mechanical and biological technologies. Even by very conservative estimates, it is likely that we will have the technology necessary for implementing any one of the scenarios outlined above in no more than 20 years, possibly less. Even if these "oopses" are avoided, we can expect technology of this caliber to appear in our everyday tools -- our computers, our medicines, our automobiles, etc. By "no more than 20 years, possibly less", I mean possibly much, much less; the sketch just below works through the arithmetic.
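As a rough illustration of why "possibly much, much less" is plausible, here is a minimal back-of-the-envelope sketch in C of what sustained Moore's-Law doubling does over twenty years. The 18-month doubling period and the normalized 1998 baseline are illustrative assumptions on my part, not measurements:

    #include <stdio.h>
    #include <math.h>

    /* Back-of-the-envelope Moore's Law projection.
     * ASSUMED for illustration: capability doubles every 18 months,
     * starting from a normalized baseline of 1.0 in 1998. */
    int main(void)
    {
        const double doubling_years = 1.5;  /* assumed doubling period */
        int year;

        for (year = 0; year <= 20; year += 5) {
            double factor = pow(2.0, year / doubling_years);
            printf("1998 + %2d years: ~%6.0f x the 1998 baseline\n",
                   year, factor);
        }
        return 0;
    }

Twenty years at that pace is roughly a factor of ten thousand. The exact doubling period is a guess, but the conclusion -- that hardware on this scale arrives soon -- survives a lot of slop in the assumptions.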
Amdahl's Law states that the speedup a parallel system can achieve is limited by the fraction of the work that must be performed serially -- the dependent tasks that cannot be overlapped. We run into it all the time in computer architecture and in industrial engineering (eg, designing assembly lines). The more parallelism a system uses and the higher its per-operation latency, the more harshly it is limited by Amdahl's Law.

The human brain is an extremely highly-parallel, high-latency system; it has trillions of synapses and billions of neurons, which signal each other at a peak frequency of about 1000 Hz (1 ms latency). This signal activity is very nonuniform; different parts of the brain are active at different times. This latency is very high compared to that of modern microprocessors, which have exceeded 500 MHz operational frequencies (2 ns latency) and are rapidly approaching 1,000 MHz operational frequencies (1 ns latency). So depending on how well we can leverage the lower latency of microprocessor technology to solve the same kinds of problems the human brain solves with its high-latency nervous system, we might be able to create a computer system capable of doing what the human brain does which is very much "smaller" than the human brain in terms of operations of a given complexity per second. (A small worked example of the Amdahl arithmetic appears at the end of this note.) It is my suspicion that already existing 400-processor cluster systems might exceed the gross compute power needed to construct a super-intelligence, and that the missing ingredients are sufficient storage and the necessary software.

The storage problem might not be a problem much longer. The densities of secondary storage units (ie, hard drives) have been increasing at a rapidly accelerating rate: where storage devices were getting denser at a rate of 100% per year before, they are now getting denser by about 150% per year. In four years, multi-terabyte storage will be relatively inexpensive and readily obtainable (the second sketch at the end works through this compounding).

The software would seem to be the hardest part of the equation; there are many arguments (some of them very good ones, most of them not) that it is not possible at all to implement a sentient being in software. On the other hand, there are many enthusiastic programmers who are more optimistic and have some good ideas on how to go about implementing such a thing. The basic theory necessary (ie, what are thought, intelligence, and sentience in the first place?) is still being hammered out, however, and it is my belief that we are several years from seeing interesting software. The hardware will be available long before we know how to make it work. Vinge suggests that we might *accidentally* create a sentient being, but I find this very doubtful.

The future might look back at our ideas of the TC and laugh, as we laugh at Malthus, who observed that farm yields increase linearly while population grows exponentially, and predicted massive famine in the near future. Or maybe not. I hope it does laugh, though, and the TC never manifests. Benign or not, it would utterly destroy our current way of life. Maybe if Bill Gates takes over the world and all computers run Microsoft software, no computer in existence will be stable enough to support an AI for very long. :-) We can hope.
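Worked example #1, the Amdahl arithmetic mentioned above. This is a minimal sketch of the textbook formula, speedup = 1 / (s + (1 - s)/N) for serial fraction s on N parallel units; the particular serial fractions are illustrative assumptions, not measurements of any real workload (or brain):

    #include <stdio.h>

    /* Amdahl's Law: with serial fraction s and N parallel units,
     * speedup = 1 / (s + (1 - s) / N).
     * As N grows without bound, speedup is capped at 1/s: the
     * dependent (serial) tasks set the ceiling, no matter how
     * many slow units you add. */
    static double amdahl(double s, double n)
    {
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main(void)
    {
        const double s[] = { 0.1, 0.01, 0.001 };  /* assumed serial fractions */
        const double n[] = { 400.0, 1e6, 1e11 };  /* cluster ... brain-scale  */
        int i, j;

        for (i = 0; i < 3; i++)
            for (j = 0; j < 3; j++)
                printf("s=%.3f  N=%8.0e  speedup=%10.1f  (cap %.0f)\n",
                       s[i], n[j], amdahl(s[i], n[j]), 1.0 / s[i]);
        return 0;
    }

The cap of 1/s is the argument in miniature: on the serial portion, a brain-scale count of 1 ms units gains nothing over a single unit, and that is exactly where a 1 ns serial processor is six orders of magnitude faster.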
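Worked example #2, the storage compounding. The 150%-per-year density growth is the figure from the text; the 1998 baseline capacity is an assumption I picked purely for illustration:

    #include <stdio.h>

    /* Compound density growth at 150% per year, ie each year's
     * capacity is 2.5x the previous year's. The 10 GB 1998
     * baseline is an ASSUMED illustrative figure. */
    int main(void)
    {
        double gb = 10.0;   /* assumed 1998 commodity drive, GB */
        int year;

        for (year = 1998; year <= 2002; year++) {
            printf("%d: ~%.0f GB per drive\n", year, gb);
            gb *= 2.5;      /* +150% per year, per the text */
        }
        return 0;
    }

Four years of compounding is a factor of about 39, so a commodity drive lands in the hundreds of gigabytes by 2002 and a small array of them is comfortably multi-terabyte, which is the "relatively inexpensive and readily obtainable" range anticipated above.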