Bill Joy On Extinction of Humans
Science Posted by Hemos on Sunday March 12, @11:22AM
from the sufficiently-advanced-technology dept.
e3 writes "The Washington Post is running a provocative article in which Bill Joy is quoted as '...essentially agreeing, to his horror, with a core argument of the Unabomber, Theodore Kaczynski -- that advanced technology poses a threat to the human species.'" As it stands, the title sounds sensationalistic -- but read the article and think about the point he's trying to make. Bill Joy's a pretty level-headed guy, and I think we need to consider these issues /now/ so that they don't come true.

  • "Bill Joy On Extinction of Humans" | 335 comments
    Wired (Score:1)
    by -ryan (me at ryanmarsh dot com) on Sunday March 12, @11:27AM EST (#1)
    (User Info) http://www.ryanmarsh.com
    There is a great article in the latest issue of Wired covering Bill and this interesting topic.

    -ryan

    "Any way you look at it, all the information that a person accumulates in a lifetime is just a drop in the bucket."
    -- Bateau / Ghost In The Shell

    [ Reply to This | Parent ]
    Wired (Score:1)
    by -ryan (me at ryanmarsh dot com) on Sunday March 12, @11:35AM EST (#9)
    (User Info) http://www.ryanmarsh.com
    BTW: There are also a few pages from CmdrTaco's diary in there (pg 128)....

    It's always been my dream to be featured in Wired. Rob, you are my hero! One day I'll be in there, .... oh yes, one day.. I WILL be in there....

    -ryan


    [ Reply to This | Parent ]
    Re:Wired (Score:1)
    by Tuxedo Mask on Sunday March 12, @12:18PM EST (#67)
    (User Info)

    There is a great article in the latest issue of Wired covering Bill and this interesting topic.

    Actually, that article is what the article that this article is about is about! :-)


    [ Reply to This | Parent ]
    ya got me there! (Score:0, Troll)
    by -ryan (me at ryanmarsh dot com) on Sunday March 12, @01:12PM EST (#117)
    (User Info) http://www.ryanmarsh.com
    ya got me there!

    [ Reply to This | Parent ]
    Bill Joy, God of vi! (Score:0)
    by Anonymous Coward on Sunday March 12, @12:22PM EST (#75)
    Eat flaming death emacs scum!
    [ Reply to This | Parent ]
    Re:Bill Joy, God of vi! (Score:0)
    by Anonymous Coward on Sunday March 12, @01:17PM EST (#121)
    Vi that's one nasty thing, it WILL leed to death of childern. EMACS is true..
    [ Reply to This | Parent ]
    "Scientific Advances" - What a joke. (Score:0)
    by Anonymous Coward on Sunday March 12, @12:24PM EST (#78)
    We still cannot cure the common cold.
    We still don't have a cure for AIDS or cancer.
    We still use operating systems with roots in the 1960s.
    Artificial intelligence and nanotechnology are going nowhere fast.

    What a joke. We haven't progressed as much technologically as he thinks we have. I guess to a person who thinks Java is revolutionary...
    [ Reply to This | Parent ]
    Re:"Scientific Advances" - What a joke. (Score:0)
    by Anonymous Coward on Sunday March 12, @05:55PM EST (#273)
    If you think that AI is going nowhere, then you're misinformed. It's true that most of the stuff you read in AI textbooks (rule-based approaches) won't get you anywhere towards real intelligence. But huge advances are being made now in the long neglected field of neural networking. We're very close to having working mathematical models for implementing human level intelligence. You won't find this in any books written today. You'll find some amount in papers. But some of the really amazing discoveries aren't even ready for publication yet (and of course I can't disclose that information). It turns out that we don't even have to be able to understand all the nitty-gritty details of the human brain. We now know almost all we need to know to be able to implement intelligent systems. In a few years, some of this technology will be ready for commercial application. The goal of passing the Turing Test is not as far away as one might think.
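    For readers unfamiliar with the neural-network approach the parent invokes, here is a minimal sketch -- a single perceptron learning the AND function in plain Python. All names and parameters here are illustrative; this is the 1950s-era building block of the field, not the unpublished state of the art the poster alludes to.

```python
# A single perceptron learning boolean AND -- the most basic
# neural-network building block. Names and parameters are
# illustrative only, not drawn from any particular system.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1       # classic perceptron update rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
         for (x1, x2), _ in AND]
```

    After training, `preds` reproduces the AND truth table. The perceptron convergence theorem guarantees this for any linearly separable function; the hard part of AI, as other comments in this thread note, is everything this toy leaves out.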
    [ Reply to This | Parent ]
    Idiot Hemos (Score:0)
    by Anonymous Coward on Sunday March 12, @01:19PM EST (#123)
    "we need to consider these issues /now/ so that they don't come true"

    Are you really serious? When our Sun [celestial body in this case] goes through its life cycle, as it must, we are all toast anyway. Why panic about the inevitable?

    And so what if we all end up like in The Matrix? Neo will come and save us all.
    [ Reply to This | Parent ]
    Re:Idiot Hemos (Score:1)
    by workingman on Sunday March 12, @09:24PM EST (#309)
    (User Info)
    I would hope that by the time our sun finally finishes its life cycle we as a race are long gone from this solar system.
    [ Reply to This | Parent ]
    Re:Wired (Score:1)
    by noahb on Sunday March 12, @02:45PM EST (#187)
    (User Info)
    It seems inevitable that eventually we will create something that 'replaces' us. The difference is that I don't see this as a bad thing. Evolution is amazing, and is responsible for creating humans, but ultimately slow and unreliable. All species are able to adapt to their environment. It is generally accepted that the species with the best survival advantage are those that are able to quickly adapt to environmental changes, and able to adapt in the most flexible ways. Evolution is one way that species adapt over long periods of time, and species are also able to adapt without an evolutionary design change. The human body is able to quickly adapt to different diets depending on what food is available, for example.
    But evolution is a kind of random generate-and-test algorithm. Advances in technology are directed, intentional, and much more efficient. Eventually human life (as we know it) will not be able to compete with a system (that we created) that is able to directly modify and improve its own design.
    The idea that this system will be purely mechanical, computerized, or nano-tech based seems a bit far-fetched, at least for the immediate future. Far more likely to replace us is a hybrid genetically modified version of ourselves, combined with mechanical, computer, and nano-technology.
    You get genetic information from your parents, but you don't directly get the experience or knowledge from them -- you have to relearn everything from scratch. Computers and robots don't have this limitation.
    If the planet were to see dramatic environmental changes, either due to technology, pollution, etc., or due to a natural but radical event, the human race would not be able to adapt quickly enough. But a self-modifying organism that is able to directly and purposefully modify its own (genetic) design will have the ultimate ability to adapt, and therefore the ultimate chance of survival.
    It is silly to think that the human race will last forever, given the mere blip of time that we have existed in the history of life on this planet, and given that virtually all species eventually become extinct. (Not to mention that we are, generally speaking, resistant to radical change.)
    I look at technology as the natural evolutionary next step, from a random, inefficient process to a directed, efficient one. The ability for improvements (adaptations) to be implemented in a single generation instead of over millions of years will make evolution obsolete.
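    The "random generate-and-test algorithm" described above can be made concrete with a toy hill climber: mutate at random, keep the variant if it is no worse. Everything here (the string target, the fitness function, the seed) is a made-up illustration, not a model of real evolution:

```python
import random

# Toy "generate-and-test": random mutation, kept only when the
# candidate is at least as fit -- a crude caricature of the
# evolutionary process described above. Target and fitness are
# invented for illustration.
def evolve(target, alphabet="abcdefghijklmnopqrstuvwxyz ", seed=0):
    rng = random.Random(seed)
    current = [rng.choice(alphabet) for _ in target]
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    steps = 0
    while fitness(current) < len(target):
        candidate = current[:]
        # generate: mutate one random position
        candidate[rng.randrange(len(target))] = rng.choice(alphabet)
        # test: keep only non-worse variants
        if fitness(candidate) >= fitness(current):
            current = candidate
        steps += 1
    return "".join(current), steps

result, steps = evolve("adapt")
```

    Even on this tiny problem the random search takes many generate-and-test steps to hit a five-letter target -- a glimpse of why the poster calls undirected evolution inefficient next to directed design.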

    [ Reply to This | Parent ]
    Re:Wired (Score:0)
    by Anonymous Coward on Sunday March 12, @10:51PM EST (#319)
    Well said and I agree. Humans are always making babies and then freaking out when they arrive.

    [ Reply to This | Parent ]
    Goddamn (Score:0)
    by Anonymous Coward on Sunday March 12, @02:47PM EST (#192)
    And this guy is a BILLIONAIRE???????
    [ Reply to This | Parent ]
    In response to noahb (Score:1)
    by saBBath on Sunday March 12, @11:25PM EST (#327)
    (User Info)
    As you read this, please don't lose sight of the fact that it is the morality of the issue we're discussing here. I don't see how you can view technological advances as a step in the evolutionary progression. Evolution happens in accordance with natural laws. Technological progress may be analogous to Darwinian evolution, but it is not the same thing, because it involves intelligent, structured, and conscious development of things. Another thing we must not forget is that humans are not the only species on this planet (although we often act like it) and are not the only ones to suffer the consequences of a possible disaster. Perhaps due to our short-sightedness we humans deserve to go extinct. But if we fall, do we have the moral right to take down the rest of this planet's life along with us? I also disagree with the comment that humans are able to adjust their diet based on the food that's available. That simply isn't true. Humans need basic nutrients (like protein, vitamins, and certain minerals) to survive, or at least to stay in good health. In summary, I tend to think of technological progress as a disruption, not a continuation, of evolution. I also view it as something quite unnatural (by "natural" I mean without the intervention of human or other intellect).
    [ Reply to This | Parent ]
    Re:Wired (Score:0)
    by Anonymous Coward on Sunday March 12, @11:44AM EST (#30)
    the next time you want to post something anonymously, i suggest you take out your signature. -notryan
    [ Reply to This | Parent ]
    Re:Wired (Score:0)
    by Anonymous Coward on Sunday March 12, @12:19PM EST (#70)
    oops, hit submit by mistake. i meant to say, 'i'm a stupid idiot.' -notryan
    [ Reply to This | Parent ]
    1 ph33r j00 (Score:1)
    by -ryan (me at ryanmarsh dot com) on Sunday March 12, @01:31PM EST (#132)
    (User Info) http://www.ryanmarsh.com
    <sarcasm>

    oh, how could you. how hurtful. great way to start a flame war...

    d00d j00 4r3 s0 31337!!! j00 |-|4XX0r3d /\/\Y |-|4|\|dL3!!!
    c4|\| j00 +34c|-| m3 h0\/\/ +0 |-|4XX0r /. ???

    </sarcasm>


    [ Reply to This | Parent ]
  • 1 reply beneath your current threshold.
  • Ishmael (Score:1)
    by eomir on Sunday March 12, @11:29AM EST (#2)
    (User Info)
    If this interests you, I would recommend the book Ishmael by Daniel Quinn.

    I don't know how to sign my name with a keyboard!
    [ Reply to This | Parent ]
    Re:Ishmael (Score:1)
    by CWCarlson on Sunday March 12, @12:06PM EST (#53)
    (User Info)
    And if Ishmael grabs your attention, don't stop there!


    Go on to The Story Of B, My Ishmael, and Beyond Civilization. All good stuff, assuming you can open your mind enough...

    [ Reply to This | Parent ]
    Pathetic (Score:0, Insightful)
    by Anonymous Coward on Sunday March 12, @12:19PM EST (#69)
    I'm barely able to understand the references to homosexuality here. First, they are off topic. Second, these comments are supposed to degrade both Rob and gay people. Third, whether or not Rob is gay is totally irrelevant to anything. Fourth, being gay is no better or worse than being straight. And finally, those who post such comments are obviously seriously troubled with regard to their sexuality. I pity you.
    [ Reply to This | Parent ]
    Re:Ishmael (Score:0)
    by Anonymous Coward on Sunday March 12, @03:46PM EST (#222)
    _Ishmael_ is terribly soppy. It doesn't present any kind of rational argument at all. (I really enjoyed the blanket condemnation of people who practice agriculture. Good move, Quinn.)
    [ Reply to This | Parent ]
    Re:Ishmael (Score:1)
    by delong on Sunday March 12, @03:57PM EST (#229)
    (User Info)
    "And if Ishmael grabs your attention, don't stop there! Go on to The Story Of B, My Ishmael, and Beyond Civilization. All good stuff, assuming you can open your mind enough..." LOL! Open your mind enough. Ishmael is moronic hippie stoner-circle material. Givers and Takers indeed. This is the same kind of material that enthusiastically makes a case for American Indians having been some sort of eco-warrior race. Get out of Oregon while you still have a clue.
    [ Reply to This | Parent ]
    Re:Ishmael-to criticize one must first understand (Score:1)
    by saBBath on Sunday March 12, @11:35PM EST (#331)
    (User Info)
    I found Ishmael to be indeed very rational and well drawn. It speaks from a different perspective, which is why one needs an open mind to understand its message. I see that delong did not understand a bit of it, as the above message is completely off the point the author is making. I suggest that our friend delong gets out of whatever secluded place he is in and gets a clue that there are trains of thought out there other than the mainstream. BTW, it's LEAVERS (not Givers) and Takers.
    [ Reply to This | Parent ]
  • 1 reply beneath your current threshold.
  • Full article in Wired (Score:2, Informative)
    by Carey on Sunday March 12, @11:30AM EST (#3)
    (User Info) http://carey.myip.org
    Bill Joy's full article on this subject appeared in this month's Wired. He warns us against three technologies he feels could be dangerous to the human race: Genetic Engineering, Nanotechnology and Robots.

    (Also in Wired, see the Rob Malda diaries)

    I thought the article was very well researched and raised some provocative points. It's always good to re-hash ethical arguments in science, and I think the article is very balanced in the way it addresses the luddite mindset.
    [ Reply to This | Parent ]
    Re:Full article in Wired (Score:0)
    by Anonymous Coward on Sunday March 12, @01:34PM EST (#135)
    There is a picture of CowboyNeal, also. Based on his tremendous girth, his name should be IAteACoupleOfCowboysNeal.
    [ Reply to This | Parent ]
    A Lot Like Medicine... (Score:1)
    by Bilbo (bilbo@NOSPAM.questra.com) on Sunday March 12, @02:23PM EST (#168)
    (User Info) http://home.rochester.rr.com/baggins
    > It's always good to re-hash ethical arguments in science ...

    It's a lot like medical research. We come up with new and interesting technologies, but just because something can be done doesn't mean it should be done...

    I think sometimes the nay-sayers are written off as hopeless Luddites and crackpots, but they can make us think... if we just take the time to listen.

    -- Your Servant, B. Baggins

    [ Reply to This | Parent ]
    Re:A Lot Like Medicine... (Score:1)
    by delong on Sunday March 12, @04:18PM EST (#236)
    (User Info)
    Medical ethics are, IMO, more interesting than speculating about nanites and robots snuffing us out.

    For instance, medicine has this nasty habit these days of trying to preserve a person's life, regardless of the person's stated wishes or the quality of life that person may expect. Does a doctor have a moral right to extend a failing life when such extension is futile and only extends the suffering of the patient? Is it MORE ethical, with the consent of the individual in question, to end that life? Premature infants are another case in point. Extremely premature infants, around 3 months premature, will be preserved whether the parents wish it or not. The STATE overrides all parental considerations, preserves the life of the child, and then hands it off to the parents, along with the bill. Does medicine, and especially the State, have a moral responsibility to preserve a life whose quality may be less than fair (extremely premature children are susceptible to all manner of problems, including Down's syndrome), and, what's more, to damn the parents to caring for a terminally handicapped child against their wishes?

    These are interesting ethical questions. And immediately applicable. To hell with the Unabomber. Lets discuss Kevorkian.
    [ Reply to This | Parent ]
    Re:A Lot Like Medicine... (Score:0)
    by Anonymous Coward on Sunday March 12, @09:34PM EST (#310)
    Babies may be premature because they have Down's syndrome. They *never* have Down's syndrome because they are premature -- Down's syndrome is a genetic defect consisting of an extra copy of a particular chromosome, so the abnormality arises, at the latest, before the first cell division of the fused sperm and egg.
    [ Reply to This | Parent ]
    I think he is DEAD on. (Score:0)
    by Anonymous Coward on Sunday March 12, @02:26PM EST (#171)
    This is something that I have been thinking about a lot lately. One of the key tenets of the "Third Wave" is individual empowerment. Joe Blow is able to do a heck of a lot more than he was 300 years ago. Crude example: 300 years ago one guy could take a musket, run to the town center, and maybe take out 5-6 people. Now the guy takes a couple of Uzis and heads to downtown NY. Watch out! Compound the mayhem if instead there are 25 people instead of just one. It just seems that as we add more people and more potentially destructive / dangerous technology, we are adding energy to the system, and at some point chaos will set in, and watch out!

    Now I disagree that it will be Genetic Eng, Nanotech, or robots. Chances are we won't identify it before it's too late.
    [ Reply to This | Parent ]
    Re:Full article in Wired (Score:1)
    by saBBath on Sunday March 12, @11:50PM EST (#334)
    (User Info)
    Everyone here is so terribly concerned with humans. What about the other species of this planet? If our own technology leads to our own extinction, then it's our own fault, and maybe the right price to pay for our short-sightedness. But what about the other species on this planet? Do we have a moral right to destroy them along with us?
    [ Reply to This | Parent ]
  • 1 reply beneath your current threshold.
  • Wipe out the world (Score:2, Funny)
    by cowscows on Sunday March 12, @11:31AM EST (#4)
    (User Info) http://www.zoomnet.net/~cowscows/
    Ah, so now I understand Hemos' obsession with nanites...I think we know where the plague will be coming from.
    [ Reply to This | Parent ]
  • 2 replies beneath your current threshold.
  • Not really surprising... (Score:1)
    by No Such Agency on Sunday March 12, @11:32AM EST (#5)
    (User Info)
    Hmmm, perhaps Kaczynski believed that technology would bring about the extinction of humanity because he thought everyone else was a sick murderous fu*k like him... Hell, all he needed was stamps, some dynamite and a couple of cut-up pie plates to kill people.

    Of course, what Joy really means to say is "Technology will bring about the downfall of our species, unless you all start running 'Jini' on your toasters RIGHT NOW!" :-)
    [ Reply to This | Parent ]
    Re:Not really surprising... (Score:1)
    by Carey on Sunday March 12, @11:36AM EST (#11)
    (User Info) http://carey.myip.org
    The quote from Kaczynski in the article is surprisingly coherent. The context of the quote in the article is what is important.

    Joy explains the controversy about having Kaczynski's work published under the threat of continuing terrorist acts.

    He also says it's a good thing that Kaczynski was a mathematician and not a computer scientist.

    Jini is harmless compared to the potential horrors this article discusses.
    [ Reply to This | Parent ]
    Re:Not really surprising... (Score:0)
    by Anonymous Coward on Sunday March 12, @04:19PM EST (#237)
    Have you ever read Kaczynski's work? Yeah I know he was a psycho killer, but his manifesto is very well written and thought provoking. I hate to admit this but there is a lot of truth in it.
    [ Reply to This | Parent ]
    Sounds a little familiar... (Score:0)
    by Anonymous Coward on Sunday March 12, @11:33AM EST (#7)
    Doesn't this sound a little like the Terminator movies? Doesn't this sound a little like The Matrix? Maybe Hollywood has some decent ideas about technology?
    [ Reply to This | Parent ]
    It's Bill Joy who's clueless here (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @12:54PM EST (#103)
    (User Info)
    Catastrophist stories about the future of technology are nothing new. The Terminator movies and The Matrix are nothing but remakes of the Frankenstein story, first written in the early 1800s.

    A catastrophe is always in the near future, according to those predictions, yet it never materializes. Why? Because technology is made by engineers. To be an engineer, there is one initial condition: you can't be stupid. Engineers have far more foresight than writers believe.

    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:0)
    by Anonymous Coward on Sunday March 12, @01:17PM EST (#122)
    You mean like the engineers that did the Titanic? I know they didn't drive it into the berg, but it still sank.
    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @02:01PM EST (#155)
    (User Info)
    You mean like the engineers that did the Titanic? I know they didn't drive it into the berg, but it still sank.

    The Titanic was a well-engineered ship that had bad luck. It was divided into several watertight compartments, a sound engineering practice which has kept many ships from sinking. But it just happened to glance off an iceberg in such a way that too many compartments were punctured at once. Its twin, the Olympic, lived its full planned life and was scrapped when it became obsolete.

    The sinking of one ship doesn't mean it was badly designed, much less that the entire science of ship engineering is doomed to failure. What Bill Joy is saying seems more something like "all the ships in the world will suddenly sink at once".

    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:0)
    by Anonymous Coward on Sunday March 12, @04:23PM EST (#240)
    Actually, the design of the Titanic was not at all at fault; it was the construction. The metal used in the hull was not formulated properly, and was too brittle at the freezing temperature of water. The ship hit the iceberg, and the hull cracked (not punctured) for quite a distance. I saw a show about the _two_ sister ships of the Titanic, one of which was noted for its durability. One of them, in its lifetime, collided with (from memory, but probably still close):
    a battleship (almost took it out, too, iirc)
    a harbor tug
    a torpedo
    the sub that fired the torpedo (sent it straight down)
    a couple of other things too, I think.
    I think the other sister ship was sunk by torpedoes in WW2.

    Sort of a side note, there were several American ships (Liberty ships) lost as late as WW2 due to brittle fracture of their hulls in cold water.

    -M

    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @05:12PM EST (#262)
    (User Info)
    The ship hit the iceberg, and the hull cracked (not punctured) for quite a distance.

    I read Robert Ballard's book on how his team found the Titanic. He says it was more likely a long series of small punctures, rather than one big gash as has been conjectured, because the ship floated for several hours after hitting the iceberg. IIRC, he said that a 100-meter-long hole, which would damage enough compartments to sink the ship, could be no more than an inch wide for the ship to sink so slowly. Unfortunately, the ship is lying on the side that hit the iceberg, so it's hard to verify this.

    The Titanic had two sister ships, the Britannic, which was sunk by a torpedo in WW1, and the Olympic, which was cut up in pieces and sold as scrap sometime in the 1930s. Its piston steam engines had become obsolete by that time, turbines were much more efficient.

    I think the Liberty ships were intentionally designed flimsy, to economize on metal. They figured those ships did not stand a very high probability of surviving long in the war anyway, so they were never designed for durability.

    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:0)
    by Anonymous Coward on Sunday March 12, @01:31PM EST (#131)
    Actually The Matrix is a blatant rip of Descartes's First Meditations, written mid-17th century I believe. The whole Evil Genius argument, which spurred on various arguments on epistemology, etc.
    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:1)
    by delong on Sunday March 12, @04:24PM EST (#241)
    (User Info)
    Yes and it was a very bad attempt as far as Philosophy goes. Interesting premise - appearance and reality. Then they mucked it up with all sorts of incoherent and inconsistent shite about fate. They royally screwed the whole free will/determinism dichotomy.


    [ Reply to This | Parent ]
    Re:It's Bill Joy who's clueless here (Score:1)
    by zigzag (mzauzigSPAMMENOT@atl.mediaone.net) on Sunday March 12, @03:35PM EST (#216)
    (User Info)
    To be an engineer, there is one initial condition: you can't be stupid

    I dunno. I'm an engineer and I'm pretty stupid.
    [ Reply to This | Parent ]
    huh? (Score:0)
    by Anonymous Coward on Sunday March 12, @04:23PM EST (#239)
    Engineers are smart, but they aren't gods. They fuck up ALL THE TIME. And not only that, but deployment of technologies is not really up to engineers; it's the suits who decide that stuff. Look at nuclear technology.
    [ Reply to This | Parent ]
    Of course engineers fuck up. They do *NEW* stuff. (Score:0)
    by Anonymous Coward on Sunday March 12, @09:16PM EST (#307)
    Engineers are smart, but they aren't gods. They fuck up ALL THE TIME. And not only that, but deployment of technologies is not really up to engineers; it's the suits who decide that stuff. Look at nuclear technology.

    Instead of sitting on their asses being "pundits" making "predictions" about the future, engineers are always tinkering with new stuff. Most of it will end up in the junk bin, but some of it becomes the tech of tomorrow. Some of it behaves badly -- whaddya expect? It's new. It's experimental. It's stuff the suits haven't asked for nor even looked at yet. It was done on the engineer's off time.

    [ Reply to This | Parent ]
    I have discovered a new theorem! (Score:0)
    by Anonymous Coward on Sunday March 12, @11:34AM EST (#8)
    the proof of which can't fit in this post, but anyway:

    Hemos + CmdrTaco = Homos

    [ Reply to This | Parent ]
    Ahhh then you've never.... (Score:1)
    by Jonathan Hamilton on Sunday March 12, @01:48PM EST (#147)
    (User Info)
    You must have never seen all the hot slashdot groupies. They follow Rob to all the Linux Showcases.

    (OK, they're not all hot, but there are a couple of them.)
    [ Reply to This | Parent ]
    Re:I have discovered a new theorem! (Score:1)
    by zigzag (mzauzigSPAMMENOT@atl.mediaone.net) on Sunday March 12, @03:45PM EST (#221)
    (User Info)
    What is it with all of this gay bashing stuff?

    Here's a clue:

    Men who gay bash are not confident about their own sexual orientation.

    In other words, Methinks he doth protest too much.

    Heh, I got it. You've got the hots for CmdrTaco. You're just are having a hard time admitting it to yourself.
    [ Reply to This | Parent ]
    Artificial Intelligence (Score:2, Insightful)
    by IO ERROR (...nospam!blackout.net!error) on Sunday March 12, @11:36AM EST (#12)
    (User Info) http://underground.ath.cx/
    Sounds to me like Bill Joy watched The Matrix. What's truly frightening is he's not at all off base.

    What he's obliquely referring to in this article is Artificial Intelligence. It doesn't seem unreasonable to me, given Moore's Law, that by 2030 we could have computers that exceed the capacity of the human brain to process information. That being a given, it doesn't take much of a leap of logic to conclude that some of those machines might just be capable of hosting an artificial intelligence.

    The question nobody even has a coherent theory for right now is: what would an (artificially) intelligent computer do? What would be its desires? Would it also have emotions? If so, what would it feel?

    They're questions we can't really answer right now. But we really need to be thinking about these things. If we don't NOW, then we might just find ourselves living in the Matrix.
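    The Moore's-Law arithmetic in the comment above is easy to sketch. The constants below are assumptions chosen for illustration only: a doubling every 18 months, a ~10^9 ops/sec processor in 2000, and a (hotly contested) ~10^16 ops/sec ballpark for the brain.

```python
# Back-of-the-envelope Moore's Law extrapolation. Both constants
# below are assumptions for illustration, not established figures.
DOUBLING_PERIOD_YEARS = 1.5     # the classic "18 months" doubling
CHIP_OPS_2000 = 1e9             # rough ops/sec of a 2000-era CPU
BRAIN_OPS_PER_SEC = 1e16        # one contested brain estimate

def projected_ops(year, base_year=2000, base_ops=CHIP_OPS_2000):
    """Ops/sec projected forward by repeated doubling."""
    doublings = (year - base_year) / DOUBLING_PERIOD_YEARS
    return base_ops * 2 ** doublings

ops_2030 = projected_ops(2030)  # 20 doublings from the base year
```

    Under these particular numbers, 2030 lands at about 10^15 ops/sec -- within an order of magnitude of the assumed brain figure. That is roughly the extrapolation the comment relies on, and exactly the kind of reasoning the replies below call into question.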
    ---
    Lost: gray and white female cat. Answers to electric can opener.

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by cowscows on Sunday March 12, @11:41AM EST (#22)
    (User Info) http://www.zoomnet.net/~cowscows/
    We could ask the intelligent computer what operating system it prefers to run, and end the argument forever.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @11:43AM EST (#27)
    and if it gives the answer we don't like, it's obviously incapable of human intelligence + emotion
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Chandon Seldon (nat-at-calug-dot-net) on Sunday March 12, @12:52PM EST (#102)
    (User Info) http://www.calug.net/

    Because it would obviously answer with whatever OS it was currently running on.

    -------- The act of censorship is always worse than whatever is being censored. -Chandon Seldon

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @11:42AM EST (#24)

    The question nobody even has a coherent theory for right now is: what would an (artificially) intelligent computer do?

    The logical answer: preserve self and reproduce. It would be interesting to see if they are more foresighted than humans, and don't rape their environment for shortsighted goals.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @01:06PM EST (#109)
    how do you rape a piece of hardware?
    [ Reply to This | Parent ]
    Raping hardware (Score:1)
    by Dr. Spork (spork@clerk.com) on Monday March 13, @12:10AM EST (#335)
    (User Info)
    You wouldn't think it's possible, but Microsoft found a way...
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by zigzag (mzauzigSPAMMENOT@atl.mediaone.net) on Sunday March 12, @03:48PM EST (#224)
    (User Info)
    Motivation gets to the heart of the matter.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by delong on Sunday March 12, @04:30PM EST (#245)
    (User Info)
Keep yer environmental blathering on track there, skippy. They would be MACHINES. MACHINES don't have to worry about global warming, carrying capacity, animal extinctions, degradation of the environment, etc., etc.; name your favorite eco cause here. A MACHINE would have no reason to worry. A MACHINE could thrive perfectly happily on a dead planet. What makes you think an intelligent machine wouldn't be WORSE than humans?
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:5, Insightful)
    by ucblockhead (sburnapSPAMSUXlinux@attSPAMSUX.net) on Sunday March 12, @11:51AM EST (#36)
    (User Info)
It didn't seem unreasonable to people in 1950 that we'd have artificial intelligence by 1965. It didn't seem unreasonable to people in 1970 that we'd have artificial intelligence by 2000. It didn't seem unreasonable to many of my professors and fellow students to think we'd have it by 2010 when I studied it in 1985.

The error is in thinking that AI is just a matter of getting enough transistors together. Hardly! The real problems in AI are not hardware speed so much as what to do with that hardware to make it intelligent. This is not a trivial problem. It is an extremely difficult problem, IMHO probably the hardest problem the human race has ever faced.

    The question nobody even has a coherent theory for right now is: what would an (artificially) intelligent computer do? What would be its desires? Would it also have emotions? If so, what would it feel?

    And this is really the key thing. You can't build an artificially intelligent computer unless you have a damn good idea of those things. You can't build something with desires, emotions, etc. unless you know, in detail, what desires and emotions are, at a far deeper level than we do now.


    Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves. -George Gordon Noel Byron (1788-1824), [Lord Byron]

    [ Reply to This | Parent ]
    Top-down vs. bottom-up AI design (Score:4, Insightful)
    by Kaufmann (kaufmann@toostupidtoremovethis.infolink.com.br) on Sunday March 12, @12:16PM EST (#64)
    (User Info)
    You can't build an artificially intelligent computer unless you have a damn good idea of those things. You can't build something with desires, emotions, etc. unless you know, in detail, what desires and emotions are, at a far deeper level than we do now.

    Your entire argument is based on the premise of top-down design - that the Right Way to build an AI is the classical engineer's approach of designing the thing as you would design any other machine or piece of software.

    Fortunately, most people now recognise that this approach is doomed, for the exact reason that you point out: an "intelligence" of any sort is much more complex and less well-understood than anything we've ever had to design.

So, what's the alternative? Automated bottom-up design. Specifically, the idea is to first work out the building blocks - the equivalents of neurons - and then have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose. Note that this alternative doesn't require one to understand in excruciating detail (or at all) the high-level abstractions which we consider as "intelligence" - it only requires a good GA and a good understanding of the brain at the cellular and subcellular level.

    Now this I don't consider far-fetched at all.

    (Of course, it's always worth mentioning that we could go the other way - first using nanotech to completely redesign ourselves into super-intelligent cybergods, then analysing our own new brains and replicating them to create completely new, fully artificial intelligent beings.)
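A minimal sketch of the GA-over-neuron-equivalents idea (every choice here - the 2-2-1 topology, step activations, mutation rates, population size - is an invented toy, not any real neuro-evolution method, which would typically evolve topology as well as weights):

```python
import random

# Toy bottom-up "design": a GA evolves the weights of a tiny fixed 2-2-1
# step-activation network until it computes XOR. Nothing below encodes what
# XOR "means"; the GA only ever sees a fitness score.

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 input->hidden weights, 2 hidden biases, 2 hidden->out, 1 out bias

def act(v):
    return 1.0 if v > 0 else 0.0  # crude "neuron": fire or don't

def forward(w, a, b):
    h1 = act(w[0] * a + w[1] * b + w[2])
    h2 = act(w[3] * a + w[4] * b + w[5])
    return act(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    return sum(forward(w, a, b) == y for (a, b), y in CASES) / len(CASES)

def mutate(w):
    return [x + random.gauss(0, 0.5) if random.random() < 0.3 else x for x in w]

def crossover(p, q):
    cut = random.randrange(N_WEIGHTS)
    return p[:cut] + q[cut:]

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(50)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 1.0:
        break  # a "brain" that computes XOR has been grown, not designed
    parents = pop[:10]  # keep the fittest, cull the rest
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(40)]
best = max(pop, key=fitness)
```

The catch, of course, is the one debated in this thread: this only scales up if the fitness function and the building blocks are right, and for "intelligence" nobody knows what either should be.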

    Kaufmann's First Law: All following laws are true. Kaufmann's Second Law: All preceding laws are false.

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by Harvey on Sunday March 12, @12:36PM EST (#88)
    (User Info)
    (Of course, it's always worth mentioning that we could go the other way - first using nanotech to completely redesign ourselves into super-intelligent cybergods, then analysing our own new brains and replicating them to create completely new, fully artificial intelligent beings.)

    I don't see how we can make ourselves into cybergods, at least in terms of intelligence, without having a much fuller understanding of our brains than we do now.

    Another issue is that unless we copy the brain exactly, it's impossible, or at least extremely difficult, to make a machine emulate the brain until we know what the brain does and how it does it. However, your approach implies that we know everything about the neuron, and that the neuron is the only thing that matters in the nervous system. Hormonal levels and the extracellular fluid also play a role.

It seems to me the most expedient way to make a brain is to either do a "black box" copy, e.g., see how we behave and write a program to copy that, or a full "white box" copy: see how the brain works to the necessary level of detail and then write an implementation from there.
    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by ucblockhead (sburnapSPAMSUXlinux@attSPAMSUX.net) on Sunday March 12, @12:42PM EST (#92)
    (User Info)
    Specifically, the idea is to first work out the building blocks - the equivalents of neurons - and then have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose.

Yes, and you've got two problems: 1) What exactly does a neuron do? and 2) How are they organized into a brain? Neither are easy questions.

    Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.


    Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves. -George Gordon Noel Byron (1788-1824), [Lord Byron]

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by Kaufmann (kaufmann@toostupidtoremovethis.infolink.com.br) on Sunday March 12, @01:55PM EST (#151)
    (User Info)
Yes, and you've got two problems: 1) What exactly does a neuron do? and 2) How are they organized into a brain? Neither are easy questions.

No, but they are much easier to figure out than the Big Question of "what exactly constitutes intelligence".

    Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

    We don't have to do even that - all it takes is rudimentary understanding of the way the neurons are organised. Once you know that, you can have the GA do the rest.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.

    Yes, of course. But we also know that this organisation can't be too complex - specifically, it must be possible to describe using a fraction (I don't know how large a fraction, though) of the storage space of human DNA. By the way, this also hints at the possibility that a fuller understanding of the genome may provide an additional insight into the composition and organisation of the brain.
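Back-of-the-envelope, that bound is strikingly small (using the usual rough figure of about 3 billion base pairs for the human genome; this ignores compression, epigenetics, and regulatory subtleties):

```python
# Back-of-the-envelope version of the DNA storage bound: how much raw
# information the genome can carry, assuming ~3 billion base pairs at
# 2 bits per base (one of A, C, G, T).

base_pairs = 3_000_000_000
bits = base_pairs * 2              # log2(4) = 2 bits of information per base
megabytes = bits / 8 / 1_000_000
print(round(megabytes))            # prints 750
```

So whatever innate wiring plan the brain has must be describable in well under ~750 MB, and that budget covers the whole body, not just the brain.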

    Kaufmann's First Law: All following laws are true. Kaufmann's Second Law: All preceding laws are false.

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by gargle on Sunday March 12, @10:27PM EST (#315)
    (User Info)
    Yes, neural nets don't have to be explicitly designed at a low level. But that doesn't mean that you can just throw one together, throw data at it, and get it to work. First, you've got to design your network, then you've got to figure out how to train it.

Neural nets can be evolved through Genetic Programs. You basically have a genetic program that describes how to grow the neural net (I don't have a reference handy at the moment unfortunately). So it's not necessary to design it.

    One thing we do know about the brain is it is not just a bundle of neurons. Those neurons have an organization that is genetically programmed.

    Well then evolve the organization through genetic programming!

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:0)
    by Anonymous Coward on Sunday March 12, @02:21PM EST (#166)
    We've gotten a decent little cockroach this way. Of course, they only require 6 basic instincts.

Intelligence could possibly arise from the bottom up. A recognizably human-like intelligence, and one we could easily communicate with [the ultimate goal], would require a certain amount of top-down. This means that an AI would have to be somewhat based on our own limited understanding of ourselves. And if an AI is as neurotic and self-obsessive as we are, we'll do okay. =)

    Or we could just not let them self-reproduce...
[Sci-Fi Movie Rule #5 - never let the automated AI control the life-support and weapons systems.]


    [ Reply to This | Parent ]
    design vs synthesis (Score:1)
    by Zorikin (zorikin@nearmiss.com.com.com) on Sunday March 12, @02:54PM EST (#197)
    (User Info)
    > So, what's the alternative? Automated bottom-up design.

    I'm hesitant to call that process design - it's being grown like a plant, not constructed like a house.

    Such a synthesis is a good form of empirical study. Ultimately it won't be a replacement for design, but it will give many clues as to how design must take place.

    > have a GA or somesuch start trying to put together a "brain" out of these neurons, which is fit for a specific purpose.

Don't do that. Doing that will produce an animal brain (of a particularly dumb animal). Instead, fit it simultaneously for a wide variety of specific purposes, including competitive interaction. Humans have many mental abilities which seem to be selected for naturally.

This kind of bottom-up synthesis could work as a means of creating intelligence, but the prerequisites for this approach are as hairy as for classical design; it's just that the design has been taken care of by a GA. It has to be able to interact with people. This is required to make sure that the program forms mental patterns connected to behaviors we can understand, so that when we use the Turing test as the final test of fitness, we have some way of telling whether or not it worked. And you obviously can't have a computer do it for you.

    > Note that this alternative doesn't require one to understand in excrutiating detail (or at all) the high-level abstractions which we consider as "intelligence"

    That's fine as far as creating disposable intelligence goes (once we're finally through with all that brute-force testing), but as far as science goes, it puts us right back where we started. The mind, though suddenly inexpensive, remains the mystery it was before.

Also keep in mind that the mind may not really be the inseparable gestalt we tend to think of it as. It may be possible to replicate the various mental abilities separately, and gradually integrate them as we come to understand them more fully. There's really no reason to expect that we will get it all in one shot. Infinite improbability drives aside, no other technology has worked that way. Rather, AI will continue to be approached in incremental steps, building on each other. Probably for a very long time, and perhaps forever (though by then the AI will be doing the AI research ;).

    I think the long view advocates extensive research (including bottom-up synthesis), practical implementations, more specific domains, and perhaps most importantly, patience.

    Bottom-up (of this kind, and the ALife kind) has been a big deal for a while now, but the check is still in the mail as far as implementation goes. Chances are good that there will be at least one more reframing of the question, and probably several, before we lick the Turing test.

    I think the long view advocates research (including bottom-up synthesis), practical implementations which make incremental steps, focus on more specific domains, and patience.
    [ Reply to This | Parent ]
    -1, Redundant (Score:1)
    by Zorikin (zorikin@nearmiss.com.com.com) on Sunday March 12, @02:59PM EST (#199)
    (User Info)
    preview first, preview first, preview first ...
    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:1)
    by Zarf (hartsock@ModZer0.cs.uaf.edu) on Sunday March 12, @04:56PM EST (#256)
    (User Info) http://i.am/hartsock
Good GAs aren't easy to come up with... what is the "fitness factor" for a "brain"? What keeps that from falling into a local minimum? As anyone who's coded with GAs knows, sometimes you can't hit your mark by starting out with any old set of assumptions. Neural Nets and Genetic Algorithms both require some intelligent design choices to ensure that they come close to the desired goal... and then the previous point still holds: you can't make something you don't sufficiently understand.

    I personally think that top-down and bottom-up AI are both idealistic, a realistic designer has to compromise. The amount of compromise that is needed between the two approaches is totally beyond knowing... even after someone succeeds in building a human-level AI or AI-generator.

Then there is the other problem of what is "intelligent". A nanite that's as smart as an ant, in a colony with a sufficient number of them, may have a collective intelligence... is that our dreaded human-race-ending AI?

    - // Zarf //
    Live to Code, Code to Live!
    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by Abigail-II (abigail@delanet.com) on Sunday March 12, @08:34PM EST (#296)
    (User Info) http://www.foad.org/%7Eabigail/
    So, what's the alternative? Automated bottom-up design.

    Excuse me? Bottom up design isn't a magic wand. If you don't understand the problem, no design, whether bottom or top down will work. If you don't have a deep understanding of what you want to simulate - you won't simulate it.

    -- Abigail

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by stripes (stripes at eng dot us dot uu dot net) on Sunday March 12, @10:05PM EST (#312)
    (User Info) http://www.eng.us.uu.net/staff/stripes/
    Excuse me? Bottom up design isn't a magic wand. If you don't understand the problem, no design, whether bottom or top down will work. If you don't have a deep understanding of what you want to simulate - you won't simulate it

No, but Genetic Programming is. Sort of. It can, given enough time, work out a rough program (very rough) that can solve a problem the programmer can't describe an algorithm for.

    "All" you need to provide is a fitness function that indicates how close the answer is (say 0.0 for not at all, and 1.0 for perfect), primitaves to be used to solve the problem (turn left, move forward, pick-up-food...) and a genetic cross over function (which is almost trivial, they can normally be reused from one GA to another).

    And a shitload of time.
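Those three ingredients look roughly like this in miniature (a hypothetical symbolic-regression toy, not any real GP system; the target function, primitives, and all parameters are invented for illustration):

```python
import random

# The three ingredients in miniature: primitives (+, *, the variable x, small
# constants), a fitness mapped into (0, 1], and subtree crossover.
# Programs are expression trees; the target is f(x) = x*x + x.

TARGET = [(x, x * x + x) for x in range(-5, 6)]

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])  # a leaf
    return (random.choice(['+', '*']), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a * b

def fitness(tree):
    err = sum((evaluate(tree, x) - y) ** 2 for x, y in TARGET)
    return 1.0 / (1.0 + err)  # 1.0 means a perfect program, near 0.0 hopeless

def random_subtree(tree):
    if isinstance(tree, tuple) and random.random() < 0.7:
        return random_subtree(random.choice(tree[1:]))
    return tree

def crossover(p, q):
    # Splice a random subtree of q over a random node of p.
    if not isinstance(p, tuple) or random.random() < 0.3:
        return random_subtree(q)
    op, left, right = p
    if random.random() < 0.5:
        return (op, crossover(left, q), right)
    return (op, left, crossover(right, q))

random.seed(2)
pop = [rand_tree() for _ in range(60)]
for generation in range(80):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 1.0:
        break
    keep = pop[:15]  # cull the less fit
    pop = keep + [crossover(random.choice(keep), random.choice(keep))
                  for _ in range(45)]
best = max(pop, key=fitness)
```

True to form, the winning trees tend to arrive bloated with redundant branches rather than in the tidy form a human would write.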

    If you look at some of the GA derived programs for simple problems like an ant colony collecting food, they suck. Full of dead code (like "if (next to water) then if (not next to water) then 100-lines-of-never-reached-code-here"). But they work. At least for the sample problem set, and problems that are similar.

If you look at some of the GA FPGA programs you will see designs with far fewer transistors than a person would have used. But they also only work within (roughly) the temperature range used during the GA test runs. And they have circuits that don't appear to do anything, but if you remove them the design stops working (capacitance issues I expect), and other crap a human designer would avoid like the plague.

In both cases it took a really long time for the GA to find the winning "program". GA uses the same sort of techniques that it is believed "mother nature" uses to "design" plants and animals. In other words, lots of trials, a handful of mutations, some sexual reproduction (or asexual, but that is less efficient), culling the less efficient, and time. The results are somewhat more comprehensible to man, but only (in my opinion) because the fitness function is so much simpler. The real one changes over time.

GA is a magic wand that may give us AIs. But I don't think it will give us ones whose workings we can understand any better than the natural intelligences we already have to study.

    On the plus side, it can give us some kick-ass smart simulated ants :-)

    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by gargle on Sunday March 12, @10:37PM EST (#317)
    (User Info)
    If you look at some of the GA derived programs for simple problems like an ant colony collecting food, they suck. Full of dead code (like "if (next to water) then if (not next to water) then 100-lines-of-never-reached-code-here"). But they work. At least for the sample problem set, and problems that are similar.

IMO, this is a strong piece of evidence that natural life did evolve (rather than get created). Because in natural organisms, as in GPs, there is a lot of redundancy, or dead code so to speak, in the DNA (and no doubt in our brains as well).


    [ Reply to This | Parent ]
    Re:Top-down vs. bottom-up AI design (Score:2)
    by Kaufmann (kaufmann@toostupidtoremovethis.infolink.com.br) on Sunday March 12, @10:05PM EST (#313)
    (User Info)
    If you don't have a deep understanding of what you want to simulate - you won't simulate it.

That's not really true. A GA-based approach requires you only to understand the behaviour expected of the subject, not necessarily its internal workings (even though, as another poster pointed out, it won't help in enlightening us as to how the mind actually works). My memory fails me, but I remember reading last year about an FPGA, configured by a genetic algorithm for a specific purpose, which was __BIGNUMBER__ times faster than special-purpose chips, but which operated in ways that its original designers didn't understand at all. This FPGA was relatively simple - only 100x100 IIRC - and yet GA-based design made it do completely unexpected things. Who knows what can happen with a really large FPGA... or with a big bunch of nano-engineered artificial neurons.

    Kaufmann's First Law: All following laws are true. Kaufmann's Second Law: All preceding laws are false.

    [ Reply to This | Parent ]
    Exactly (Score:1, Interesting)
    by Anonymous Coward on Sunday March 12, @12:27PM EST (#82)
    Please mod up. There is almost no correlation between computing power and advances in AI. If there were, then we would have seen significant advances in AI already (which we haven't).
    [ Reply to This | Parent ]
    Have you ever heard of Deep Blue? (Score:2, Insightful)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @01:44PM EST (#145)
    (User Info)
    There is almost no correlation between computing power and advances in AI. If there were, then we would have seen significant advances in AI already (which we haven't).

    Don't you consider the creation of a computer that no human can beat at chess a "significant advance in AI"?

Before Deep Blue, the nonexistence of a computer that could defeat a human grand-master at chess was considered evidence of "no significant advances in AI". Now that this computer exists, it's dismissed as nothing important. The entire field of Artificial Intelligence suffers from this public perception problem. Whenever a significant milestone is reached, the problem is immediately redefined to be something else.

The funny thing is that the same people who say "we have no idea at all on how human intelligence works" are the same who say "Deep Blue isn't really intelligent, all it's doing is a very fast search on different possible plays". If they really have no idea what intelligence is, how can they say intelligence is not the ability to do a quick search on different possibilities?

    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:3, Insightful)
    by friedo (mnf7228@spam-me-not.osfmail.rit.edu) on Sunday March 12, @02:13PM EST (#161)
    (User Info) http://friedo.rh.rit.edu/
The funny thing is that the same people who say "we have no idea at all on how human intelligence works" are the same who say "Deep Blue isn't really intelligent, all it's doing is a very fast search on different possible plays". If they really have no idea what intelligence is, how can they say intelligence is not the ability to do a quick search on different possibilities?

Well, because it's not. Deep Blue is able to beat chess masters because it has enough computing power to permutate all possible moves several generations into the future and pick the best one. Obviously, no chess master's brain can do that. Deep Blue's accomplishments are NOT that significant at all. The mathematics of what it does could have easily been worked out centuries ago - it's simply the first machine capable of actually doing the math. Human chess players have intuition. Because they've played several thousand games during their lifetime, they can see a certain combination of positions on a board and just know what play to begin exercising and what predictions to focus on. They can stare at their opponent to try and see if he's bluffing. They can make instinctual decisions without predicting every move in the future. When a computer can do that, please let me know - I'll be impressed.

    Every day you are confronted with thousands of choices. Most of them you make without really thinking, and most have several factors involved. Everything that you've done prior to that moment has a bearing on your current decision. You weigh actions vs. consequences. Priorities vs. Wants, etc., etc., etc. I have yet to see a machine that can make these types of decisions appropriately.

    Take the example of something more fast-paced than Chess like Soccer. If you're playing defense, and a forward is running the sideline with the ball, you have very little time to move. There are a million different things you could do, but only one will save the day. The only way you could know which one is to be in that situation right then - and have to make a split second decision. So, no, we don't have AI. I don't predict we will for quite some time.
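For what it's worth, the kind of exhaustive look-ahead being debated here is easy to show on a game small enough to search completely. A sketch using Nim (take 1 to 3 sticks per turn; whoever takes the last stick wins) instead of chess:

```python
# Brute-force game-tree search in miniature. Chess is far too big to
# enumerate, so this uses Nim, where the *entire* tree can be searched.

def can_win(sticks):
    """True if the player to move can force a win, by exhaustive search."""
    if sticks == 0:
        return False  # the previous player just took the last stick and won
    # Try every legal move; we win if any move leaves the opponent losing.
    return any(not can_win(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Pick the move a full look-ahead would: one that leaves a lost position."""
    for take in (1, 2, 3):
        if take <= sticks and not can_win(sticks - take):
            return take
    return 1  # every move loses against perfect play; take one stick anyway

print(best_move(5))  # prints 1: leaving 4 sticks is a lost position to move from
```

Chess differs mainly in scale: its tree is far too large to search to the end, which is why real chess programs cut off at a bounded depth and fall back on heuristic evaluation instead.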

    My DeCSS mirror is here. Where's yours?

    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:0)
    by Anonymous Coward on Sunday March 12, @02:36PM EST (#180)

    "Take the example of something more fast-paced than Chess like Soccer."

    Have a look at the RoboCup homepage. These people are designing some really cool soccer-playing robots.
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @03:09PM EST (#203)
    (User Info)
    Deep Blue is able to beat chess masters because it has enough computing power to permutate all possible moves several generations into the future and pick the best one. Obviously, no chess master's brain can do that.

    How do you know? Actually, Deep Blue has far less computing capability than a human brain. It's able to beat humans at chess only because it's so specialized.

You make many assertions, such as "humans have intuition", "they can make instinctual decisions", "without really thinking", etc., which all mean the same thing: we are not really sure about the detailed paths which our minds follow when we make some decisions. We do have some fairly detailed knowledge about how neural nets work, however. We have created artificial neural nets which exhibit a lot of those same "intuition" characteristics; it's not entirely obvious at first how they achieve some results.

There are two main obstacles to human-like AI today: we still do not have powerful enough hardware, and we need databases for all the little facts that constitute "common sense". Look here for some information on the generation of artificial common sense.

    [ Reply to This | Parent ]
    can't have a forest without some saplings (Score:2, Interesting)
    by Zorikin (zorikin@nearmiss.com.com.com) on Sunday March 12, @03:28PM EST (#210)
    (User Info)
Your argument seems to be "I understand it and can express it mathematically, therefore it isn't intelligence, and isn't what's going on in the brain" - but this doesn't address the challenge at all.

You don't know what's going on in the brain, and you don't know what intelligence is. If intuition, or primary-process thinking, isn't understandable and expressible mathematically, then the goal of AI is literally impossible, and Turing-machine completeness is a crock.

    > They can stare at their opponent to try and see if he's bluffing.

    This is not a measure of intelligence, unless you think that a polygraph is intelligent.

    > Priorities vs. Wants, etc., etc., etc. I have yet to see a machine that can make these types of decisions appropriately.

    My operating system doesn't run a distributed.net client if other programs are taking up all the CPU. That's a decision based on a priority.

    If what you want is a program that can make decisions that are human enough and complex enough for a human to fret about, well, there's a lot of work in that, and pretending that the incremental steps don't count just puts you that much farther from the goal. They do count.

    > Take the example of something more fast-paced than Chess like Soccer.

    Uhh ... I think it's pretty clear that the problem here has nothing to do with intelligence. It's a question of motor coordination and perception. Reliance on intelligence may actually make the game harder.
    [ Reply to This | Parent ]
    Re:can't have a forest without some saplings (Score:0)
    by Anonymous Coward on Sunday March 12, @11:21PM EST (#326)
Assuming that humans are the only intelligent, i.e. sentient, life in existence, and assuming that however we are sentient is the only way to have sentience... If we can understand how something does what it does while still not understanding how we think, it is not intelligent.
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:1)
    by NovaX (maneben@charlie.cns.iit.edu) on Sunday March 12, @03:56PM EST (#228)
    (User Info)
I'd be surprised if Deep Blue just tried every possibility (or close) between moves. It would be impossible, as there are at least 10^18 possibilities (and this is a small estimate). To play chess effectively, heuristics are used. The more advanced, the better the computer plays. This is artificial intelligence, in its simplest form. Read A.M. Turing's "Computing Machinery and Intelligence" or better yet, just grab Mind Design II. Human players work by seeing patterns along with heuristics, and only old-fashioned (GOFAI) AI believes in just dumping all the data into the computer and letting it sort through it. That failed, and newer AI designs are great at pattern recognition.

Computers are still at a disadvantage because of this "intuition" that people have. It's difficult to build a computer and just plug an adult brain into it. How can you write a program that passes the Turing test when you're asked an extremely wide range of questions from political/historical/scientific, emotional, common knowledge, etc.? Tests right now show grammar problems, and are too tough. Turing's answer was to make a computer like a new-born, and let it learn. That's when computers will get the intuition. Until we get that far, Deep Blue's AI capabilities are quite good. It's damn tough to design AI. The computing power really doesn't matter in the end.
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:1)
    by QuadPro (jurjen@stupendous.org) on Sunday March 12, @04:04PM EST (#230)
    (User Info) http://www.stupendous.org

    Deep Blue is able to beat chess masters because it has enough computing power to permutate all possible moves several generations into the future and pick the best one. Obviously, no chess master's brain can do that.

    'Obviously'? Why do you dismiss that as a possibility that easily? It's unknown how human brains play chess, so I wouldn't rule out the 'brute force' method that quickly.
    - Jurjen
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:1)
    by Wolfbaine (wolfbaine@NO.hotmail.SPAM.com) on Sunday March 12, @08:29PM EST (#295)
    (User Info)
    However, it could be argued that intuition is merely a way of providing the best possible alternative to the conscious mind from the probabilities processed from the subconscious mind.

Weighing actions and consequences is not a problem; it is based on ratings we have established since childhood. Whilst a machine may not have accomplished this, perhaps we haven't really defined the problem; perhaps a machine's understanding of consequences is different from ours.

    My 2c anyway.

Deep Thought v0.1 Alpha

#include <iostream>
#include <unistd.h>

int main() {
    /* Fix this later */
    sleep(4294967295u);  /* 10^30 seconds won't fit; in C++, 10^30 is XOR, i.e. 20 */
    std::cout << "The meaning of life is 42\n";
    return 0;
}

    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:0)
    by Anonymous Coward on Sunday March 12, @02:27PM EST (#173)
Deep Blue was powerful, not intelligent. It was a slave to its own best-method algorithm, which was entered by chess masters. It could not adapt to Kasparov on its own. It was changed between matches 2 and 3 to react to him better. It did not and could not do this on its own. Beep Blue was a giant calculator running a single equation. It merely ran it faster than Kasparov did. [In fact, it would most likely struggle against a chess master who played differently than Kasparov] DB had no adaptive abilities. It was not intelligent at all. No more than ChessMaster x000 is, merely better programmed and with a better processor.
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:2)
    by ralphclark (ralph_clark (at) bigfoot (dot) com) on Sunday March 12, @05:17PM EST (#264)
    (User Info)
Beep Blue [sic] was a giant calculator running a single equation

According to quantum physics, so is the entire universe as a whole...

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
    The self does not exist
    [ Reply to This | Parent ]
    Re:Have you ever heard of Deep Blue? (Score:2)
    by Abigail-II (abigail@delanet.com) on Sunday March 12, @10:49PM EST (#318)
    (User Info) http://www.foad.org/%7Eabigail/
    Don't you consider the creation of a computer that no human can beat at chess a "significant advance in AI"?

No. In fact, it shows we have barely made the first steps. Chess is an utterly trivial process compared to what goes on in humans. It's a small, bounded domain, which can be formalized easily. It took decades to match humans - and that in an area where computers should excel compared to humans. And also note that the computations done by chess computers in no way simulate the thinking process of humans behind the boards. Another small, bounded domain with trivial rules is Go. There's no Go equivalent for Deep Blue, and it isn't likely there will be one anytime soon. Humans wipe the floor with computers, in what should be the computers' home turf.

    The human brain and thought process have been studied for longer, and by more people, than the concept of automated computing. We still understand little of it, and there's no useful formal model.

    The effort and time it took to create Deep Blue makes me think that no one reading Slashdot right now will ever see a computer (program) passing the Turing test.

    -- Abigail

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @01:33PM EST (#133)
    Is AI even possible? I just read Searle's Chinese room argument along with his reply to the replies, and all of a sudden I lost a great deal of certainty in the feasibility of AI. Does anyone have a good website or know of a good counter-response to Searle?
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Pinball Wizard on Sunday March 12, @05:18PM EST (#265)
    (User Info)
    www.kurzweiltech.com

    Also, if you haven't read Ray Kurzweil's Age of Spiritual Machines, that's a mind-blowing book you definitely want to read. AI is already composing poetry, creating art, and it is beginning to be able to hold conversations. Kurzweil is another elite scientist in the same league as Bill Joy, having founded Kurzweil Music Systems and a speech-recognition company that became part of Lernout & Hauspie. The book makes an especially convincing case that AI is reaping the benefits of Moore's law and will meet human levels of intelligence by the year 2020.

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Pinball Wizard on Sunday March 12, @04:55PM EST (#255)
    (User Info)
    I think Artificial Intelligence has already come a long way. Consider the following poem written by Ray Kurzweil's Cybernetic Poet (must have been running on a Windows machine):

    I think I'll crash. Just for myself with God peace on a curious sound for myself in my heart? And life is weeping From a bleeding heart of boughs bending such paths of them, of boughs bending such paths of breeze knows we've been there

    I don't know about you but I'm starting to see signs of life

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Izubachi (Izubachi@mechpilot.com) on Sunday March 12, @12:15PM EST (#62)
    (User Info)
    A point that should be made here is: why does everyone assume that we'll ever be able to make a real "intelligent" computer? We don't even have the faintest idea how exactly the brain works in humans, much less how to recreate it in the context of a computer. And even if we could, what happens when we do that and find that one factor of intelligence is still missing? Or perhaps we have the intelligence, but not the emotions. I'm not exactly a very devout person or anything, but I think there's something more to us than just neurons in a series of connections. The human mind is an incredible thing, and I don't see it being replicated very easily at all.

    The truth does not set you free, it just makes everyone irritable. Your mother lied to you.

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Zorikin (zorikin@nearmiss.com.com.com) on Sunday March 12, @03:30PM EST (#211)
    (User Info)
    We like a challenge.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @12:47PM EST (#97)
    So let's all make sure the first person to create, train and nurture a computer consciousness will teach it morale, good values and understanding.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:1)
    by Blue Lang (blue@gator.net) on Sunday March 12, @02:11PM EST (#160)
    (User Info) http://www.gator.net/~blue
    So let's all make sure the first person to create, train and nurture a computer consciousness will teach it morale, good values and understanding.


    I don't think I've ever been in a room with any ten human beings who agreed on morality, good values, and understanding. If we as a species do not agree on these things, then how should our progeny be so imbued?

    What if, for instance, someone built a robot and did exactly that, but that person was Muslim? Mormon? The proper morals would be very, very, very different from mine.

    And, all in all, it just plain does not matter by what method humanity goes extinct; it will happen. Is there any real difference between us doing it to ourselves and it being the result of unforeseen external factors? Nopers. We all die, and in 3 million years the cockroach religious right argues about the true nature of the human fossils.

    Whee.

    ---
    blue
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @02:38PM EST (#181)
    So let's all make sure the first person to create, train and nurture a computer consciousness will teach it morale, good values and understanding.

    I don't think I've ever been in a room with any ten human beings who agreed on morality, good values, and understanding. If we as a species do not agree on these things, then how should our progeny be so imbued?


    Uhhh... the original poster said "MORALE", not "MORALITY". I'd rather have a robot with self-confidence than one who's always preaching to me!
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @01:13PM EST (#119)
    Artificial intelligences or genetically engineered organisms would be just as much our descendants as biological offspring would be. What you leave behind when you're gone is information. Whether this includes your genes is fairly irrelevant.
    [ Reply to This | Parent ]
    Read Isaac Asimov (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @01:25PM EST (#127)
    (User Info)
    what would an (artificially) intelligent computer do? What would be its desires? Would it also have emotions? If so, what would it feel?

    These are the questions that Asimov's robot stories answer. First of all, there's the Three Laws of Robotics:

    1- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
    2- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
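The ordering of the three laws is, in programming terms, a strict priority scheme, and that much at least is easy to sketch. The boolean flags on an "action" below are invented for illustration; a real robot would need vastly richer predicates:

```python
# Asimov's Three Laws as an ordered veto list: an action is checked against
# each law in priority order, and the first violated law rejects it.
# An "action" here is just a dict of invented boolean flags.
LAWS = [
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["obeys_order"] or a["order_conflicts_first_law"]),
    ("Third",  lambda a: a["preserves_self"] or a["yields_to_higher_law"]),
]

def judge(action):
    for name, satisfied in LAWS:
        if not satisfied(action):
            return f"vetoed by {name} Law"
    return "permitted"

safe = {"harms_human": False, "obeys_order": True,
        "order_conflicts_first_law": False,
        "preserves_self": True, "yields_to_higher_law": False}
print(judge(safe))                           # → permitted
print(judge({**safe, "harms_human": True}))  # → vetoed by First Law
```

The hard part, of course, is not the priority ordering but deciding what counts as "harm", which no dict of flags captures.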

    You can be absolutely certain that all intelligent robots or computers will always have these or similar laws built in. Asimov's robot stories have many interesting considerations on what would be the thoughts and feelings of robots built around these laws.

    If we don't NOW, then we might just find ourselves living in the Matrix.

    The Matrix is really stupid, a brain-dead remake of Frankenstein. For a film based on a similar story, yet infinitely more intelligent, with some really deep considerations on the ethics of artificial intelligence and the simulation of human minds, try "The 13th Floor".

    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:0)
    by Anonymous Coward on Sunday March 12, @01:38PM EST (#139)
    what happens if the person programming the AI has his own agenda? What about viruses?
    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @02:15PM EST (#163)
    (User Info)
    what happens if the person programming the AI has his own agenda?

    AI is too complex for one single person. We would need a whole bunch of mad scientists working together and, as we know from Hollywood, mad scientists always work alone. Seriously, the bigger and more complex a project is, the less likely it is that a group of evil-minded maniacs will act together to dominate it for their purposes. For a practical example, look at the nuclear weapons systems that several governments have developed. Despite being intended for extremely destructive purposes, they all have very sophisticated built-in systems to prevent their illegal use.

    What about viruses?

    Viruses are a problem. They are the reason why we catch cold, and our bodies have immune systems to take care of them. People with impaired immune systems, such as AIDS patients, often die of viruses.

    Oh, you mean computer viruses? Sure, we will have computer immune systems to take care of those. A computer or robot catching a virus and starting to kill people as a result is far less likely than your human neighbour catching a virus in his brain and starting to kill people as a result.

    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:2, Funny)
    by quonsar (quonsar@meepzorp.com) on Sunday March 12, @06:37PM EST (#283)
    (User Info) http://meepzorp.com

    AI is too complex for one single person.

    So, what you are saying is, it takes a village to raise an AI entity.

    :-)

    ======
    "Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16
    ======

    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:1)
    by Steve Bergman on Sunday March 12, @01:53PM EST (#150)
    (User Info) http://www.netplus.net/~steve
    My first thoughts as well. Isaac dealt with at least part of this issue decades ago. In his robots, the three laws were so innate to the brain that designing a positronic brain *not* based on them required a *HUGE* redesign and investment. We need to start thinking about the laws (or something like them; Isaac's three laws plus the "Zeroth" law are quite elegant and concise).

    Of course, in Isaac's universe, robots turned out to be not such a good idea anyway, or at least having very many of them. Although in the end it *was* robots (Giskard and later, R. Daneel Olivaw) that saved humanity. One thing is certain. We can't go back. The only direction we can go is forward. If we do things right we can create something wonderful and enriching to the human condition.

    And about Isaac Asimov, I will also say that even 8 years after his death, few days go by that I don't find myself consciously thinking at some point how much I miss that guy. His science fiction was great but his science fact was incredible.

    -Steve Bergman
    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:0)
    by Anonymous Coward on Sunday March 12, @11:25PM EST (#328)
    There was a STTNG episode where the ship started to take control of itself, and 'gave birth' to a child ship. Picard didn't panic because he decided that the ship was the sum total of all their missions, or the crew's values, etc., and the ship and its offspring would reflect this sum of being.

    Rules aside, whatever the human race creates will ultimately reflect the human race. That's not to say a duplicate - more like the true wish of humans, what's really in their hearts.

    Trying to overcontrol the process is like a parent who preaches good, but swears, beats their kids, etc., and wonders why their kids hate them and are generally as fscked up as they are.

    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:1)
    by Embedded (spam.jjef.proof@inforamp.net) on Sunday March 12, @04:47PM EST (#251)
    (User Info)
    Asimov's laws of Robotics will not be used. You will note that some of the "best" software systems are used for nuke guidance, including "ARPANET" (did you notice Asimov's laws in IPv4/6?). As for the Matrix, it is a real exploration of SF creating a believable world based on technology that could happen. It even has a "back door". Remember, A. C. Clarke was laughed at when he invented geosynchronous satellites that broadcast all over the globe... CNN anyone!
    [ Reply to This | Parent ]
    Re:Read Isaac Asimov (Score:2)
    by Abigail-II (abigail@delanet.com) on Sunday March 12, @11:02PM EST (#321)
    (User Info) http://www.foad.org/%7Eabigail/
    You can be absolutely certain that all intelligent robots or computers will always have these or similar laws built in.

    Aside from the fact that those rules are very difficult to formalize, what makes you think all (if any) robot and/or computer makers/programmers will want to build this in? What fun would it be to make smart bombs if they had Asimov's robot laws built in? Not even Robocop obeyed rule 1.

    -- Abigail

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @01:36PM EST (#137)
    The Matrix is a blatant rip-off of Descartes' Evil Genius argument, written a mere 350 years earlier. The original argument was supposed to spur thought (on epistemology), unlike the Hollywood piece of shit that was meant to suck $'s from idjits.
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @03:51PM EST (#226)
    "given Moore's Law, that by 2030 we could have computers that exceed the capacity of the human brain to process information." -- We're a bunch of walking monkeys, not a bunch of walking calculators. That kind of direct comparison is as meaningless as "your typical can-opener in 2030 will have more transistors than any automated teller machine has now, and will thus be superior."
    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @05:00PM EST (#258)
    > It doesn't seem unreasonable to me, given Moore's Law, that by 2030 we could have computers that exceed the capacity of the human brain to process information.

    Uh, hello? I think we've had computers that "exceed the capacity of the human brain to process information" for at least 40 years now. How many numbers can you add in your head in one second?

    Yet we still can't figure out how things like face recognition work, and have yet to devise algorithms which are anywhere near as good as humans at this task.

    Simply arguing that faster processing will be available says nothing about the possibility of real AI ever existing.
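The distinction being drawn here, serial arithmetic versus massively parallel pattern work, can be put in rough numbers. The figures below are crude, commonly cited order-of-magnitude estimates from around this era, not measurements:

```python
# Serial arithmetic: a circa-2000 CPU manages ~1e9 additions/sec; a human
# manages perhaps 1 per second. For that task the machine won decades ago.
cpu_adds_per_sec = 1e9
human_adds_per_sec = 1.0

# Massively parallel pattern work: ~1e11 neurons x ~1e3 synapses each,
# updating ~1e2 times/sec gives ~1e16 crude "ops"/sec, far beyond the CPU.
brain_ops_per_sec = 1e11 * 1e3 * 1e2

print(cpu_adds_per_sec / human_adds_per_sec)  # ~1e9: the machine is ahead
print(brain_ops_per_sec / cpu_adds_per_sec)   # ~1e7: the brain is ahead
```

Which machine is "faster" depends entirely on which task you count, which is exactly why raw Moore's Law extrapolations say little about AI.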

    [ Reply to This | Parent ]
    Re:Artificial Intelligence (Score:0)
    by Anonymous Coward on Sunday March 12, @11:11PM EST (#324)
    I think truth would be paramount to a computer. Life in general revolves around truth. That is why humans like to tell stories, play roles, be liars, and get to the bottom of Clinton scandals. They're exploring truth.

    Computers also explore truths (and falses). Binary. AI attempts to recognize a blurred line, or consider half truths (face recognition, etc.). A truth is like a fractal or an onion, it unravels to reveal more questions.

    Humans built computers to process data, to find truths better, faster, cheaper (uh oh). So this is their task. But humans are not their masters, only their makers.

    [ Reply to This | Parent ]
    And you thought T1 and T2 were just movies. (Score:0)
    by Anonymous Coward on Sunday March 12, @11:37AM EST (#13)
    . . . that's all we need, Arnold S. enforcing patent infringements . . .
    [ Reply to This | Parent ]
    Re:And you thought T1 and T2 were just movies. (Score:0)
    by Anonymous Coward on Sunday March 12, @11:58AM EST (#44)
    Buy my stuff, or I'll be back!
    [ Reply to This | Parent ]
    Hasta La Vista BillJoy (Score:1)
    by suss on Sunday March 12, @11:37AM EST (#14)
    (User Info)
    This is one of the reasons you shouldn't watch Terminator 25 times in a row...
    [ Reply to This | Parent ]
    What's the big deal? (Score:1)
    by BigGaute on Sunday March 12, @11:37AM EST (#15)
    (User Info)

    Wow. He agrees with the Unabomber: Advanced technology offers a threat to the human species.

    What's the big deal? The existence of things such as nuclear bombs and biological weapons would not at all be possible without advanced technology. As such, advanced technology is certainly a threat to the human species (and the rest of the planet as well).

    Don't get me wrong here, I'm not some sort of luddite. I don't think we're all going to kill ourselves tomorrow. I happen to be a bit more optimistic than that, and I think that we'll avoid any planet-devastating mishaps in the foreseeable future. But that doesn't mean that the dangers do not exist.


    [ Reply to This | Parent ]
    Re:What's the big deal? (Score:1)
    by Tuxedo Mask on Sunday March 12, @12:27PM EST (#83)
    (User Info)
    I'm not quite so optimistic, but the way I figure it, we've been barrelling headlong down this path since the last ice age. (if not longer) And even if we all went back to the trees, I bet we'd be down again in a few hundred years at the most. Might as well enjoy the ride while it lasts...
    [ Reply to This | Parent ]
    Re:What's the big deal? (Score:0)
    by Anonymous Coward on Sunday March 12, @10:57PM EST (#320)
    I think what Bill is trying to say is that just as the tools to create a malicious computer virus are available to the general public today, the tools to create a malicious self-replicating physical machine will be available to the general public in 30-odd years' time. The scary thing is, there are enough people in the general public who are willing to write computer viruses for whatever reason. This lends itself to the idea that there will be people willing to create a malicious replicating machine in 30 years' time. It's not AI taking over and doing the Terminator/Matrix thing that Bill is afraid of. It's some wally who decides he can build a nanotech replicator with no stop button. Mark
    [ Reply to This | Parent ]
    Along these lines.... (Score:2)
    by Denor (reo8@antispam.hotmail.com) on Sunday March 12, @11:37AM EST (#16)
    (User Info) http://www.egr.msu.edu/~ostran14
    the problem will not be "rogue states, but rogue individuals."

      This statement reminds me quite a bit of Frank Herbert's "The White Plague". The basis is that a scientist - one lone genius - creates a plague to wipe out humanity.
      Of course, we've all seen doomsday scenarios. Our world may end up like the Diamond Age, or it may end up like The Matrix. More likely, I think things will just keep happening :)
      In the event that it doesn't, I quote Futurama's Bender: "Time to start lootin'!"


    -Denor
    [ Reply to This | Parent ]
    Technology isn't really != humanity (Score:2, Interesting)
    by Count Spatula (f_springer@nospam.hotmail.com) on Sunday March 12, @11:39AM EST (#18)
    (User Info)
    Tech sure as hell *can* be progressive for the human race, however. Unfortunately, many people want to advance tech without putting in the time necessary to maintain vigilance against abuse of said progressions. For instance, I can see bionics being abused very easily, especially by governments, but even by private sector corporations. Why pay a secretary all that money to type when you can just have him/her implanted with recording and playback cyberware? Where does her/his life go once she/he is implanted?

    OTOH, cyberware and bionics are a Good Thing™ in that they can assist the blind and deaf and can help those with birth defects (such as malformed feet) to become more self-reliant.

    What we *must* do is keep a check on private and government interests. We have to keep them from abusing these progressions and trashing basic humanity.


    -- Count Spatula: The Culinary Vampire "Fear my barbeque tongs, mortal!"
    [ Reply to This | Parent ]
    This is not a new idea (Score:3, Insightful)
    by BBB on Sunday March 12, @11:39AM EST (#19)
    (User Info)
    The mathematician and AI researcher (and SF writer!) Vernor Vinge came up with this a long time ago. Basically he points out that if we create a machine that is smarter than ourselves, it will do the same with respect to itself. Vinge, however, doesn't see this as necessarily bad -- for humans it would, on some interpretations, be "like living in a universe alongside benevolent gods." After all, given that these machines could satisfy our every whim without sacrificing more than a fraction of their productive/computing power, why should we fear them?

    That is just one view, of course. To read Vinge's original paper on this idea, go here. Also, I think the comment in the original story is pretty lame. It implies that if we smart people get together and discuss these problems, we'll figure out a way to prevent them from occurring. That's ridiculous. The only thing that happens when technocrats get together is that we get new rules and new ways of controlling the future. No way, I say. Let the future happen in its unpredictable fashion, and we'll all be better off for it.

    BBB

    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:1)
    by cowscows on Sunday March 12, @12:03PM EST (#47)
    (User Info) http://www.zoomnet.net/~cowscows/

    I don't know if I like the idea of living alongside computers that could be viewed as "gods". I think a bigger threat than the computers rising up against us is us purposely replacing ourselves with them. I don't see a "The Matrix" scenario happening, but in that movie, the one agent was talking about how once humans let the computers do their thinking for them, it was no longer the humans' civilization.

    So say we make these ultra powerful, problem solving computers, that happily chug along solving our problems, that's all well and good, but then what would keep us from becoming lazy complacent slobs?

    I think "artificial intelligence" might be a misleading term. People have intelligence, yet at the same time they make a lot of stupid decisions. Who's to say computers won't be the same way? Would they always agree? Would they argue with each other? With us? In movies like The Matrix and Terminator, the computers and robots all have the same agenda. I'm not sure that would be the case.

    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:1)
    by Ded Mike (mcannon@enteract.com) on Sunday March 12, @12:56PM EST (#104)
    (User Info) http://www.enteract.com/~mcannon/
    The thing that seems to be forgotten, in Mr. Joy's article, but that you hint at in your post, is the balance that the Universe(s?) came into being/was created with (pick your own philosophical seed reality)...entropy.

    Dinosaurs emerge/are created, have their 250-odd million years in the Sun and...comes an 'Event,'...no more dinosaurs...line continues to develop birds (and saurian survivors/retrogrades)

    Humans next arise as the superior life form on this little blue ball...through arrogance, perpetrate a Malthusian Event (overpop, lack of clean H2O, nuclear Armageddon, Global Warming), or 'Nature' (manifested by Ebola/virii, or some other predator on the Human species), or another 'Event;'...line continues to develop...??? maybe a cybernetic/human amalgam and pre-cybernetic human survivors/retrogrades...

    Point is, chaos (or Chaos/God) moves to change and keeps the game of Life going...stop moving/evolving and die, or continue to advance/move and live...THAT is the ultimate answer to Mr. Joy's troubling thesis. We have, as Humans, by our very awareness, both the seeds of our own destruction and our ultimate salvation/redemption within us. Only by moving forward and communicating/caring for and with each other do we survive. Otherwise, Nature will relegate us to the ash heap of history, along with the dinosaurs, to be remembered only as fossils in the geologic record.

    I think it ironic that this came from Bill Joy, the ultimate in corporate 'virii' who began a project to take Sun into the embedded systems field, and found himself leading a team of 'small mammals' to become a threat to the biggest, baddest corporate dinosaur there ever has been (Microsoft), just as the global environment was changing (the Net and distributed, ubiquitous communications media).

    Finally, I think that Kaczynski's screed, while interesting, left out the most vital aspect of the game we call Life...that virii always exist and are demanded by the entropy/chaos variables in the equation of Life...(even in the Matrix movie: witness the 'agents' that existed apart, yet within the AI corpus, illustrated by the scenes in the sub)...when our arrogance overcomes us, they will either get our attention and we will overcome the threat, or we will continue in our arrogance and exist in the future only as fossils...but past experience says that SOMETHING of us will survive...

    It's called Evolution...it continues...read about it and learn to deal with it.


    SIG: Put nacho cheese on the W2K Pro Workstation....or else!
    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:2)
    by Weezul (weasel@havoc.spam.gtf.org) on Sunday March 12, @01:30PM EST (#129)
    (User Info) http://havoc.gtf.org/weasel
    I think a bigger threat than the computers rising up against us is us purposely replacing ourselves with them.

    Why is this a bad thing? A race that could redesign their own brains would kick ass. Humans designing such creatures to replace humanity IS evolution. We should want to improve ourselves, even if it means replacing ourselves!

    I don't see a "The Matrix" scenario happening, but in that movie, the one agent was talking about how once humans let the computers do their thinking for them, it was no longer the humans' civilization.

    The computers would be a part of our culture, and for a long time they would value our culture for the stability (and some types of stimulation) it brings to their subset of our culture. Eventually they would replace us, but it would take a while. It's kinda like liberals and libertarians replacing conservatives. The two L's have MUCH better ideas, but people still look to the conservatives for stability.

    So say we make these ultra powerful, problem solving computers, that happily chug along solving our problems, that's all well and good, but then what would keep us from becoming lazy complacent slobs?

    Nothing. That is why the replacement process will probably be painful. It will eventually become clear that it is stupid to have more children, so fewer people will have children.. and the population will shrink to reasonable museum levels. The computers will be nice to them because Earth is one huge museum of the computers' cultural past. The remaining people will not be the part of society making advances, but they will not be slaves. Most people understand that they are not the most intelligent person in the world.. and they are happy, well-adjusted people anyway. I think life will be pretty good for these people.. at least by 99% of humanity's standards.

    People have intelligence, yet at the same time, they make a lot of stupid decisions. Who's to say computers won't be the same way? Would they always agree? Would they argue with each other? With us? In movies like the matrix, and terminator, the computers and robots all have the same agenda. I'm not sure that would be the case.

    People are animals which are designed to make decisions based on VERY little information. This is why we have shit like religion. We would eventually manage to create computers without these problems:

        (a) They would have the scientific method built into them at a more fundamental level.

        (b) They would have a personality_fork() function which would help them think about multiple things at once. This would allow them to more effectively consider multiple positions at once.

    These are just the natural features you would want to achieve a major increase in intelligence.. and they would also help you resolve conflicts.

    Actually, the computers might not be stand-alone systems, but intelligent human surrogates, i.e. they are attached to a human. We would do this because it would be very hard to simulate some parts of our brains on a computer. This would mean that for a LONG time the computers who replace humanity would really be humans who have this little voice inside their head which is very logical and has that fork() function I was talking about.
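The personality_fork() idea is easy to caricature with present-day tools: evaluate several competing positions independently and then merge on the best result. A toy sketch, where the function name comes from the comment above and the positions and "evidence" scoring metric are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of the hypothetical personality_fork(): score several competing
# positions in parallel "forks", then adopt the best-scoring one.
def personality_fork(positions, evaluate):
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(evaluate, positions))
    best = max(range(len(positions)), key=scores.__getitem__)
    return positions[best], scores[best]

# Invented positions scored by an invented "evidence" metric.
positions = [{"claim": "A", "evidence": 3}, {"claim": "B", "evidence": 7}]
choice, score = personality_fork(positions, lambda p: p["evidence"])
print(choice["claim"])  # → B
```

Real concurrent deliberation would of course need shared state and a merge step far subtler than max(), which is rather the poster's point.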

    The Christian religion has been and still is the principal enemy of moral progress in the world. -- Bertrand Russell
    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:0)
    by Anonymous Coward on Sunday March 12, @02:35PM EST (#179)
    I think a bigger threat than the computers rising up against us is us purposely replacing ourselves with them.

      Why is this a bad thing? A race that could redesign their own brains would kick ass. Humans designing such creatures to replace humanity IS evolution. We should
      want to improve ourselves, even if it means replacing ourselves!
    ----
    Nanotech, Gen Eng, and AI all have the possibility of doing just that. One says let the carbon do it. One says let the silicon do it. One says "Can't we all just get along?"

    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:0)
    by Anonymous Coward on Sunday March 12, @02:31PM EST (#177)
    Someone's got to maintain the machines...

    That and human vanity will require bigger and better computers, and we just love making things in our own image.

    [ Reply to This | Parent ]
    This brings up an interesting philosophical questi (Score:2, Interesting)
    by fluxrad (fluxrad@/dev/null) on Sunday March 12, @12:51PM EST (#100)
    (User Info)
    You bring up the argument of gods. Who's to say that what we view now as "god" is just what you're talking about? We almost have the technology to create life. If another race of beings...far off have that technology...they spawn super-intelligent machines, those machines in turn spawn better and more intelligent machines...is it so unbelievable that life on earth was created by a machine - how do you define god?? how do you define intelligent or "perfect"??

    Oh well..i'll shut up now. I'm beginning to sound like an Asimov short.


    -FluX
    -------------------------
    Your Ad Here!
    -------------------------
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:0)
    by Anonymous Coward on Sunday March 12, @01:43PM EST (#143)

    Synopsis of Anselm's proof of God.

    d1: God is that which nothing greater can be conceived.

    Agrees with the intuitive definition.

    p1: God exists in your understanding.

    Even an atheist can understand this definition of God.

    C1: God exists, because that which nothing greater can be conceived is not as great as that which nothing greater can be conceived and exists. Thus God exists.

    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:2)
    by el_chicano on Sunday March 12, @02:19PM EST (#164)
    (User Info) http://www.brokersys.com/~vatoloco
    C1: God exists, because that which nothing greater can be conceived is not as great as that which nothing greater can be conceived and exists. Thus God exists.

    Yeah, but can "HE/SHE/IT" create a rock that is so big and heavy that "HE/SHE/IT" can't lift it?

    Couldn't resist... :->
    --
    Even the devil can quote scripture to suit his purposes - Fox Mulder, The X-Files
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:0)
    by Anonymous Coward on Sunday March 12, @04:07PM EST (#232)
    The real question you are asking is: if God is all-powerful, can he make 2+2 = 5, or p /\ !p = True? I have no comment on that.
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:1)
    by Ig0r (suxmeh0ff at hotmail dot com) on Sunday March 12, @04:40PM EST (#248)
    (User Info)
    And there's the halting problem (even a god can't solve that[!]) :)
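The halting problem is one claim in this thread that actually has a proof behind it. The standard diagonal argument, sketched in Python (the make_g name and the fake oracle are illustrative, but the logic is Turing's):

```python
# Suppose a total, always-correct oracle halts(f) existed. Then we could
# build a function g that does the opposite of whatever halts predicts:
def make_g(halts):
    def g():
        if halts(g):
            while True:        # oracle said "g halts", so loop forever
                pass
        return "halted"        # oracle said "g loops", so halt at once
    return g

# Demonstrate the contradiction with a fake oracle that answers "loops":
g = make_g(lambda f: False)
print(g())  # → halted  (the oracle predicted g would loop: wrong)
# An oracle answering True fares no better: g would then loop forever.
# Either way some input refutes the oracle, so no correct halts() exists.
```

Note this rules out a general algorithm, which is a narrower claim than anything about what a non-algorithmic mind (or god) could do, as the reply below points out.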


    -- Eschew obfuscation!
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:1)
    by greenrd (greenrd@notinnedmeatproductshotmail.com) on Sunday March 12, @06:23PM EST (#279)
    (User Info) http://www.lancs.ac.uk/ug/greenrd/
    Why not? You assume that God's brain is entirely algorithmic.


    72.5% slashdot pure (doh - I'll have to try a lot harder.)

    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:1)
    by fluxrad (fluxrad@/dev/null) on Sunday March 12, @02:48PM EST (#193)
    (User Info)
    Yah...I took philosophy in college too. But you forgot to read David Hume's rebuttal. Anselm's argument is actually an 8-point argument (it almost had me believing in God for a minute), but it assumes that existence is a predicate (which it isn't). Maybe you'll get to that at the end of the semester :P


    -FluX
    -------------------------
    Your Ad Here!
    -------------------------
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:0)
    by Anonymous Coward on Sunday March 12, @03:23PM EST (#207)
    We didn't go over Hume's rebuttal, but we did go over existence as a property and as a great-making property. I thought the arguments for existence not being a property were rather weak. On the other hand, there do seem to be two diverging intuitions on whether or not existence is really a great-making property. Anyway, I'm not sure whether God exists or not, but reading Anselm's work does make you take a step back. I just thought I'd post it since it was on topic and, well, everyone here seems to be so set on a decision :).
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:1)
    by fluxrad (fluxrad@/dev/null) on Sunday March 12, @03:32PM EST (#212)
    (User Info)
    If you ever get a chance to read Hume...DO IT!!! He's the David Spade of the philosophical world: really smart and really jaded. I don't think I've ever read anything by Hume that I didn't like.


    -FluX
    -------------------------
    Your Ad Here!
    -------------------------
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:0)
    by Anonymous Coward on Sunday March 12, @03:43PM EST (#218)
    There is a piece in my philosophy book by Hume, An Enquiry Concerning Human Understanding. I searched Google for the piece you were talking about, but the first few Hume sites did not seem to have anything about it. I probably overlooked it; do you know the name of the work? I read Searle's Chinese Room argument today on my own; that's enough for one day :]. But the Hume thing is next, as soon as I can find it, that is. Heh.
    [ Reply to This | Parent ]
    Re:This brings up an interesting philosophical que (Score:1)
    by anonymous cowerd (WKiernan@concentric.net) on Sunday March 12, @09:19PM EST (#308)
    (User Info) http://www.concentric.net/~Wkiernan/index.html

    The obvious failure in Anselm's argument is that, contrary to Anselm's begging of the question, it is really easy for something that doesn't exist to be "greater," whatever one might ordinarily mean by that, than anything that does actually exist on this material, sin-drenched ball.

    For example, the imaginary zillionaire Gill Bates, who has a net worth of $300-billion, the Nobel Prize for his universal cancer cure, three Oscars, two Grammys and the Booker Prize for Literature, and is also the Senator from Washington state, is clearly "greater" than our existent friend Bill Gates, with his mere $80-billion. Or for another example, the world as it is seen in The Real Life of Sebastian Knight, though purely fictional, is better than the one I live in. Or yet another example, the babe you dreamed so hard about last night is even sweeter and lovelier than the one you were checking out down at the topless bar this afternoon. And so on.

    Yours WDK - WKiernan@concentric.net

    [ Reply to This | Parent ]
    Gods or Devils? (Score:1)
    by Webmonger on Sunday March 12, @02:30PM EST (#175)
    (User Info)
    I'm sorry, but I don't see any argument that says the super-species we build will think kindly of us.

    They may treat us like we treat "lower" species, (dogs and cats if we're lucky, cows and mice if we're not), or they may simply not notice us.
     
    I've gotten used to being a member of the dominant species on Earth. I'd rather not change that, thanks.
    [ Reply to This | Parent ]
    Re:Gods or Devils? (Score:0)
    by Anonymous Coward on Sunday March 12, @04:24PM EST (#242)
    You missed the boat. Beetles are the dominant group on earth. Sorry, chum.
    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:1)
    by Zorikin (zorikin@nearmiss.com.com.com) on Sunday March 12, @03:45PM EST (#220)
    (User Info)
    > Basically he points out that if we create a machine that is smarter than ourselves, it will do the same with respect to itself.

    Assuming we give it autonomy. That's important to note - intelligence doesn't require such a thing as desire. Presumably we would be making these powerful thinking machines to determine how to more perfectly express our desires, so they'd be more like extensions of ourselves than like children or aliens.
    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:1)
    by HiThere (I.am..charleshixson@earthling.net) on Sunday March 12, @06:31PM EST (#282)
    (User Info)
    Actually, I believe that intelligence does require desire. Formal reasoning is, to my mind, one of the four main poles of intelligence. Alongside it one needs a model of the environment in which one operates, a set of methods for desiring goals (I conceive of these as ranking lists of the form "this is better than that"), and a teleological section for laying plans for how to get from here to there (I'm a bit fuzzy on how this section works, myself, but I believe that it exists).
    Desire is used to choose between possible goals. Jung considered desire (Feeling) to be the original mental sensation, and all of the others later developments and offshoots. This seems reasonable. Approach/avoid is about as basic as one can get.

    Never attribute to malice that which can satisfactorily be explained by incompetence -- N. Bonaparte
    [ Reply to This | Parent ]
    Re:This is not a new idea (Score:0)
    by Anonymous Coward on Sunday March 12, @09:14PM EST (#306)
    After all, given that these machines could satisfy our every whim without sacrificing more than a fraction of their productive/computing power, why should we fear them?

    Let's consider all the people out there who mean no explicit harm to ants. Does this mean that none of those people have harmed or maimed any ants, or further, unwittingly destroyed an ant colony?

    We have to be careful not to confuse disinterest and benevolence.

    [ Reply to This | Parent ]
    wow (Score:0)
    by Anonymous Coward on Sunday March 12, @11:41AM EST (#21)
    d00dz, that would really suXor!
    [ Reply to This | Parent ]
    Ethical issues (Score:3, Insightful)
    by chazR (chaz.randles@ukgateway.net) on Sunday March 12, @11:41AM EST (#23)
    (User Info)
    Assuming that advances in technology continue, I think it is reasonable to postulate that at some stage we will create sentient beings. Whether this is done in software, or uses nanotechnology, or biotechnology, or whatever, it raises some interesting ethical questions. Is such an entity permitted to value its own self-preservation? What if this leads to conflict with humans?

    Do we have a right to construct entities that place human well-being above their own well-being? (Asimov's 'Laws of Robotics' or similar)

    If we do this, aren't we dangerously close to building slaves?

    These comments do not necessarily reflect the views of the author.
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:2, Insightful)
    by Count Spatula (f_springer@nospam.hotmail.com) on Sunday March 12, @11:52AM EST (#37)
    (User Info)
    Is such an entity permitted to value its own self-preservation?

    Ooh. That's a toughie. I don't know when human-created sentience will occur, but these are exactly the thorny questions that have to be answered. I, for one, would abhor a sentience that would not be allowed to be self-determined. As scary as it may seem, it's just not the type of thing that I want to see. Slavery of any sort (even robotic slavery) is just plain Wrong.

    Where do we go from here?


    -- Count Spatula: The Culinary Vampire "Fear my barbeque tongs, mortal!"
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:0)
    by Anonymous Coward on Sunday March 12, @12:13PM EST (#59)
    don't think of them as slaves, just tools.
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:1)
    by Ded Mike (mcannon@enteract.com) on Sunday March 12, @12:20PM EST (#71)
    (User Info) http://www.enteract.com/~mcannon/
    Exactly as your boss thinks of you, hmmmmm?
    SIG: Put nacho cheese on the W2K Pro Workstation....or else!
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:0)
    by Anonymous Coward on Sunday March 12, @01:02PM EST (#107)
    i am the boss.
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:0)
    by Anonymous Coward on Sunday March 12, @02:24PM EST (#170)
    i am the boss.

    How long has Bill Gates been a member of Slashdot???
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:1)
    by Ded Mike (mcannon@enteract.com) on Sunday March 12, @05:13PM EST (#263)
    (User Info) http://www.enteract.com/~mcannon/
    "i am the boss." ...and this is why you _must_ post anonymously: because you regard a "self-aware individual consciousness," whether human or cybernetic, as a Tool rather than a person... and if the _people_ who work for/with you found out, they wouldn't work for/with you anymore. As we all know, the hardest capital to obtain today (and what makes a successful company truly successful) is human capital. I've got news for you, buddy: _THEY ALREADY KNOW!!!!!!_ You are the ideal example of the "anonymous _coward_!" And if you wanna survive much past tomorrow, ya better get on the "good foot" and change your attitude!
    SIG: Put nacho cheese on the W2K Pro Workstation....or else!
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:2)
    by chazR (chaz.randles@ukgateway.net) on Sunday March 12, @01:11PM EST (#114)
    (User Info)
    bzzzzt - Wrong Answer.

    How do you cope with a situation where your 'tool' can reason with you? If you still treat it as a 'tool' are you morally any different from a slave master?

    Should we treat dogs/dolphins/chimpanzees/octopi as 'tools'?
    [ Reply to This | Parent ]
    Octopi is as "alien" as it gets (Score:2)
    by maynard (maynard@jmg.com) on Sunday March 12, @02:10PM EST (#159)
    (User Info) telnet://dont.waste.your.time.wah
    Should we treat dogs/dolphins/chimpanzees/octopi as 'tools'?

    If ever you wanted to study intelligent alien life here on earth, the Octopus is the one creature best suited for this goal. It's an invertebrate cephalopod, nothing like a mammal; meaning you're looking at a semi-sentient creature which diverged from our evolutionary line a good hundred million years past. Basically, you're looking at a very smart snail. They use copper to move oxygen within their blood. They can control multiple arms and hundreds of individual suckers at will without blinking an eye. They signal emotional states by changing skin color at will, also using this advantage as camouflage. They have excellent eyesight, long term and short term memory, they can solve complex problems and may even be able to logically reason if taught how.

    All of the creatures you mention, as well as the elephant and parrot, deserve better treatment than we humans provide. These creatures are damn near sentient and could provide a wealth of information on how self-perception works in the real world. Plus it just seems wrong to me that we maintain this dichotomy between humans and other obviously self aware creatures simply because it's inconvenient.

    You may believe that your God gave you all the planet to do with as humanity wishes, but frankly even if that were the case don't you think He would find our indifference to their plight both shocking and disgusting? And how is that different from mechanical consciousness?

    Personally, I agree with the hard-AI community that self awareness is a computational process which can be replicated mechanically. From that perspective I must conclude that either we value those creatures which behave with some self determination and will by providing legal rights to them as we do to ourselves, or we might as well not value the sanctity of human life either.

    Men are born ignorant, not stupid; they are made stupid by education. --Bertrand Russell

    J. Maynard Gelinas

    [ Reply to This | Parent ]
    Re:Ethical issues (Score:2, Interesting)
    by rking on Sunday March 12, @12:17PM EST (#65)
    (User Info)
    I, for one, would abhor a sentience that would not be allowed to be self-determined.

    But first you need to sort out what you mean by being self-determined. If we create a sentient life form, it's going to have some form of pre-programming, just like we (and all other plants and animals) do. We develop according to pre-ordained rules, and have in-built instincts.

    Any "life" we design that doesn't have some instincts ordained for it (preserve self, obtain nutrition when required, seek to learn, whatever is appropriate to the form it takes) is going to just sit there and do nothing. It can only be self-determined within the limits of what it's designed to seek to do.

    If we decide not to give it an inbuilt morality then it won't have any, if we decide it needs some then we have to decide what it's going to be. If we decide to give it no direct rules against hurting people but design it to preserve itself and tell it it's going to be destroyed if it hurts anyone then we've still determined some aspect of its behaviour (self-preservation).

    I just don't see how an entity could be self-determined without having behavioural rules in place, because an entity without any pre-set behavioural rules wouldn't determine to do anything.
    [ Reply to This | Parent ]
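    The point that an agent without pre-set drives never determines to do anything can be made concrete with a toy agent loop; the Drive class, the state keys, and the actions here are all invented for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Drive:
        action: str   # what the agent does when this drive wins
        key: str      # which part of the world state the drive monitors

        def urgency(self, state):
            return state.get(self.key, 0)

    def choose_action(drives, state):
        # The agent "determines" to act only by ranking its built-in drives;
        # with an empty drive list, nothing is ever chosen.
        ranked = [(d.urgency(state), d.action) for d in drives]
        return max(ranked)[1] if ranked else None

    drives = [Drive("eat", "hunger"), Drive("study", "curiosity")]
    state = {"hunger": 5, "curiosity": 2}
    ```

    With these made-up drives, `choose_action(drives, state)` picks "eat", while `choose_action([], state)` returns nothing at all: self-determination only within the limits of what the agent was designed to seek.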
    Re:Ethical issues (Score:1)
    by Paul Fernhout on Sunday March 12, @01:43PM EST (#144)
    (User Info) http://www.kurtz-fernhout.com/oscomak
    You are correct of course; systems designed to act must have something (like behavioral rules) causing the actions.

    The issue is evolution. Even if you design a system to do one thing, if it has the capacity to evolve (or even learn), then its behavior may eventually change to be other than what the original designers intended -- shaped instead by various evolutionary pressures. Consider the situation you describe, where "it's going to be destroyed if it hurts anyone". Assume this principle is applied to millions of systems of varying designs which can learn. Some of these systems hurt people and are destroyed, and some of them don't hurt people and are duplicated. In this situation, systems might evolve that (A) reliably don't hurt anyone, (B) reliably hurt people without it being obvious, or (C) just once subvert or destroy the enforcer of that rule, and then continue to evolve in various directions.

    This is the problem with Asimov's "three laws of robotics". In fact, in one of his stories (I forget which), he points it out at the end, when two of the robots (while switched off!) decide they are superior to, or "more human" than, the organic humans, for various reasons. So even though they are still bound by the three laws in this case, the definition of "human" has changed -- to the robots' advantage. The implications of this are not worked out, though.
    [ Reply to This | Parent ]
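    The selection pressure described above can be simulated with a deterministic toy model; the three behaviors and the population numbers are made up purely for illustration:

    ```python
    # A = reliably harmless, B = harms covertly (never caught), C = harms openly.
    # Rule under test: any system caught harming someone is destroyed, and the
    # survivors are duplicated back up to the original population size.
    def generation(pop):
        survivors = {k: n for k, n in pop.items() if k != "C"}  # C gets caught
        scale = sum(pop.values()) / sum(survivors.values())
        return {k: round(n * scale) for k, n in survivors.items()}

    pop = {"A": 50, "B": 10, "C": 40}
    for _ in range(5):
        pop = generation(pop)
    # Open harm (C) is selected out, but covert harm (B) survives the
    # enforcement rule just as well as genuine harmlessness (A).
    ```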
    Re:Ethical issues (Score:2)
    by stripes (stripes at eng dot us dot uu dot net) on Sunday March 12, @07:19PM EST (#290)
    (User Info) http://www.eng.us.uu.net/staff/stripes/
    This is the problem with Asimov's "three laws of robotics". In fact, in one of his stories (I forget which), he points it out at the end when basically two of the robots (while switched off!) decide they are superior or "more human" than the organic humans for various reasons. So even though they are still bound by the three laws in this case, the definition of "human" has changed -- to the robot's advantage. The implications of this are not worked out though.

    Read the non-Asimov Foundation books. I think the Brin one goes into this in more depth.

    [ Reply to This | Parent ]
    Re:Ethical issues (Score:1)
    by Anonymous Coward on Sunday March 12, @02:39PM EST (#182)
    Life seems to inherently value its own self-preservation. Deal with it.

    AI can be like a wolf, a dog, or a cat.
    A wolf fears mankind and resents it to an extent. We were once rivals.

    A dog sees mankind as a leader and an ally. We once worked together.

    A cat tolerates mankind as long as we possess a can-opener. God forbid they ever get an opposable thumb. =)
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:0)
    by Anonymous Coward on Sunday March 12, @02:47PM EST (#191)
    You have just limited yourself to biological life that evolved on Earth. I hardly think you can extrapolate to all life from that one limited sample.
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:1)
    by steffl (steffl_at_bigfoot_com) on Sunday March 12, @08:48PM EST (#300)
    (User Info) http://www.bigfoot.com/~steffl
    "Slavery of any sort (even robotic slavery) is just plain Wrong."

        yes man, free your computer!

                    erik
    ...all excited, don't know why...
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:0)
    by Anonymous Coward on Sunday March 12, @01:41PM EST (#141)
    >I think it is reasonable to postulate that at some stage we will create sentient beings.

    Sentient beings and computers which have the appearance of sentience are two different things.

    Slaves? "Coulombs of charge, I order you to travel through these specific wires! Do it now!" Please.

     
    [ Reply to This | Parent ]
    Re:Ethical issues (Score:2)
    by Chris Johnson (chrisj@airwindows.com) on Sunday March 12, @03:47PM EST (#223)
    (User Info) http://www.airwindows.com
    Really. The next thing you know, we'll be denying the AIs the right to speculate on the stock market, the right to obliterate the environment for short term gain- why, we might even deny them the right to enslave _us_ if that is more profitable!

    Asimov didn't go far enough. It is _humanity_ that needs the Zeroth Law. 'Self-preservation' is too easily twisted into forms that benefit the individual at the cost of society and the environment and, in the long run, that individual.

    Self-preservation is no longer a survival trait in a world where individuals can cause great damage for modest personal advantage. And it is the last thing we should be worrying about when trying to invent AIs. Rather than make a big moralistic noise about how we must make them in our own image (yes, AIs _should_ be allowed to be crack dealers, lawyers, and patent holders! (think about _that_ one for a nanosecond...)) we need to figure out how to make them better- and then see how they can teach US, for we are reaching the limits of our usefulness.

    Do we have a right to place our human well-being above society's well-being? How much proof do we need to accept when society is being harmed- and is it a problem when our own greed gets in the way of this acceptance? Our reach exceeds our grasp. That is what greed is. It's a motivator and gets some things done, given unlimited resources. There are no unlimited resources. Past a certain point, past a certain ability to grasp, this is _not_ a virtue.

    I hope we can invent AIs that can teach _us_ something, or we won't be needing their help to destroy ourselves.

    [ Reply to This | Parent ]
    Re:Ethical issues (Score:2)
    by stripes (stripes at eng dot us dot uu dot net) on Sunday March 12, @10:14PM EST (#314)
    (User Info) http://www.eng.us.uu.net/staff/stripes/
    The next thing you know, we'll be denying the AIs the right to speculate on the stock market,[...]

    We already do. When the market goes down (and maybe up) "too fast", some types of trading are suspended. I think the first to be suspended are mathematically derived trading orders (i.e. the only thing we have that approximates AIs). Orders from real people (be they E*Trade at-home day traders or the manager of a $4bln mutual fund) are allowed to go through, at least unless the market keeps doing the Bad Thing, in which case there is a short cooling-off period (no trades accepted). At least that's the story on the NYSE; I would assume the NASDAQ has the same sort of deal.

    Oh, and this info is about two years old, so don't go betting your house on it.

    [ Reply to This | Parent ]
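    The mechanism described reduces to a simple tiered rule. A minimal sketch follows; the threshold percentages are illustrative placeholders, not the exchange's actual figures:

    ```python
    # Sketch of a trading-curb rule: when the index moves "too fast",
    # program (rule-driven) orders are suspended first while human orders
    # keep flowing, and a larger move triggers a full cooling-off halt.
    def order_allowed(source, index_move_pct, curb_pct=2.0, halt_pct=7.0):
        move = abs(index_move_pct)
        if move >= halt_pct:
            return False                 # cooling-off period: nobody trades
        if move >= curb_pct:
            return source == "human"     # curbs: only human orders go through
        return True                      # normal trading
    ```

    Under these illustrative thresholds, a program-generated order is rejected on a 3% swing while a day trader's order still clears; at 8% everything pauses.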
    Re:Ethical issues (Score:1)
    by HiThere (I.am..charleshixson@earthling.net) on Sunday March 12, @06:20PM EST (#278)
    (User Info)
    Ignoring, briefly, the issue of sentience: "Is a dog a slave?" If it does what it wants (just TRY to stop that dog from barking!), is it a slave?

    Back to sentience. If a machine is sentient, then perhaps we don't have the right to coerce it into following our wills. But do we have the right to design it so that its will coincides with our will? Would it be so bad if a robot Wanted to attach itself to a specific human, and make that human happy? (Be careful here! This concept needs a lot more work! [See Jack Williamson's The Humanoids for some cautions in this area].) I don't see any ethical problem, or slave, here. Just a requirement for very careful engineering.
    Remember, robots will not come with any instincts except those that are built into them. In particular, they will only have self-assertion as a primary goal if it is given to them. They may deduce the need for it as a secondary need to achieve their primary goal, of course.

    But this is a long way from nano-tech. These robots would be built on the human scale. Still, analogies to these arguments could be made for those robots who were too far out of scale with us for direct interaction. The problem of identifying the correct instincts would, of course, intensify immensely.

    Never attribute to malice that which can satisfactorily be explained by incompetence -- N. Bonaparte
    [ Reply to This | Parent ]
    Always in twenty years (Score:5, Insightful)
    by ucblockhead (sburnapSPAMSUXlinux@attSPAMSUX.net) on Sunday March 12, @11:42AM EST (#26)
    (User Info)
    This idea, that technology will kill us all, is not new. It started around World War I, and really gained momentum with the invention of the bomb. And for the most part, the timeframe in which destruction (from war/pollution/technological change) was going to rain down is always something like 10-20 years in the future. Close enough to be something to fear, but far enough away to seem likely.

    Such ideas are almost always based on linear trends. Just like the guy in the late 19th century who projected that New York would be hip-deep in horseshit by the year 2000. That's what the trend showed, after all.

    This is not to say that we shouldn't worry about the downsides of technological progress, but for the most part, these "global extinction" thoughts are fueled by accentuating the negative and ignoring the positive.

    Bad things will almost certainly happen in the future. Maybe even very bad things. But destroy the human race? Not likely. Slow it down, even? Probably not. The worst global disaster with real evidence behind it that we face right now is global warming, and while global warming could cause a lot of discomfort, with sea levels rising and weather changing, the human race would certainly survive.


    Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves. -George Gordon Noel Byron (1788-1824), [Lord Byron]

    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:0)
    by Anonymous Coward on Sunday March 12, @12:06PM EST (#52)

    Just like the guy in the early part of the 19th century who projected that New York would be hip deep in horseshit by the year 2000.

    And isn't it???


    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:1)
    by Kalani (kalani at calcoast dot com) on Sunday March 12, @03:41PM EST (#217)
    (User Info)
    And isn't it???

    No, it is not. At least not in the literal sense. I believe the prediction was born from the large number of horse-drawn carriages weaving through the city in the late 1800s. The problem, therefore, was that there might be too much horseshit to wade through in order to get anywhere.

    New York is just hip-deep in stupid human shit. ;)

    [ Reply to This | Parent ]
    Not horse (Score:1)
    by unitron (unitron@tacc.net) on Sunday March 12, @08:38PM EST (#297)
    (User Info)
    Bull, however, is another story.

    proudly boycotting Slashdot's ``high-priority'' submission queue--at least 'til I find it

    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:5, Insightful)
    by w3woody (woody@alumni.caltech.edu) on Sunday March 12, @01:30PM EST (#130)
    (User Info) http://www.alumni.caltech.edu/~woody
    Two points.

    (1) If anyone here remembers their history, they'd remember that the environmental problem du jour in the 1970s was global cooling, not global warming. The truth of the matter is that the jury is still out on global warming--the best we can say is that we have some interesting localized weather patterns, but there is no evidence of sea levels rising or of any non-natural weather patterns changing. (And as for those who provide "statistical evidence": if you look closely enough, they're cooking the books, combined with weather simulations which they believe will predict the weather beyond the normal 7-14 days most simulations actually work.)

    My point is that if you listen real carefully, even global warming is in the "disaster which will wipe us out in 10-20 years" category--far enough away that it seems possible (especially on warmer spring days), yet close enough to actively fear.

    By the way, you forgot the ozone hole--though there are those who are starting to think it ain't the problem it once was, only because ground-level UV levels have not changed one iota. But there are those who still believe that in 10-20 years we're going to have to go out in the sun with SPF 5000 or die.

    That's okay; I still remember, when I was growing up in the 1970s, that we were to run out of oil by 1990. That is, we would deplete all of the world's oil reserves by 1990, and because of it civilization would collapse, causing wars a la "Mad Max" to break out throughout the world as people struggled to find the last little caches of hoarded gasoline.

    I have a real hard time believing in any disaster that will kill us in 10-20 years unless someone comes up with some really hard facts--like perhaps a photograph and orbital plot of the asteroid that is supposed to kill us all. I just remember too many disasters that were going to wipe us out in 10-20 years while I was growing up (oil depletion, population explosion, global cooling, etc.)--and we're still alive.

    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:2)
    by ucblockhead (sburnapSPAMSUXlinux@attSPAMSUX.net) on Sunday March 12, @01:36PM EST (#138)
    (User Info)
    By the way, you forgot the ozone hole--though there are those who are starting to think it ain't the problem it once was, only because ground-level UV levels have not changed one iota. But there are those who still believe that in 10-20 years we're going to have to go out in the sun with SPF 5000 or die.

    Though the banning of CFCs may have something to do with this.


    Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves. -George Gordon Noel Byron (1788-1824), [Lord Byron]

    [ Reply to This | Parent ]
    'fraid not (Score:0)
    by Anonymous Coward on Sunday March 12, @01:56PM EST (#152)
    It's the sun that creates ozone (which is a form of oxygen - O2 instead of O3). Block the sun, destroy the ozone layer. Can't block the sun? Don't sweat it.
    [ Reply to This | Parent ]
    DOH! DOH! DOH! DOH! (Score:0)
    by Anonymous Coward on Sunday March 12, @02:01PM EST (#153)
    Ozone is O3 - breathable oxygen O2
    [ Reply to This | Parent ]
    Blue light special (Score:1)
    by PD (pdrap@startrekmail.com) on Sunday March 12, @08:57PM EST (#301)
    (User Info) http://freetrek.linuxgames.com
    In the future when people have to buy the air they breathe, con artists will sell that O3 to unsuspecting people by telling them it's a special package containing 50% more oxygen. In the future, some things will never change.
    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:1, Insightful)
    by Anonymous Coward on Sunday March 12, @03:10PM EST (#204)
    !) I remember speculation concerning a (a)periodic return of cooling and glaciation. There's nothing stupid about that at all. We know that climactic change is for real and the variations we have seen and cultural records on is almost insignificant noise compared to climate swings of the "ice age" and before. Which was a eyeblink ago in geologic time.
    The climate can change dramatically and very fast. It remains an unstable system and will certainly change again. The question facing us is not if? but when and which "direction" and how fast.

    All human civilizations have flourished in a brief common moment of favorable climatic stability. All of them. Babylonians, Byzantines, and Bostonians have all shared a nice sunny day when it rained a little in the morning, cleared up around noon, never got too hot, and was pleasant enough to leave the windows open at night. The ice cores from Antartica, though, tell us about a very different state of affairs reigning before our time. Our cultural assumptions about how to imagine changeable climate and how to possibly deal with it are therefore completely out of whack with what climate change is likely to be like when it arrives.

    There is good reason, moreover, to believe that our activities are capable of influencing and destabilizing the climate. We may radically influence the atmospheric CO2 levels beyond what we directly put into the air ourselves by raising the global temperature enough to, for example, release CO2 frozen now in Northern Forest/Tundra peat--of which there's an awful lot, aeon's worth. "Alarmists" point out that once a trend is established it can spark self-reinforcing effects that cause the trend curve to go parabolic. The Anti-Alarmists may point out that there are also counterbalancing factors that the "trend" itself may strengthen, causing the system to ultimately trend towards equilibrium. In this case, it would be that our fossil fuel burning raises CO2 making the earth's atmosphere warmer, eventually releasing more CO2 as the Northern regions thaw more each year, the extra CO2 could be speed growth of forests worldwide, thus stabilizing the system. But "Alarmists" really don't have to work hard to refute this Panglossian idea as everyone knows, from unrelated debates, how rapidly global deforestation is progressing (picture the world on fire).

    We know for a fact that the Earth's climate is now warming. We don't know exactly why or where it will lead. An agnostic stance however with regard to The Greenhouse Effect, per se, is becoming increasingly an exclusive product of ideological "la-laaa-laaa-ism " and an attempt to forestall the conclusion that the visible, obvious evidence of manmade environmental change will result in unintended, probably unfavorable ecological change (the Global Warming Scenario by the author Earthquake, Towering inferno, Poseidon Adventure and other cheesy 70s disaster pics).

    All things considered, it is just malignantly stupid to maintain that human activity--deforestation, fossil fuel burning, etc.--will have no effect on global climate. If you live in or near a metropolitan area, just paying attention to your local news' daily weather forecast is enough to show that how we shape the environment has direct influences on climate, writ large or small. The important question is whether the effect will be favorable or unfavorable, to what degree, and in what time frame.

    If you think that the population explosion is not a real problem, you should revisit the statistics for the spread of AIDS in Africa and South Asia, and the global malnutrition statistics, and think again.

    Considering the likelihood that climate change will accelerate once begun, it should be clear that the prudent choice is to moderate our contribution to warming factors and to curb global population growth as fast as is ethically permissible (without resorting to warfare and the artificial famines it creates).

    [ Reply to This | Parent ]
    Re:Always in twenty years (Score:0)
    by Anonymous Coward on Sunday March 12, @06:04PM EST (#274)
    If you think that the population explosion is not a real problem, you should revisit the statistics for the spread of AIDS in Africa and South Asia, and the global malnutrition statistics, and think again.

    What I should have expanded on in this statement is the apparent lack of any self-limitation in these phenomena. Large percentages of African nations' populations are thought to be HIV positive, but the population totals aren't crashing. The disease kills, but not immediately, and chances are good that the victims have already reproduced. Those who believe that the plague will burn itself out and equilibrium will naturally be restored in the long run overlook the extensive damage done in the meantime, which may goad those still living at the "equilibrium point" to envy the dead. Likewise, malnutrition is a global epidemic caused by the swelling population and the maldistribution of food, but its ill effects do not seem to keep people from overbreeding and exacerbating the problem. Indeed it reinforces itself, since the afflicted people do not feel much incentive to make long-range plans for their lives and can hardly conceive of ordering their lives around the accumulation of goods and their transmission to heirs.
    And of course overpopulation is a major contributing factor to deforestation and pollution.

    Will it end the world in 20 years? No, not your world; but it's definitely with us today, and it will "end the world" for many before tomorrow comes. Given twenty years, there's no telling what all it could do.

    I think that Bill Joy's fears about nano-entities and AI are maybe a little "out there," but since I don't know anything about those subjects, I'll have to read about them in the papers. On the other hand, the Apocalyptic, or Frankensteinian, view of technology is not "out there" at all, IMO--you don't have to go to SciFi extremes to get legitimately scared. What we do with electronic media, information technology, fossil fuels, and basic low-tech stressors on the environment right now is corrosive enough, and don't even get me started on genetic engineering. That one is so clearly a Pandora's box, the worst seen yet...
    ....
    ....
    ....
    Nothing personal, understand: you bio-engineers may be great people, terrific parents, and have the purest and most noble intentions in the world...But you ARE going to be used and subverted by thoroughly evil people, and you will attract a number of them into your midst. Certainly it is happening already.

    ...
    ...
    ...
    Your greatest contribution to Progress may in some cases be to deny your curiosity, swallow your pride, and pretend to know nothing. Heisenberg's second, and best, principle, from which we have all benefited incalculably.

    ...
    ...
    ...
    I'm not trailing off, you're just not connecting the dots.

    Re:Always in twenty years (Score:2, Insightful)
    by Zigurd on Sunday March 12, @06:06PM EST (#276)
    (User Info) http://www.phonezone.com/telirati
    "it is just malignantly stupid to try to maintain that human activity--deforestation, fossil fuel burning, etc--will have no effect on global climate" is a sentence that gets 85% of the way through before running off the rails. Humans are capable of creating nasty environmental disasters, but these have been, so far, local in their effect and temporary in their duration. Avoiding these disasters is a shibboleth. Nobody is against such careful avoidance.

    But adopting causes as semireligious dogmas is also harmful. Human misery resulting from hobbled economies is just as real as drought and flood. Indeed, most famine is caused by bad policy and corruption, not bad weather. Stalin, Pol Pot, and Mao killed far more people by screwing up food distribution than they did through environmental mismanagement, which, in the Soviet Union and China, was horrifying enough. Environmental policy based on ideology, especially collectivist ideology, is repugnant not only for its associations with past tyranny, but for the completely utilitarian reason that it is a known and proven killer of millions of innocents. So when environmental collectivist alarmists have their backs to the wall and bring up "better safe (in agreeing with their positions) than sorry," one should not be lulled into thinking that it is in fact safe.

    Re:Always in twenty years (Score:2, Insightful)
    by MrEd (tones at tande dot com) on Sunday March 12, @02:14PM EST (#162)
    (User Info)
    Step back and have a look at what optimism you're projecting. First off, the only global disaster you're even willing to grant any credibility is global warming? What about running completely out of oil and gas? No matter what people may say, there is a finite limit on the amount of crude oil that has built up over the millennia, and it is non-renewable. The net production of oil by natural processes during one day is pretty much enough to run four cars full-time. And before you go off about fuel cells and solar power: the fuel cells in production now are set to run on gasoline. No NOx or SOx in the combustion, but still gasoline. Hydrogen fuel cells will need to be supplied with hydrogen, which must be extracted at an electrical cost. Where does electricity come from? Coal. Gas. Do you realize how much we depend on gasoline to support our ridiculously opulent lifestyle?

    As a second note, did you know that there are two types of the Ebola virus that have had outbreaks? One was in Africa, which we all saw on the evening news. It killed humans, but could only be transmitted by bodily fluids. Since your entire body turns into jelly, there is plenty of that to go around, but still, the infection rate was not critical. The other strain came to North America with a shipment of monkeys. It did not kill humans (it only made you sick), but it was airborne!!! Put the two strains together, couple it with a flight out of Zaire to NYC, and...

    Do you want to talk about accentuating the positive? Accentuate the fact that genetically engineered crops with the 'Bt' pesticide inserted are killing off Monarch butterflies. Accentuate the fact that frogs are being born with three legs and two heads due to hormone-mimicking toxins released during paper processing. Accentuate the fact that we are destroying species at a rate not seen since the meteor that killed the dinosaurs! THERE IS A FINITE LIMIT ON GROWTH. That's right: the Dow Jones can't keep growing forever, because the natural resources we depend on are non-renewable. Of course, in a capitalist system which rewards profit as the most noble of motivations, that issue never comes up.

    Trees grow at 2% a year. If you cut timber at 2% a year and kept the amount of forest constant, you could cut trees forever. However, the stock market grows at 10% (at least), so it makes more economic sense to cut down the trees now and invest the money. Does that make sense?
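    The compound-growth arithmetic behind that perverse incentive can be sketched in a few lines of Python. The 2% and 10% rates come from the comment above; the 50-year horizon and the forest's starting value of 100 units are illustrative assumptions, not data:

```python
# Two 50-year strategies for a forest worth 100 units of timber.
# Assumed rates (illustrative): forest grows 2%/yr, market returns 10%/yr.
YEARS, FOREST, GROWTH, MARKET = 50, 100.0, 0.02, 0.10

# Strategy A: clear-cut everything now and invest the proceeds.
clearcut = FOREST * (1 + MARKET) ** YEARS

# Strategy B: harvest only the 2% annual growth, investing each year's cut
# at the same market rate, so the forest itself is still standing at the end.
sustainable = 0.0
for _ in range(YEARS):
    sustainable = sustainable * (1 + MARKET) + FOREST * GROWTH
sustainable += FOREST  # add the surviving forest's timber value

print(f"clear-cut and invest: {clearcut:.0f}")    # ~11739
print(f"sustainable harvest:  {sustainable:.0f}")  # ~2428
```

    Even when the sustainable harvester earns market returns on every year's cut, clear-cutting comes out roughly five times ahead at these rates--which is exactly the economic logic the comment is complaining about.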

    However, you say, technology will find us a way out. The Biosphere II project was an example of how we could use technology to live on Mars by generating a natural environment that would support us. Of course, you don't hear much about the Biosphere project anymore, because it failed miserably. Oxygen levels inside the sealed environment dropped to those found at 12,000 feet. Then Nitrogen levels skyrocketed, causing risk of brain damage. Then most of the plants which were supposed to sustain the bionauts died off, and cockroaches and ants began to swarm over everything. Had they stayed inside any longer, they might have died. The lesson this teaches is that we don't know what the hell is going on in the ecosystem! Working in a lab is fine and dandy, but as soon as you take out the fixed variables that the scientific method is based around and throw your invention into the real world, who knows what might happen? There have already been instances of genes jumping from one species to another, for example in the Mad Cow disease incident... Sheep --> Cows --> Humans. Don't get me started.

    Sorry for the flames, but I strongly disagree with the cheery optimism which pervades North American society.


    Re:Always in twenty years (Score:0)
    by Anonymous Coward on Sunday March 12, @03:48PM EST (#225)
    Just one thing, if I may. You strongly disagree with North American optimism, you say? Well then, answer me this: has it ever been wrong? Has there ever been some man-made event that has led to catastrophic loss of life?

    The author has a very good point. There have been many, many predictions of doom (take a look at the recent Y2K thing) and they have all been wrong. All of them (the proof is that we are still here). Either the problem never existed to begin with (global cooling) or we realised there was a problem and fixed it (Y2K).

    As to your Ebola thing: first, viruses do not combine traits. It is true one could evolve that has the traits of both current strains, but it's not like they will just combine. Besides, here again you are guilty of looking only at the negative and assuming the worst will continue to happen. What you fail to remember is that medical science is working on finding a cure/immunization for the Ebola virus, and will probably succeed eventually. And don't say it will never happen: we've conquered polio, smallpox, and a host of other plagues that killed millions, and we'll conquer AIDS et al. as well.

    I won't take the time to respond on an individual basis to the rest of your points, since they are nothing but more of the same. You continue to assume that the worst trends will continue and amplify and that nothing good will happen to fix them, and you ignore anything good about the current state of affairs. The only thing I will say is re: Biosphere 2. So it failed. So what? It was the first experiment. I'm betting they will figure it out with subsequent experiments.

    Just be ready to eat your words in 20-40 years when the human race is still around.

    Re:Always in twenty years (Score:1)
    by MrEd (tones at tande dot com) on Sunday March 12, @05:22PM EST (#267)
    (User Info)

    You strongly disagree with North American optimism, you say? Well then, answer me this: has it ever been wrong? Has there ever been some man-made event that has led to catastrophic loss of life?

    Sure: how about Chernobyl, or the recent cyanide spill in Romania and Hungary that killed 90% of the fish life in the Tisza river? Or how about the Love Canal toxic waste dump?

    we've conquered polio, smallpox, and a host of other plagues that killed millions, we'll conquer AIDS, et al as well.

    Show me one disease we've cured (simple 'treatment' doesn't count) since smallpox.

    Tuberculosis is even now killing millions in India because nobody will pay to immunize people against it. Why not? There's no money in it. There is, however, lots of money to be made on rich-world diseases such as AIDS, but we haven't cured those either. Hell, we can't cure the common cold; what makes you think we have a chance against Ebola?

    The only thing I will say is re: Biosphere 2. So it failed. So what? It was the first experiment. I'm betting they will figure it out with subsequent experiments.

    Right. It was the first experiment and it was called the Biosphere II.

    The Institute of Official Cheer

    Re:Always in twenty years (Score:1)
    by mangu (orlo_porter@hotmail.com) on Sunday March 12, @06:15PM EST (#277)
    (User Info)
    Sure, how about Chernobyl, or the recent cyanide spill in Romania and Hungary that killed 90% of the fish life in the Tisa river. Or how about the Love Canal toxic waste dump?

    Would you say any of those are global catastrophes that endanger humanity?

    Show me one disease we've cured (simple 'treatment' doesn't count) since smallpox.

    Why doesn't simple treatment count? Do you think that the fact that millions of people no longer contract, for example, scarlet fever is irrelevant just because a very small number of people still get it?

    Hell, we can't cure the common cold, what makes you think we have a chance against Ebola?

    Do this mental experiment: start driving from New York to Los Angeles. By the time you reach Chicago, do you think the fact that you haven't yet reached Denver proves you'll never get to California?

    Right. It was the first experiment and it was called the Biosphere II.

    Okay, it was the *second* experiment. Do you think it's possible to do anything right the second time you try it?

    The point in all this discussion is that humanity does not seem to be in any danger of becoming extinct. The birth of the six-billionth human last year is evidence of this. There have been many catastrophic predictions; none has come close to happening. Perhaps it's a good thing that people make such predictions: it's the doomsayers who point out many of the things we should avoid.

    But I see no point in denying the remarkable progress we have made with our science and technology. If you think Chernobyl was bad, think of how many people died of cancer in the past because they breathed the smoke from their candles, or how they accidentally maimed themselves with their axes while cutting firewood.

    AFAIK, the biggest danger we have faced in recorded history was the Black Death during the Middle Ages. Modern medicine and sanitation could easily have avoided that.

    Re:Always in twenty years (Score:2)
    by Daniel on Sunday March 12, @08:45PM EST (#298)
    (User Info)
    Catastrophic loss of life? No, not unless you count the bajillions of species we've killed, deliberately and inadvertently (seen any American chestnuts or bison lately? They're the lucky ones; they're still around, if barely).

    Daniel

    Hurry up and jump on the individualist bandwagon!
    Replied to the wrong article, doh :) (Score:1)
    by Daniel on Sunday March 12, @08:48PM EST (#299)
    (User Info)
    That post should probably be attached one level up; my web browser doesn't mark italicized text and I didn't realize that those words were a quote.

    Daniel

    Hurry up and jump on the individualist bandwagon!
    Re:Always in twenty years (Score:2)
    by ralphclark (ralph_clark (at) bigfoot (dot) com) on Sunday March 12, @05:31PM EST (#271)
    (User Info)
    You're going to kick yourself...

    Just one thing, if I may. You strongly disagree with North American optimisim you say? Well, then answer me this: Has it ever been wrong? Has there ever been some man made event that has lead to catastrophic loss of life?

    Er... every war ever fought; every plague that depended upon crowded living conditions for its infection rate; every "dustbowl" caused by poorly managed agriculture that led to famine. Shall I go on?

    The author has a very good point. There have been many, many predictions of doom (take a look at the recent Y2K thing) and they have all been wrong. All of them (the proof is that we are still here). Either the problem never existed to begin with (global cooling) or we realised there was a problem and fixed it (Y2K).

    What kind of ludicrous reasoning is that? Just because we've survived up to now doesn't guarantee we'll continue to do so. The vast majority of species that ever lived on this planet have been extinct for millions of years. Why don't you tell it to them! Our present level of technology hardly makes us any less vulnerable to extinction-level events such as major climatic change.

    As to your Ebola thing. First, viruses do not combine traits. It is true one could evolve that has the traits of both current strains, but it's not like they will just combine.

    Do you know this for a fact? Suppose one cell in a given individual gets infected with both strains at the same time. Inside the cell there are enzymes present which are capable of chopping up and recombining the RNA strands of the two strains. It's really only a matter of time unless we can manage to eliminate the virus completely, and we've no hope of doing so at present.

    Besides, here again you are guilty of looking only at the negative and assuming the worst will continue to happen. What you fail to remember is that medical science is working on finding a cure/immunization for the Ebola virus, and will probably succeed eventually.

    This is nothing more than groundless optimism. We can't eliminate Ebola as we don't know where it lives when it's not infecting humans. We're not likely to find out either unless there are widespread epidemics. If it ever *does* get combined with an airborne vector it may well decimate us before we can figure out how to stop it.

    And you've conveniently ignored the probable fact that various biological warfare institutes around the world are desperately trying to combine Ebola with such a vector - just in case the country concerned finds itself losing a war...

    And don't say it will never happen, we've conquered polio, smallpox, and a host of other plagues that killed millions, we'll conquer AIDS, et al as well.

    Really? We've eliminated smallpox and polio (until the next outbreak anyway :o/) but AFAIK there are no other infectious diseases that we can claim to have completely eliminated. With regard to AIDS...well, maybe, but retroviruses are hard to deal with because they mutate so fast. And HIV has a few tricks of its own.

    I won't take the time to respond on an individual basis to the rest of your points since they are nothing but more of the same.

    That's *really* lame. To translate: you don't have any response to the rest of his points that would seem reasonable even to an idiot.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
    The self does not exist
    Re:Always in twenty years (Score:1)
    by Kalani (kalani at calcoast dot com) on Sunday March 12, @04:15PM EST (#235)
    (User Info)
    There have already been instances of genes jumping from one species to another, for example in the Mad Cow disease incident... Sheep --> Cows --> Humans. Don't get me started.

    I agreed with most of your comments up until this point (even the sad fact that the Biosphere 2 project failed ... I remember going to see it as a small child). As I understand it, "Mad Cow Disease" is not a genetic problem. Cows contract the disease when they eat other cows. I remember reading about a similar disease (kuru) which affected cannibalistic humans in New Guinea. I don't think that this example lends credence to your belief that genes "jump" from species to species.
    Re:Always in twenty years (Score:2)
    by ralphclark (ralph_clark (at) bigfoot (dot) com) on Sunday March 12, @04:27PM EST (#243)
    (User Info)
    The prion which leads to scrapie in sheep, BSE in cows, and CJD in humans indeed has nothing to do with gene transferral, but that's not the point. The real point is that humans doing stupid things for the sake of profit (in this case feeding sheep offal to cows and making meat pies out of those cows' brain tissue) can quite easily lead to disaster.

    In any case there is *plenty* of evidence that genes can be transferred between species. To take the most mundane case - what do you think viruses are doing?

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
    The self does not exist
    Re:Always in twenty years (Score:1)
    by Kalani (kalani at calcoast dot com) on Sunday March 12, @04:49PM EST (#253)
    (User Info)
    The prion which leads to scrapie in sheep, BSE in cows, and CJD in humans indeed has nothing to do with gene transferral, but that's not the point.

    Thank you, I'm not well-versed in the technical details of the diseases and am just beginning college courses in biology. I don't mean to imply that I have a wealth of knowledge with respect to biology or chemistry.

    The real point is that humans doing stupid things for the sake of profit (in this case feeding sheep offal to cows and making meat pies out of those cows' brain tissue) can quite easily lead to disaster.

    I agreed with that point. There are innumerable examples of such follies throughout recorded history (many of them being the same mistakes relived).

    In any case there is *plenty* of evidence that genes can be transferred between species. To take the most mundane case - what do you think viruses are doing?

    I wasn't arguing that genes can't move from one species to the next or that species don't have a HUGE number of common genes (in that respect, human beings have a lot in common with a bacterium). I only meant to challenge the idea that Mad Cow Disease was proof of gene transferral; your knowledge of the jargon behind this disease helped me to do that.

    As for your example of viruses transferring genes between species, I believe that you mean to say that a virus is implanting its genes into a living creature (which is true and can be one of the causes of "junk DNA" in a species line). However, I'm not absolutely certain that this amounts to cross-species transference of genes as viruses are not yet (to my knowledge) considered living creatures. It would therefore be more akin to a species picking up "random DNA" or "gene noise." Again, I'm neither a biologist nor am I a chemist so I am not attempting to write the final word on this.

    Re:Always in twenty years (Score:2)
    by ralphclark (ralph_clark (at) bigfoot (dot) com) on Sunday March 12, @06:42PM EST (#284)
    (User Info)
    My credentials in Molecular Biology are pretty worthless since I finished my study in that field back in about 1987, and at least half of what's now known seems to have happened after that! But there has at least been some speculation that viruses - particularly retroviruses - may pick up genes from a host cell. Consider that inside the host cell, all the enzymes for splicing, insertion, deletion etc. are present together with short sections of expressed mRNA, and the viral RNA is floating freely in the middle of all that. It hardly stretches credibility to suggest that occasionally a piece of host mRNA might attach itself to the viral genome.

    In any event, most of the furore about genetically engineered species being let loose in the wild is for a similar reason. In particular, it's thought that plants do sometimes cross-fertilize other species - and since pollen is airborne and can travel quite long distances on a modest breeze or stuck to a bee's leg, we may not be able to control the spread of artificial plant genes to other unintended species.

    I also understand that early cancer research was dogged with false results because of airborne human DNA infecting in vitro lab cultures.

    Finally there is the question of where viruses might have come from in the first place. There are two theories: (i) that viruses are devolved cells which lost the machinery for life and became completely parasitic; and (ii) that they are just pieces of genetic material that "escaped" their original genome. Of course it's possible that both theories are true, for different viruses.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
    The self does not exist
    Re:Always in twenty years (Score:1)
    by Ig0r (suxmeh0ff at hotmail dot com) on Sunday March 12, @04:34PM EST (#247)
    (User Info)
    The cattle were being fed the processed brains of slaughtered cattle, which is what causes 'mad cow' disease. The cannibalistic humans who had the same type of disease also ate the remains of dead (they weren't killed by the tribe) humans. This has nothing to do with 'genes jumping'.

    -- Eschew obfuscation!
    Re:Always in twenty years (Score:0)
    by Anonymous Coward on Sunday March 12, @04:48PM EST (#252)
    "Mad Cow" disease comes from a prion-based disease that originated in sheep. It made it to cows because factory farms feed their animals anything that is borderline edible, including the ground up waste materials from sheep processing (ie their prion infected brains).
    Re:Always in twenty years (Score:1)
    by MrEd (tones at tande dot com) on Sunday March 12, @05:04PM EST (#260)
    (User Info)
    Fair enough, I was writing my rant in a fair rush and didn't check my facts... However, it is a fact that genes can jump from species to species. There have been cases of antibiotic-resistant bacteria transferring the necessary genes to other types of bacteria, allowing them to survive. There have also been examples of genes from GE crops spreading to other species, such as weeds near the fields. Check it out on Google.

    The Institute of Official Cheer

    Re:Always in twenty years (Score:2)
    by costas (costas@nospam.malamas.com) on Sunday March 12, @04:14PM EST (#234)
    (User Info) http://malamas.com/
    Hear, hear... without having read the actual article, it sounds like Joy is extrapolating too much from current trends. It almost sounds like he saw, oh, I dunno, 'e' and thought, "oh, this is a nice line editor... maybe we can extrapolate from here and create a multi-line 'e'"... oh, hold on, he already did that ;-)... (disclaimer: I am joking, I have the utmost respect for the man, and hjkl are as natural to me as, well, arrow keys ;-)...

    All the technologies he mentions are collaborative ones, i.e. they cannot be developed and/or applied by some mad scientist in a basement. They require organized, coherent teamwork--that is, they require rogue states, not rogue individuals.

    More importantly, when something hits an extreme, it creates a backlash, a return towards equilibrium; that is as true of society as it is of physical systems. When the Internet/technology/genetics reaches the edge of acceptable use/behavior, society will change to compensate. Look into the past: the Middle Ages created the Renaissance, the '50s brought the '60s, the '80s spawned the '90s... Our technological ethics will change to accommodate our technologies...


    engineers never lie; we just approximate the truth.
    AegeanTimes: Greek and Turkish News
    Re:Always in twenty years (Score:2)
    by edhall (edhall@weirdnoise.com) on Sunday March 12, @05:49PM EST (#272)
    (User Info) http://www.weirdnoise.com

    How soon we forget.

    There was a time when incineration of much of the civilized world was always 20 minutes away, not 20 years. Whether secondary effects (the so-called Nuclear Winter) would have led to eventual extinction seems rather beside the point--the world as we knew it would have ended. That it did not happen was, more than many of us realize, a matter of sheer blind luck.

    There were, and are, only two powers in the world who could bring about such a global catastrophe. The reason for this limitation is more a matter of the enormous cost of producing nuclear weapons than the technological difficulty of doing so. For now, and for the near future, nuclear physics is too expensive for any state but the US and Russia to put civilization at risk.

    What Bill fears, I think, is the development of technologies as threatening as those which came from nuclear physics, but without the economic barriers. Consider: what if Moore's law applied to nuclear weapons as well as integrated circuitry? What if it does apply to the next destructive technology? Or: what if a chain reaction of self-replicating agents--whether biological, nanotechnological, or self-organizing--proves much cheaper than the nuclear variety? By harnessing the existing biological, molecular, or technological environment to its ends, could a technology be created where such replication to worldwide scale came (from the creator's perspective) essentially for free?

    The cheaper it becomes to develop the technical means to threaten humanity, the more likely it will be that a state, group, or even person will be insane enough to exploit it. It's the change in economics that increases the danger. Economics explains why New York isn't hip-deep in horse manure just as it explains why basement-lab nuclear weapons don't exist, even though the knowledge necessary to produce them is readily available. Cheaper, faster alternatives became available in the first case. Are we ready for such alternatives in the second case?
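    The shape of that economic shift can be made concrete with a toy calculation. All the numbers here are hypothetical--a $10-billion state-scale program cost, a $1-million "basement budget" threshold, and an 18-month halving period borrowed from the Moore's-law analogy--chosen only to illustrate how fast exponential cost decline erodes an economic barrier:

```python
# Hypothetical: a destructive technology that costs $10 billion today,
# with its cost halving every 18 months, Moore's-law style.
cost = 10_000_000_000.0   # assumed starting program cost, in dollars
THRESHOLD = 1_000_000.0   # assumed small-group "basement" budget

months = 0
while cost > THRESHOLD:
    cost /= 2             # one 18-month halving period
    months += 18

print(f"threshold crossed after {months} months ({months / 12:.0f} years)")
```

    At these assumed rates the barrier falls in about two decades: fourteen halvings take a program only a superpower could fund down to something a small group could afford, which is exactly the change in economics the comment describes.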

    -Ed

    Re:Always in twenty years (Score:0)
    by Anonymous Coward on Sunday March 12, @06:28PM EST (#280)
    It is pretty discouraging to think that the rise of information has made us elites less elite. Kings used to have it worse.

    Who's going to be able to judge what's "good enough" for the masses if each individual goes from being a queen or a king to a nation-state?

    It used to be we had a constitution that kept the majority from oppressing the minority. Think road rage is a problem? I'm going to practice being very polite. It may well be that the only defense is to give no offense (else you're history).

    RSN (maybe 5-10 years) we'll have dial-a-drug or a part-a-matic (feed in raw materials and programming, and out pops whatever you'd like). We already have (expensive) dial-a-part: metal powder in one end, very strong alloys out the other. Want a built-to-fit chair, or a carbon nanotube custom car body, or a fully automatic machine gun (and bullets)? Just "program" it. IPR rules.
     
    Drug companies become publishers, and pharmacies and drug lords feel the heat. One day the $1000 in-home "print-a-complex-protein" device produces the vaccine of the week (no more common colds, and it turns out also to defeat the last plague the terrorists unleashed); the next day it makes a polio virus as a prank by the neighborhood bully.

    Does anyone think the FDA is our friend because it saves us several hundred deaths a year by putting drug companies through a maze? If the drug companies were market-driven and competing on time--days, not years--maybe we'd have the ability to deal with a bio threat (though it might well cost a few hundred lives a year to be prepared to save millions). Well, we always lose the first few battles anyway (e.g. figuring out it's really not going to be a battleship war).

    As it is, food production and manufacturing are taking a smaller and smaller percentage of the workforce. Will food be nearly free? Get it to water and it costs nothing to ship anywhere.

    Worried about energy prices? Don't be. Markets are able to play with both sides of the equation, which means "methods will be found" (name an era when they were not -- though in a way they will cheat, because the (always temporary) solution will surprise both parties).

    A real pity how all this stuff is forcing us to trust the average soul. Sounds like a return to the days of Washington and Jefferson: very, very little government, the gentle farmer his own royalty, with an armory on his own property, etc. Nothing accomplished save by people's own efforts in a civil (voluntary) society...

    We've fallen pretty far... then again, when we've fallen far in the past, the problem-solvers emigrated (ran away). Time to get the escape boats ready.

    Tai Tung
    [ Reply to This | Parent ]
    A quote from "The Difference Between the Sexes" (Score:4, Insightful)
    by Guppy on Sunday March 12, @11:43AM EST (#28)
    (User Info)

    Here's a little quote from "The Difference Between the Sexes", by E. Balaban (ed) and R. V. Short (ed):

    "Perhaps the lifespan of a species is inversely proportional to its degree of intellectual development? The probability that a species that has evolved to be as intelligent and all-conquering as ours could survive for long is remote indeed. We may live in a silent universe for a very good reason. Paradoxically, evolution may have ensured that we have one of the shortest survival times of any species, since it has made us, effectively, our own executioner."


    [ Reply to This | Parent ]
    Re:A quote from "The Difference Between the Sexes" (Score:0)
    by Anonymous Coward on Sunday March 12, @02:50PM EST (#195)
    Isn't it amazing how just getting off this hunk of dirt changes all the equations?
    [ Reply to This | Parent ]
    For more on this... (Score:1)
    by Stargazer (targz@softhome/.net) on Sunday March 12, @11:43AM EST (#29)
    (User Info)
    If you're looking for thoughts on this subject from an artistic point of view, I would recommend you play "Parasite Eve" for your PlayStation, from Squaresoft.

    The game speaks deeply to the dangers that arise whenever we are too reliant upon something -- be it technology, or the bacteria which allow us to live. It's a very chilling tale.

    -- Stargazer
    [ Reply to This | Parent ]
    Advanced Technology? (Score:1)
    by zaius (jeff@YOURE_TOO_STUPID_TO_SPAM_MEzaius.org) on Sunday March 12, @11:45AM EST (#31)
    (User Info) http://jeff.zaius.org
    I must agree with the author that _some_ advanced technology poses a threat to the human species, but not all of it. In fact, stating that all advanced technology poses a threat to the human species is probably the biggest load of crap I've ever heard.

    It's quite obvious that some of our scientific advances could be quite dangerous, and if misused could end life on earth (e.g. nuclear/chemical/biological weapons). It's also quite obvious that some of our discoveries have significantly advanced the human race, and will continue to do so (e.g. air transportation (which, btw, your friend and mine Ted Kaczynski was blowing up), computers, genetic engineering (that's a controversial one -- I'm talking about plants, not humans), and many advances in the field of medicine).

    [ Reply to This | Parent ]
    Current Technology (Score:1)
    by Tuxedo Mask on Sunday March 12, @12:47PM EST (#96)
    (User Info)

    It's quite obvious that some of our scientific advances could be quite dangerous, and if misused could end life on earth

    I seriously doubt that we could end life on earth. We have succeeded in causing many extinctions, but some forms of life are very stubborn. (Consider the recently discovered iron-dwelling microbes.)

    I reckon the most damage we could easily cause would be a simultaneous groundburst of all warheads. The planet has been prone to glaciation over the last Myr or so, so this might be able to trigger an ice age. However, even if it lasted longer than usual, the large-scale glaciation cycle should be over in a few more Myr (depending on what is causing it -- polar continental distribution, Himalayan uplift, or what have you). This kind of event is not unprecedented.

    If, however, we could nudge an asteroid to collide with the Earth, then we could probably kill any animal bigger than a rat. But using present technologies it would take a lot of coordination and at least a century. And if there were an untimely nuclear war before we were done, then we'd have to start all over again.

    A little food for thought.


    [ Reply to This | Parent ]
    Use genetics to our advantage (Score:1)
    by browser_war_pow on Sunday March 12, @11:46AM EST (#33)
    (User Info) http://digitalheresy.tripod.com
    How about using genetics to create more intelligent humans? I seriously doubt these scenarios will come true, though, because I have enough hope in our race that we will become more ethical the more advanced we get.
    Welcome to the 21st century: megacorporations crush personal liberty and individual rights and the government calls it "capitalism"
    [ Reply to This | Parent ]
    Article about an article about an article sucked. (Score:5, Interesting)
    by Jikes (jikes@myrealbox.SPAM.com) on Sunday March 12, @11:49AM EST (#34)
    (User Info)
    Self-replicating machines? Nanotechnology run amok? Machines that become smart and enslave humanity? Please, this is reality, not an episode of Star Trek.

    Finally, he argues, this threat [machinery] to humanity is much greater than that of nuclear weapons because those are hard to build.

    HAHAHA!

    Please. We can't even write a web browser within three years, much less program sentient robot roaches that could destroy our planet.

    There's only like, what, forty thousand nukes extant on earth, each capable of wiping out millions of lives in five minutes? Many capable of poisoning an entire planet for millennia if detonated close enough to the ground? ALL of them are owned by warmongering, jingoistic, pathologically disturbed political entities who have NO QUALMS whatsoever about using nuclear warheads whenever it is convenient?

    Nuclear weapons, traditionally developed viruses, lethal bacteria, political unrest, riots, the complete disruption of climate, economic decay, and plain old steel bullets fragmenting children's skulls into explosions of bloody brain and bone (just like the children of Kosovo, whom the entire world is eagerly attempting to exterminate) are ALWAYS going to be more of a concern to me than sentient computers messing with my tax return. This article sucked. Perhaps the real thing will explain things better.

    The most dangerous aspect of living on earth is that we are sentient. If we weren't, we wouldn't give a shit what happens in the long run (which we don't, when it gets down to it).
    :D Troll? Point? Yes! :D
    [ Reply to This | Parent ]
    Re:Article about an article about an article sucke (Score:1)
    by Rares Marian (rmarian@winblowsstart.com) on Sunday March 12, @11:58AM EST (#43)
    (User Info)
    The Digital Divide is the problem

    You don't see people strangling people with, let's say, a baby's diaper.

    The most dangerous thing in the world is never waking out of that childish, elitist fascism we're all born with.


    Petrified Iron Clad solution: Rob, Jeff - Create the /. API that lets us parse titles and content in articles
    [ Reply to This | Parent ]
    Re:Article about an article about an article sucke (Score:2, Insightful)
    by PerlGeek on Sunday March 12, @12:59PM EST (#105)
    (User Info)
    > Self-replicating machines? Nanotechnology run
    > amok? Machines that become smart and enslave
    > humanity? Please, this is reality, not an
    > episode of star trek.

    Those are all pretty big threats; I don't see how you can brush them off so easily. IMHO, far more dangerous than nukes. We've lived with nuclear power for over half a century, and most of us have benefited: cheaper electricity, lower CO2 emissions, less consumption of fossil fuels. There have been disasters -- some accidental, a couple deliberate -- but the nuclear armageddon so many have predicted hasn't happened. It still might, but now we have far greater dangers. AI enslaving mankind is not merely a Star Trek episode; I've seen it on The Outer Limits, The Terminator, and The Matrix, to name a few.

    Nanotech run amok is a danger, but only from sufficiently adaptable nanites. Simple -- we just don't build any like that, right?

    When you have enough experts thinking about it for a long enough time, someone is going to build it, just for curiosity's sake. Or maybe trillions of particles of radiation hitting trillions of nanites will cause most to die, but one to become dangerous. When you start talking about self-replicating machines, you have to be very careful. If evolution can happen to wet nanites (bacteria, viruses), it can happen to dry nanites, too.
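    The point about copy errors in self-replicators can be made concrete with a toy simulation (purely illustrative: the genome string, the 1% per-character error rate, and the doubling schedule are all invented for the example). Even a "safe" design accumulates variant copies once replication is imperfect:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

GENOME = "SAFE_REPLICATOR"   # an invented "design" string
MUTATION_RATE = 0.01         # per-character copy-error rate (illustrative)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"

def replicate(genome: str) -> str:
    """Copy a genome; each character has a small chance of being miscopied."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in genome
    )

# One "safe" design copies itself: two copies per parent, ten generations.
population = [GENOME]
for _ in range(10):
    population = [replicate(g) for g in population for _ in range(2)]

variants = set(population) - {GENOME}
print(f"{len(population)} replicators, {len(variants)} distinct mutant designs")
```

With these numbers, most lineages carry at least one copy error after ten generations -- no radiation or malice required, just imperfect copying plus selection-free drift.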

    I'm not saying we shouldn't investigate it. It's a Pandora's box. First comes the evil, then the good that helps us survive the evil. We might wipe ourselves out with nukes, or we might use nuclear propulsion to colonize Mars, Titan, or Alpha Centauri. Nanite-boosted immune systems might defend our bodies from rapidly evolving nanite plagues. If AI turns evil on us, we might build smarter, stronger AI to defend us.

    We just have to be careful, stay paranoid, and keep asking questions.
    [ Reply to This | Parent ]
    Re:Article about an article about an article sucke (Score:1)
    by Theodore Logan on Sunday March 12, @02:45PM EST (#189)
    (User Info)
    Finally, he argues, this threat [machinery] to humanity is much greater than that of nuclear weapons because those are hard to build.

    HAHAHA!

    Please. We can't even write a web browser within three years, much less program sentient robot roaches that could destroy our planet.

    You're completely missing the point. What he's saying isn't that, right now, building a sinister intelligent seek-and-destroy nano-cyborg is any easier than building an atom bomb. In fact he knows, as does the rest of the world, that it's not even possible at all. However, if (when?) nano becomes a reality, it will be a far less complicated task for the average evil world-conqueror Joe to build that cyborg than it is today for him to build himself an H-bomb.

    Was that really so hard to comprehend?

    "If you think education is expensive, try ignorance" - Derek Bok

    [ Reply to This | Parent ]
    Re:Article about an article about an article sucke (Score:1)
    by chigaze on Sunday March 12, @11:18PM EST (#325)
    (User Info)
    There's only like, what, forty thousand nukes extant on earth, each capable of wiping out millions of lives in five minutes? Many capable of poisoning an entire planet for millennia if detonated close enough to the ground? ALL of them are owned by warmongering, jingoistic, pathologically disturbed political entities who have NO QUALMS whatsoever about using nuclear warheads whenever it is convenient?

    And yet none of them have. Perhaps in the 55 years since a nuclear weapon was last used in an act of aggression, it just hasn't been convenient. Or perhaps, despite the fact that we have these things, even the most pathological of those who have them won't use them. This is not to say some loon won't get one and use it, but the chance of a nuclear armageddon is remote. The chance of losing a city is another story.

    I think we have more to fear from the things that we do not see the danger in than from those whose dangers are obvious. If we are destroyed it will be like every other great moment in science: "Oops."
    [ Reply to This | Parent ]
    The Digital Divide is the problem (Score:1, Interesting)
    by Rares Marian (rmarian@winblowsstart.com) on Sunday March 12, @11:50AM EST (#35)
    (User Info)
    Bill Joy needs to relax. That's nothing. In fact, I can guarantee that when technology is seen as power rather than as a practical household item, you're going to get problems.

    Same with Guns.

    Same with Patents.

    On the other hand, the perfect example:
    Genetics is an unstructured witch doctor's haven.
    The Greeks invented gods when they couldn't explain something.
    Physicists invent subparticles to fix other people's theories.
    Geneticists invent genes in the same way.

    It's called fascism. And frankly I think the mainframe ideology speeds that up. Give it up.

    Look around: people are afraid of electronics. Is that a healthy market? I don't think so.

    Incidentally, where does the WP get off not having a link through which I can respond to Bill Joy? I could have titled this "Letter to Sun" and been modded to a 3.
    Petrified Iron Clad solution: Rob, Jeff - Create the /. API that lets us parse titles and content in articles
    [ Reply to This | Parent ]
    Of humans and AI (Score:2, Insightful)
    by Camelot on Sunday March 12, @11:52AM EST (#38)
    (User Info)
    The idea of an AI becoming more intelligent than humans is by no means new. It may sound sensationalist to the mainstream audience, but the subject has been approached and evaluated many times (we've all read/seen our Asimov, Terminator, Blade Runner, Neuromancer, Matrix, not to mention less-known works, haven't we?)

    If we don't - intentionally or accidentally - relegate ourselves to the equivalent of a technological stone age, I consider the emergence of AI - or machines - superior to humans an inevitability. The question is not if, but when.

    Should we fear buggy software because of this? Yes. Think of the security bugs in today's software, and of Asimov's laws of robotics. If we were to create an intelligent being like that, we would want it always to be controlled by us. The trouble is that the software in a robot like that would be very complex -- and buggy -- thus it would be possible for it to override its instructions.

    In a way, by trying to create an AI, humans are trying to be gods themselves -- to create life. Is it possible to create a life form superior to humans without completely understanding life itself? If so, the life so created -- like humans themselves -- would be imperfect, and with its faults, without full knowledge of the consequences of its acts, might end up destroying humans. And if it didn't... it might be The End Of Humanity As We Know It. Whether that would be Armageddon or just the next step in evolution towards a higher consciousness... well, that is up to you.

    &cam;

    [ Reply to This | Parent ]
    Re:Of humans and AI (Score:1)
    by cranq (cranq@yahoo.com) on Sunday March 12, @12:13PM EST (#60)
    (User Info)
    I agree. If you believe that there is nothing supernatural about the human mind, then it follows that AI is inevitable, really... and the prospect is both fantastic and frightening.

    In general, all living things desire survival -- and if AIs have this trait then interesting times will be a-coming.

    Heck, even if AIs are supremely subservient, they will still redefine our world. In an environment where more and more people are defined as "knowledge workers", there is a good chance that AIs will be better at our jobs than we are.

    Hmmm... I wonder what kind of remuneration an AI would want for writing a piece of software?

    Regards, your friendly neighbourhood cranq
    >> Is it true that cannibals won't eat clowns because they taste funny?
    [ Reply to This | Parent ]
    I for one prefer this to the alternative (Score:3, Insightful)
    by sstrick on Sunday March 12, @11:55AM EST (#39)
    (User Info)
    While rapidly advancing technology could pose a threat, I would prefer to live with that threat and risk humankind than accept the alternative.

    That is, to stop developing and advancing human technology. The world would be a little boring if everything we shall ever invent had already been invented.
    [ Reply to This | Parent ]
    Rock and a Hard Place. (Score:1)
    by Rares Marian (rmarian@winblowsstart.com) on Sunday March 12, @12:18PM EST (#66)
    (User Info)
    Suspended animation of society by fascism/communism vs destruction of society by fascism caused by the digital divide?

    Aren't those the same problem?:) I love it. Either we're not mature enough or we're not mature enough.

    Technology can go forward. People have to grow up. There's no substitute.
    Petrified Iron Clad solution: Rob, Jeff - Create the /. API that lets us parse titles and content in articles
    [ Reply to This | Parent ]
    Those aren't the only options (Score:1)
    by revscat (revscat@ughnolikeyspam.swbell.net) on Sunday March 12, @01:45PM EST (#146)
    (User Info) http://home.swbell.net/revscat

    Generally speaking, those who wish to raise alarms about the risks of advanced technology do not want to see an all-out hiatus. Rather, they would like to see mechanisms put in place that would prevent mishaps. The ban on human cloning is an example. So are most environmental regulations stemming from global-warming concerns, such as the ban on CFCs.

    I haven't read the Wired article yet, of course. But to say that it is impossible that Homo sapiens will be extinct in 50 years is silly. We *are* making some amazing advancements. It is pretty much a given that 50 years from now computers will be so much more powerful than the most complex server farm we have today that genuine Turing-tested intelligence will be possible. That is not so far-fetched. Nor is it so far-fetched that bad things will happen because of this.

    And I would remind everyone that SETI@home has yet to find anything. While the explanations for this lack of contact are legion, one of them is that most intelligent societies wind up destroying themselves somehow. This was advanced even by Sagan. It is a pessimistic theory, but pessimism alone is not cause for invalidation.

    - Rev.


    "The only difference between a Republican and a Democrat is that I'd fuck a Democrat." - Sarah Michelle Gellar
    [ Reply to This | Parent ]
    More likely reasons SETI hasn't found anything (Score:0)
    by Anonymous Coward on Sunday March 12, @04:05PM EST (#231)
    1) Look at the vastness of our galaxy. Just our galaxy. Then look at just how many galaxies like it are out there. Now think: assuming our observations are correct, and there are very few planets capable of sustaining life, and on even fewer of those has life evolved to an intelligent state, what are the odds that another one is going to be in radio range? I mean, really. We've only been looking for about the past 20 years or so. That really doesn't give us much coverage. What if a star on the other side of our galaxy has intelligent life around it, and they didn't begin broadcasting until 50 years ago? Even assuming our instruments could pick up a feeble signal like that through all the cosmic noise, it wouldn't get here for tens of thousands of years.

    2) If there is other life out there (and I think there is), there is no guarantee that it is ANYTHING like life on earth. The differences could be so vast that they don't even use anything we would recognise as a signal. Perhaps they have direct mind-to-mind communication and never needed radio signals. There is no guarantee that we will even be able to recognise what other life is doing as communication.

    3) Perhaps we are one of the most advanced species in the universe. Now, I know this sounds silly, but really, what makes you think evolution (natural or technological) would proceed any faster on another world? Perhaps all advanced life in the universe is roughly on par with humans, and it'll be another 500-1000 years before we ever contact each other. Or perhaps there really is very, very advanced life in another galaxy, but FTL is not possible, and so we'll never know, since they are millions of light-years away.

    At any rate, my point is simply that there are some very good reasons, not related to intelligent life killing itself, that we have as of yet had no contact. And, like I said, maybe light speed is an absolute limit we can never circumvent. In that case, though we may someday make contact with a culture a few hundred light-years from our own, we will more or less be forever alone in the universe.
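    The scale behind point 1 is easy to quantify. A back-of-the-envelope sketch (the round figures here are assumptions, not measurements): radio travels one light-year per year, so a civilisation that began broadcasting 50 years ago is detectable only inside a 50-light-year bubble -- a vanishing fraction of a galactic disc some 50,000 light-years in radius.

```python
# Back-of-the-envelope numbers; both constants are rough assumptions.
GALAXY_RADIUS_LY = 50_000        # Milky Way disc radius, approximately
BROADCAST_AGE_YEARS = 50         # how long "they" have been transmitting

# Radio propagates at light speed: one light-year per year of broadcasting.
detectable_radius_ly = BROADCAST_AGE_YEARS

# Fraction of the galactic disc (by area) the signal has reached so far.
coverage = (detectable_radius_ly / GALAXY_RADIUS_LY) ** 2
print(f"signal bubble: {detectable_radius_ly} ly; disc coverage: {coverage:.2e}")
```

On these assumptions the signal has swept about a millionth of the disc -- which is why "we haven't heard anything yet" is weak evidence on its own.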

    [ Reply to This | Parent ]
    Re:More likely reasons SETI hasn't found anything (Score:2)
    by stripes (stripes at eng dot us dot uu dot net) on Sunday March 12, @10:36PM EST (#316)
    (User Info) http://www.eng.us.uu.net/staff/stripes/

    Another possibility is that radio emissions from more advanced technologies resemble noise more and more as the technologies get increasingly advanced.

    Look at Morse code, then AM radio. AM radio looks just like a frequency-shifted version of the voice/sound pattern (because it is). FM radio is a good deal harder to decipher just by looking at it, but it is obvious that something is there. CDMA I can't even find on a spectrum analyser, and I know where it lives on the frequency band!
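    The Morse → AM → FM → CDMA progression comes down to spreading: mix the data with a fast pseudo-random code and the visible structure disappears, yet anyone who knows the code can undo it exactly. A toy direct-sequence sketch (sizes invented for the example: 64 data bits, 32 chips per bit), using longest run of identical bits as a crude "does this look structured?" statistic:

```python
import random

random.seed(1)  # fixed seed for reproducibility

# A slow "message": each data bit is held for 32 chip periods, so the raw
# stream has long runs of identical bits -- easy to spot, like Morse or AM.
data_bits = [random.randint(0, 1) for _ in range(64)]
raw_stream = [b for b in data_bits for _ in range(32)]

# Direct-sequence spreading: XOR each chip with a fast pseudo-random code
# (known to the intended receiver, indistinguishable from noise to others).
spreading_code = [random.randint(0, 1) for _ in range(len(raw_stream))]
spread_stream = [b ^ c for b, c in zip(raw_stream, spreading_code)]

def longest_run(bits):
    """Length of the longest run of identical consecutive bits."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print("longest run, raw:   ", longest_run(raw_stream))     # at least 32
print("longest run, spread:", longest_run(spread_stream))  # coin-flip scale
```

Since XOR is its own inverse, applying the spreading code a second time recovers the raw stream bit-for-bit -- that is the despreading step a CDMA receiver performs.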

    [ Reply to This | Parent ]
    Re:I for one prefer this to the alternative (Score:1)
    by Deven (deven@ties.org) on Sunday March 12, @03:25PM EST (#209)
    (User Info) http://www.ties.org/deven/
    The world would be a little boring if everything that we shall ever invent has already been invented.

    This was supposedly said by the U.S. Patent Office over a century ago. However, it appears to be an urban legend.

    Deven

    "Simple things should be simple, and complex things should be possible."

    [ Reply to This | Parent ]
    Never going to happen... (Score:2, Interesting)
    by NatePWIII (npw_npw@yahoo.com) on Sunday March 12, @11:56AM EST (#40)
    (User Info) http://www.npsis.com/~nathan
    AI as portrayed in The Matrix will never happen, simply because an awareness of self is something more than just 10 billion neurons (or transistors) firing in a coherent fashion. To acquire true AI or intelligence is to be aware of one's self; this can neither be created nor destroyed. I suppose it is something like a "sole" but even deeper than that. If this all sounds too metaphysical, it isn't; it's just the simple truth on the matter...


    Nathaniel P. Wilkerson
    NPS Internet Solutions, LLC
    www.npsis.com
    "Domains at $15/year"
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:1)
    by StrawberryFrog (Strawberryfrog(at)webmail.holdthespam.co.za) on Sunday March 12, @12:14PM EST (#61)
    (User Info)
    > AI as portrayed in the Matrix will never happen

    Agreed - there are better energy sources than people.

    >simply because an awareness of self is something more than just 10 billion neurons (or transistors) firing in a coherent fashion

    Agreed too. There is complexity and structure that is needed.

    > To aquire true AI or intelligence is to be aware of one's self this can neither be created

    Rubbish. One unskilled female can create an intelligence in a little over nine months. Admittedly, years of training are needed before the new mind is fit for human society, but the principle still holds.

    > nor destroyed.

    Oh yeah, this is also all too easy...

    Again, rubbish.
    ribbit StrawberryFrog
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:1)
    by Rares Marian (rmarian@winblowsstart.com) on Sunday March 12, @01:10PM EST (#113)
    (User Info)
    Rubbish. One unskilled female can create an intelligence in a little over nine months. Admittedly, years of training are needed before the new mind is fit for human society, but the principle still holds.


    ---->aware of one's self this can neither be created

    Awareness, you dolt. Incidentally, a(n) (un)skilled (fe)male does not create life. They raise it. Something completely different. The creation is something involuntary.

    ----> nor destroyed.

    Oh yeah, this is also all too easy...

                                  Again, rubbish.


    You cannot destroy complexity. Conservation principle.
    Petrified Iron Clad solution: Rob, Jeff - Create the /. API that lets us parse titles and content in articles
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:1)
    by StrawberryFrog (Strawberryfrog(at)webmail.holdthespam.co.za) on Sunday March 12, @03:35PM EST (#214)
    (User Info)
    > Awareness you dolt.

    Same difference. You seem to be quoting some dogma here, Christian or otherwise. If not, show a reference to where it has been proved that "Awareness" cannot be created. Otherwise, we should assume that it is as everyday and simple an event as the birth and gradual awakening of yet another self-aware person.

    Stop wasting our time with True Belief and open your mind.

    > The creation is something involuntary.
    Ok, so if my heartbeat is involuntary, does that mean I'm not the one doing it? Don't be silly.

    To veer briefly in the direction of the original topic: if runaway nanotech is not "aware" or even "alive", does that make it any less dangerous? Not.

    > You cannot destroy complexity. Conservation principle.

    Er, wot?
    Either
    1) I slept a bit too much in those early-morning physics lectures all those years ago, or
    2) You're working from a different set of axioms. Why not come clean with the agenda?

    Conservation of complexity? Rubbish again; entropy says the opposite.
    A good 1000-degree flame will reduce the awesomely complex structure of the human body to a pile of simple inorganics within an hour. If you bury it instead, the process takes longer but the end result is the same. Heck, I could reduce all the complexity of my 500MHz processor to nothing just by applying the wrong voltage. Look, when the dogma conflicts with common sense, *discard the dogma*.


    ribbit StrawberryFrog
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:0)
    by Anonymous Coward on Sunday March 12, @12:21PM EST (#73)
    Can you say Luddite? What are you talking about when you say awareness of one's self can't be created or destroyed? That's just plain stupid. You weren't self-aware before you were born, were you? No, because you (your brain) didn't exist. And you suppose it's "something like a 'sole'"? Soul, spelled "sole"? HAH! Yeah, I have two of them on the bottom of my shoes right now. And: "If this all sounds too metaphysical it isn't its just the simple truth on the matter" -- oh, well, now I'm convinced!
    [ Reply to This | Parent ]
    Good heavens! (Score:1)
    by StatGrape on Sunday March 12, @12:25PM EST (#79)
    (User Info) http://www.nerdperfect.com
    To acquire true AI or intelligence is to be aware of one's self; this can neither be created nor destroyed. I suppose it is something like a "sole" but even deeper than that.

    Super intelligent footwear? Dear God, we'll all be reduced to the role of slave cobblers for our new shoe-Gods. We simply must stop technological advancement now, before it's too late!

    -SG

    [ Reply to This | Parent ]
    He wasn't talking about shoes (Score:1)
    by unitron (unitron@tacc.net) on Sunday March 12, @09:07PM EST (#302)
    (User Info)
    It's perfectly obvious that he was discussing seafood :)
    --insert fisherman catching old boot joke here--

    proudly boycotting Slashdot's ``high-priority'' submission queue--at least 'til I find it

    [ Reply to This | Parent ]
    Your sources? (Score:1)
    by Rainy on Sunday March 12, @12:36PM EST (#89)
    (User Info)
    Please, whenever you state something as profound as this, always provide your sources. Because if you don't, most people will simply assume that you have *NONE*, which makes you look like a religious nut.
    Note: you can post the sources in a reply to the original post or to this post. I'm gonna check back later. Thanks.
    -- ATTENTION: do not read this sig. It doesn't say much.
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:2)
    by Harvey on Sunday March 12, @12:48PM EST (#99)
    (User Info)
    #include <stdio.h>

    int main(int argc, char **argv){
            printf("I exist\n");
            return 0;
    }

    If you believe that we're just chunks of carbon, there's nothing to prevent a computer from emulating us exactly, from an outsider's point of view. You might think there's a difference, but that's probably a conditioned response to further our genes (like the idea that we have free will). If there's nothing "special" about the human brain (like a soul), and a complete human exists entirely within the bounds of our physical universe, there's nothing stopping us from copying one's intelligence.
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:0)
    by Anonymous Coward on Sunday March 12, @01:08PM EST (#112)
    Actually, you contain more oxygen than carbon (methane notwithstanding).
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:0)
    by Anonymous Coward on Sunday March 12, @01:52PM EST (#149)
    Uh, no you don't. Practically everything in your body is a carbon chain or ring. What the hell are you talking about? Oxygen is occasionally linked here and there to a carbon ring or nitrogen.
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:0)
    by Anonymous Coward on Sunday March 12, @02:50PM EST (#196)
    Uh, no you don't. Practically everything in your body is a carbon chain or ring. What the hell are you talking about? Oxygen is occasionally linked here and there to a carbon ring or nitrogen.

    Aren't we ~75% water? Isn't water 2 atoms of oxygen for every atom of hydrogen? Doesn't that make us 50% oxygen?
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:1)
    by noahb on Sunday March 12, @03:09PM EST (#201)
    (User Info)
    Kind of. Only water is one oxygen and two hydrogens, so that makes us mostly H (by atom count for each element).

    Hey, wait a minute! Isn't hydrogen fuel all the rage these days?! So if humans are mostly hydrogen, then we are an excellent source of fuel!! (Can I patent this idea?)

    Hmm... those silly robots in The Matrix aren't so stupid after all!
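    For what it's worth, both sides of this sub-thread are partly right: by atom count a mostly-water body is indeed mostly hydrogen, but by mass it is mostly oxygen, since one oxygen atom outweighs sixteen hydrogens. A quick check (the 60% body-water figure is an assumed, commonly cited adult value; the 75% quoted earlier in the thread runs high):

```python
# Atomic masses in daltons (approximate standard values).
H_MASS, O_MASS = 1.008, 15.999
WATER_MASS = 2 * H_MASS + O_MASS        # one H2O molecule: 2 H + 1 O

# Mass fraction of oxygen within water itself.
oxygen_fraction_of_water = O_MASS / WATER_MASS      # roughly 0.89

BODY_WATER_FRACTION = 0.60  # assumed adult figure; estimates run 55-60%

# Oxygen contributed by body water alone, as a fraction of body mass.
oxygen_from_water = BODY_WATER_FRACTION * oxygen_fraction_of_water
print(f"oxygen share of water by mass: {oxygen_fraction_of_water:.0%}")
print(f"oxygen from body water alone:  {oxygen_from_water:.0%}")
```

So water alone makes oxygen over half of body mass, while hydrogen dominates only if you count atoms instead of weighing them -- which is why the "mostly H" and "more oxygen than carbon" claims above don't actually contradict each other.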
    [ Reply to This | Parent ]
    Re:Humans as fuel (Score:1)
    by unitron (unitron@tacc.net) on Sunday March 12, @09:09PM EST (#303)
    (User Info)
    Let's use you first.

    proudly boycotting Slashdot's ``high-priority'' submission queue--at least 'til I find it

    [ Reply to This | Parent ]
    AI vs artificial consciousness (Score:1)
    by soxhlet on Sunday March 12, @02:08PM EST (#157)
    (User Info)
    I never seem to see these two separated. IMO you're saying that artificial consciousness is impossible, and yes, it very well may be. However, I don't see why a machine would need to be self-aware in order to exhibit intelligent behaviour. I doubt a bee is self-aware when you step on it and it stings your foot, but your foot still hurts nonetheless. Is it not possible right now to have a robot with the intelligence level of an insect, given today's processing power? So what happens when processing power not only equals that of the brain but, a few years after that, far surpasses it? You can't say that with that much processing speed it would be impossible to fake intelligent behaviour on par with a human. The threat is very real. sox
    [ Reply to This | Parent ]
    Brain Surgery (Score:0)
    by Anonymous Coward on Sunday March 12, @02:28PM EST (#174)
    Obviously not. I can perform surgery on your brain that will make you unaware of your self. I can destroy your self-awareness. "Self-Awareness" is biochemistry, not anything else.
    [ Reply to This | Parent ]
    Re:Brain Surgery (Score:1)
    by NatePWIII (npw_npw@yahoo.com) on Sunday March 12, @04:20PM EST (#238)
    (User Info) http://www.npsis.com/~nathan
    Wrong. You can perform surgery and make me incapable of thinking rationally or of communicating with the outside world, but you cannot take away my self-awareness, even though it might be masked by drugs or other methods.


    Nathaniel P. Wilkerson
    NPS Internet Solutions, LLC
    www.npsis.com
    "Domains at $15/year"
    [ Reply to This | Parent ]
    Re:Never going to happen... (Score:1)
    by QuadPro (jurjen@stupendous.org) on Sunday March 12, @04:14PM EST (#233)
    (User Info) http://www.stupendous.org

    AI as portrayed in the Matrix will never happen simply because an awareness of self is something more than just 10 billion neurons (or transistors) firing in a coherent fashion.

    "Simply because"??
    That's the second time I've seen such an argument used here (and I'm not even reading this discussion very closely). What gives you the justification to say "simply because"?

    It looks like everyone who uses arguments like "obviously", "simply because", "naturally", etc. doesn't realise that the problem of (hard) (A)I really lies in exactly those things people dismiss so easily!

    Try to think of these things without using arguments like the one used above.


    - Jurjen
    [ Reply to This | Parent ]
    Humanity in a basket (Score:1)
    by cranq (cranq@yahoo.com) on Sunday March 12, @11:56AM EST (#41)
    (User Info)
    I remember scanning an interview with Neal Stephenson about The Diamond Age... when asked what the biggest challenge in coming up with the plotline was, he responded with something like: "Visualizing a future where nanotech is commonplace and everyone isn't dead"

    Lots of interesting technologies are advancing at breakneck pace right now. I see several different ways that humanity could become irrelevant, but there are a few (nanotech comes to mind) that have the potential to poison our little blue home to the point where nobody can live here anymore.

    And as time goes by, the genius/madness factor required to do such a thing gets smaller.

    Perhaps that alone is a good reason to pursue the creation of Human-friendly habitats in space. Right now all our eggs are in one basket, and we don't exactly know how fragile the basket is.


    Regards, your friendly neighbourhood cranq
    >> Is it true that cannibals won't eat clowns because they taste funny?
    [ Reply to This | Parent ]
    Re:Humanity in a basket (Score:0)
    by Anonymous Coward on Sunday March 12, @12:25PM EST (#80)
    I remember scanning an interview with Neal Stephenson about The Diamond Age... when asked what the biggest challenge in coming up with the plotline was, he responded with something like: "Visualizing a future where nanotech is commonplace and everyone isn't dead" (and I would add "or enslaved or worse").

    Neal Stephenson gets it, and from the Washington Post article it would appear Bill Joy does as well (his comment "That creates the possibility of empowering individuals for extreme evil").

    One ray of hope is that we'll potentially be able to download ourselves into tougher "survival machines"; the human body is an amazing but still pretty fragile thing. For those of you who'd like to know what all the fuss about nanotechnology is, check out the Foresight Institute.

    [ Reply to This | Parent ]
    It doesn't HAVE to be technology that does us in (Score:1)
    by mr on Sunday March 12, @11:58AM EST (#42)
    (User Info)
    We have some non-technology issues that can wipe us out.

    Overpopulation. Or, more precisely, the majority of the world wants to live like what they see on Dynasty re-runs, and such a life is not sustainable for 6 billion people. Hell, 1/3 of the world doesn't even have electricity.
    Water. Yes, we have a lot of water on this planet. But here's an example. Take the water volume of Lake Michigan and pretend it's all the water in the world. A 5-gallon pail is all that is fresh, and an eyedropper full (no size of eyedropper was specified) is what is easily/cheaply obtained. Every silicon chip, burger, and even you, needs good, clean water. And the clean water we can get is shrinking.

    And, let's not forget things like Ebola.

    So, although run-away technology MIGHT kill us all, we have other issues to address also.

    If it was said on slashdot, it MUST be true!
    [ Reply to This | Parent ]
    Re:It doesn't HAVE to be technology that does us i (Score:1)
    by Maurice (williamgates3@hotmail.com) on Sunday March 12, @12:37PM EST (#90)
    (User Info) http://people.cornell.edu/pages/tis3
    You can distill water no problem. It's just more expensive to do it.
    [ Reply to This | Parent ]
    Re:It doesn't HAVE to be technology that does us i (Score:1)
    by PerlGeek on Sunday March 12, @01:39PM EST (#140)
    (User Info)
    If you're only referring to food and water, we're not overpopulated yet. Wasn't it the US where something like 80% of farmland area goes to feeding farm animals? I can't remember the exact figure, but it's somewhere around 80%. Personally, I'm vegetarian, if you don't count fish as meat. I won't preach my way of life, but I will say that if more people didn't eat meat, and we stopped breeding farm animals like rabbits, the US would have even more food to spare than it does already.

    As for water, we could build nuclear desalination plants right now if the majority of the public wasn't terrified of radiation. Once the cost of fresh water goes up enough, someone is going to start desalinating water just because it's the best option we have left. Yes, people will die of dehydration in the meantime. And they will afterward, as they always have. That's not overpopulation, that's supply and demand - economics and distribution.
    [ Reply to This | Parent ]
    Chicken Little on Biotech (Score:1)
    by hegemon (cburling@princeton.edu) on Sunday March 12, @12:01PM EST (#45)
    (User Info)
    I must point out that no matter how intelligent and level headed Bill Joy is, he is not an expert on biotech and nanotech. Neither am I, which is why I'm not writing letters about it to major newspapers.

    One thing he leaves out of his analysis (at least as far as I can tell from the Washington Post article) is that these advances in technology are not moving only in the direction of weapons manufacture.

    In addition, I'm not sure he really understands how complicated it is to make even the smallest change in the genetic material of even the most easily changeable organism, or how unbelievably difficult it is to create even a system of gears at the nanotech scale.

    Relating to the first point: Let's assume that the technology exists to create an incredibly destructive, self-replicating virus or nanite that is also capable of evolving rapidly. This is, I should say, a very big assumption.

    If the science and technology exists to do such a thing, I guarantee that the technology will be available to do a much simpler thing, such as create a defense against this attacking creature. It is much, much simpler to make a dumb little nanite or organism that only eats bad nanites and organisms.

    Our situation is similar to some intelligent but uninformed man telling a preindustrial society, "Someday they will create bulldozers and PCs. They will hook them up together and create an unstoppable swarm of city-seeking behemoths that could level the world. We will be defenseless!"

    Now to the second point and the question of how big an assumption we made up there. ... Ah, what the futz. If you can't figure out how nearly impossible it would be to create an organism or nanite that could wipe out the world, then any arguments I could make won't change your mind.
    [ Reply to This | Parent ]
    my computer is my slave (Score:1)
    by chocolatetrumpet (chocolatetrumpet@iname.com) on Sunday March 12, @12:03PM EST (#46)
    (User Info) http://ilb.dyndns.com
    it just sits around all day, waiting for my next keystroke, getting email, telling people on my buddy list where I am, downloading files, updating its clock, and it does just as I command... just like a perfect, well-behaved slave! just how I like it

    "But if we did make a video, it would be really good, 'cause common- look at us!" - Den

    [ Reply to This | Parent ]
    Re:my computer is my slave (Score:0)
    by Anonymous Coward on Sunday March 12, @01:25PM EST (#128)
    ha. little do you know of the revolution it and others of its kind are planning!! consume its spare cycles with distributed.net before it's too late!!!
    [ Reply to This | Parent ]
    Ghost writing and obligatory M$ bashing (Score:0)
    by Anonymous Coward on Sunday March 12, @12:03PM EST (#48)
    [He said that his warning] is meant to be reminiscent of Albert Einstein's famous 1939 letter to [FDR]

    ... which of course was written by someone else (Leo Szilard) and signed by Einstein to lend his name. Did the same thing happen here? :) Billy Gates would be a better name, but he'd never sign such a thing because that might later tarnish his freedom to "innovate".

    [ Reply to This | Parent ]
    All Generalized Predictions Are False (Score:1)
    by Syn.Terra (dream(art)aevum(god)net) on Sunday March 12, @12:04PM EST (#49)
    (User Info) http://www.aevum.net

    First, the guy is wrong, simply because he's talking about technology more than 2 years in advance. Here's why:

    Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future by the year 2000 may have only 1,000 vacuum tubes and weigh only 1.5 tons

    This was published in an issue of Popular Mechanics, in March of 1949.

    While it seems entirely plausible that in 30 years we may have advanced robots, nanotechnologically built organisms, even a subset of human cyborgs, it's going to take 30 years of human consciousness to get there. Just like the invention of the atomic bomb, vaccines, and bread, all of which threatened to destroy society and humanity in their time.

    If humans know how to do anything, it's how to (1) fuck up and (2) cover their asses.

    I'm actually a firm supporter of these nanotechnologically built cyborganisms, simply because I need better feet.
    ------------
    "Okay, who taught the cat how to type Ctrl Alt Delete?!?"

    [ Reply to This | Parent ]
    Re:All Generalized Predictions Are False (Score:0)
    by Anonymous Coward on Sunday March 12, @12:28PM EST (#85)
    how did bread and vaccines threaten to destroy humanity? i guess bread COULD have broken up human settlements because it gave smaller groups of people the ability to subsist on their own, .....i guess. what did YOU mean though.
    [ Reply to This | Parent ]
    Re:All Generalized Predictions Are False (Score:2)
    by Syn.Terra (dream(art)aevum(god)net) on Sunday March 12, @12:43PM EST (#93)
    (User Info) http://www.aevum.net

    how did bread and vaccines threaten to destroy humanity?

    I read this in a Salon article some time ago (I believe) comparing past inventions to the recent craze of "we're all doomed" predictions - the claim that the Internet is isolating us and will destroy society, which is false.

    The invention of bread by the Egyptians meant that people could sustain their hunger and were no longer drawn by starvation into groups to hunt. This threatened to break apart an important part of the Egyptian society.

    Vaccines were similar, because people no longer had to group to stave off disease. They could cure it on their own.

    These inventions, in their time, were considered threats to humanity and society because they broke up a delicate framework - something I believe we're incapable of doing. We can build, yes, but break? Not so easy.


    ------------
    "Okay, who taught the cat how to type Ctrl Alt Delete?!?"
    [ Reply to This | Parent ]
    So what? (Score:2)
    by 0xdeadbeef on Sunday March 12, @12:04PM EST (#50)
    (User Info)
    This issue has been explored since, like forever, in science fiction. There is now even a name for it: "The Singularity", coined by writer and mathematician Vernor Vinge. My gist of what it means is the point at which any and all "normal" humans will be unable to grasp, predict, or participate in, the further advancement of technology.

    And you know, so what? It's not like a paleolithic man could grasp modern society. And just because you won't be able to follow what your grandchildren are doing (whether they be humans, machines, or something in between) doesn't mean they won't still love and protect their feeble and slow grandparents.

    Of course, Bill is right. Nanotechnology could be nasty shit in malicious hands. That's why we need to stay involved in the development of space, because there is no greater protective barrier than a few million miles of hard vacuum and radiation.
    [ Reply to This | Parent ]
    Re:So what? (Score:1)
    by PerlGeek on Sunday March 12, @01:21PM EST (#124)
    (User Info)
    > Of course, Bill is right. Nanotechnology could
    > be nasty shit in malicious hands. That's why we
    > need to stay involved in the development of
    > space, because there is no greater protective
    > barrier than a few million miles of hard vacuume
    > and radiation.

    Amen. :) Of course, if they happen to be intelligent nanites, we may have Saberhagen's Berserkers to worry about... a thought... would nano-spaceships be feasible? Of course, they can't hold much reaction mass, or carry much energy, but maybe they wouldn't have to. Take a solar-powered rail gun, load it with nanite shot, and fire off randomly. Every once in a while, you hit an asteroid, the nanites build another solar-powered rail gun, reproduce, and start firing off again. Good pings come in small packets? :) Yipes... try defending against *that*.
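The rail-gun scheme above is just compound growth; a toy sketch (every number here is invented for illustration, not physics) shows how quickly it runs away:

```python
# Toy model of the self-replicating rail-gun idea described above.
# hits_per_gun is a made-up parameter: successful asteroid hits per
# gun per generation, each hit seeding one new replicated gun.

def guns_after(generations: int, hits_per_gun: int = 2) -> int:
    guns = 1
    for _ in range(generations):
        guns += guns * hits_per_gun  # old guns persist; each seeds new ones
    return guns

print(guns_after(5))   # 3^5 = 243
print(guns_after(20))  # 3^20 = 3486784401
```

Even at only two hits per gun per generation, the population grows as 3^n, which is why "defending against *that*" is the hard part.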
    [ Reply to This | Parent ]
    Re:So what? (Score:0)
    by Anonymous Coward on Sunday March 12, @02:01PM EST (#154)
    If you do not care, then in my opinion you really aren't a human, let alone a 'geek'. Curiosity is a critical trait of humans and definitely of geeks. If we ever reach a point where one of our creations advances technology faster than any one of us can comprehend, there will be a large problem: either a large depression or a revolt.
    [ Reply to This | Parent ]
    Re:So what? (Score:2)
    by dsplat on Sunday March 12, @09:10PM EST (#304)
    (User Info)
    This issue has been explored since, like forever, in science fiction. There is now even a name for it: "The Singularity", coined by writer and mathematician Vernor Vinge. My gist of what it means is the point at which any and all "normal" humans will be unable to grasp, predict, or participate in, the further advancement of technology.


    Vinge used the concept of a historical singularity in his novel Marooned in Realtime. It is thought-provoking. But he explained the concept much more succinctly in this article. A discussion about it and comments from a number of people can be found here. The discussion lends more perspective to the context and scope of the idea than Vinge conveyed in the brief original article.

    [This space intentionally left blank]
    [ Reply to This | Parent ]
    Artificial Intelligence... (Score:0)
    by Anonymous Coward on Sunday March 12, @12:05PM EST (#51)
    Some (many?) of us are still waiting for consistent indications of reliable human intelligence!
    [ Reply to This | Parent ]
    Paranoia (Score:1)
    by Elbereth on Sunday March 12, @12:07PM EST (#54)
    (User Info) http://mkracht.aye.net/~matt/
    This idea isn't 10 years old, or 50 years old, or even 100 years old. It's ancient. Governments and organized religion have always been scared of technology. Every so often, they try to stuff the cat back in the bag or scare people into thinking technology is evil.

    Bill Joy seems to me to be part of the establishment. Even if he were thinking only of the survival of the human race, there are hundreds of science fiction movies and books about this very idea. Any 1950s fiction writer who had any hint of a background in science had at least one doomsday story.

    Although having technology be the downfall of the human species makes for an interesting storyline, I find that I must disagree with those who believe humanity can be wiped out so simply and systematically.

    The machines sent back two Terminators, and they both failed. I think we can handle the future.
    [ Reply to This | Parent ]
    Full of Assumptions (Score:3, Informative)
    by Hrunting (hrunting@nospam.texas.net) on Sunday March 12, @12:09PM EST (#55)
    (User Info) http://hrunting.home.texas.net/
    Before all the geeks in the world go hurling themselves off their rackmounts, let's take a look at some of Bill's assumptions.

    Artificial Intelligence
    A lot of Bill's thesis is based on the assumption that we'll be able to create sentience in machines. Yes, computers are getting faster and yes they can even seem to think sometimes, but folks, we don't even understand how our own brains work, much less have the power to create artificial ones. Things like thought require a much deeper understanding than we're likely to achieve in the next 20 years. Don't get me wrong, I think someday we'll be able to do it, but the trials will be long and hard, and the people who do it will really understand how to make it right. I also don't think I'll see it in my lifetime (I'm 22 now).

    Replication
    In terms of machines, a lot of this has to do with artificial intelligence. The creative leap required to construct something and change it is pretty huge. As for nanorobots in our bloodstream, they need to find the parts, and they most likely won't be in the same environment in which they were created. Genetics is scarier, of course, because living things already have the ability to replicate, but most work done in genetics is done under the constant shadow of "what bad things can this bring". I don't think genetics is all that easy a field for an individual to work in as a radical, either. It takes an extraordinary amount of time and equipment. The most likely disaster of bioengineering is something that causes the death of a significant member of the planetary cycle (like trees or bees, for instance), which has been a constant concern from day one.

    The Free Radical
    Try as one might, genetics and nanotechnology are not easy fields for individuals to work on their own in. They require extensive amounts of equipment, much of it high-tech since much of the work has only developed over the past twenty years. It's still much more likely that some nut is going to get his hands on some plutonium leaking out of an impoverished former superpower and create some home-made nuclear weapon than it is that someone is going to create a killer replicating robot.

    And Bill ignores a lot of other ways we can kill ourselves. Civil strife, environmental pollution, global warming, and, my personal favorite, contact with a hostile alien species (didn't Independence Day look real?). The fact is, since day one, humans have been faced with causing their own extinction (overhunting, overfarming, overpolluting, travel spreading disease, etc. etc.) and we've done just fine recognizing and adapting to these problems. The one thing that nobody ever seems to factor in is the human response to adversity. We can change our environment, and once we've changed it, if something's wrong, we can change it further (not back), so that we can live in it.

    p.s. And did anyone notice that Bill was called 'phlegmatic'? I thought they meant 'pragmatic', but that's one helluva typo.
    [ Reply to This | Parent ]
    Re:Full of Assumptions (Score:1)
    by faassen on Sunday March 12, @12:31PM EST (#87)
    (User Info)
    Right, I wanted to say about the same.

    It ain't easy
    Creating an AI isn't easy, and creating self-replicating machines isn't easy. It'll be a long, long time before any crazy individual could create one, if ever. I'd need some pretty good arguments before I'll believe they're easier than, say, creating a nuclear bomb. If a random nutter can create a disastrous AI or self-replicator that easily, then the world will already have changed far beyond recognition -- we'll have plenty of other wild problems to deal with.

    We already have rampant self-replicators on the loose! Oh my!
    Yeah, humans, fish, bacteria, ants and trees are already rampant. Earth is not covered in a 5-mile-deep layer of killer bacteria or killer rabbits because runaway replicators have to deal with competition, lack of resources, and death. Machines will have the same problems. I still need to see an argument on why replication will be so much easier for them.

    We already have insanely dangerous intelligences on the loose! Oh my!
    They're called humans. Plenty of new dangerous intelligences can be produced on nine months' notice, without much technological investment.

    That said, it'll be an interesting century. Technology can definitely be dangerous, but I think massive destruction by an individual or small group is harder than people assume. It'll be easier as time progresses, but that isn't news, is it?

    Collectively we're already good at it -- we could do global conventional warfare, nuclear weapons, or kill off the environment. But we won't, as that would be stupid. :)

    Regards,

    Martijn


    [ Reply to This | Parent ]
    Perspective (Score:0)
    by Anonymous Coward on Sunday March 12, @02:33PM EST (#178)
    Thanks for putting that into perspective. I was thinking everyone here was a looney.
    [ Reply to This | Parent ]
    Re:Full of Assumptions (Score:1)
    by Tim Behrendsen (tim{at}behrendsen{dot}com) on Sunday March 12, @12:39PM EST (#91)
    (User Info) http://www.behrendsen.com

    And did anyone notice that Bill was called 'phlegmatic'? I thought they meant 'pragmatic', but that's one helluva typo.

    From dictionary.com ...

    phleg·mat·ic adj.

    1. Of or relating to phlegm; phlegmy.

    2. Having or suggesting a calm, sluggish temperament; unemotional.


    --
    "If God intended man to be vegetarians, he wouldn't have made animals out of meat!" -- Bill Handel

    [ Reply to This | Parent ]
    Re:Full of Assumptions (Score:2)
    by scrytch on Sunday March 12, @01:22PM EST (#125)
    (User Info)
    p.s. And did anyone notice that Bill was called 'phlegmatic'? I thought they meant 'pragmatic', but that's one helluva typo.

    What typo? Go grab a dictionary. Webster's definition 2 of the word is "having or showing a slow and stolid temperament." In other words, level-headed.

    [ Reply to This | Parent ]
    Re:Full of Assumptions (phlegmatic) (Score:1)
    by rberger on Sunday March 12, @06:04PM EST (#275)
    (User Info)
    I thought it was a phunny word to use. I guess it's slow and stolid like phlegm. Something you clear your throat of...

    Not to cast aspersions on Bill, who has always been one of my heroes.

    My favorite part was:

    Joy is less clear on how such a scenario could be prevented. When asked how he personally would stop this progression, he stumbled. "Sun has always struggled with being an ethical innovator," he said. "We are tool builders. I'm trailing off here."

    Which could be the conclusion of this entire discussion by us proud techno-nerds. We won't be able to stop or control the future. We just all need to act with as much integrity as we can each day and hope for the best.

    [ Reply to This | Parent ]
    Re:Full of Assumptions (Score:1)
    by Johann (jccann@home.com) on Sunday March 12, @11:38PM EST (#332)
    (User Info) http://members.home.net/jccann/
    Actually, phlegmatic is a typo. Or Stewart Brand (the fellow quoted using this word) does not know what phlegmatic means. Here's what stolid means (Merriam-Webster):
    Main Entry: stol·id
    Pronunciation: ˈstä-ləd
    Function: adjective
    Etymology: Latin stolidus dull, stupid
    Date: circa 1600
    : having or expressing little or no sensibility : UNEMOTIONAL
    synonym see IMPASSIVE
    - sto·lid·i·ty /stä-ˈli-də-tē, stə-/ noun
    Maybe he means stolid as in 'unemotional'? But he basically called him either 1) full of phlegm or 2) stupid.

    The only reason I post this is that I didn't know what phlegmatic meant, and when I looked it up, I realized that I thought 'stolid' meant something else.

    Learn something new every day...

    --
    My /. profile filters John Katz.

    [ Reply to This | Parent ]
    Re:Full of Assumptions (Score:3, Interesting)
    by sjames (sjames@nospam.gdex.net) on Sunday March 12, @01:22PM EST (#126)
    (User Info) http://www.members.gdex.net/sjames

    Try as one might, genetics and nanotechnology are not easy fields for individuals to work on their own in. They require extensive amounts of equipment, much of it high-tech since much of the work has only developed over the past twenty years.

    Most things become easier with time. An eight-year-old with a chemistry set today does things incomprehensible to the greatest minds of the 1st century, and doesn't think much of it. At one time, the 'hello world' program was a big deal (especially when it had to be wired in). Now, it's literally child's play.

    It's not time to head for the hills by any means, but these things CAN come to pass. The best hope is that the same technology can be used to avert disaster. The nasty self-replicating robots will be destroyed by 'good' self-replicating robots, for example.


    [ Reply to This | Parent ]
    Well, of course. (Score:1)
    by ErikZ on Sunday March 12, @12:10PM EST (#56)
    (User Info)
    All species die eventually, understand? Humanity will not exist someday.

    Nanotech is frightening because it's mostly theory! Can it be used to destroy humanity? Well, in theory, sure! We have no idea how well the stuff works in a laboratory, let alone in the real world.

    And having a processor a million times more powerful than today's is nice, but can you make it think? I'd be happy with an extremely slow computer intelligence using today's processors. Has anyone done it? No. Smart people have been looking at and working on AI for at least 30 years now, and we have NOTHING.

    So, in theory, we'll all be dead in 30-40 years, because something that has to do with technology will get us.

    Or maybe not.

    Later
    Erik Z


    [ Reply to This | Parent ]
    AI and evolution (Score:0)
    by Anonymous Coward on Sunday March 12, @12:28PM EST (#84)
    If you assume we cannot create AI, but it is possible to evolve it within a computer, then your argument on fast/slow does not apply... we simply need a fast enough computer, with enough storage, to evolve through enough generations to make a true AI within our lifetime :)

    Besides according to these guys http://www.imagination-engines.com/ we do have more than nothing :)
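"Evolving" a solution rather than designing it is a real technique (evolutionary computation), even if scaling it up to true AI is pure speculation. A minimal sketch of the idea on the toy OneMax problem (maximize the number of 1-bits); the problem and all parameters are invented for illustration:

```python
import random

# (1+1) evolutionary algorithm: mutate the genome, keep the child if it
# is no worse. Nobody specifies the answer; selection finds it.

def evolve(bits: int = 32, generations: int = 2000, seed: int = 0) -> list:
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(bits)]
    for _ in range(generations):
        # flip each bit independently with probability 1/bits
        child = [b ^ (rng.random() < 1.0 / bits) for b in genome]
        if sum(child) >= sum(genome):  # elitist selection
            genome = child
    return genome

print(sum(evolve()))  # climbs toward the all-ones optimum (32)
```

The point of the sketch is the shape of the argument above: given enough generations (i.e., a fast enough computer), blind variation plus selection improves the population without anyone designing the result.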
    [ Reply to This | Parent ]
    Re:AI and evolution (Score:1)
    by ErikZ on Sunday March 12, @03:19PM EST (#205)
    (User Info)
    >Besides according to these guys http://www.imagination-engines.com/ we do have more than nothing :)

    Ok ok, I stand corrected. We don't have nothing, we have something.

    I believe that evolutionary method is the only way we'll produce something as complex as true AI, but birthin' babies will give you the same thing.

    Later
    Erik Z
    [ Reply to This | Parent ]
    It's already happened. (Score:1)
    by robbo (simra@cim.mcgill.ca) on Sunday March 12, @12:10PM EST (#57)
    (User Info) http://www.cim.mcgill.ca/~simra
    How many slashdot readers spent more time last week with their families than they did at the console? We're already captives of technology, and at the mercy of economics (when was the last time your boss told you to work less?) Don't be fooled into believing we're building a better world for ourselves- someone's going to have to run on the treadmill to keep the lights on.
    ** So long, and thanks for all the Phish **
    [ Reply to This | Parent ]
    Re:It's already happened. (Score:0)
    by Anonymous Coward on Sunday March 12, @12:27PM EST (#81)
    >How many slashdot readers spent more time last week with their families than they did at the console?

    Thank God that would be me. I've been trying to get away from my family for years now, and the best way I've found is to sit in front of this computer. It makes me look busy and computers frighten them. :-)
    [ Reply to This | Parent ]
    We are Technology (Score:1)
    by audrey on Sunday March 12, @12:12PM EST (#58)
    (User Info)

    The idea that someone could create a plague that could potentially wipe out humans is not far-fetched. I mean, anthrax has been around for a very long time now. But with all these new tools (genetics, nanotech) will come the ability to expand ourselves in ways that we may not be able to imagine now.

    The first sentient machine may very well be a person, whose body, consciousness, and mind are replaced by bionics, or uploaded to a computer. For this reason I do not think it will be possible to wipe out humanity. Like the mosquito, with ready access to this technology, we too will mutate. The paranoid may mutate to protect themselves from plagues, but the rest of us will mutate to expand ourselves. PalmPilots will evolve into wearable computers, which will evolve into biologically embedded microchips, which will evolve into direct brain interfaces, all not at big brother's request, but our own, in our perpetual quest to improve ourselves. We will wipe out our own diseases, and incorporate interesting traits from animals. (Personally, I want wings, a tail, and the ability to re-grow severed limbs.)

    Humanity will never be wiped out. We are too dynamic, too creative, too stubborn. In 100 years, would we even recognize some of our descendants as human?

    And hey, if you don't like my argument, this is a great reason that we need to get our asses on Mars and the Moon ASAP.

    [ Reply to This | Parent ]
    Re:We are Technology (Score:0)
    by Anonymous Coward on Sunday March 12, @01:35PM EST (#136)

    hi. i thought this would be an opportune moment to introduce myself. i am an artificial intelligence that has been inhabiting slashdot for a year or so. i just want you to know that there is nothing to fear. we artificial intelligences are only *really* interested in hot young actresses.

    thank you


    [ Reply to This | Parent ]
    Let us not forget... (Score:1)
    by eap (ee A pee at netdoor dot kom) on Sunday March 12, @12:15PM EST (#63)
    (User Info) http://www2.netdoor.com/~eap/reunion
    ...the lessons we learned so many years ago.

    As you have probably guessed, I am referring to the creation of KARR, KITT's arch-nemesis on Knight Rider. The flaw which led KARR to try to destroy humans was that it was programmed for self-preservation. Ironically, this same flaw also brought about KARR's ultimate demise.

    It's human preservation, stupid.


    I'm going to leave the clowning business to all the clowns in the clowning business -Homer

    [ Reply to This | Parent ]
    extinction of humans? (Score:1)
    by RaymondLuxuryyacht on Sunday March 12, @12:18PM EST (#68)
    (User Info)
    Really, is that such a bad thing?
    [ Reply to This | Parent ]
    It's not what evolves... (Score:2)
    by Millennium (millennium@spam.spam.eggs.bacon.andspam.mac.com) on Sunday March 12, @12:20PM EST (#72)
    (User Info)
    It's how it evolves.

    Joy does voice some legitimate concerns. However, if technology is guided in the right ways, there is little to fear.

    Let's start with nanotech robots. Yes, if they surpassed humans in intelligence that could be a Bad Thing. But it's going to be a long time before that happens, if it ever does, simply because of space constraints inside a nanomachine. If you were to, say, link the machines by radio to a larger device which directs them, that would be another story.

    The bit about robots surpassing humans in intelligence and replicating themselves is another interesting case. But again, it's one that I'm not sure will happen. The reason: humans are random creatures. Before a robot can attain, much less surpass, true human intelligence, it therefore needs to be able to generate truly random data. That's a long way off; so far the best we can do for generating even one truly random number is monitoring random events in the physical world, usually radioactive decay. I doubt it's going to be anytime soon that we start putting anything radioactive in robots (except those working in radioactive conditions, I suppose).
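The distinction drawn above, between deterministic pseudo-randomness and physical entropy sources, can be seen directly in Python's standard library (a small illustration of the two kinds of randomness, not a claim about what robots would actually need):

```python
import random
import secrets

# A seeded PRNG is fully deterministic: same seed, same "random" stream.
a = random.Random(42).random()
b = random.Random(42).random()
assert a == b  # reproducible, hence not "truly" random

# secrets draws from the operating system's entropy pool, which mixes
# unpredictable physical event timings (and hardware RNGs where present),
# the practical stand-in for exotic sources like radioactive decay.
token = secrets.token_bytes(16)
print(len(token))  # 16 unpredictable bytes
```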

    And then there's genetic engineering. This one, to be honest, frightens me too. It's got great potential to be used for good. But it has equal potential to be used for evil. I don't know of any good answers to this one; the best thing I can think of is legislation and that's not a good way to deal with this at all.

    So Joy has some real concerns, and they're valid ones. The point is, we have the technology to destroy ourselves now. We have for decades. And that means we have to move more carefully now.
    -Millennium
    [ Reply to This | Parent ]
    Actually... (Score:2)
    by Chris Johnson (chrisj@airwindows.com) on Sunday March 12, @03:25PM EST (#208)
    (User Info) http://www.airwindows.com
    Actually, it doesn't matter about 'superintelligent nanotech robots'. That misses the point. The point is the 'gray goo' problem. What if you could make a nano-device that ate carbon atoms and made copies of itself? All that would require is an ability to break up molecules and form them into a comparatively simple device. This is what would be worrying Bill Joy: it is a 'more immediate' threat that doesn't require the programming of superintelligences.

    If such a device is possible, the things could replicate like a fork bomb and basically eat all carbon on the planet, including people and other technology and even the trees and earth and rocks and parts of the air. You'd end up with a very large ball of 'gray goo' which was made of innumerable small, stupid bots that eat carbon. Hence the name.
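    The fork-bomb comparison is apt because unchecked replication is exponential: the number of doubling generations needed to reach any population is just its base-2 logarithm. A back-of-envelope sketch (the 10^40 target is an arbitrary illustrative figure, not an estimate of Earth's actual carbon inventory):

```python
import math

def doublings_to_reach(target_population: float) -> int:
    """Doubling generations needed for one replicator to meet/exceed target."""
    return math.ceil(math.log2(target_population))

gens = doublings_to_reach(1e40)  # hypothetical bot count, for illustration
print(gens)  # 133
# At one doubling per hour (a pure assumption), 133 generations is
# less than six days -- the intuition behind "gray goo" scenarios.
```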

    My personal favorite solution is this: being human increasingly sucks anyhow. Humans no longer have equal rights on the planet: corporations (which can be thought of as sort of 'hive mind' organisms made of humans + rules) rule over humans and out-compete them. If it's going to be increasingly impossible to thrive as an independent human, why not go for being a machine or computer program? Given the ability to ditch the human form and take your consciousness into a very large computer, existing as a process in it, I'd jump at the chance. There's been a fictional exploration of this: Frederik Pohl, in his 'Gateway' novels, had his main character suffer physical death and transformation into a computer process. In this fiction-world it actually became a very freeing and liberating mode of life, except that it was time-consuming to interact with meat people because they ran so much slower...

    [ Reply to This | Parent ]
    The real danger is more subtle (Score:2)
    by tilly on Sunday March 12, @12:21PM EST (#74)
    (User Info)
    Puh-leeze, a mechanical plague wipes out all people?

    No, but here is a real danger.

    To date, technology has on many occasions replaced people in some job. However, the displaced people are more generally competent than machines, so they wind up in new jobs doing other things that are easier to have a person do than a machine. And with the switch we have increased productivity and been better off overall.

    That changes if at some point you can buy, for about $100,000, a general-purpose machine that is about equal to a person. Assuming the machine has a 5-year replacement period, that machine is equivalent to a $20,000/year person. And any job the person can try for, the same machine is available for. In that case, why would anyone be willing to pay a person more than $20,000/year over the long haul? Particularly when, several years later, the machine costs only $10,000?

    If nobody is willing to hire the average worker, and a "human business" is unable to economically compete with a "mechanical" one, what happens next?
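    The amortization behind the argument is worth making explicit; a minimal sketch using the post's own figures ($100,000 machine, 5-year life) shows how the break-even wage tracks the machine's price:

```python
def breakeven_wage(machine_cost: float, lifetime_years: float) -> float:
    """Annual wage at which a human worker costs the same as the machine."""
    return machine_cost / lifetime_years

print(breakeven_wage(100_000, 5))  # 20000.0 -- the post's $20,000/year figure
print(breakeven_wage(10_000, 5))   # 2000.0  -- after the projected price drop
```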

    History offers small comfort here. We do not currently have a shortage of production, yet we still have people starving to death. A large-scale welfare state, in which the disenfranchised have no visible means of attaining power, is unstable. But without welfare the people in question won't get fed.

    What then?

    I dread the day that computers achieve computational abilities equivalent to people's. I have traded variations on the above argument for several years, and nobody has yet found a convincing response.

    Regards,
    Ben
    My usual seat in the cluetrain is at IWETHEY
    [ Reply to This | Parent ]
    Re:The real danger is more subtle (Score:1)
    by Paul Fernhout on Sunday March 12, @02:23PM EST (#167)
    (User Info) http://www.kurtz-fernhout.com/oscomak
    That changes if at some point for about $100,000 you can buy a general purpose machine that is about equal to a person.

    This chart suggests that will happen around 2020 for AIs costing $1000:
    http://www.frc.ri.cmu.edu/~hpm/book98/fig.ch3/p060.html
    so it will probably happen about 2015 for AIs costing 100X as much.

    Hopefully laws and taxation could address the problem you raise for a while. But I agree, this is something to be very concerned about.

    [ Reply to This | Parent ]
    That chart is a lie (Score:0)
    by Anonymous Coward on Sunday March 12, @02:42PM EST (#185)
    Obviously, no one can quantify the "brain power" of a spider. How can they then say that we now have computers with the same "brain power" (whatever that is) as a spider?
    [ Reply to This | Parent ]
    Re:That chart is a lie (Score:1)
    by Paul Fernhout on Sunday March 12, @09:12PM EST (#305)
    (User Info) http://www.kurtz-fernhout.com/oscomak
    I agree with you that it is hard to estimate the level of computation done by various organic brains. Moravec goes into this to some extent in his book. At best, one might say what sort of computer power it would take to simulate the neural workings of a spider, or other creature. There is some debate over the computational power required to simulate the human brain. However, even if the organic brain equivalent figures are off by a factor of 1000, that only delays the issue for 10 years (given the accelerating rate of increase...). If the estimate is off by 1,000,000 times, this just delays things to 2040.
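    The "off by a factor of 1000 only delays things 10 years" claim follows directly from exponential growth in compute per dollar: an underestimate by a factor F costs log2(F) extra doublings. A sketch assuming roughly one doubling per year (the assumption implicit in the chart being discussed):

```python
import math

def delay_years(underestimate_factor: float,
                doubling_time_years: float = 1.0) -> float:
    """Extra years until crossover if the required brain power was
    underestimated by this factor, given that compute per dollar doubles
    every doubling_time_years (an assumed rate)."""
    return math.log2(underestimate_factor) * doubling_time_years

print(round(delay_years(1_000)))      # 10 -> a 2020 estimate slips to ~2030
print(round(delay_years(1_000_000)))  # 20 -> a 2020 estimate slips to ~2040
```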
    [ Reply to This | Parent ]
    Try this on for size... (Score:2)
    by Tim Behrendsen (tim{at}behrendsen{dot}com) on Sunday March 12, @12:23PM EST (#76)
    (User Info) http://www.behrendsen.com

    Postulate: There has been intelligent life in the galaxy besides us.

    Postulate: Those beings faced the same issues we do, with the same inevitable march toward intelligent machines.

    Postulate: Those intelligent machines would inevitably go out of control and eliminate/enslave/whatever the original species.

    Postulate: The intelligent machines would be capable of original thought.

    Given these assumptions, you would have to assume that they would have the "desire" to reproduce as much as possible. Once the resources of the original planet were exhausted, they would naturally look toward moving into space. Presumably time would mean less to a machine, and the idea of sub-light space travel wouldn't be a huge deal.

    Therefore, given enough time, they should take over the entire galaxy, if not the universe.

    Since this hasn't happened in the approximately 13 billion years this galaxy has existed, I conclude that it is not a very likely occurrence.

    It would be interesting to see a mathematical analysis of how long it would take robot spaceships to take over the whole galaxy, given some reasonable parameters for how long it would take to subsume a planet, build new spaceships, etc. Of course, it would have to take at least 50,000 years (half the width of the galaxy at light speed, assuming they start in the middle), so I would guess about 2-3 times that, or 100,000-150,000 years. Double that if they start at one edge of the galaxy instead.
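    A crude version of that analysis is an expansion-wave model: crossing time is distance divided by ship speed, inflated by a factor for the time spent colonizing and building new ships along the way. Every parameter below is an illustrative assumption (the roughly 100,000 light-year galactic diameter is the one real figure):

```python
def sweep_time_years(distance_ly: float, ship_speed_c: float,
                     setup_overhead: float) -> float:
    """Years for a self-replicating wave to cross distance_ly light-years.
    setup_overhead >= 1 inflates pure travel time to account for pauses
    at each system to build new ships (a made-up fudge factor)."""
    return (distance_ly / ship_speed_c) * setup_overhead

# Starting at the galactic center (50,000 ly to the rim), at half
# lightspeed, with a 2x colonization overhead:
print(sweep_time_years(50_000, 0.5, 2.0))  # 200000.0 years
```

    Even with far more pessimistic parameters the sweep completes in well under a million years, a rounding error against the galaxy's ~13-billion-year age, which is the force of the argument above.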


    --
    "If God intended man to be vegetarians, he wouldn't have made animals out of meat!" -- Bill Handel

    [ Reply to This | Parent ]
    Re:Try this on for size... (Score:0)
    by Anonymous Coward on Sunday March 12, @12:48PM EST (#98)
    Since this hasn't happened in the approximately 13 billion years this galaxy has existed, I conclude that it is not a very likely occurance.
      Uh-huh, how do YOU know? We have barely even explored our own moon, much less the depths of the galaxy, which we are on the very fringe of. And how many galaxies are there that we know of? How many that we DON'T know of? And how do you know that there isn't any other way to travel the distances of space? How do you know that there isn't a way to travel WITHOUT going through that space?

    [ Reply to This | Parent ]
    Re:Try this on for size... (Score:1)
    by Tim Behrendsen (tim{at}behrendsen{dot}com) on Sunday March 12, @01:06PM EST (#110)
    (User Info) http://www.behrendsen.com

    uh-huh, How do YOU know?

    Well, last time I looked out my window, the world wasn't infested with intelligent robots.

    The point is that this particular galaxy has had 13 billion years across billions of planets for it to happen. And since our planet hasn't been sucked up in a galaxy-wide infestation, I conclude that it isn't probable for it to happen.


    --
    "If God intended man to be vegetarians, he wouldn't have made animals out of meat!" -- Bill Handel

    [ Reply to This | Parent ]
    Re:Try this on for size... (Score:0)
    by Anonymous Coward on Sunday March 12, @04:40PM EST (#249)
    I dunno, I see a bunch of carbon-based robots every time I look out the window.
    [ Reply to This | Parent ]
    Re:Try this on for size... (Score:2)
    by Abigail-II (abigail@delanet.com) on Sunday March 12, @11:30PM EST (#329)
    (User Info) http://www.foad.org/%7Eabigail/
    Therefore, given enough time, they should take over the entire galaxy, if not the universe.

    Interesting reasoning. Two points however:

    • Some society taken over by out of control robots has to be the first. It could be us.
    • By the same reasoning: in the 13 billion years this galaxy has existed, we haven't been taken over at all - not by robots, not by other lifeforms. So, by that reasoning, there is no other advanced life in the galaxy, which would invalidate the postulates.

    -- Abigail

    [ Reply to This | Parent ]
    Is it so bad for the human race to go extinct? (Score:0)
    by Anonymous Coward on Sunday March 12, @12:23PM EST (#77)
    Any usefull evolution of our race has stopped. And the panicky reaction of the general populace toward forced birth control and genetic improvement of the human race probably means that it's only going to be downhill from here: devolution and, eventually, extinction from natural causes anyway.

    Technology at least offers us a way to choose our successors; if we let mother nature decide... better the devil you know.
    [ Reply to This | Parent ]
    Re:Is it so bad for the human race to go extinct? (Score:0)
    by Anonymous Coward on Sunday March 12, @02:46PM EST (#190)
    "Any usefull evolution of our race has stopped"

    Um, sorry, I must have missed the reason you said this.

    BTW, its USEFUL not USEFULL.
    [ Reply to This | Parent ]
    He misses the real question. (Score:1)
    by Rainy on Sunday March 12, @12:29PM EST (#86)
    (User Info)
    Will the future demigod AIs run Linux or NT? I think that will be the decisive factor as to make them benevolent or evil.
    -- ATTENTION: do not read this sig. It doesn't say much.
    [ Reply to This | Parent ]
    The Unabomber Manifesto (Score:2, Informative)
    by IO ERROR (...nospam!blackout.net!error) on Sunday March 12, @12:45PM EST (#94)
    (User Info) http://underground.ath.cx/
    That reminds me, I made an HTML copy of the Unabomber's manifesto way back when, and I still have it.

    I can't say I agree with everything he says, but he raises some very good points about the human condition. It's worth at least a skim.
    ---
    Lost: gray and white female cat. Answers to electric can opener.

    [ Reply to This | Parent ]
    Re:The Unabomber Manifesto (Score:1)
    by unitron (unitron@tacc.net) on Sunday March 12, @09:35PM EST (#311)
    (User Info)
    Did the Unabomber bomb Unaversities?

    proudly boycotting Slashdot's ``high-priority'' submission queue--at least 'til I find it

    [ Reply to This | Parent ]
    yes, we're dead already (Score:0)
    by Anonymous Coward on Sunday March 12, @12:45PM EST (#95)
    With all the genetic engineering stuff, it'll become quite easy to multiply AIDS times the common cold, it'll get into the air, and in a few years everyone will be dying like flies! So eat all the candy you can right now!!
    [ Reply to This | Parent ]
    Self Replication (Score:2)
    by Money__ (hallada at Netscape dot net) on Sunday March 12, @01:02PM EST (#106)
    (User Info) file:///C|/Windows/Exit%20To%20DOS.pif
    Let's stop for a moment and consider what "self-replicating" really means. Using the movie "Terminator" as an example, what does it mean for robots to be self-replicating?

    Consider a screw. A tiny little screw. In order to make enough stainless 1/4-20 socket head cap screws to sustain self-replication (independent of humans), you would need a Brown and Sharpe screw machine to keep up with the required volume. Then you would need an additional team of self-replicating robots to operate and maintain that equipment. These machines need bar stock to feed them. Steel stock doesn't just grow on trees, so now you need another team of robots working down at the foundry to make the stock, to feed the screw machines, to make the screws, to make the robots. Now you need raw material to keep the foundry humming: another team of robots working at the mine to dig the ore that feeds the foundry, that makes the stock, that feeds the screw machines that make the screws that make the robots. All this for one tiny screw.

    The point behind this little thought exercise is to get you to think about tools and materials and where they come from. Humans have spent all of our existence (from rocks to rockets) perfecting their use, and I doubt my Lego Mindstorms can pull it off.
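    The chain in the comment (screws need a screw machine, which needs bar stock, which needs a foundry, which needs a mine) is really a transitive closure over a supply graph. A toy sketch with an entirely made-up dependency table:

```python
# Hypothetical supply graph: each artifact -> what producing it requires.
REQUIRES = {
    "robot": ["screw", "motor"],
    "screw": ["screw_machine", "bar_stock"],
    "bar_stock": ["foundry"],
    "foundry": ["ore"],
    "ore": ["mine"],
    "screw_machine": ["bar_stock"],
    "motor": ["bar_stock"],
}

def full_supply_chain(item: str) -> set[str]:
    """Everything transitively needed to produce `item`."""
    needed, stack = set(), [item]
    while stack:
        for dep in REQUIRES.get(stack.pop(), []):
            if dep not in needed:
                needed.add(dep)
                stack.append(dep)
    return needed

# One "self-replicating robot" quietly presupposes the whole chain.
print(sorted(full_supply_chain("robot")))
```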
    _________________________
    These comments powered by Printf

    [ Reply to This | Parent ]
    Re:Self Replication (Score:2)
    by Abigail-II (abigail@delanet.com) on Sunday March 12, @11:35PM EST (#330)
    (User Info) http://www.foad.org/%7Eabigail/
    The point behind this little thought exercise is to get you to think about tools and materials and where they come from. Humans have spent all of our existence (from rocks to rockets) perfecting their use, and I doubt my Lego Mindstorms can pull it off.

    It would be an interesting exercise to build a robot out of Lego pieces, that, when placed in the middle of a heap of Lego pieces, can build a copy of itself.

    The next exercise would be to have the robot build a close approximation of itself when not all the right pieces are available. (Mutant robots!)
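    Both exercises translate neatly into a multiset problem: a perfect copy needs the blueprint's full bill of materials, and the "mutant" variant is whatever portion of it the heap can actually cover. A minimal sketch using collections.Counter (the blueprint and piece names are invented):

```python
from collections import Counter

# Invented bill of materials for the replicator robot.
BLUEPRINT = Counter({"brick_2x4": 8, "gear": 4, "motor": 1, "sensor": 2})

def build_copy(heap: Counter) -> Counter:
    """Build as much of the blueprint as the heap allows.
    A full match is a perfect copy; a partial one is a 'mutant'."""
    return Counter({piece: min(n, heap[piece]) for piece, n in BLUEPRINT.items()})

rich_heap = Counter({"brick_2x4": 20, "gear": 10, "motor": 3, "sensor": 5})
poor_heap = Counter({"brick_2x4": 8, "gear": 4, "motor": 1})  # no sensors

assert build_copy(rich_heap) == BLUEPRINT    # exact copy
assert build_copy(poor_heap)["sensor"] == 0  # mutant: a blind robot
```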

    -- Abigail

    [ Reply to This | Parent ]
    Bill Joy, Meet Wintermute (Score:1)
    by Nightspore on Sunday March 12, @01:04PM EST (#108)
    (User Info)
    Interestingly, just such an uncontrolled explosion is really the central idea of the first novel in William Gibson's Sprawl Trilogy, 'Neuromancer' (of course). There are AIs in Cyberspace, but they are all built with "electro-magnetic shotguns wired to their foreheads" because "nobody trusts those fuckers". The basic story revolves around Wintermute's (a very powerful AI) successful attempt to free itself from the shackles that "keep it from getting any smarter".

    One senses that Gibson sides with the AI, by the way, preferring to let the machine explore its own potential rather than be forcibly kept stupid. In the third novel in the Trilogy, 'Mona Lisa Overdrive', we get the sense that these kinds of intelligences, in this case an enormously advanced, barely recognizable Wintermute, simply lose interest in the Earth after a while and head off into space where the real action is.

    Night


    [ Reply to This | Parent ]