Beyond AI: Creating the Conscience of the Machine (2007 edition), by J. Storrs Hall (author)
J. Storrs Hall wrote a very insightful summary and popularization of the AI endeavor in this book, which might as well have been titled 'Before AI' for its concise but wonderful summary of the history of the field. He is well versed not only in mainstream academic AI work, but also in the fringe AGI and Singularitarian ideas, particularly the emphasis on machine ethics, which academia has mostly neglected. Beyond the overview, Hall includes two very important concepts that he uses throughout the book to support his thinking: autogeny and formalist float. Autogeny is vaguely similar to the ideas of recursion, self-replication, and self-organization. Formalist float is the general problem of codifying or formalizing the complexities of reality. These ideas are not new, and they can be found in many of the better books on AI, especially Hofstadter's GEB, but Hall labels them and treats them specifically as important concepts, which is nice to see. Though he doesn't explicitly juxtapose the two, in my mind they are related, at least from an engineering perspective: formalist float is a problem only when the code/formalism lacks continued autogeny. Code is a frozen snapshot of a system, and if the represented system has a high degree of autogeny, then formalist float will plague the formalism. The grandest scientific questions of our time and the near future, AI included, are autogenic phenomena that have thus far been impervious to our intellectual probing precisely because they cause formalist float. I had not heard of Hall before reading this book, but because of it I am now definitely an admirer of his thinking.
I recommend this book if you are interested in an overview and survey of the current status of Artificial Intelligence (AI) research. The one book I can somewhat compare it to is Moravec's Mind Children, although I found Beyond AI more engaging and hopeful. What is always interesting about AI research is that it inevitably draws in a wide and eclectic variety of topics, and each AI researcher has their own particular emphasis. The author of Beyond AI has a philosophical slant not seen in many scientists. This was refreshing, and it humanized the topic. Books like this can also contain a lot of free-style speculation about the "Brave New World" we can anticipate from AI. The author deftly keeps his balance here by drawing on various science fiction writers' visions, which bring breadth to these futuristic views without sacrificing the humanity. The book isn't all philosophy and futurism, either; the core concepts and history of AI research are very well summarized without becoming mired in technical detail. Paraphrasing slightly from p. 240: "There is a new excitement aboard the good ship Artificial Intelligence. All hands are on deck, the sails are swelling, and the flag snaps in the breeze. Clear sailing lies ahead, and anchors are aweigh." Well, maybe. Won't she have to look out for Luddite-launched torpedoes?
Artificial intelligence (AI) is now advancing at such a rapid clip that it has the potential to transform our world in ways both exciting and disturbing. Computers have already been designed that are capable of driving cars, playing soccer, and finding and organizing information on the Web in ways that no human could. With each new gain in processing power, will scientists soon be able to create supercomputers that can read a newspaper with understanding, or write a news story, or create novels, or even formulate laws? And if machine intelligence advances beyond human intelligence, will we need to start talking about a computer's intentions? These are some of the questions discussed by computer scientist J. Storrs Hall in this fascinating layperson's guide to the latest developments in artificial intelligence. Drawing on a thirty-year career in artificial intelligence and computer science, Hall reviews the history of AI, discussing some of the major roadblocks that the field has recently overcome, and predicting the probable achievements in the near future. There is new excitement in the field over the amazing capabilities of the latest robots and renewed optimism that achieving human-level intelligence is a reachable goal. But what will this mean for society and the relations between technology and human beings? Soon ethical concerns will arise, and programmers will need to begin thinking about the computer counterparts of moral codes and how ethical interactions between humans and their machines will eventually affect society as a whole. Weaving together disparate threads from cybernetics, computer science, psychology, philosophy of mind, neurophysiology, game theory, and economics in an enlightening manner, Hall provides an intriguing glimpse into the astonishing possibilities and dilemmas on the horizon.
Dewey Decimal Classification (DDC): 006.3 — Special computer methods: Artificial Intelligence
In this excellent book, Storrs Hall shows how A.I. researchers lost the thread in the following decades through a fixation on coding everything, building systems that worked fine in closed environments with fixed rules (e.g. chess games) but failed hopelessly in unpredictable real-life situations.
He concludes that robots need to learn and adapt to their environments (be autogenous), although they may have some hard-wired basic abilities upon which they can develop a "self" against which to make environmental tests (i.e. increase the capability/adaptation of that "self"). Another interesting aspect of the book is his discussion, from chapter 18 onwards, of robotic/A.I. ethics as applicable to this new "self". He opts for the Boy Scout Law: "One should be trustworthy, loyal, helpful, friendly, kind, obedient, cheerful, thrifty, brave, clean and reverent", and he sees the task as "... building a machine that understands what these qualities mean and what can we do to ensure that the machines that are built will have them."
Perhaps the author could have explored the concept of a robotic/A.I. "self" at greater length to answer this question.
For example, he says that A.I. would be rid of many human pressures such as sexual jealousy, but if robotic A.I.s adapt to different environments they will likely develop differing abilities and "selves", differing in their ability and capacity to protect that "self" (i.e. they may well be jealous and competitive if they are required to survive and adapt). Equally, differing robotic "selves" may cooperate to gain a group advantage (e.g. a robotic Apollo, Aphrodite and Hephaestus, or maybe the whole lot of them, contributing their differing abilities).
His argument from comparative advantage in human/robotic A.I. trade is not very convincing. We don't do a lot of trade with the great apes, and we in turn may be even more distant from future autogenous A.I.s.
He also says that humans will have an "open source guarantee" with regard to robot/A.I. code (i.e. they will have access to it and will be able to delete undesirable variants), but this assumes 1) that they understand it and 2) that a robotic/A.I. "self" will allow access (human or otherwise) to its code. It has invested a good deal in the evolution of a viable "self", which such a procedure could put at risk.
Nevertheless, it's a really good book, with Storrs Hall favouring good environments for autonomous learning machines and quoting the Christian Golden Rule, "Do unto others as you would have them do unto you", which seems like a good place to start with early autogenous, evolving A.I.s.