
The Age of AI: And Our Human Future

by Henry A. Kissinger

Members: 246 · Reviews: 6 · Popularity: 108,805 · Average rating: 3.54 · Discussions / Mentions: 1 / 2
"Artificial Intelligence (AI) is transforming human society in fundamental and profound ways. Not since the Age of Reason have we changed how we approach security, economics, order, and even knowledge itself. In the Age of AI, three deep and accomplished thinkers come together to consider what AI will mean for us all" --

Showing 1-5 of 6 reviews
As a summary of the open questions about AI and its interactions with and roles in human society, this is a relatively good book, but one with some very deep issues and weird recommendations. The book leans to the philosophical side, but given the subject that is to be expected.

Will this book open new horizons about AI and make you think about it in a different way? Maybe, if this is the first book you have read on the subject. Otherwise there is not much here that was not already covered in previous works on AI published over the last two or three years.

The authors show the possible effects of AI proliferation across various domains - from education, science, and religion to the military - and constantly compare it with previous scientific revolutions.

But I have the feeling that in this rather philosophical approach to the topic they mix things up. To name a few:

First, the new chess AI (AlphaZero) and the "revolution" it supposedly introduced by playing in ways that surprised human players. I am not quite sure how this is a revolution - it is just the definition of total war and of the price one is prepared to pay to defeat the enemy. Chess is a one-on-one game, and a victory has no effect on follow-up games. The reason players give value to rooks, queens, and knights is that they allow advanced movements that can change the situation on the board, so they are worth more than pawns (the everlasting, expendable infantry). In general it does not matter whether one wins a chess game with only a king and a pawn remaining or with 50% of one's forces intact - these losses do not carry over into follow-up games. Human players are inherently reluctant to sacrifice valuable pieces for the sake of a single game (it is considered wasteful, because humans approach conflict not as a singular event but as part of a possible series of conflicts). For an algorithm, victory is all that matters, so anything that enables victory counts as a success, even if it means losing valuable assets. I do not know why this is seen as revolutionary - a few people I know play exactly these attrition games, reducing the entire board to little more than pawn combat with the king acting as a cumbersome last line of defense.

Now imagine a situation where an AI decides that a catastrophe in one's own territory would be beneficial to the war effort - what if some other aggressive enemy pops up afterwards and surprises everyone? What level of losses (military and economic) is acceptable?

Second is the talk about how, as in previous scientific revolutions, some people will try to limit their children's exposure to AI systems, and how this will affect the society in question. What are we talking about here - did the existence of home schooling or the Amish have any effect on scientific progress? Of course not, because that progress is dictated at much higher political and state levels, unfortunately no longer at the level of the local community.

Third is the constant talk about how modern AIs can generate texts, images, and videos. Outside of development, where these capabilities serve to indicate progress and the ability to mimic human-like behavior, what exactly is the gain from the user's perspective? No need to have somebody assemble, say, briefings for various meetings? But how do you know the briefing is correct and won't create a disaster? Or take images - DeviantArt showed that AIs are basically meshing things together, synthesizing various elements, using feedback to decide whether the result is good, and then moving forward. But again, what is the purpose of this type of artistic expression? "Hey, I have the best tool around"? While I can imagine people being proud of auto-generated images - hubris and stupidity go hand in hand - what exactly does that mean when one is judging a person and that person's accomplishments? It seems very, very shallow, don't you think? And what does it mean to have an AI "paint" a cat sitting at a piano? Does it mean the AI knows what a piano and a cat are? No, it is just a technology demonstration - hey, it can do it [giggle, giggle].

Fourth, the claim that AI will give us insight into aspects of reality we are not aware of. In order to think about the reality around it, an AI would need to have senses, not rely on existing concepts (set by us; otherwise it will just reinterpret existing things), and be aware of its surroundings and of itself (we are so far away from this that it is ridiculous) - otherwise, what could it say that would be meaningful and different from what we already know? Say you give it senses and put it into some beautiful valley surrounded by mountains, full of meadows, trees, and wild animals. If it sees, say, snow-capped mountain peaks, how will it know (a) what a peak is and (b) what snow is? If we tell it, then it is not something it arrived at by itself, right? And if it does create new terms, it will name things differently - what was the gain, then, if we have to make do with a new language and new terms? If we expect AI to discover new dimensions and new material structures, isn't that a slightly silly expectation? It will use our knowledge as its starting point. Without the ability to know that something is missing, how will it know it needs to look for something? We forget that a very large number of discoveries were accidental (the transistor, anybody?); they happened while observing something completely different. How will an AI know that some accident, material weakness, or peculiarity is not a failure in performance but might have an application in some completely unrelated field at some time in the future? With all knowledge comes context, but context is not a summary of facts; it is experience plus facts. Now imagine you need to teach something that cannot be bodily harmed about death. For us this is normal and more or less instinctual, but how can you explain what death is, and that the various ways of losing one's life are not the same? And the plan is to give a huge amount of control to such entities?

The same applies to the chemical and pharmaceutical industries - say an AI delivers compounds for various treatments that look fine from a simulation point of view. But suppose that one or two generations after using the medication there are life-altering effects (in a bad way - effects that could not be predicted beforehand, because the full impact of these compounds could not be foreseen). Are we then to blame the AI, or our dependency on it? AI can only work from the starting point we give it - if you could transport an AI 500 years back, its usefulness would be zero, because the initial knowledge would be very limited. There is no magic bullet here. If you take an AI and start teaching it theology, do people truly believe it would end up recommending the establishment of secular states? Neither Newton nor many other great names in the politics and science of the past were atheists or fought for a secular view of the world - what changed was society itself: more reliance on internal societal forces (trade and relations without the influence of strong religious organizations). It is not as if somebody stood up and said: from Monday, secular state! It was simply a set of circumstances through which the world ended up where it did - at some point there were multiple choices and one path was selected. Who can say the proper path was chosen (in the long run) compared to the other options (whatever those might have been, now lost to history)? Trusting that machines would come to the same conclusions is unrealistic.

The ideas of virtual realities and the creation of virtual selves (again, for what purpose except escapism from the real world?) I find very disturbing. As a matter of fact, for a lot of the applications here I am not sure what the real-world benefits would be, except simulating oneself (but again, why?).

AI is a tremendous tool as an advisor and knowledge base. Its use as a replacement for our own cognitive functions is idiocy (as we can see at present, where everyone tries to do multiple things at once, achieves nothing in any of them, and basically ends up using high technology and access to a tremendous amount of information for entertainment only - the constant mention of movie recommendations as a pinnacle of AI use is s-i-l-l-y, to say the least). Using AI to help kids develop is fine, but doing so without contact with other humans (and even worse, encouraging such behavior because-you-know-technology) is for all intents and purposes a strike against humanity.

AI development seems to have become its own goal - we will create something that has its purposes (especially in governance and security), will become a self-fulfilling prophecy (we need to have it because they might/will/could-be-that-they-are-thinking-about having it), and will become part of our everyday lives without an answer to the question: why does it need to be part of everyday life for everyone? It is as if with each cycle of technological progress humanity decides (or somebody decides in the name of humanity) to lose yet another part of its knowledge and functions and replace it with the next shiny toy.

And this is where the danger lies - if we get to the point where we trust something fully and without any doubt, not knowing the reasoning behind it (and even when we do have doubts, we keep quiet, simply because we cannot argue against a decision whose making we do not understand, while the regulators say it is OK (ahem)), isn't this a fallback to times of religious zealotry, the only difference being that humanity starts to worship deities of its own creation, with science as the new clergy (as lost as everyone else, but belonging to the elite)? Wasn't this the very thing that so heavily divided society during the epidemic? Only to have - after all the subsequent events and hearings, especially those about the disease-progression modelling that caused such major havoc - a lot of the chatter and findings forbidden [by the regulators and truth-sayers/interpreters] validated after all?

The book tries to present itself as asking questions across a lot of very serious areas, but ends up endorsing the simplest (and most stupid) approach. Because, you see, AI is here to stay, so instead of thinking about it, everybody needs to rush to apply it all over the place without any idea of what will follow.

And you would think people in decision-making positions actually learn from history.
  Zare | Apr 22, 2024 |
Read June 2023
  morocharll | Apr 19, 2024 |
This was a review of the current state of AI and an overview of all the risks, potentials, problems, and ways to handle them. It didn't really provide specific answers to these questions but left everything open for the reader and for the future. Pretty interesting read.
  neanderthal88 | Jan 27, 2024 |
Public fascination with artificial intelligence (AI) has only increased since this book was published in 2021. AI technologies, such as ChatGPT, have entered mainstream society and are being used in everyday business work. Publicly, however, leaders in philosophy, business, and government do not yet appear ready to grapple with the deep human questions involved. For example, when do we defer to AI bots over human agency? Are we ready for AI tools of war, both offensive and defensive? How will this affect how we view ourselves as creatures of reason? In this book, Henry Kissinger, MIT dean Daniel Huttenlocher, and Google CEO Eric Schmidt grapple with these issues at length.

The depth of thought in this work cannot be contained in a short book review. Needless to say, they cover the foreseeable issues through a historical lens. AI technology seems to portend an epochal transition in human civilization, much like the advent of the printing press. A big distinction is between assistive AI, under human direction, and autonomous AI, which directs us. Also in this realm, the prospect of artificial general intelligence – that is, a sentient computer or android – looms large and frighteningly realistic.

AI can apply to many fields of human activity, like the military, healthcare, business, education, and scientific research. These examples and more are explicitly examined throughout this book. Not all are good, however. The prospect of AI weapons scares me deeply. United States policy is not to develop autonomous weapons, but what about other countries? Is there any plausible way to defend against such warfare? It seems inevitable that someone will eventually try to use such a weapon, even if it is a rogue terrorist group. Do we have to go through another World War I to learn our lesson?

This book offers more intelligent questions than firm answers, and that is the authors' apparent intention. We are at the early stages of mainstream adoption of this technology, and questions abound while certainty is scarce. As such, reading this socially focused book behooves anyone interested in seriously forecasting its repercussions on the world. I develop software for a living, on the micro level, so a treatment like this on the macro level helps me see coding's impact down the road. My experience tells me that the issues raised are spot-on, and the treatment is even and balanced. As humans, are we ready for this? No, but reading this book will make a reader more prepared.
  scottjpearson | Aug 15, 2023 |
Although this book seems to be more a collection of disjointed chapters hastily put together by the three authors, I did enjoy the ways in which the development and current state of AI were juxtaposed with the start of the First World War, Enlightenment thinking, and philosophy in general.

I would have left out chapter 4, though, as it is very poorly written and does not contribute much to the rest of the book.
  Herculean_Librarian | Sep 10, 2022 |
Rating

Average: 3.54 (2 stars: 1 · 2.5 stars: 2 · 3 stars: 4 · 4 stars: 4 · 4.5 stars: 1 · 5 stars: 2)
