The Alignment Problem: Machine Learning and Human Values (2020) by Brian Christian
An impressive, conversation-based analysis of how AI systems developed through processes of machine learning (ML) might be constrained to be both safe and ethical. I had little idea of how rich and extensive the research on this has been. In nine chapters with carefully chosen one-word headings (Representation, Fairness, Transparency, Reinforcement, Shaping, Curiosity, Imitation, Inference, and Uncertainty), the author describes a sequence of diverse and increasingly sophisticated ML concepts, culminating in what is called Cooperative Inverse Reinforcement Learning (CIRL). Whether AI will ever stop being part of what I regard as the wrongness of modern technology, I don't know, but at least there are people in the field who have their hearts in the right place.

There is a great book trapped inside this good book, waiting for a skillful editor to carve it out. The author did vast research in multiple domains, and it seems he could neither build a cohesive narrative connecting all of it nor leave anything out.

This book is probably the best introduction to the machine learning space for a non-engineer that I've read. It presents the field's history, its challenges, what can be done, and what can't be done (yet). It's both accessible and substantive, presenting complex ideas in digestible form without dumbing them down. If you want to spark an interest in ML in anyone who hasn't been paying attention to this field, give them this book. It provides a wide background connecting ML to neuroscience, cognitive science, psychology, ethics, and behavioral economics that will blow their mind. It's also very detailed, screaming at the reader, "I did the research, I went where no one else dared to go!"

It will not only present you with an intriguing ML concept but will also: trace its roots to a nineteenth-century farming problem or a biology breakthrough, introduce all the scientists contributing to the research, explain how they met and got along, cite the author's interviews with some of them, and describe their lives after they published their masterpiece, including completely unrelated information about their substance abuse and the dark circumstances of their premature death. It's written quite well, so there may be an audience that enjoys this, but sadly I'm not part of it.

If this book were structured to address the subject of the alignment problem directly, it would be at least three times shorter. That doesn't mean the other two-thirds are bad: most of it is informative, some of it is entertaining, and a lot of it seems like ML material the author found interesting and simply added to the book without any specific connection to its premise. I really liked the first few chapters, where machine learning algorithms are presented as the first viable benchmark for the human thinking process and the mental models we build. Spoiler alert: it very clearly exposes our flaws, our biases, and the lies we tell ourselves (which are further embedded in the ML models we create and the technology that uses them).

Overall, I enjoyed most of this book. I just feel a bit cheated by its title and premise, which advertise a different kind of book. This is the Machine Learning omnibus, presenting the most interesting scientific concepts of the field and the scientists behind them. If this is what you expect and need, you won't be disappointed!
The Alignment Problem does an outstanding job of explaining insights and progress from the recent technical AI/ML literature for a general audience. For risk analysts, it provides both a fascinating exploration of foundational issues about how data analysis and algorithms can best be used to serve human needs and goals, and a perceptive examination of how they can fail to do so.
"A jaw-dropping exploration of everything that goes wrong when we build AI systems, and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole, and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel."
Dewey Decimal Classification (DDC): 006.3 (Computer Science; Knowledge and Systems; Special Topics: Artificial Intelligence)
While capitalism will ensure that humans are inevitably pushed "out of the loop" in every aspect of life (the question is not if but when), Brian Christian's The Alignment Problem educates the reader on the real pitfalls of depending on algorithms and the inherent drawbacks of machine learning. In my opinion, Christian delves much deeper into the alignment problem at hand than Nick Bostrom's Superintelligence did; Bostrom set the stage for AI safety and was labelled an alarmist, but not anymore.

From dopamine-exploiting social media algorithms to parole sentences to mortgage application approvals, these highly pervasive machine learning algorithms now control various aspects of human life, while Congress grapples with legislation and red tape.

The book gives an overarching view of how these ML algorithms came about, organized around "pillars" such as curiosity, imitation, reinforcement, model bias, and bad data samples, and explains why it is crucial to align AI goals with human values.

And, as is so often the case, the problems are more philosophical in nature than anything else, which highlights the importance of psychology, social anthropology, neurophysiology, and psychoanalysis playing a quintessential part in the future development of this nascent field. The latter part of the book deals with possibly the toughest questions that AI poses; I was happy to see Effective Altruism movement co-founder Will MacAskill get a page in there too.