Bernoulli's fallacy : statistical illogic and the crisis of modern science (original 2021; 2021 edition)

by Aubrey Clayton

Members: 79 · Reviews: 5 · Popularity: 341,833 · Average rating: 4.18 · Discussions: None
"There is a logical flaw in the statistical methods used across experimental science. This fault is not just a minor academic quibble: it underlies a reproducibility crisis now threatening entire disciplines. In an increasingly data-reliant culture, this same deeply rooted error shapes decisions in medicine, law, and public policy with profound consequences. The foundation of the problem is a misunderstanding of probability and our ability to make inferences from data. Aubrey Clayton traces the history of how statistics went astray, beginning with the groundbreaking work of the seventeenth-century mathematician Jacob Bernoulli and winding through gambling, astronomy, and genetics. He recounts the feuds among rival schools of statistics, exploring the surprisingly human problems that gave rise to the discipline and the all-too-human shortcomings that derailed it. Clayton highlights how influential nineteenth- and twentieth-century figures developed a statistical methodology they claimed was purely objective in order to silence critics of their political agendas, including eugenics. Clayton provides a clear account of the mathematics and logic of probability, conveying complex concepts accessibly for readers interested in the statistical methods that frame our understanding of the world. He contends that we need to take a Bayesian approach-incorporating prior knowledge when reasoning with incomplete information-in order to resolve the crisis. Ranging across math, philosophy, and culture, Bernoulli's Fallacy explains why something has gone wrong with how we use data-and how to fix it"--
Member: Taffiner
Title: Bernoulli's fallacy : statistical illogic and the crisis of modern science
Authors: Aubrey Clayton
Info: New York : Columbia University Press, 2021.
Collections: Your library, Currently reading, To read
Rating:
Tags: to-read

Work Information

Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science by Aubrey Clayton (2021)

Recently added by cctesttc1, shahbazc, zighkril, boydnguyen

No current Talk conversations about this book.

Showing 5 of 5
What is probability? We know the expected distributions of dice throws, card shuffles and deals. But how about real-world future events, each of which is bound to have a unique set of circumstances? How well does a simplistic counting methodology map against inevitable real-world complexities and ambiguities that resist easy sampling? Aubrey Clayton's answer: not very.

This book offers amazingly complete documentation of the century-long war between the "frequentists" and the Bayesians. Frequentists are the ones who think mere counting is adequate for any kind of probability computation. Bayesians use the formula derived by Thomas Bayes, an 18th-century minister (!), which incorporates into probability calculations the notion of "priors": estimates of how confident we should be according to the current state of our knowledge. Bayesian calculations feed chainlike into each other, adjusting the confidence level with each latest batch of data. Frequentists rail against the idea of priors, which they revile as letting subjectivity infiltrate the calculation. Bayesians defend priors as tools that let us leverage the knowledge we have already gained, noting that any risk of subjectivity progressively diminishes as each new wave of data nudges the evolving probability into closer accordance with the evidence.
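The chained updating this review describes can be sketched in a few lines of Python. The Beta prior and the batch counts below are invented for illustration, not taken from the book; the point is only that each posterior becomes the prior for the next batch of data.

```python
# Sequential Bayesian updating of a coin's heads-probability.
# The Beta(2, 2) prior and the batch counts are hypothetical.

def update(alpha, beta, heads, tails):
    """Beta(alpha, beta) prior + binomial data -> Beta posterior (conjugacy)."""
    return alpha + heads, beta + tails

alpha, beta = 2, 2  # weak prior centred on 0.5
for heads, tails in [(7, 3), (6, 4), (8, 2)]:  # three successive data batches
    alpha, beta = update(alpha, beta, heads, tails)  # posterior feeds the next step

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.676: the 0.5 prior pulled toward the data
```

Whatever the starting prior, repeated batches of similar data drag the posterior toward the empirical frequency, which is the "diminishing subjectivity" defense in miniature.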

One takeaway from any serious study of probability is that doing the calculations involves dangerous traps and subtleties. In the hands of less-than-expert statisticians (the great mass of us who think we know how it works but don't), calculated odds are likely to be seriously off, if not flat-out wrong. As Clayton shows, the failures of frequentism show up whenever peer groups fail to reproduce scientific results produced by counting alone. The use of confidence levels of 95% or lower (significance thresholds of 5% or more) also feeds into this reproducibility problem.
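The link between a 5% threshold and irreproducible findings can be made concrete with a back-of-the-envelope calculation. The power and base-rate figures below are hypothetical, chosen only to illustrate the mechanism, not drawn from the book:

```python
# Back-of-the-envelope: how a 5% significance threshold still produces
# many false "discoveries". All rates below are hypothetical.
alpha = 0.05   # significance level: P(significant result | effect is not real)
power = 0.80   # P(significant result | effect is real)
p_real = 0.10  # base rate: fraction of tested hypotheses that are real

p_significant = power * p_real + alpha * (1 - p_real)
false_discovery_rate = alpha * (1 - p_real) / p_significant
print(round(false_discovery_rate, 2))  # 0.36: over a third of "findings" are false
```

Under these assumed rates, more than a third of "significant" results are false positives, so replication attempts on them are doomed from the start.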

This book probably contains too much dense information for the non-expert (and I confess to being an amateur at best). But I give it five stars because of its amazing scholarship and completeness in presenting the history, theory, practice, and scientific criticality of this long-running methodological war; a war that has affected not only science, but the very epistemological framework of what we think we know. ( )
  Cr00 | Apr 1, 2023 |
This is an excellent, ground-breaking criticism of standard statistical methods and the philosophical basis that underlies them. Well-written and persuasive. Should give pause to social scientists who routinely misuse statistics to justify questionable research. Highly recommended. ( )
  hcubic | Apr 3, 2022 |
Clayton states at the outset that the common traditional methods of statistical inference are "simply and irredeemably wrong" and "logically bankrupt". He describes the development of these methods, which notably include null-hypothesis significance testing, by the frequentist (and eugenicist) statisticians Francis Galton, Karl Pearson, and Ronald Fisher. He details how the methods fail and how they are to blame for the ongoing research-result-replicability crisis in the soft sciences. Since the root problem is the view of probability as a quality of objects rather than of analysts' perception and knowledge of those objects, the remedy is to abide by the logical strictures of Bayesian reasoning, where there is no disregarding of prior information or confusion of P[data|hypothesis] with P[hypothesis|data]. Amen.
  fpagan | Feb 1, 2022 |
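The confusion this review flags, P[data|hypothesis] versus P[hypothesis|data], is the classic base-rate trap, and Bayes' theorem separates the two. The diagnostic-test numbers below are invented for illustration, not an example from the book:

```python
# Bayes' theorem: P(hypothesis | data) is not P(data | hypothesis).
# Diagnostic-test numbers are hypothetical.
p_disease = 0.001           # prior: 1 in 1,000 people have the condition
p_pos_given_disease = 0.99  # sensitivity: P(positive test | disease)
p_pos_given_healthy = 0.05  # false-positive rate: P(positive test | healthy)

# Total probability of a positive test, then Bayes' rule.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # 0.019: nowhere near the 0.99 sensitivity
```

Ignoring the prior (the 1-in-1,000 base rate) and reading the 0.99 sensitivity as the probability of disease given a positive test is precisely the inversion the review describes.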
Lies, Damn Lies, and Statistics. On the one hand, if this text is true, the words often attributed to Mark Twain have likely never been more apt. If this text is true, you can effectively toss out any and all probabilistic claims you've ever heard. Which means virtually everything about any social science (psychology, sociology, etc.). The vast bulk of climate science. Indeed, most anything that cannot be repeatedly and accurately measured in verifiable ways is pretty much *gone*. On the other, the claims herein could be seen as constituting yet another battle in yet another Ivory Tower world with little real-world implication at all. Indeed, one section in particular, where the author imagines a supercomputer trained in the ways of the opposing camp and an unknowing statistics student, could be argued to be little more than a straight-up straw-man attack. And it is these very points, regarding the possibility of this being little more than an Ivory Tower battle and the seeming straw man, that form part of the reasoning for the star deduction. The other two points are these: 1) Lack of bibliography. As the text repeatedly and painfully makes the point that astounding claims require astounding proof, the fact that the bibliography is only about 10% of this (advance reader copy, so potentially fixable before publication) copy is quite remarkable. Particularly when considering that other science books this reader has read within the last few weeks have made far less astounding claims and yet had much lengthier bibliographies. 2) There isn't a way around this one: this is one *dense* book. I fully cop to not being able to follow *all* of the math, but the explanations seem reasonable themselves. 
This is simply an extremely dense book that someone who hasn't had at least Statistics 1 in college likely won't be able to follow at all, even as it not only proposes new systems of statistics but also follows the historical development of statistics and statistical thinking. And it is based, largely, on a paper that came out roughly when this reader was indeed *in* said Statistics 1 class in college: 2003. As to the actual mathematical arguments presented here and their validity, this reader will simply note that he has but a Bachelor of Science in Computer Science, and thus at least *some* knowledge of the field, but isn't anywhere near being able to confirm or refute someone possessing a PhD in some statistics-adjacent field. But as someone who reads many books across many genres and disciplines, the overall points made in this one... well, go back to the beginning of the review. If true, they are indeed earth-quaking if not earth-shattering. But one could just as easily see them as just another academic war. In the end, this is a book that is indeed recommended, though one may wish to assess one's own mathematical and statistical knowledge before attempting to read this polemic. ( )
  BookAnonJeff | Jul 11, 2021 |
Bernoulli's Fallacy by Aubrey Clayton is a well-argued case against what has passed for probability over the past century plus. While his explanations are straightforward and the math is presented in a clear manner, it is still a read that will, and should, take more effort than many other books. The reward, however, is well worth the effort.

While some may mistakenly think this is just some feud within academia and so doesn't really matter beyond those walls, that is wrong, and Clayton makes it clear with many of the examples he cites from both the natural and the social sciences. When people's lives can be harmed, if not ended, at least partly because of improper use of data expressed as probability, then this is anything but an ivory-tower debate. It takes place largely within those walls because that is where these theories are taught and because the "experts" who pronounce the so-called probabilities on policy issues are still pulled from academia's ranks.

While my first undergraduate degree was a mathematics-heavy one (EE), it was a very long time ago, and my subsequent degrees were all in the humanities and social sciences. So I am not going to try to explain what Clayton goes through. To put it as basically as possible: what passes for probability is often just frequency, with little or no predictive or explanatory value. Yet it is used to predict and to explain, which then becomes part of future policy, which more often than not fails.

A good example of a Bayesian approach is an article Clayton wrote for the Boston Globe in June of last year about the statistical paradox of police killings. Without taking prior information into account when assessing limited or skewed information, a faulty and quite deadly conclusion can be reached that seems, on the surface, to be based on sound scientific information. That article, quite short, is well worth looking up for a real-world glimpse of what Clayton is arguing against.

While the book is dense, it is accessible to most readers who either have some math background (especially if you still use it frequently) or are willing to read a bit more slowly and wrestle with the concepts. Clayton's explanations and examples, as well as the history lesson, can be read largely without too much concern for understanding the nuance of every formula he shows. If you understand that a figure in a particular place in a formula can have an outsized effect on the result, then grasping every nuance matters less, since Clayton explains what we need to understand for the big-picture argument. In other words, if you're interested in or concerned about the reproducibility crisis in the sciences as well as the social sciences, this book will be well worth any effort you might have to put into it. But it is, bottom line, accessible to most who want to understand.

Using my experience as an example, I had to progress rather slowly and make an effort to understand each bit of information, each aspect of the history as well as of the mathematics. I feel like I managed to do so at a reasonable level for a first read. What I haven't yet done but anticipate doing with subsequent readings is connecting these still, in my mind, separate pieces into a better understanding of the whole. Clayton's explanations allowed me to understand the big picture without every detail being in perfect focus. Now I can connect the dots (my small pieces of understanding) to make the big picture come into sharper focus. Okay, maybe I didn't help with this paragraph, but maybe someone will understand what I am trying to say.

A quick aside: ignore the "sky is falling" people who imply that all statistics, and all that we do with them, are pointless; that is throwing the baby out with the bathwater, and it probably just makes the screamer feel smart. This is a wide-ranging problem that touches almost every aspect of policy making as well as research, but it is not a case of "everything that has been done before is now meaningless." Keep the data and use it better; don't panic, throw everything out, and hyperventilate.

Also, to clear up some misunderstandings, the review copy I had, both the Kindle version and the one I read on Adobe Digital Editions, had substantial notes (many of which were bibliographic in nature) as well as several pages of a bibliography. So anyone interested in checking Clayton's sources can do so. Not sure why the mix up, but rest assured, this is both well-researched and well-documented.

Reviewed from a copy made available by the publisher via NetGalley. ( )
  pomo58 | May 20, 2021 |
Rating

Average: 4.18
3 stars: 1 · 4 stars: 7 · 5 stars: 3
