
Andrew May (1)

Author of Destination Mars

For other authors named Andrew May, see the disambiguation page.

9 Works 64 Members 2 Reviews

Works by Andrew May

Tagged

Common Knowledge

Gender
male

Members

Reviews

This is an excellent book about orbital mechanics and how asteroids and comets move around the solar system. It covers the history of discovering impact sites on Earth and linking them to rocks in the sky. It also describes how we track possible encounters and how it may be possible to deflect such rocks using real equipment, not movie heroics. It does mention that the film "Deep Impact" was filled with scientific implausibilities, but "Armageddon", in which 168 factual errors have been found so far, makes it look like a PhD thesis in comparison.
I found it easy to read and clear in its explanations. I highly recommend this book.
John_T_Stewart | Jan 26, 2022 |
“For a SF writer, or anyone else, to produce ‘fake physics’ that might even fool a professional physicist, it has to look much more like the real thing.”

In “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Technically Quantum Theory is a branch of physics, but it’s quite unlike any of the others. It doesn’t involve any authoritarian ‘laws of physics’ that you’re not allowed to break. Relativity says you can’t travel faster than light. The Second Law of Thermodynamics says you can’t have perpetual motion. Quantum Theory, on the other hand, says you can do anything you like. [...] It’s based entirely on jargon, which you can use to mean whatever you want it to mean. A few examples are: ‘Nonlocality’, ‘Entanglement’, ‘Wave-Particle Duality’, ‘Hidden Variables’ and ‘the Uncertainty Principle’. Feel free to use these terms in any way you want: no-one else understands them any better than you do.”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

“Is the Yeti the same species as Bigfoot or a different one? Does the effectiveness of telepathy depend on the distance involved? Why does the temperature drop when a ghost is in the room? How many members of the US congress are shape-shifting reptiles? If mainstream science [as opposed to crackpot science] addressed questions like these, people might start taking it seriously. ”

In a spoof story called “Science for Crackpots” reprinted in “Fake Physics: Spoofs, Hoaxes and Fictitious Science” by Andrew May

Do you believe FaceApp really ages you? (*)

Do you believe in Climate Change? The problem is what you mean by it. This illustrates very nicely the issue with science, verification of results, the different kinds of bad science, and the relationship of science to public policy. Who are the deniers? What do they deny? And how do you think ruling out fraud from the key papers is going to enable you to purge them? Made-up data is the least of our problems with climate science. I know of only one alleged instance, where a series was supposedly extended by simply infilling missing values.

The harder problem comes when there is real data, in the form of lots of proxy series, but one picks only those which show what one wants. Is this fraud? Or is it bad judgment? Or is it perhaps legitimate attention to some very disturbing and important data? One then picks a statistical treatment which is not recognised as legitimate or optimal. Is this fraudulent? Probably not: one is not a statistician, and the study has passed peer review. People may disagree, but so what? People who are sceptical about the merits of dramatic CO2-reduction programs, the Paris Agreement, wind and solar, or high values for the Climate Sensitivity Parameter are very rarely alleging fraud. What they are arguing is that some scientific propositions have not been shown to be correct, and that the public policies supposed to react to them are not fit for purpose even if they were.

The problem of bad science flowing into public policy seems to me many times greater and more costly than the problem of fraud. And the result is that even if you root out fraud in the way the article suggests, you will still be left with the problem of bad science. The solution is better, more critical peer review; more, and more prompt, disclosure of data and methods; more critical coverage in the generalist press; and less talk of purging the unbelievers!

Climate science in fact makes pretty much all of its raw data available for just this kind of purpose, and model code is typically either open source or source-available (i.e. you have to sign something, and what you can use it for is restricted, but it does not cost money). *Running* the models is harder, because their configuration is extremely fiddly and usually depends on the details of the computational environment they live in. You also need significant computational resources to do anything non-trivial.

I've done some private work on post-processing tools, in fact, and I'm generally interested in this area. There is inevitably a lot of work to do to make things more accessible, and a lot of the technical problems are not well understood by people with science backgrounds. The deniers do not like this, as more data, more easily processed, is very harmful to their interests. There has been a bad history of datasets becoming unavailable under denialist administrations in the US, and this seems likely to happen (and may already be happening) under Trump. This vanishing of data by denialists is reasonably terrifying (very terrifying, in fact), as it makes it hard to dispute their lies, which is of course the point.

What about peer review, you may ask? Ah yes, Peer Review: the practice of prestigious journals asking scientists to edit and review research articles for free, then charging their universities for access to that research; the practice where scientists submit papers to journals edited by their colleagues, who ask friendly colleagues to review them favourably. Peer Review isn't worth much. Transparent and reproducible analysis that can be scrutinised by anyone, including algorithms, is the way forward.

As in all areas of human activity there will be those who are unethical, but for the most part scientists and editors have high integrity. The major problem in my experience is not fraud but incompetence, compounded by protectionism; the protectionism is the bigger issue, and it also needs unpacking. Using the wrong statistical tests is actually a much more important source of error than simple mathematical mistakes, and this more important problem will not be picked up. I agree that transparency is critical, but this kind of "vigilantism" will not generate the desired outcomes. We need mandatory Open Data, open data-sharing infrastructures, and competent reviewing. Fresh air is an excellent disinfectant, and we don't need antiseptics... at least not yet. Most of the time peer review works well, but there are multiple cases where "bad science" gets through, and there are also folks who announce results that haven't been through peer review at all. To give a few examples:

Árpád Pusztai, who stated that his research showed that feeding genetically modified potatoes to rats had negative effects on their stomach lining and immune system. Pusztai's experiment was eventually published as a letter in The Lancet in 1999. Because of the controversial nature of his research, the letter was reviewed by six reviewers - three times the usual number. One publicly opposed the letter; another thought it was flawed but wanted it published "to avoid suspicions of a conspiracy against Pusztai and to give colleagues a chance to see the data for themselves"; the other four raised questions that were addressed by the authors. The letter reported significant differences in the thickness of the gut epithelium of rats fed genetically modified potatoes compared to those fed the control diet. The Royal Society declared that the study 'is flawed in many aspects of design, execution and analysis' and that 'no conclusions should be drawn from it'. For example, too few rats per test group were used to derive meaningful, statistically significant data.

Jacques Benveniste, who published a paper in the prestigious scientific journal Nature describing the action of very high dilutions of anti-IgE antibody on the degranulation of human basophils, findings which seemed to support the concept of homeopathy. The controversial paper was eventually co-authored by four laboratories worldwide, in Canada, Italy, Israel, and France. After the article was published, a follow-up investigation was set up by a team including physicist and Nature editor John Maddox, illusionist and well-known skeptic James Randi, and fraud expert Walter Stewart, who had recently raised suspicions about the work of Nobel Laureate David Baltimore. With the cooperation of Benveniste's own team, the group failed to replicate the original results.

Gilles-Éric Séralini, who published a paper in Food and Chemical Toxicology in September 2012 presenting a two-year feeding study in rats and reporting an increase in tumors among rats fed genetically modified corn and the herbicide RoundUp. Scientists and regulatory agencies subsequently concluded that the study's design was flawed and its findings unsubstantiated. A chief criticism was that each arm of the study had too few rats to yield statistically useful data, particularly because the strain used, Sprague Dawley, develops tumors at a high rate over its lifetime.

Fraud exists, poor reviewing exists, incompetence exists. A study of these high-profile cases can help us understand weaknesses in the process of publication and dissemination, and I think we learn from them. What's been quite heartening is that the increased concern - interestingly, kicked off by the pharma industry when they couldn't reproduce experiments from the literature - is now driving improved policies and the adoption of mandatory open-data provision. This has led to increasing awareness in the scientific community, and I hope that will improve things further. The need to work towards good practice everywhere is evident, but that's no reason to damn the whole of the scientific enterprise as flawed and fraudulent. Have you ever heard of "p-hacking"? If you have ever done any real data analysis, you will know that you try many things: different variables, different transformations of those variables, different sample selections, different models. The model you publish is obviously not the model that showed nothing, but the model that shows something interesting. That procedure invalidates p-values: a published p-value below 5% no longer protects against spurious findings, because it ignores all the other analyses that were tried and discarded. This is why so many results in science do not replicate.
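To make the mechanism concrete, here is a minimal simulation sketch of p-hacking (my own illustration, not from any of the papers above; the sample size, the 20-predictor count and the 1,000-study count are arbitrary assumptions). It tests many random "predictors" against a pure-noise outcome and keeps only the best p-value, which is exactly the selective reporting described above:

```python
# p-hacking in miniature: try 20 meaningless predictors, report the best one.
# All the numbers here are arbitrary illustration choices, not from the review.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_predictors, n_studies = 30, 20, 1000
false_positives = 0

for _ in range(n_studies):
    y = rng.normal(size=n_subjects)  # outcome: pure noise, no real effect
    best_p = min(
        stats.pearsonr(rng.normal(size=n_subjects), y)[1]  # p-value per try
        for _ in range(n_predictors)
    )
    false_positives += best_p < 0.05  # "publish" if the best try looks significant

# A single test would be "significant" ~5% of the time on noise; cherry-picking
# the best of 20 pushes that to roughly 1 - 0.95**20, i.e. about 64%.
print(f"'Significant' results on pure noise: {false_positives / n_studies:.0%}")
```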

Incidentally, Asimov's 'Thiotimoline' story wasn't originally intended as SF. He wrote it while writing his PhD thesis, worried that he wouldn't be able to manage the obligatory, fairly turgid academic style - so it was a kind of spoof writing exercise. He showed it to an editor (John Campbell, as Andrew May correctly states, though without telling the whole story) who liked it and wanted to publish it. Asimov agreed, but only if a pseudonym was used, as he didn't want the doctoral committee assessing him to think he wasn't taking things seriously. To his horror it was published under his own name. All turned out well, though: apparently the final question in the viva voce asked him to discuss the properties of his imaginary substance thiotimoline, and Asimov collapsed into laughter, realising they wouldn't have asked this if they weren't going to pass him. There is so much stuff called science fiction today that we need a method of rating the science in the fiction. But so many readers do not care, the viewers are even worse, and most of the media just cares about collecting eyeballs.

How about archetypes for comparison to rate the science?

#1. Cat and Mouse by Ralph Williams
http://www.gutenberg.org/files/24392/24392-h/24392-h.htm
http://ia700300.us.archive.org/2/items/short_scifi_006_0811_librivox/catandmouse...

#2. The Servant Problem by Robert F. Young
http://www.gutenberg.org/files/23232/23232-h/23232-h.htm
http://ia600408.us.archive.org/25/items/short_scifi_028_0910_librivox/servantpro...

#3. Omnilingual (Feb 1957) by H. Beam Piper
http://www.tor.com/blogs/2012/03/scientific-language-h-beam-pipers-qomnilingualq
http://www.feedbooks.com/book/308/omnilingual
http://librivox.org/omnilingual-by-h-beam-piper/

#4. All Day September by Roger Kuykendall
http://www.gutenberg.org/files/24161/24161-h/24161-h.htm
http://ia700508.us.archive.org/21/items/short_scifi_016_0905_librivox/alldaysept...

#5. Redemption Ark by Alastair Reynolds.

#1 was nominated for a Hugo but lost to Flowers for Algernon, so it should not be bad, yet it says nothing whatsoever about the "science" or "technology" enabling the story. An alien just makes things happen as though by magic;
#2 is unusually similar to #1 in that the technology driving the story performs the same function, but the writer offers a kind of explanation, mentioning Möbius loops, and includes a little astronomy;
#3 is a mixture of speculation about future technology combined with considerable discussion of real science regarding physics and chemistry;
#4 is strictly hard SF and contains nothing likely to be impossible at some time in the not-too-distant future. It is in fact curious: it is a Moon-colony story written ten years before the first Moon landing in 1969, and its prospector finds water on the Moon, which was actually found in October 2009. The story also has a little chemistry, and it brings to mind Arthur C. Clarke's "A Fall of Moondust";
#5: one of the characters attempts to get near light-speed using inertia-suppressing technology and ends up with a ship full of pulverised goo where her crew used to be.

(*) All the imaging and algorithmic tools here are based on Convolutional Neural Networks. A ConvNet is just machinery for interpreting localized features (pixels, superpixels and so on). These features seem similar to textures, but they are not exactly the same. There is a beautiful paper on visualizing what is actually learned (memorized) about images with this technology.

I think it is important to know how the machinery behind it actually interprets the pixels of images.
FaceApp is most likely based on end-to-end trained networks. In other words, nobody has to tell the algorithm "this is nose wrinkles, this is smile lines" and so on for each component. They just throw millions of faces, young and old, at it, and the algorithm learns the underlying patterns by being asked to decide whether an output is a real old person or a fake old person. I do not know FaceApp's actual algorithm, but I'm pretty confident it is based on CycleGAN, DiscoGAN, or something similar; there's a whole zoo of Generative Adversarial Networks nowadays. So, the answer to the question is no. They just store your pictures somewhere else and throw FaceApp's "algorithm" at them to get at your aged self...
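As a rough illustration of that adversarial setup, here is a minimal GAN training-loop sketch in PyTorch on toy 1-D data. This is my own toy example, not FaceApp's code (which is not public); a real face-aging model would use convolutional generators on images and, in the CycleGAN case, a second generator plus a cycle-consistency loss so it can train on unpaired young/old photos:

```python
# Toy GAN: generator G maps noise to samples; discriminator D answers
# "real or fake?" - the same game FaceApp-style aging models play on faces.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 3.0   # stand-in for "real old faces"
    fake = G(torch.randn(64, 8))            # generator's attempt from noise

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make D call the fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```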
For fun I uploaded a photo depicting my hangover after two days of binge drinking, and the app gave very weird results: my older face was very similar to Boris Johnson's. Is there anything wrong with me? My bathroom mirror has a similar app: whenever I look at it I see an old bloke with a whitish beard and a lined, jowly face instead of the fit thirty-year-old sex god that I still am mentally.
antao | Aug 30, 2019 |

You may also like

Related authors

Statistics

Works
9
Members
64
Popularity
#264,968
Rating
4.4
Reviews
2
ISBNs
55
Languages
3

Charts & Graphs