
How reliable is science?

Trust in collaborations

It is with justified pride that a friend presents me with his latest scientific publication: six pages of content plus one and a half pages listing the two hundred authors. My merely subdued enthusiasm causes some disgruntlement in him. My reference to the Ig Nobel Prize for Literature (!) in 1993, which went to the 976 authors of a medical study of barely ten pages, does not improve the mood either.

My friend replies that this publication culture is common today in the usual large-scale collaborations of particle physics, and that the quality of the work is ensured by the collaboration's internal bodies and processes.

I wonder: where has authorship gone? How can the creative achievement of individual researchers be recognized in such a culture? The list of publications plays a central role in appointment procedures for professorships. I can no longer see the value of such lists; I had my doubts even before. If the same publication appears on the lists of two hundred people, that is inflation. It can no longer serve as proof of achievement.

Above all, it is one thing: leveling. Film credits at least tell us who did what: production, direction, acting, make-up, editing, music, special effects, fetching the bread rolls, driving people from place to place, and so on.

We are all laypeople in almost every field, scientists included. Our judgment rests to a large extent on information from people whom we trust. What should one hold on to, if not the good reputation of big names? We rely on institutions and editorial offices whose constitutions and statutes put trustworthy scientists and journalists in charge. From publication processes we demand clear rules of competence and a delimitation of responsibilities; in short: transparency.

Through this leveling, and through a quality-assurance system that is opaque to outsiders, science largely eludes outside judgment.

Large collaborations are unavoidable today; the multitude of authors is not. There is another way. For example, one can give the collaboration a name and treat it as an independent organism. As such, it too can gain a reputation.

A famous example from mathematics is the authors' collective known as "Nicolas Bourbaki". Only the collective appears as the author of the works. At Bourbaki it was even kept secret who the members were.

The collective bears responsibility for the work as a whole. Only the collective deserves the fame, and only the collective has a reputation to lose.

A collaboration or a collective cannot apply for a professorship. But nothing prevents individuals from stepping forward and gaining a personal reputation through their own publications. This is exactly what happened at Bourbaki: from my studies I know books by Henri Cartan and Jean Dieudonné. Their writing style alone makes it clear that they are Bourbakists. It is now known that they were founding members of the collective.

I appreciate my friend's work because I know something about his contribution to the collaboration. The paper with its two hundred authors added nothing to that appreciation.

I can only advise members of appointment committees and comparable bodies not to rely on the length of the publication lists submitted by applicants. Especially with multi-author works, it is worth researching exactly what contribution the applicant actually made.

Search for the Grail

Most skeptic organizations make critical thinking and science the basis of their work. Only in this way, they hold, can one arrive at reliable knowledge that allows illusory thinking and pseudoscience to be recognized as such.

The last Oops! article spoke of skeptics who were amazed to find that some of what they call "unscientific" is science in the ordinary sense. Conversely, the assertion popular among skeptics that their own argument has been "scientifically proven" rests on rather shaky ground.

Let us assume that science is what the scientific community hatches and declares credible. We also have a rough idea of what is meant by the scientific enterprise. Our trust in science is not entirely groundless: the technology that surrounds us works quite well, and it is the eminently practical result of scientific activity.

Matters are not always so clear-cut, especially when it comes to investigating shy and fleeting, weak and fickle effects and processes that are presumably psychic in nature. There the scientific approach reaches its limits. It is research into things that may not exist at all; it is like a strictly scientific search for the Grail.

What psi researchers do, for example, not only looks like science; it also meets the strict standards of scientific method. Nevertheless, the question remains whether the object of this research actually exists. It is not easy for the skeptic to accept as science an activity that deals with something non-existent. But that is precisely the question: does psi really not exist? In the penultimate Oops! article I pointed out how a stalemate between researchers and skeptics can arise.

The situation is similar with some studies by the Karl and Veronica Carstens Foundation. They are primarily concerned with the effectiveness of homeopathy. But it is precisely this effectiveness that the skeptic fundamentally questions. For him it is the scientific investigation of something that does not exist. And yet it is, evidently, science.

The transitions to mainstream science are fluid. After all, what are we to make of a large state-funded research project designed to determine the neutrino mass? An early hypothesis holds that this mass is zero. The project has now shown, at great technical expense, that the estimate of the upper limit of the neutrino mass can be lowered from the previous 2 eV to 1.1 eV. (What is measured is the energy corresponding to the mass.) Since the project is still running, one can only hope that in time a lower limit for the mass will emerge as well. With that, one would at least know that the Grail really exists. (There are model-based estimates of a lower limit. Direct measurements have so far shown nothing of the kind, as I learned upon inquiry.)
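As an aside of my own (not part of the project's reporting): particle masses are quoted in electron volts via the energy equivalent $E = mc^2$. Converting the new upper limit into kilograms shows how tiny it is:

$$ m_\nu < \frac{1.1\,\mathrm{eV}}{c^2} = \frac{1.1 \times 1.602 \times 10^{-19}\,\mathrm{J}}{(2.998 \times 10^{8}\,\mathrm{m/s})^2} \approx 2.0 \times 10^{-36}\,\mathrm{kg}. $$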

The spectacular draws - the refutation does not

The 2019 Ig Nobel Prize for Psychology went to Professor Fritz Strack of Würzburg for the discovery that a pen held crosswise in the mouth makes you smile and thereby happier, and for the discovery that this is not the case.

In the past few years, several research groups had tested Strack's thesis on thousands of subjects. Nine studies saw the effect; eight found the opposite. This highlights the replication crisis in psychology: "Many of the results of classic experiments in this field cannot be reproduced when other researchers try to repeat them" (Christoph Drösser in Zeit Online of September 13, 2019).

It is a much-lamented fact that scientific journals greatly prefer spectacular results. Failed replication attempts have a much smaller chance of reaching the public. This distorts what we call the state of the art.

I use the pen example because it showed me my own gullibility. I value Daniel Kahneman's work very highly and have incorporated some of his results into my studies of thought traps. Kahneman won the Nobel Memorial Prize in Economics in 2002. He endorses Strack's pen thesis in his book "Thinking, Fast and Slow" (2011, p. 54). I did not spread the thesis further, but I accepted it without question. That could have ended badly. The lesson: a skeptic should not throw his critical faculties overboard, not even when facing authorities.

Questionable Methods of Science

The questionable methods of science were already mentioned in the Oops! article on psi research. They threaten the credibility of what we call the state of the art. In his book "Die Pharma-Lüge" (2013; English original: "Bad Pharma"), Ben Goldacre compiles the types of bad studies. I list some of these questionable methods here and characterize them briefly.

Outright fraud. A scientist falsifies measurement results or simply invents them wholesale.

Fishing for Significance. The researcher creates the false impression of having properly reported all trials in a test series, when in fact he has made a suitable selection in order to land a spectacular publication. The accompanying publication bias distorts the image of science. A variant of this questionable practice is terminating a study prematurely once the desired result can no longer be expected. The opposite also happens: keep testing until the desired result turns up.
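How quickly a "suitable selection" produces a spectacular result can be shown in a few lines of code. A minimal sketch (my own illustration, not an example from Goldacre's book): twenty experiments are simulated in which there is no effect at all, and only the best p-value is reported.

```python
# A toy model of "fishing for significance": many null experiments,
# only the most impressive one gets written up.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
careers = 2_000      # simulated researchers
experiments = 20     # trials per researcher, none with a real effect
hits = 0             # researchers with at least one "significant" finding

for _ in range(careers):
    pvals = []
    for _ in range(experiments):
        a = rng.normal(0.0, 1.0, size=30)  # "treatment" group, no effect
        b = rng.normal(0.0, 1.0, size=30)  # control group
        pvals.append(ttest_ind(a, b).pvalue)
    if min(pvals) < 0.05:
        hits += 1

print(f"at least one p < 0.05: {hits / careers:.0%}")
print(f"theory: 1 - 0.95^20 = {1 - 0.95**20:.0%}")  # about 64%
```

Roughly two out of three such "research careers" yield a publishable finding, although nothing is there.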

Presorted test and control groups. The Oops! article "Slim in 14 days, viewed with skepticism" shows such a case: it concerned the proof of effectiveness for a fitness device. The user group and the control group were not selected at random. The people in the control group were slimmer on average, and their prospects of losing weight were therefore lower from the outset. Goldacre speaks of distorted samples in medicine (2013, p. 208): "Most studies on the basis of which medical decisions are made in practice test drugs only on unrepresentative ideal patients, who are often young, have a single diagnosis, hardly any other health problems, and so on."
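What presorting does to such a "proof of effectiveness" can also be made visible in a toy calculation. A sketch with invented numbers (mine, not data from the fitness-device study): weight loss here depends only on the starting weight, and the device does nothing at all.

```python
# Presorted groups fake an effect where none exists.
import numpy as np

rng = np.random.default_rng(seed=7)
weights = rng.normal(80, 12, size=200)             # starting weights in kg
# assumed relation: heavier people lose more; the device has zero effect
loss = 0.1 * (weights - 60) + rng.normal(0, 1, size=200)

# presorted: the heaviest half gets the device, the slimmer half is control
order = np.argsort(weights)
control, device = loss[order[:100]], loss[order[100:]]
print(f"presorted:  device {device.mean():.1f} kg vs control {control.mean():.1f} kg")

# randomized assignment removes the spurious difference
idx = rng.permutation(200)
control, device = loss[idx[:100]], loss[idx[100:]]
print(f"randomized: device {device.mean():.1f} kg vs control {control.mean():.1f} kg")
```

With presorted groups the useless device appears to triple the weight loss; with random assignment the difference vanishes.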

Too small a sample. The Aral study, which I report on in my book "Klüger irren - Denkfallen vermeiden mit System" (2016), is an example of how small samples can produce lurid yet empty headlines. Of the three hundred respondents, half of them women and half men, twelve women but only six men chose a Ford as their next car. Inflated into a sensation, it reads: "4% of men and 8% of women would buy a Ford as their next car".
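Whether twelve of 150 women versus six of 150 men signals anything at all is easy to check. A sketch (my own quick check, not part of the Aral study):

```python
# Is 12/150 vs 6/150 a real difference or just sampling noise?
from scipy.stats import fisher_exact

#            chose Ford   chose something else
table = [[12, 138],   # women
         [6,  144]]   # men
_, p = fisher_exact(table)
print(f"p = {p:.2f}")  # well above 0.05: compatible with pure chance
```

The seemingly dramatic doubling from 4% to 8% is statistically indistinguishable from no difference at all.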

Creative goal setting. This is a variant of Fishing for Significance. Here it is not a large number of trials from which a selection is made, but a single trial that is evaluated according to several assessment criteria. In health care these are, for example, pain, depression, quality of life, mobility, all-cause mortality, or deaths from specific causes. "If we measure many factors, in the end some of them will be statistically significantly improved merely due to the natural random variation that occurs in all studies." (Goldacre, 2013, p. 233)
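Goldacre's point is easy to quantify. A small worked example (my own arithmetic): if $k$ independent endpoints are each tested at significance level $\alpha = 0.05$, the probability that at least one of them comes out "significant" by pure chance is

$$ P = 1 - (1 - \alpha)^k, \qquad k = 10: \quad P = 1 - 0.95^{10} \approx 0.40. $$

With ten endpoints, even a completely ineffective treatment has about a 40% chance of delivering a headline.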

Questionable research practices distort the "state of the art".

The Matthew Effect

Matthew 13:12 says: "For whoever has, to him more will be given, and he will have abundance; but whoever has not, even what he has will be taken from him." In popular terms: "The devil always shits on the biggest pile." The more cautious speak of the Matthew effect.

I had this experience years ago: I had submitted a scientific article to a renowned journal, where it went through the usual peer-review process. The reviewer recommended that I cite two articles by a well-known author. Although the reviewer remained anonymous, I quickly realized that it had to be the recommended author himself. I knew him personally.

Many an author goes about it even more brazenly. In the Amazon review of the book "Autopilot im Kopf", Rolf Dobelli wrote: "Carl Naughton could have written a simple guide with tips on how to avoid thought traps. Instead, he digs into the depths of our brains and unearths astonishing insights [...] Naughton's well-founded and entertaining reflections would have deserved a more careful presentation, and it is his bad luck that the book appears shortly after Rolf Dobelli's bestseller The Art of Thinking Clearly, which plows the same terrain."

As one can see, authors sometimes form small circles and talk each other up; and some even applaud themselves.

Even without calculating intent, a Matthew effect arises. The unbiased reader considers famous authors good, not because they are good, but because they are famous. A feedback process largely independent of quality sets in. The same principle operates in science: frequently cited authors must be good, so one cites them too, hoping for recognition from the establishment. So: be careful when interpreting bibliometric measures and rankings!
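How little quality is needed for such concentration can be seen in a toy model. A minimal sketch (my own illustration of the standard preferential-attachment idea, not from any bibliometric study): all authors are equally good, yet citations pile up on a few of them.

```python
# A quality-free citation market: each new paper cites an author with
# probability proportional to the citations that author already has.
import random

random.seed(42)
authors = 100
citations = [1] * authors        # everyone starts with a single citation
for _ in range(10_000):          # new papers arrive one by one
    cited = random.choices(range(authors), weights=citations)[0]
    citations[cited] += 1

citations.sort(reverse=True)
share = sum(citations[:10]) / sum(citations)
print(f"the top 10 of 100 equally good authors hold {share:.0%} of all citations")
```

Fame concentrates purely through feedback; the model contains no notion of quality at all.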

Instead of the quality of the content, sensitivities become the driving force; the substance then hardly matters. Some discussion pages of Wikipedia articles are wonderful examples of such meaningless dynamics. In several Oops! articles I dissect the discussion of the so-called goat problem (known in English as the Monty Hall problem) and demonstrate how heated discussions develop largely independently of the content.

The Matthew effect skews what we call the state of science.

Conclusion

The skeptic had better not appeal sweepingly to the "state of the art". When in doubt, he takes a closer look and studies one or another research report and review article, always keeping the possibility of deception in mind. He also seeks out opposing views. In the end he has to weigh things up and come to a judgment himself. It is part of the essence of science that no absolute certainty arises in this process, that no truth emerges.
