One of the themes of this newsletter is how non-scientists should relate to scientists. A typical attitude that most of us adopt is one of deference and respect, similar to the attitude we commonly take toward doctors or lawyers. These people are trained professionals. They have worked hard to enter their profession, and their ongoing success in that profession is good evidence that they are skilled at their work. When my doctor prescribes me some medicine, or my lawyer advises me to reject a contract, I take that advice seriously. Similarly, if I have some reason to seek out a scientist for a recommendation on some scientific matter, I am liable to factor that recommendation into my deliberations.
This analogy between doctors, lawyers and scientists might strike some readers as already a bit strained. I agree, and I take the differences between these professions to indicate what is misguided about a question like “Why trust science?” This is the title of an important 2019 book by the historian of science Naomi Oreskes. Reading it is a great way to start grappling with questions about scientific authority. (Other, related books that I hope to discuss in the future are by Rauch, Strevens and Vickers.) Oreskes develops what could be called an institutional answer to her question: we should all trust science because science is an institution that is organized in the right way. She rightly rejects the traditional answer that this organization involves scientists using some uniform “scientific method”. There are instead “diverse methods” employed across fields. What is special about science, for Oreskes, is “its sustained engagement with the world” and “its social character” (55). This engagement with the world is said to produce enormous amounts of empirical evidence concerning how the world really is. But this empirical evidence is not sufficient to warrant our trust. In addition, “We must also take to heart — and explain — the social character of science and the role it plays in vetting claims” (57). Familiar aspects of this “vetting” include peer-reviewed journals and tenure reviews. Oreskes also endorses Longino’s argument that a highly diverse scientific community that is rewarded for its critical efforts is our best means of ensuring a kind of objectivity for scientific results. So, given that science as an institution is aimed at gathering empirical evidence about the world, and that it is carried out by a sufficiently diverse and critical community, we are right to trust science.
Although this seems like an argument for some kind of blind trust or blanket endorsement of scientific authority, Oreskes is quick to note when or how this trust should be withdrawn:
My arguments require a few caveats. Most important is that there is no guarantee that the ideal of objectivity through diversity and critical interrogation will always be achieved, and therefore no guarantee that scientists are correct in any given case. … outsiders may judge scientific claims in part by considering how diverse and open to critique the community involved is. If there is evidence that a community is not open, or is dominated by a small clique or even a few aggressive individuals — or if we have evidence (and not just allegations) that some voices are being suppressed — this may be grounds for warranted skepticism. In this respect, each case must be evaluated on its own merits (59).
That is, there is some kind of default or basic trust in science that can be qualified or withdrawn, but only when evidence arises that some scientific community has failed to meet the standards Oreskes identifies.
There seem to me to be two problems with Oreskes’ position. First, given how science is actually organized as an institution, the sort of basic trust in science that Oreskes argues for should typically be withdrawn. Second, a much more qualified and nuanced attitude towards science as an institution is sufficient for using the results of science in political and other practical deliberations. I will sketch out these two problems here, but I hope to return to them in more detail in future discussions.
To illustrate the first problem, consider a claim that is nearly universally endorsed by the scientific authorities on some topic, and yet is controversial outside of science. For example, suppose that the most recent IPCC report makes some specific prediction about how global temperatures will rise under this or that emissions scenario. This report aims to present a consensus of the experts on these issues, and in some cases the findings are given with “high confidence”. If we ask non-scientists whether they accept such a prediction, many will say “no”. And if we tell them that this prediction is endorsed by the scientific experts, then many will change their minds to “yes”, but some will continue to say “no”.
What should happen if we apply Oreskes’ approach to this sort of prediction? A non-scientist should consider their evidence about the organization of the scientific community. But most of us lack this sort of evidence: we do not understand how the prediction was arrived at, how it was scrutinized, or how diverse the community involved in those deliberations actually was. Of course, if we start with some kind of blanket trust in science as an institution, then we are likely to extend that trust to this more specific institutional group, the IPCC. But it seems to me that applying Oreskes’ test from a more open-minded starting point should lead one to doubt whether this sort of prediction really has been scrutinized in the way she requires. The basic worry is that non-scientists not only lack access to the evidence that scientists use to make their prediction, but also lack evidence concerning the organization of these scientific institutions.
But is this really a problem? Suppose that we lack a generic reason to “trust science” as an institution. Would that create some sort of practical or political problem in using scientific findings? My suggestion is that the answer is “no”. Two different kinds of scientific claims illustrate how non-scientists can operate without access to this kind of evidence. On the one hand, there are what could be called theoretical or foundational scientific claims. If we suppose that nearly all cosmologists endorse the Big Bang account of the origin of the universe, does that give the rest of us sufficient reason to believe that this account is true? For this type of claim, it seems best for non-scientists to remain agnostic, unless they want to take the time to review the scientific evidence (which is considerable). On the other hand, there are predictions like those found in the IPCC report that have urgent practical significance. For this type of claim, I would argue that it is rational for non-scientists to act on the claims endorsed by scientists even if they do not believe that those claims are true. For even if a non-scientist thinks that there is only a 20 percent chance that this sort of prediction is true, they still have a practical reason to adopt whatever policies would avoid the predicted scenario, provided the costs of that scenario are severe enough that even a 20 percent chance of it outweighs the costs of precaution. So even without “trust” in science as an institution, non-scientists can act on some scientific claims without believing that those claims are true.
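To make the decision-theoretic point concrete, here is a back-of-the-envelope illustration; the numbers are invented purely for the sake of the example. Suppose a non-scientist assigns only a 20 percent probability to the predicted scenario, takes the harm of that scenario (should it occur) to be 100 units, and takes the cost of the precautionary policy to be 10 units. Then the expected cost of inaction exceeds the cost of precaution:

$$\underbrace{0.2 \times 100}_{\text{expected cost of inaction}} = 20 \;>\; 10 = \text{cost of precaution}.$$

On those assumptions, acting on the scientists’ prediction is the better bet even without believing it. The particular numbers are not the point; the point is that a claim can guide action without being believed.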
Here we see, then, the difference between the advice of doctors and lawyers and generic claims by scientists: we get the advice of doctors and lawyers when we face an urgent practical decision, but many of the claims advanced by scientists do not bear on any such decision. It is only when scientific claims directly impact our choices that we must consider the question of trust. But in those situations we can bypass a global, institutional consideration of how science is organized and focus more directly on the consequences of our decisions.