Why Some Governments Have a Grudge Against Science

How should we categorize the kinds of scientific projects there are? The answer our public institutions expect determines how the sciences are run by government, and it shapes what the government expects of scientists. The classic answer relies on the distinction between basic (or pure) and applied science. Basic science looks for “true facts” about the world: the structure of the atom, the nature of chemicals, or the reason we need to hypothesize about dark matter or quantum probability. Applied science looks to solve problems that we humans have: how to cure malaria, or how to guide missiles into particular buildings from tens of thousands of feet in the air. This distinction makes sense at first glance - applied science is what we pursue for some goal, basic science is what we pursue for its own sake.

The Old Social Contract for Science

Distinguishing between pure and applied research

This distinction between basic and applied science is older than the institutions that fund the sciences in our society, and it shapes our conception of how science should operate and what scientific research projects should achieve. Since the postwar 1940s, influential scientists and officials have adopted this view: pure science should not have social values - it should not care about usefulness or the public good, but instead should be focused only on truth and understanding. One part of this distinction was made especially important by the first use of nuclear weapons in 1945: scientists who worked on pure research should never be held responsible for the applied consequences of their discoveries. Part of the justification came plainly from the idea that scientists doing basic science should care only about truth and understanding: worrying about consequences or responsibilities would be an additional and unnecessary restraint on what scientists could do, which would taint the purity of their work and ultimately its ability to achieve the understanding of the world it aimed for. It is plausible, too, that by then scientists could foresee how seemingly innocent research could be turned around and used for horrifying weapons. In a system where the consequences of a scientist’s purely scientific work could not be held against them, researchers would not have to worry about being held responsible or accountable for how the military or industry used their discoveries. This would allow scientists to work with a clearer conscience, without fear of lifelong guilt or public consequences if someone else found a way to weaponize their work.

Why fund pure research?

Due to the intense advocacy of scientists deeply involved with the American government, who argued along the very lines I’ve laid out above, this distinction took root. The idea that scientists doing basic science should have no goal or social values (other than the pursuit of truth) is still pervasive, and it justifies research programs undertaken for curiosity’s sake. Yet this is not the whole story - what justifies public funding of these projects when applied science projects also compete for priority? Why do we think that physicists working on fundamentals merit our support in the same way as those working to cure cancer or develop national defence materials? This is where the linear model of scientific production comes in. Relying on the distinction between pure and applied science, the linear model asserts that pure scientific research is the “seed corn” of the “crop” of benefits that applied science is able to provide, and that by funding pure science, applied science in industry and the military can grow real public benefit from seemingly useless facts about the world. This model is everywhere - American funding of science is categorized as pure or applied, and many scientists hold on to the idea that basic science is the seed corn of future social benefit. The Guardian’s Ian Sample recently reported on a biomedical researcher, Dr. David Liu, and his receipt of a 2025 Breakthrough Prize for gene-editing research. Commenting on funding for science in America, Liu remarked that “slashing funding and people from science in the United States is like burning your seed corn”. This vision of a linear model, in which money is given to basic researchers, who then produce usable knowledge without being responsible for the applications of that knowledge, is a fundamental idea in the modern institutions of science.

How Does Science Actually Work?

Dissolving the distinction

There’s a problem, though. This model of scientific research might be the way that governments expect things to go, and it might be the way that some scientists see their work going, but it does not, and cannot, accurately describe how science actually works. The first problem, and the easiest to explain, is that the distinction between pure and applied science is not so clear in practice. All pure science needs to be experimentally tested, worked on physically, and confirmed with evidence - the very same practices that applied scientists use. Since pure science requires these practical procedures, the distinction cannot be explained by a difference in the nature of what is done. Nor is it a difference in location, funding, or the institution doing the work. Applied science often needs to expand what is known about phenomena just to get a project off the ground, and pure scientific research often has a potential application in mind. In industry and the military especially, the line is completely blurred - and has been for a long time, as the production of the atomic bomb shows. The only real difference might be found in the intention of the scientist: pure science is intended to be free from responsibility and focused on discovering rules or ways of understanding the universe that allow for broad application or predictive power, while applied science is intended to solve some specified problem.

Yet even if scientists intend to be free from responsibility for their work, it does not follow that they are. Once freedom from responsibility is off the table, the only remaining difference is that pure science is intended to solve several (vague or less specific) problems in understanding, while applied science is intended to solve one (that is clearly specifiable). This is not a useful distinction, as applied science can intend to solve several problems as well! We are left with a picture of basic researchers choosing to ignore the social values they hold - values that help determine which problems they aim to solve - in the hope that they can also ignore any responsibility for the consequences of their work. Even if these “basic” problems aren’t specifiable in terms of something that will bring immediate economic or social benefit, they are problems rooted in a lack of understanding of the world, problems that scientists hope their research will make progress on. This is still a social problem; it’s just not so clearly delineated.

Value-freedom and social goals

It follows from this discussion that the linear model of scientific research is also doomed. We can’t expect basic science to exist independently of applied science, and so we can’t expect that funding it as a separate category of socially beneficial work will produce a special kind of good. Less obviously, but more importantly, we can’t expect any scientist to ignore social values - to be “value-free” - in their work. Instead, social responsibilities apply to all scientists, to the extent that they must make choices in their research that are based on social values. One very clear example is inductive risk: the risk of being wrong and facing the consequences of being wrong. All scientists make inductive decisions, like deciding whether chemical X causes cancer in humans if 30% of exposed humans are diagnosed with cancer within a month. The evidence in this case does not prove for certain that chemical X causes cancer, and wrongly saying that it doesn’t (if it does) could put many lives at risk. Similarly, the consequences of being wrong about a fundamental physics principle can put lives at risk: if nuclear scientists had been wrong about the principles behind the atomic bomb, and the blast had been much bigger or the fallout much more dangerous, lives would have been at immense risk. Since physical evidence (unlike mathematics) is always probabilistic, there is no way to mathematically or logically prove an empirical hypothesis, and so the consequences of being wrong need to be considered in every case, not just in what would previously have been counted as “applied science”.
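To make the point about inductive risk concrete, here is a toy sketch (in Python) of how the two steps come apart. Every number in it - the 5% background rate, the 0.7 credence, the costs assigned to each kind of error - is a hypothetical illustration of my own, not a figure from Douglas or from any study. The first step, computing how surprising the evidence would be if the chemical were harmless, is purely evidential; the second step, deciding whether to declare the chemical harmful, cannot be taken without weighing the costs of the two ways of being wrong.

```python
# A toy sketch of inductive risk. All numbers are hypothetical illustrations.
from math import comb


def binomial_tail(k: int, n: int, p: float) -> float:
    """Probability of seeing at least k cases in n people if the true rate is p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))


# Step 1 (evidential): hypothetical study where 30 of 100 exposed people develop
# cancer, against an assumed background rate of 5%.
p_value = binomial_tail(30, 100, 0.05)
print(f"Chance of 30+/100 cases if chemical X were harmless: {p_value:.2e}")

# Step 2 (value-laden): turning evidence into a verdict requires weighing errors.
credence_harmful = 0.7        # hypothetical credence that X is harmful, given all evidence
cost_missed_hazard = 1000.0   # hypothetical cost of calling X safe when it is harmful
cost_false_alarm = 10.0       # hypothetical cost of calling X harmful when it is safe

expected_cost_if_declared_safe = credence_harmful * cost_missed_hazard
expected_cost_if_declared_harmful = (1 - credence_harmful) * cost_false_alarm

verdict = ("declare harmful"
           if expected_cost_if_declared_harmful < expected_cost_if_declared_safe
           else "declare safe")
print(f"Expected cost if declared safe:    {expected_cost_if_declared_safe:.1f}")
print(f"Expected cost if declared harmful: {expected_cost_if_declared_harmful:.1f}")
print(f"Lower-risk verdict given these value-laden costs: {verdict}")
```

Change the costs and the very same evidence can support the opposite verdict - that is the sense in which a standard of evidence carries social values with it.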

All of this is confirmed by the direction that modern scientific organizations have taken with regard to responsibility. The American Association for the Advancement of Science’s recent statement on scientific freedom and responsibility requires all scientists to consider the social impact of their work and to take the social values of humanity into consideration. The desire for value-freedom has left scientific institutions and many of the scientists working within them.

The AAAS Statement on Scientific Freedom and Responsibility, from https://aaas.org

What does this tell us?

The system we have set up for funding science requires scientists to do things that don’t make sense and may even actively harm society. Failing to consider the impacts of being wrong, refusing to set standards of evidence that match the level of inductive risk involved, and trying to be completely neutral on all values (even the value of human life!) can lead to scientific results that recommend harmful practices or mislabel harms as harmless. The narrative that completely unhindered and non-responsible work is both possible and good encourages scientists to ignore the consequences of being wrong as well as the impact of their work on society, which is unconscionable. Recognizing this, though, does nothing to change the real structures that fund scientific work - the ones that expect value-free scientists to come up with value-free results just because they are paid to do “pure” rather than “applied” science. All of those structures have remained in place since they were set up. The governmental bodies that fund science are still paying scientists under the old contract. Even if the old contract is untenable and scientists can no longer abide by it, that is what is paid for and what is expected.

The expectation that scientists abide by the old social contract for science can explain much of the current strife between American scientific institutions and governmental agencies. The American administration’s insistence on removing social values from science, especially in the form of diversity initiatives, is a prime example. By stripping institutionalized social values from scientific institutions, the government is enforcing the old social contract. Writing for The Guardian, Christina Pagel remarks that a primary reason for this is that diversity issues are seen as a distraction from successful research(!) - a sentiment echoed by the British newspaper The Times. It is clear that the relationship between science and government is strained, and given what the government expects (and cannot receive), this is unsurprising.

The New Social Contract

What can we do?

The government, ideally, is made up of representatives of the people. When the people no longer expect the old contract, the government should no longer expect the old contract. This is very idealistic, and it seems a tall order to ask people to expect a new system without providing a new system for them to expect. I do not have space here to lay out a new schema for scientific operations and funding; in fact, I do not have a worked-out new system to lay out. However, some features of the new system are essential - especially where the old system failed. First, scientific funding should recognize that there is always a problem to be solved, and that social values will always play a role in the science that gets done. It’s impossible to ignore social values, but it isn’t impossible to regulate them and to decide democratically what’s important. This means that good science does not need to be basic, completely free of values, directed only by scientists (and never by government!), and funded only in the hope of some future application. As a society, we value science because it solves problems and expands what we can do, so it’s important that people recognize that science is a social activity no matter when or where it’s conducted. Second, there may still be distinctions between kinds of scientific work, but they cannot rest on the intentions of the scientist. They might more profitably be found in the kind of problem under investigation: whether industry is trying to make a profit, say, or a government is testing a new drug for safety. It may also be useful to distinguish between the methods that scientists use, though a major worry there is that many methods could be used to solve a given problem, and it’s not always clear why scientists choose one over another. A third (and easier) distinction might just be where the science is done: a university, a company’s lab, a government institution. These sorts of distinctions might bring important features of scientific research to light, and ideally they would not rest on an impossible one.

We need more than what I’ve laid out to move forward. We need a way of seeing science as part of society, and we need to discover a way of directing scientific research socially. This is a monumental task - one that can only be started when the public is ready to move beyond a vision of science that can never be fulfilled.

Credits

This post is inspired by a presentation given by Heather Douglas at the University of British Columbia, as well as her paper which formed the basis for that presentation: Douglas, Heather, and T. Y. Branch. “The Social Contract for Science and the Value-Free Ideal.” Synthese 203, no. 2 (January 22, 2024): 40. https://doi.org/10.1007/s11229-023-04477-9.

“The Social Contract for Science and the Value-Free Ideal” is my source for factual claims made here regarding the history of science and philosophy.