November 7, 2010

Big Science: The Academic-Government-Industry Complex

The New World Order of Science

The prime objective of 'Big Science' is not the search for abstract scientific truth. Since the mid-20th century, scientific truth has been suppressed by the most powerful forces in society — a scientific-technological elite — and public policy has become the captive of knowledge monopolies and research cartels, such as the taxpayer-funded NSF and NIH, as well as private foundations.

By Henry Bauer, LewRockwell.com
December 19, 2009

I’m going to sketch a chronology and analysis that draw on the history of several centuries of science and on the many volumes written about it. For the sake of concision, I’ll make some very sweeping generalizations without acknowledging the necessary exceptions and nuances. But the basic story is solidly in the mainstream of the history of science, the philosophy of science, the sociology of science, and the like, what’s nowadays called "science & technology studies" (STS).

It never was really true, of course, as the conventional wisdom tends even now to imagine, that "the scientific method" guarantees objectivity, that scientists work impersonally to discover truth, or that scientists are notably smarter, more trustworthy, and more honest, so absorbed in their work that they neglect everything else and don’t care about making money.

But it is true that for centuries scientists weren’t subject to multiple and powerful conflicts of interest. There is no "scientific method." Science is done by people, and people aren’t objective. Scientists are just like other professionals; to use a telling contemporary parallel, they are professionals just like the wheelers and dealers on Wall Street: not exactly dishonest, but looking out first and foremost for Number One.

"Modern" science dates roughly from the 17th century. It was driven by the sheer curiosity of lay amateurs and the God-worshipping curiosity of churchmen; there was little or no conflict of interest with plain truth-seeking. The truth-seekers formed voluntary associations: academies like the Royal Society of London. Those began to publish what happened at their meetings, and some of those Proceedings and Transactions have continued publication to the present day. These meetings and publications were the first informal steps to contemporary "peer review."

During the 19th century, "scientist" became a profession; one could make a living at it. Research universities were founded, and with that came the inevitable conflict of interest between truth-seeking and career-making, especially since science gained very high status and one could become famous through success in science. (An excellent account is David Knight’s The Age of Science.)

Still, it was pretty much an intellectual free market, in which the entrepreneurs could be highly independent because almost all science was quite inexpensive and there was a multitude of potential patrons and sponsors, circumstances that made for genuine intellectual competition.

The portentous change to "Big Science" really got going in mid-20th century. Iconic of the new circumstances remains the Manhattan Project to produce atomic bombs. Its dramatic success strengthened the popular faith that "science" can do anything, and very quickly, given enough resources. More than half a century later, people still talk about having a "Manhattan Project" to stop global warming, eradicate cancer, whatever.

So shortly after World War II, the National Science Foundation (NSF) was established, and researchers could get grants for almost anything they wanted to do, not only from NSF but also from the Atomic Energy Commission, the Army, the Navy, the Air Force, the Defense Advanced Research Projects Agency (DARPA), the Department of the Interior, the Agriculture Department . . . as well as from a number of private foundations.

I experienced the tail end of this bonanza after I came to the United States in the mid-1960s. Everyone was getting grants. Teachers colleges were climbing the prestige ladder to become research universities, funded by grant-getting faculty "stars": colleges just had to appoint some researchers, those would bring in the moolah, that would pay for graduate students to do the actual work, and the "overhead" or "indirect costs" associated with the grants – often on the order of 25%, with private universities sometimes even double that – allowed the institutions to establish all sorts of infrastructure and administrative structures. In the 1940s, there had been 107 PhD-granting universities in the United States; by 1978 there were more than 300.

Institutions competed with one another for faculty stars and to be ranked high among "research universities," to get their graduate programs into the 20 or so "Top Graduate Departments" – rankings that were being published at intervals for quite a range of disciplines.

Everything was being quantified, and the rankings pretty much reflected quantity, because of course that’s what you can measure "objectively": How many grants? How much money? How many papers published? How many citations to those papers? How many students? How many graduates placed where?

This quantitative explosion quickly reached the limits of possible growth. That had been predicted early on by Derek de Solla Price, historian of science and pioneer of "scientometrics" and "Science Indicators," quantitative measures of scientific and technological activity. Price had recognized that science had been growing exponentially with remarkable regularity since roughly the 17th century: the numbers of scientific journals being published, of papers appearing in them, of abstracts journals established to digest the flood of research, and of researchers themselves had all been doubling about every 15 years.

Soon after WWII, Price noted, expenditures on research and development (R&D) had reached about 2.5% of GDP in industrialized countries, which meant quite obviously that continued exponential growth had become literally impossible. And indeed the growth slowed, and quite dramatically by the early 1970s. I saw recently that the Obama administration expressed the ambition to bring R&D to 3% of GDP, so there’s indeed been little relative growth in the last half century.
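Price’s point is easy to check with a rough back-of-the-envelope calculation; the figures below are my own illustrative round numbers, not Price’s, and they simplify by ignoring GDP growth. If R&D already consumed 2.5% of GDP and kept doubling every 15 years, it would swallow the entire economy in about

$$0.025 \times 2^{t/15} = 1 \quad \Longrightarrow \quad t = 15 \log_2 40 \approx 80 \ \text{years}.$$

Allowing for growth in GDP itself stretches that deadline but does not remove it: the exponential curve had to flatten, as indeed it did by the early 1970s.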

Now, modern science had developed a culture based on limitless growth. Huge numbers of graduates were being turned out, many with the ambition to do what their mentors had done: become entrepreneurial researchers bringing in grants wholesale and commanding a stable of students and post-docs who could churn out the research and generate a flood of publications. By the late 1960s or early 1970s, for example, to my personal knowledge, one of the leading electrochemists in the United States in one of the better universities was controlling annual expenditures of many hundreds of thousands of dollars (1970s dollars!), with several postdocs each supervising a horde of graduate students and pouring out the paper.

The change from unlimited possibilities to a culture of steady state, to science as a zero-sum game, represents a genuine crisis: If one person gets a grant, some number of others don’t. The "success rate" in applications to NSF or the National Institutes of Health (NIH) is no more than 25% on average nowadays, and lower still for applicants from not-yet-established institutions. So it would make sense for researchers to change their aims and their beliefs about what is possible, and to stop counting success in terms of quantities; but they can’t do that, because the institutions that employ them still count success in terms of quantity, primarily the quantity of dollars brought in.

To draw again on a contemporary analogy, scientific research and the production or training of researchers expanded in bubble-like fashion following World War II; that bubble was pricked in the early 1970s and has been deflating with increasingly obvious consequences ever since.

One consequence of the bubble’s burst is that there are far too many would-be researchers and would-be research institutions chasing grants. Increasing desperation leads to corner-cutting and frank cheating. Senior researchers established in comfortable positions guard their own privileged circumstances jealously, and that means in some part not allowing their favored theories and approaches to be challenged by the Young Turks. Hence knowledge monopolies and research cartels.

A consequence of Big Science is that very few if any researchers can work as independent entrepreneurs. They belong to teams or institutions with inevitably hierarchical structures. Where independent scientists owed loyalty first and foremost to scientific truth, now employee researchers owe loyalty first to employers, grant-givers, sponsors. (For this change in ideals and mores, see John Ziman, Prometheus Bound, 1994.)

Science used to be compared to religion, and scientists to monks; in the late 19th century, T. H. Huxley claimed quite seriously to be giving Lay Sermons on behalf of the "Church Scientific." But today’s scientists, as already said, are more like Wall Street professionals than like monks.

Since those who pay the piper call the tune, research projects are chosen increasingly for non-scientific reasons, perhaps political ones, as when President Nixon declared war on cancer at a time when the scientific background knowledge made such a declaration substantively ludicrous and doomed to failure for the foreseeable future. With administrators in control because the enterprises are so large, bureaucrats set the rules and make the decisions. For advice, they naturally listen to the senior, well-established figures, so grants go only to "mainstream" projects.

Nowadays there are conflicts of interest everywhere. Researchers benefit from individual consultancies. University faculty establish personal businesses to exploit their specialized knowledge, knowledge gained largely at public expense. Institutional conflicts of interest are everywhere as well: there are university-industry collaborations; some universities have toyed with establishing their own for-profit enterprises to exploit directly the patents generated by their faculty; and research universities have whole bureaucracies devoted to finding ways to make money from the university’s stock of knowledge, just as the same or parallel university bureaucracies sell rights to use the university’s athletics logos. It is not at all an exaggeration to talk of an academic-government-industry complex whose prime objective is not the search for abstract scientific truth.

It is widely known that President Eisenhower warned of the dangers of a military-industrial complex. Much less well known is that Eisenhower was just as insightful and prescient about the dangers of Big Science: "in holding scientific research and discovery in respect . . . we must also be alert to the . . . danger that public policy could itself become the captive of a scientific-technological elite."

That describes in a nutshell today’s knowledge monopolies. A single theory acts as dogma once the senior, established researchers have managed to capture the cooperation of the political powers. The media take their cues from the powers that be and from the established scientific authorities, so "no one" even knows that alternatives exist to HIV/AIDS theory or to the theory that human activities are contributing to climate change, that the Big Bang might not have happened, that it might not have been an asteroid that killed the dinosaurs, and so on.

The bitter lesson is that the traditionally normal process of science, open argument and unfettered competition, can no longer be relied upon to deliver an empirically grounded, relatively objective understanding of the world’s workings. Political and social activism and public-relations efforts are needed, because public policies are increasingly determined by the actions of lobbyists backed by tremendous resources and pushing a single dogmatic approach.

No collection of scientifically impeccable writings can compete against an Intergovernmental Panel on Climate Change and a Nobel Peace Prize awarded for Albert Gore’s activism and "documentary" film; and that is no prophecy, for the evidence is here already, in the thousands of well-qualified environmental scientists who have for years petitioned for an unbiased analysis of the data.

No collection of scientifically impeccable writings can compete against the National Institutes of Health, the World Health Organization, UNAIDS, and innumerable eminent charities like the Bill & Melinda Gates Foundation when it comes to questions of HIV and AIDS; and again that is no prophecy, because the data have been clear for a couple of decades that HIV is not, and cannot be, the cause of AIDS.


As to HIV and AIDS, maybe the impetus to truth will come from politicians who insist on finding out exactly what benefits we, the United States, are getting for the roughly $20 billion spent annually under the mistaken HIV/AIDS theory. Or maybe it will come from African Americans, who may finally rebel against the calumny that it is their reprehensible behavior that makes them 7 to 20 times more likely to test "HIV-positive" than their white American compatriots; or perhaps from South African blacks, who are alleged to be "infected" at rates as high as 30%, supposedly because they are continually engaged in "concurrent multiple sexual relationships," having multiple sexual partners at any given time but changing them every few weeks or months. Or it may come from a court case, or a series of them, over ill health caused by toxic antiretroviral drugs administered on the basis of misleading "HIV" tests; or perhaps because one or more of the "AIDS denialists" wins a libel judgment against one or more of those who call them Holocaust deniers.

Maybe the impetus to truth will come from the media finally seizing on any of the above as something newsworthy.

At any rate, the science has long been clear, and the need now is for action at the political, social, and public-relations level.

In this age of knowledge monopolies and research cartels, scientific truth is suppressed by the most powerful forces in society. It used to be that this sort of thing would be experienced only in Nazi Germany or the Soviet Union, but nowadays it happens in democratic societies as a result of what President Eisenhower warned against: "public policy . . . become the captive of a scientific-technological elite."

Henry H. Bauer is Dean Emeritus of Arts & Sciences and Professor Emeritus of Chemistry & Science Studies at Virginia Tech. His books about science and scientific unorthodoxies include Scientific Literacy and the Myth of the Scientific Method (1992), Science or Pseudoscience (2001), and The Origin, Persistence and Failings of HIV/AIDS Theory (2007). He currently writes an HIV Skepticism blog.

The Mapmakers of Society: The Beginnings of a Scientific Dictatorship

"If the fully planned and conditioned world comes into existence... the restive species [humanity]... will be vexed no longer by its chatter for truth and mercy and beauty and happiness... if the eugenics are efficient enough there will be no second revolt, but all snug beneath the Conditioners, and the Conditioners beneath her, till the moon falls or the sun grows cold." -- C. S. Lewis, The Abolition of Man, 1944

By Daniel Taylor, Old-Thinker News
August 4, 2010

To understand our history and the development of our society and political structure, the influence of the large foundations in America is an essential area of research. Their investment in the social sciences and the medical establishment shaped the direction of those fields for the 20th century and beyond. Social control and eugenics became a primary directive. These ideas, primarily through the work of the Rockefeller and Carnegie philanthropies, spread through the intelligentsia and elite circles of the Western world.

Dr. Lily E. Kay's 1993 book "The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology" documents much of the early history behind the rise of eugenics and the life sciences. Kay demonstrates that the drive for social control and eugenics was largely responsible for the emergence and growth of the science of molecular biology. Dr. Kay was a recipient of the Smithsonian Fellowship at the National Museum of American History and an assistant professor of the history of science at the Massachusetts Institute of Technology. Her 2001 obituary from MIT describes her as "...one of the outstanding historians of biology of her generation."

As Dr. Kay documents, large foundations effectively drew the maps for society to follow. The intelligentsia, trained and schooled under the strong influence of the foundations, closely followed the vision of the elite, a vision that extended into the realms of education, politics, religion, and the financial world. This influential group set out, in the United States, on a massive research campaign to discover the inner workings of man and, in turn, to devise methods of social-biological control. The United States thus became the 20th-century progenitor of eugenics.

Dr. Kay paints a clear picture of the massive influence that the wealthy elite in the United States wields, extending even to the "...development of culture and the production of knowledge in the United States..." Kay writes:
"Thus by the end of the Progressive Era, even before the large-scale commitment to the "advancement of knowledge" spurred by World War I, the human sciences received considerable support from the large foundations. Their numerous projects and the unprecedented scope of their financial and institutional resources shaped the development of culture and the production of knowledge in the United States. Through education, public opinion, stimulation of specific research agenda, and the promotion of selective categories of knowledge and research, the Foundation played a key role in the creation of a hegemonic bloc; the resources and prestige flowing into those fields relevant to problems of social control were instrumental in the formation of consensus between social and political elites, on the one hand, and academic interests on the other."
Large foundations, primarily Rockefeller and Carnegie, were investigated in 1915 by the United States Congress; the findings closely anticipated those of the later 1953 Reece Committee, which was dedicated to the same inquiry. The 1915 U.S. Commission on Industrial Relations reported that:
"The domination by the men in whose hands the final control of a large part of American industry rests is not limited to their employees, but is being rapidly extended to control the education and social survival of the nation. This control is being extended largely through the creation of enormous privately managed funds for indefinite purposes, hereafter designated "foundations", by the endowment of colleges and universities, by the creation of funds for the pensioning of teachers, by contributions to private charities, as well as through controlling or influencing the public press...
As Dr. Kay documents, many of the original members of the large foundations and their offshoots were driven by the philosophy that they were the chosen elite. In their minds, moral authority was on their side. They sought to guide the direction of the nation and mold mankind's development. Frederick T. Gates, a Baptist minister who worked closely with the Rockefeller family and its many initiatives, is quoted as saying:
"...when you die and come to approach the judgment of Almighty God what do you think He will demand of you...? Do you think he will inquire into your trivial sins, your paltry values? NO! He will ask you just One Question: 'What did you do as a trustee of the Rockefeller Foundation?'"
Chester Barnard, who served as president of the Rockefeller Foundation from 1948 to 1952, was unquestionably a member of the establishment. He saw what the Rockefeller Foundation and much of the scientific community were attempting to do and spoke out against it, though he couched his criticism in the assumption of pure motives. Barnard wrote in the Rockefeller Foundation's 1948 Annual Report:
"Inherent in our systematic efforts to promote the welfare of mankind there may be an assumption that... by reason and science we may govern the future of unborn generations in ways that we know are right... Do we mean that because we have learned to navigate the tides we shall also control them? ... We have already begun the attempts to regulate local weather. Where do we think we shall stop -- with the control of the speed of rotation of the earth, of its revolution around the sun? ... Pride goeth before a fall."
Dr. Kay comments on Barnard's criticism, stating that:
"Given this wisdom, it is paradoxical that Barnard did not hear the dissonance between his poignant words and the Rockefeller Foundation's agenda in biology, where the primary justification for studying the fundamental mechanisms of soma and psyche was the promise of intervening in the course of human behavior on a global scale."
This original directive has remained unchanged. Indeed, Dr. Kay concludes by stating that:
"The eugenic goals, which had informed the design of the molecular biology program and had been attenuated by the lessons of the Holocaust, revived by the late 1950's... a new eugenics... came to rest in safety on the high ground of medical discourse and latter-day rhetoric of population control."
Today we see this agenda moving full speed ahead. Foundations are acting more and more like governments. In an interview with the Seattle Times, UN Secretary General Ban Ki-moon was asked:
"Some say the emergence of super rich philanthropies like the Gates Foundation has undermined the effectiveness of the U.N. and its member organizations, like the WHO."
Ban responded:
"On the contrary that is what we really want -- contributions from the business community as well as philanthropies. We need to have political support, but it doesn't give us all that we need. NGOs and philanthropies and many foundations such as Bill Gates Foundation -- they're taking a very important role."
In 1996 the Rockefeller Foundation supplied grant money for early research on edible vaccines. The $58,000 grant, given to the Boyce Thompson Institute for Plant Research at Cornell University, was aimed at developing and transferring edible vaccine technology to developing countries.

Edible vaccines, according to the Indian Journal of Medical Microbiology, will be a more socio-culturally acceptable alternative to needles. In other words, people will be less resistant to eating a mundane banana than to taking a shot in the arm. The Journal also states that the new edible vaccine technology may serve a dual purpose, doubling as a means of birth control.

As calls are made for lithium to be added to water supplies world-wide and genetically modified organisms spread throughout the ecosystem, the elite agenda of "...intervening in the course of human behavior on a global scale..." is fast becoming reality.

College Education?: Shocking New Research Proves That Our College Students Are Learning Next To Nothing
Tax-exempt Foundations in the U.S. Operate to Promote Collectivism (Communism)
Student Loan and College Education Scam
