The Personal Website of Mark W. Dawson

Containing His Articles, Observations, Thoughts, Meanderings,
and some would say Wisdom (and some would say not).

The Problems with Modern Science

Table of Contents
  1. Introduction
  2. Acknowledgments
  3. The Problems with Science
    1. The Limitations of Science
    2. The End of Science?
    3. The Issues and Concerns with Science
      1. Science and Mathematics
      2. The Arrow of Time
      3. Statistical and Probability Methods
      4. Hard Data vs. Soft Data
      5. Data Mining, Data Massaging and Data Quality
        1. Data Mining
        2. Data Massaging
        3. Data Quality
      6. Computer Modeling Issues, Concerns & Limitations
      7. GIGO – Garbage In Garbage Out
      8. Open and Closed Systems
    4. The Troubles with Science
      1. Big Science
      2. Publish or Perish
      3. Studies and Statistics Show
        1. Studies Show
        2. Statistics Show
      4. Peer Review
      5. Time to Think
      6. Shoehorning
      7. Group Think
      8. Consensus
    5. Scientific Speculation
      1. String (or M Theory) Theory
      2. The Multiverse (A Part of M Theory)
    6. It's Time to Rethink the Nobel Prizes
  4. Miscellaneous Thoughts
  5. Final Thoughts
  6. Further Readings
  7. Disclaimer


This article is an outline of The Problems with Science in the latter half of the 20th century and the 21st century. It is not about the science itself, but about the way modern science is pursued. It does not delve into the details of science and utilizes no mathematics, but instead highlights the issues regarding Modern Science (post-World War II). This paper was written to provide the general public with background on these problems so that, when they encounter public policy issues that utilize science, they will have a basis for interpreting the scientific information.

I should point out that I am NOT a scientist or engineer, nor have I received any education or training in science or engineering. This paper is the result of my readings on this subject over the past decades. Many academics, scientists, and engineers would critique what I have written here as neither accurate nor thorough. I freely acknowledge that these critiques are correct. It was not my intention to be accurate or thorough, as I am not qualified to give an accurate or thorough description. My intention was to be understandable to a layperson so that they can grasp the concepts. The entire education and training of academics, scientists, and engineers is based on accuracy and thoroughness, and as such, they strive for this accuracy and thoroughness. When writing for the general public, however, this accuracy and thoroughness can often lead to less understandability. I believe it is essential for all laypersons to grasp the concepts within this paper, so they can make more informed decisions in those areas of human endeavor that deal with this subject. As such, I did not strive for accuracy and thoroughness, only understandability.


The Author wishes to thank the editors and contributors of the Wikipedia website. Their many fine articles on this subject have made my task in creating this paper much easier. I have attributed and hyperlinked the Wikipedia article whenever I have utilized it in this article (and placed the article in a frame such as this). For a fuller understanding of these topics, I would direct you to the appropriate Wikipedia article.

I would also acknowledge that the graphics contained in this article were obtained from various websites. I have tried to obtain non-copyrighted images, and if I have failed to do so, I sincerely apologize to the copyright holders for not attributing the images to them.

The Problems with Science

The Limitations of Science

There are some things that scientists cannot explain because they are outside the realm of science. The best examples of this are "Is there a God?" and "What is the nature of God if He exists?" Another scientific limitation is the nature of the mind and consciousness. Scientists are beginning to explain the physiology of the brain and how it works, but they cannot explain what the brain's connection to the mind or consciousness is, and many scientists believe that this question has no scientific explanation. Scientists also cannot explain why we love someone, what beauty is, why the arts affect us, and many other aspects of being human. These are questions of metaphysics, philosophy, theology, morality, and ethics, which cannot be answered by science.

There are also things within science that cannot be proven. To prove something, science utilizes mathematics, logical reasoning, and observation and experiments. But science never proves anything - it simply states that a scientific theory best fits the mathematics, logical reasoning, and observation and experiments. However, mathematics and logic have their limitations. In fact, it has been proven (mathematically) that mathematics and logic cannot prove everything (mainly by Bertrand Russell and Kurt Gödel - two of the greatest logicians and mathematicians of the 20th century). It has also been shown (by Werner Heisenberg with his Uncertainty Principle) that observations and experiments can never be completely accurate. Therefore, nothing can be completely proven. It has therefore been said that Truth is Bigger than Proof, and many scientists rely on Belief and Intuition to achieve their results. This does not make their work less scientific, but when and where you run into the limitations of mathematics, logical reasoning, and observation and experiments, a scientist must supplement their proofs with belief and intuition. The best explanation of this I have ever read is in Dr. Michael Guillen's book "Amazing Truths", Chapter 7 - "The Certainty of Uncertainty". Another very good book on what science cannot prove, but more difficult to read and comprehend, is "The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us" by Noson S. Yanofsky.

The End of Science?

Is science as we have known it coming to an end? For the last few decades a quiet debate on this question has been occurring within the scientific community. Modern science has led to the discovery of General Relativity, Quantum Physics, DNA & Molecular Biology, and Modern Evolution (the evolution of the universe as well as the evolution of life). These are the BIG questions on how the Universe works. Are there any BIGGER questions that science can discover, or is there only the filling in of details for the already discovered BIG questions? It is also questionable whether there will be any future paradigm shifts on the BIG questions.

The other issue is that in physics we are approaching the boundaries of the knowable. Quantum Physics is now examining and experimenting with sub-atomic forces and particles (the quanta and the structure and properties of these quantum forces and particles). However, due to the extremely small sizes and weak states of the quanta, as well as inherent scientific limitations in examining the very small and the very weak, it may never be possible to directly observe or experiment on these quanta (no one has directly observed a quantum - they have only observed the traces of quantum particles and forces). And the technology needed to do this is very expensive and energy-intensive.

In addition, modern String and Multiverse Theory (M Theory) may be impossible to prove or disprove. This is because M Theory requires multi-dimensional space that we could never observe, as it is outside the bounds of our universe. As they cannot be scientifically observed or experimented upon, String and M Theory fall within the category of Scientific Speculation, and therefore they are a scientific belief and not a scientific fact. The mathematics for String and M Theory is very good and elegant, but just because mathematics says that something is possible does not mean that it has happened, is happening, or may happen. It is just as possible that it has never happened, is not happening, and will never happen. A good TED talk on this issue is "Have we reached the end of physics?" by Harry Cliff.

It is also taking larger amounts of monies to make smaller scientific discoveries. At what point is it not worth the investment of the monies for the return on the scientific discoveries? These and other questions have led scientists, philosophers, and even politicians to question the future of scientific inquiry and discovery, and the monies to be spent on science (see “Big Science”).

For more information on this subject, I would recommend two books: "The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age" by John Horgan, and "The End of Discovery: Are We Approaching the Boundaries of the Knowable?" by Russell Stannard.

The Issues and Concerns with Science

Modern scientists have tools and techniques that were unavailable to previous scientists. Yet these tools and techniques have several issues and concerns as to their limitations, accuracy, and appropriateness. There are also a few unanswered questions in science that could potentially have a significant impact on the science. Some of the most important are as follows.

Science and Mathematics

Science and mathematics are intertwined, and it has been said that all good science has a mathematical basis. Mathematics, however, is not grounded in reality - it is abstract. Mathematicians will often develop mathematical theorems that have no (apparent) relationship to the universe. The theorem can be mathematically proven but may have no basis in the real universe. However, it has often been the case that a mathematical theorem is developed that future scientists can utilize to explain their theories. Scientific theories must be grounded in observations and experiments based on reality and utilize mathematics to buttress or help prove the theory. A scientific theory is not proven because the mathematics is correct, but it can be disproven if the mathematics is incorrect. Many scientists have become so enamored by the mathematics of their science that they have forgotten that just because mathematics says that something is possible does not mean that it has happened, is happening, or may happen. It is just as possible that it has never happened, is not happening, and will never happen. Therefore, mathematics is only a tool for science – not a proof of the science.

In today’s science, there has been a movement to substitute mathematical proofs for observational and experimental proofs in some fields of science. This is due to the difficulties in obtaining observational and experimental proofs, given the very small or very large sizes and times to be observed or experimented upon. When this occurs in science, we should not abandon observational and experimental proofs but instead categorize the science as speculation awaiting observational or experimental proof. If you substitute mathematical proof, then you no longer have science; you have a belief.

This issue is often intertwined with “Studies and Statistics Show”, as I shall explain further on. Another issue with science and mathematics is the utilization of Infinity. It is well known in science and mathematics that you must account for infinity in any scientific or mathematical theory, but that you cannot utilize infinity to prove the science or mathematics. This is because if you utilize infinity as a proof you can prove anything. And any science or mathematics that proves anything proves nothing. Beware any science or mathematics that requires infinity in its proof, as it is probably wrong.

The Arrow of Time

The arrow of time refers to the questions of what the meaning of time is, why and how time flows, and what the physical nature of time is. In the early 20th century the eminent scientist Arthur Eddington postulated that entropy was the cause of the arrow of time. Entropy is an idea that comes from a principle of thermodynamics dealing with energy. It usually refers to the idea that everything in the universe eventually moves from order to disorder, and entropy is the measurement of that change. Eddington believed that entropy was the reason for the arrow of time. This has been the accepted explanation of time's arrow since it was postulated, as no other scientific idea on time has arisen that provides a satisfactory explanation to scientists. Yet this idea has some inherent problems that in many ways make it unsatisfactory.

To date, there has been no scientific resolution of this question. Yet understanding time is essential to a complete understanding of all areas of science. Some scientific work on time has been done in the past century, but it is well known that this is an intractable problem. Those scientists who have explored time refer to it as a rabbit hole: once you enter the hole you very rarely get out of it, and often the only way out is to abandon the study of time altogether. Science needs to resolve the issue of time in the near term in order to have a fuller understanding of scientific processes.

Statistical and Probability Methods

Statistical and probability methods are often incorporated into scientific observations and experimental results, and into scientific computer models and simulations. However, when this happens, the statistical and probability methods are often educated guesstimates. If the algorithms and data are incorrect, the science or the computer models or simulations will be incorrect. In addition, the more you actually know, the more reliable your computer model or simulation. This implies that the less you know, and the more you rely on statistical and probability models, the less reliable your computer model or simulation. However, it is not possible to know everything, so you must resort to statistical and probability methods. This involves utilizing the Data Mining, Data Massaging, and Data Quality techniques discussed in the next section. When you do this, however, you must reveal the algorithms, the raw data, and the data mining, massaging, and quality controls that you have utilized. This is necessary so that other scientists and mathematicians can examine what has been done to verify its veracity. If this information is not revealed, or is obscured, the computer model or simulation is highly suspect. Unfortunately, far too often this information is not fully revealed in scientific research. You must also remember the following famous quote about statistics:

If you torture the data long enough, it will confess to anything. - commonly attributed to the economist Ronald Coase

And always remember the humorous science cartoons of Sidney Harris.
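The point about guesstimated inputs can be made concrete with a minimal sketch (a hypothetical Monte Carlo model; the failure probabilities and all other numbers are invented for illustration). The model's only "statistical method" is an assumed daily failure probability, and a guesstimate that is off by just 0.002 shifts the model's answer by roughly twenty percent:

```python
import random

# Hypothetical sketch: a Monte Carlo model whose sole input is an assumed
# daily failure probability. All numbers are invented for illustration.
def simulate_failures(p_daily, days=365, trials=10_000, seed=42):
    """Average number of failures over `days`, across many simulated trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(days) if rng.random() < p_daily)
    return total / trials

true_rate = simulate_failures(0.010)   # the (unknowable) true parameter
guess_rate = simulate_failures(0.012)  # an educated guesstimate, 0.002 off

print(round(true_rate, 2), round(guess_rate, 2))
```

The true expectation here is 365 × 0.010 ≈ 3.65 failures per year, while the guesstimate yields about 4.38 - a reminder that the reliability of the output can never exceed the reliability of the guessed inputs.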

Hard Data vs. Soft Data


Hard data - is a verifiable fact that is acquired from reliable sources according to a robust methodology.

Soft data - is data based on qualitative information such as a rating, survey or poll.

The Difference -

Hard data implies data that is directly measurable, factual and indisputable.

Soft data implies data that has been collected from qualitative observations and quantified. This doesn't mean that such data is unreliable. In many cases, the best data available is soft data. It is common to base business decisions on soft data such as customer satisfaction and product reviews.

Hard data is the foundation of Science. Without hard data the scientific results may be questionable (but not always). Even with hard data there are other problems with data utilization in science. Some of these problems are as follows.

Data Mining, Data Massaging and Data Quality

Data Mining

Data mining is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

The term "data mining" is in fact a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. The book Data mining: Practical machine learning tools and techniques with Java (which covers mostly machine learning material) was originally to be named just Practical machine learning, and the term data mining was only added for marketing reasons. Often the more general terms (large scale) data analysis and analytics – or, when referring to actual methods, artificial intelligence and machine learning – are more appropriate.

The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but do belong to the overall KDD process as additional steps.

- From the Wikipedia article on "Data Mining"

Data Mining in Science is often utilized to aggregate data from varying and disparate sources for use in computer models. Data mining is not an easy task, as the algorithms used can get very complex and the data is not always available in one place; it needs to be integrated from various heterogeneous data sources. These factors also create some issues. Although Data Mining is very important and helpful, you need to be very careful when utilizing it. If done incorrectly or improperly, you will introduce the problem of Garbage In (see below) into your computer model.
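One of the tasks the quoted passage names, anomaly detection (finding "unusual records"), can be sketched in a few lines of Python. This is a hypothetical illustration with invented sensor readings, not any particular toolkit's method: it flags any record more than 2.5 sample standard deviations from the mean.

```python
from statistics import mean, stdev

# Hypothetical sketch of anomaly detection: invented sensor readings,
# with one record that clearly does not belong.
readings = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.3, 9.9, 10.0, 42.0]

mu = mean(readings)
sigma = stdev(readings)
# Flag records more than 2.5 sample standard deviations from the mean.
anomalies = [x for x in readings if abs(x - mu) / sigma > 2.5]
print(anomalies)  # the 42.0 record is flagged
```

Note the subtlety such a sketch exposes: the outlier itself inflates the mean and standard deviation, so a naive threshold can miss the very record it should flag - one small example of how an incorrectly tuned mining step produces Garbage In.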

Data Massaging

Data massage is a term for cleaning up data that is poorly formatted or missing required data for a particular purpose. The term implies manual processing or highly specific queries to target data that is breaking or limiting an automated process or analysis. The term is also informally used to indicate outliers in data were dropped because they were interfering with visual presentation or confirmation of a particular theory. As such, data massage has potential ethical, compliance and risk implications.

Data Massaging in Scientific Research: When Does It Go Too Far?

One of the joys of research is feeding a mass of data into a computer program, pushing the return button, and seeing a graph magically appear. A straight line! That ought to give some nice kinetic data. But on second glance the plot is not quite satisfactory. There are some annoying outlying points that skew the rate constant away from where it ought to be. No problem. This is just a plot of the raw data. Time to clean it up. Let’s exclude all data that falls outside the three-sigma range. There, that helped. Tightened that error bar and moved the constant closer to where it should be. Let’s try a two-sigma filter. Even better! Now that’s some data that’s publishable.

You have just engaged in the venerable practice of data massaging. A common practice, but should it be?

Every scientist will agree that you should not choose data—selecting data that supports your argument and ignoring data that does not. But even here there are some grey areas. Not every reaction system gives clean kinetics. Is there anything wrong with studying a system that can be analyzed, rather than beating your head against the wall of an intractable system? Gregor Mendel didn’t think so. In his studies of plant heredity, he did not randomly sample data from every plant in his garden. He found that some plants gave easily analyzed data while others did not. Naturally, he studied those that gave results that made sense. But among those systems he studied, he did not pick and choose his data. Even some of the best scientists will apply what they consider rigorous statistical filters to improve the data, to clean it up, to tighten the error bars. Is this acceptable?

Some statisticians say it is not. They argue that no data should be excluded on the basis of statistics. Statistics may point out which data should be further scrutinized but no data should be excluded on the basis of statistics. I agree with this point of view. When you “improve” data, you exclude data. Should not all the data be available to the public? If there is a wide spread in the data, is not that fact in itself a valuable piece of information? A reader ought to know how reliable the data is and not have to guess how good it was before the two-sigma filter was applied.

Is data massaging unethical? Not if you clearly state what you have done. But the practice is unwise and ought to be discouraged.
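The sigma-filtering described above can be sketched in a few lines of Python. The "rate constant" measurements are invented for illustration; note how the two-sigma filter both moves the mean and dramatically tightens the spread - which is exactly the information a reader loses if the filtering is not disclosed:

```python
from statistics import mean, stdev

# Hypothetical illustration of the sigma-filter massaging described above.
# The "rate constant" measurements are invented, with one outlying point.
measurements = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 8.9]

def sigma_filter(data, k):
    """Keep only points within k sample standard deviations of the mean."""
    mu, sigma = mean(data), stdev(data)
    return [x for x in data if abs(x - mu) <= k * sigma]

raw_mean, raw_spread = mean(measurements), stdev(measurements)
filtered = sigma_filter(measurements, 2)  # the "two-sigma filter"

print(round(raw_mean, 2), round(raw_spread, 2))            # before massaging
print(round(mean(filtered), 2), round(stdev(filtered), 2)) # after massaging
```

The filter silently drops the 8.9 reading, shifting the reported mean from about 5.39 to 5.0 and shrinking the error bar roughly tenfold - publishable-looking numbers, but only honest if the dropped point and the filter are reported alongside them.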

Data Quality

One of the problems for modern science is obtaining precise data, precision being in the measurement of reality. Very small or very large measurements of size and/or time are often difficult to obtain due to the nature of what is being examined. Hard data (actual measurements) has only been accurate for the latter half of the 20th century, and it often lacks the precision required for computer modeling or simulation. Data from the 21st century has been more precise and therefore more useful for computer models and simulations. There is also the problem that there may be insufficient data for accurate computer modeling or simulation. For periods prior to the latter half of the 20th century, it becomes necessary to utilize soft data (extrapolated data) in many areas of science. For instance, for climate modeling purposes we need to know the temperature of land masses, atmosphere, and oceans for the past several thousand years over various parts of the Earth. As there were no meteorological stations recording this information, we must extrapolate it from such things as ice core samples, tree ring growth, sediment deposits, etc. This extrapolation process has its own issues, concerns, and limitations, and depending on what you extrapolated and how, the data must be carefully and correctly massaged to be useful. Even after it is carefully and correctly massaged, it is suspect, as it has margins of error that could impact the climate model. This is true for many of the other sciences as well. You must always remember that the higher the quality of the data, the more likely the results of computer modeling or simulation are correct; but as a result of imprecise data they may also be incorrect.

Computer Modeling Issues, Concerns & Limitations

Most scientific and engineering endeavors utilize Computer Modeling. Therefore, you need to know the issues, concerns, and limitations of Computer Modeling to determine its impact on those endeavors. I have written another paper, Computer Modeling, that examines these issues, concerns, and limitations, and I would direct you to it to better understand Computer Modeling. However, its conclusions are as follows.

Computer modeling has at its core three levels of difficulty: Simple Modeling, Complex Modeling, and Dynamic Modeling. Simple modeling is when you are working on a model that has a limited function; a few hundred (or maybe a thousand) constants and variables within the components of the model, and a dozen or so interactions between the components of the model. Complex modeling occurs when you incorporate many simple models (subsystems) together to form a whole system, or where there are complex interactions and/or feedback within and between the components of the computer model. Not only must the subsystems be working properly, but the interactions between the subsystems must be modeled properly. Dynamic modeling occurs when you have subsystems of complex modeling working together, or when external factors that are complex and varying are incorporated into the computer model. Dynamic computer models tend to be of open systems, while simple computer models are usually of closed systems. Complex computer models tend to have subsystems that are closed computer models, while the entire system can be an open system or a closed system.

The base problem with computer modeling is twofold: 1) verification, validation, and confirmation, and 2) correlation vs. causation. Verification and validation of numerical models of natural systems are impossible. This is because natural systems are never closed and because model results are always nonunique. The other issue, correlation vs. causation, is how you can know whether the results of your computer model reflect the actual cause (causation) or merely appear to be the cause (correlation). To determine this, you need to mathematically prove that your computer model is scientifically correct, which may be impossible.
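The nonuniqueness point can be made concrete with a small Python sketch (the model equations and data points are invented): two different models agree exactly at every observed data point, so no amount of agreement with the observations can tell you which one reflects the actual cause.

```python
import math

# Hypothetical sketch of "model results are always nonunique": two model
# equations that coincide at every observed point yet differ in between.
model_a = lambda x: 2.0 * x
model_b = lambda x: 2.0 * x + math.sin(math.pi * x)  # hidden extra term

observed_x = [0, 1, 2, 3, 4, 5]  # measurements taken only at whole numbers
agree = all(abs(model_a(x) - model_b(x)) < 1e-9 for x in observed_x)
gap_between = abs(model_a(2.5) - model_b(2.5))  # disagreement off the grid

print(agree, round(gap_between, 2))
```

Both models "validate" perfectly against the observations (the sine term vanishes at every integer), yet they disagree by a full unit halfway between measurements - matching the data is correlation, not proof of causation.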

Therefore, all computer models are wrong – but many of them are useful! The first thing to keep in mind when dealing with computer models is that when thinking about a computer model it is very important to remember three things:

  1. That we know what we know, and we need to be sure that what we know is correct.
  2. That we know what we don't know, and that allowances are made for what we don’t know.
  3. That we don't know that we don't know, which cannot be allowed for as it is totally unknown.

It is numbers 2 and 3 that are often the killers in computer modeling, and they frequently lead to incorrect computer models.

You also need to keep in mind the other factors when utilizing computer models:

  1. Constants and Variables within a component of the computer model are often imprecisely known leading to incorrect results within the components, which then get propagated throughout the computer model.
  2. The interactions between the components are often not fully understood and allowed for, and therefore not computer modeled correctly.
The feedback and/or dynamics within the computer model are imprecise, or not fully known, which leads to an incorrect computer model.

All these factors will result in the computer model being wrong. And as always remember GIGPGO (Garbage In -> Garbage Processing -> Garbage Out).

You should also be aware that when computer modeling is utilized to model a long period of time, the longer the time modeled, the more inaccurate the computer model will become. This is because the dynamics and feedback errors within the computer model build up, which affects its long-term accuracy. Therefore, long-term predictions of a computer model are highly suspect. Another thing to be aware of is that if you are computer modeling a shorter time period, it needs to be a sufficient period of time to determine the full effects of the model and to provide results that are truly useful. Too short a time period will provide results too inconclusive (or wrong) to be practicable. Therefore, short-term predictions of a computer model can also be suspect.
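This build-up can be sketched in a few lines of Python (a hypothetical feedback model; all numbers are invented). The same simple model is run with the true growth rate and with a rate that is off by only 0.001, and the relative error between them is compared after short, medium, and long runs:

```python
# Hypothetical sketch of error build-up over the modeled time span.
# All rates and step counts are invented for illustration.
def run_model(rate, steps, state=100.0):
    """Iterate one feedback step per modeled time unit."""
    for _ in range(steps):
        state *= rate
    return state

true_rate, model_rate = 1.030, 1.031  # input parameter off by only 0.001

for steps in (10, 100, 500):
    truth = run_model(true_rate, steps)
    model = run_model(model_rate, steps)
    error_pct = 100 * abs(model - truth) / truth
    print(steps, round(error_pct, 1))
```

After 10 steps the model is off by about 1%; after 500 steps the same tiny input error has compounded to over 60% - the reason long-term predictions deserve extra suspicion.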

And finally, Chaos, Complexity, and Network Science have an impact on computer models and cannot be properly accounted for, which affects the accuracy of the computer model.

The lesson from this topic is that a computer model is not an answer but a tool – and don’t trust the computer model but utilize the computer model. The computer modeling system itself may contain errors in its programming. The information that goes into the computer model may be incorrect or imprecise, or the interactions between the components may not be known or knowable. And there may simply be too many real-world constants and variables to be computer modeled. Use the computer model as a tool and not an answer, and above all use your common sense when evaluating the computer model. If something in the computer model is suspicious examine it until you understand what is happening.

GIGO – Garbage In Garbage Out

At the dawn of the computer age there was an acronym that was frequently used: GIGO – Garbage In -> Garbage Out. It referred to the situation that if your inputs were incorrect, your outputs would be incorrect. It left unspoken that if your processing instructions (the computer program) were incorrect, anything in would produce incorrect outputs. The correct acronym should have been GIGPGO (Garbage In -> Garbage Processing -> Garbage Out) – not pronounceable, and an admission that computer programs contain mistakes. As computer programs became more complex, GIGPGO became more pronounced. Today’s computer programming is very complex, and the most complex programming is computer modeling. In computer modeling, the number of formulas (algorithms) and the number of interrelationships between the algorithms are so complex that rarely is a computer model written by one person. Many computer models are produced by general computer modeling toolkits, which are themselves very complex. Computer models and computer modeling toolkits are usually written by a team of very knowledgeable and very experienced computer programmers. But team programming is fraught with problems, as team members are human, and humans make mistakes and miscommunicate, the results of which are imperfect computer modeling programs. Many team programming efforts have extensive procedures, policies, and testing to reduce these potential mistakes, but mistakes will happen. In addition, the computer programmers are rarely Subject Matter Experts, and Subject Matter Experts are rarely (excellent) computer programmers. Therefore, GIGPGO is always present, and indeed it has been stated that all computer models are incorrect – but many are useful. GIGPGO must always be kept in mind when examining the results of a computer model, as all computer models contain GIGPGO.

It should also be noted that fixing an error in a computer model may be easy or difficult. Fixing the consequence of utilizing an incorrect computer model is usually difficult, and the consequences can often lead to a disaster. Many of today’s computer models are reliable but imperfect. When mistakes are uncovered they are corrected and learned from (every problem is a learning experience). The accumulation of learning experiences leads to better and more reliable computer models, but many more learning experiences are to be expected, and many improvements to computer modeling software will have to be undertaken.

GIGPGO also applies to Scientific Inquiry, as well as to all human endeavors. If the observations and experiments produce garbage in, then nothing you do with them can produce good scientific results. If you have good observational and experimental data but process it incorrectly, then the results will be garbage. Scientists are very cognizant of this and attempt to ensure that the observations and experiments produce quality results and that these results are interpreted correctly. But GIGPGO does happen in science, mostly because of equipment problems or human error. Hopefully, peer review weeds out these problems, but peer review has its own issues and concerns, and there have been too many scientific papers that passed peer review that were not worthy of passing.

Open and Closed Systems

[Figure: Properties of isolated, closed, and open systems in exchanging energy and matter.]

A closed system is a physical system that does not allow certain types of transfers (such as transfer of mass and energy transfer) in or out of the system. The specification of what types of transfers are excluded varies in the closed systems of physics, chemistry or engineering.

-          from the Wikipedia Article on Closed System

An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a constant volume system or a flow system.

The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences.

In the natural sciences, an open system is one whose border is permeable to both energy and mass. In thermodynamics, a closed system, by contrast, is permeable to energy but not to matter.

Open systems have a number of consequences. A closed system contains limited energies. The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of the study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes.

-          from the Wikipedia Article on Open System (systems theory)

Closed systems are easier and more accurate to observe, experiment on, and hypothesize about for scientific purposes, but the results are less reliable in the real world, which is an open system. Open systems are much more difficult to observe, experiment on, and hypothesize about, and are less accurate representations of the real world, as the inputs and outputs of an open system are numerous, imprecise, and variable.

At the beginning of the Galilean Age of Science, all scientific research was on closed systems. This was due to a lack of scientific knowledge and scientific equipment, which made it impossible to experiment on open systems. It took over two hundred years for our scientific knowledge and equipment to progress to the point where science could investigate open systems. With the development of modern supercomputers, scientific observations and experiments on open systems became more practical. Yet open systems are much more difficult to observe and experiment on, as the number of variables and constants is large, imprecise, or unknown. Today, much of science is concerned with open systems. The means to observe an open system are still limited by the equipment science utilizes, and experimentation is generally done by computer modeling (which has its own problems, as discussed above).

As such, the science of open systems is more suspect than the science of closed systems. Therefore, you need to be more concerned about the validity of the science of open systems.
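To make the distinction concrete, here is a minimal sketch (entirely my own invention, with made-up numbers): a "closed" model of a cooling cup of coffee predicts its temperature from well-known quantities, while the real, open system is also nudged by inputs the model never measured.

```python
# Illustrative sketch only; all quantities are invented.

def predict_closed(temp, ambient, k, steps):
    """Newton's law of cooling with a single known heat-loss rate k."""
    for _ in range(steps):
        temp += -k * (temp - ambient)
    return temp

def evolve_open(temp, ambient, k, disturbances):
    """The same physics, plus one unmeasured disturbance per step
    (drafts, sunlight, a top-up of hot water...)."""
    for d in disturbances:
        temp += -k * (temp - ambient) + d
    return temp

predicted = predict_closed(90.0, 20.0, 0.1, 10)
actual = evolve_open(90.0, 20.0, 0.1,
                     [0.5, -1.2, 2.0, 0.0, -0.7, 1.1, 0.3, -0.4, 0.9, -2.2])
print(round(predicted, 1))  # 44.4
print(round(actual, 1))     # differs from the closed prediction
```

The closed model is exactly solvable; the open one can only be matched to reality if every outside influence is known, which in practice it never is.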

The Troubles with Science

Science is in trouble in the 21st century, and it has been in trouble since the latter part of the 20th century. I have insufficient knowledge to provide an examination of all the issues facing science, but I have highlighted the most important (in my opinion) of these issues.

A more thorough examination of these issues can be found in the book “The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next” by Lee Smolin. This book deals with the issues of String Theory in Parts I through III, and these parts can be tough going for a layperson to read and understand (but it can be done if you commit the time and effort to think about what he says). However, Part IV – Learning Through Experience, can easily be read and understood by a layperson. And it should be read by laypersons as it is a lucid explanation of the trouble with science in regard to physics (and I suspect other areas of science as well). It is also important to understand the issues he raises as it impacts the public funding, and public support, of science.

The following are some of the troubles regarding science in the 21st century.

Big Science

Big science is a term used by scientists and historians of science to describe a series of changes in science which occurred in industrial nations during and after World War II, as scientific progress increasingly came to rely on large-scale projects usually funded by national governments or groups of governments. Individual or small group efforts, or Small Science, is still relevant today as theoretical results by individual authors may have a significant impact, but very often the empirical verification requires experiments using constructions, such as the Large Hadron Collider, costing between $5 and $10 billion.

-          from the Wikipedia Article on Big Science

Small Science refers (in contrast to Big Science) to science performed in a smaller scale, such as by individuals, small teams or within community projects.

Bodies which fund research, such as the National Science Foundation, DARPA, and the EU with its Framework programs, have a tendency to fund larger-scale research projects. Reasons include the idea that ambitious research needs significant resources devoted for its execution and the reduction of administrative and overhead costs on the funding body side. However, small science which has data that is often local and is not easily shared is funded in many areas such as chemistry and biology by these funding bodies.

-          from the Wikipedia Article on Small Science

Big Science has several inherent problems. The major problem is money and control. In order to obtain the monies, you need funding from those who are not scientists: politicians, bureaucrats, administrators, et al. These people are interested in practicable goals and tangible results. But scientific research is often not practical or tangible. In addition, these people are more interested in positive results than negative results. Positive results are proof of something, while negative results disprove something. In science, negative results are as important as positive results.

A proof of impossibility, also known as negative proof, proof of an impossibility theorem, or negative result, is a proof demonstrating that a particular problem cannot be solved, or cannot be solved in general. Often proofs of impossibility have put to rest decades or centuries of work attempting to find a solution. To prove that something is impossible is usually much harder than the opposite task; it is necessary to develop a theory. Impossibility theorems are usually expressible as universal propositions in logic (see universal quantification).

-          from the Wikipedia Proof of Impossibility

In science, a null result is a result without the expected content: that is, the proposed result is absent. It is an experimental outcome which does not show an otherwise expected effect. This does not imply a result of zero or nothing, simply a result that does not support the hypothesis. The term is a translation of the scientific Latin nullus resultarum, meaning "no consequence".

-          from the Wikipedia Article on Null Result

The general public, who funds Big Science, does not like to hear negative results, as it often considers the activity a waste of monies, which it is not. Knowing what can't be solved is as important as knowing what can be solved.

An example of this is the Superconducting Super Collider and the Large Hadron Collider.

In the early 1980s the US began plans for the Superconducting Super Collider, or SSC, which would accelerate protons to 20 TeV, three times the maximum energy that will be available at the CERN Large Hadron Collider. After a decade of work, the design was completed, a site was selected in Texas, land bought, and construction begun on a tunnel and on magnets to steer the protons.

Then in 1992 the House of Representatives canceled funding for the SSC. Funding was restored by a House–Senate conference committee, but the next year the same happened again, and this time the House would not go along with the recommendation of the conference committee. After the expenditure of almost two billion dollars and thousands of man-years, the SSC was dead.

One thing that killed the SSC was an undeserved reputation for over-spending. There was even nonsense in the press about spending on potted plants for the corridors of the administration building. Projected costs did increase, but the main reason was that, year by year, Congress never supplied sufficient funds to keep to the planned rate of spending. This stretched out the time and hence the cost to complete the project. Even so, the SSC met all technical challenges, and could have been completed for about what has been spent on the LHC, and completed a decade earlier.

The Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) sits astride the Franco-Swiss border near Geneva. It is an underground ring seventeen miles in circumference crossing the border between Switzerland and France. In it two beams of protons are accelerated in opposite directions to energies that will eventually reach 7 TeV in each beam, that is, about 7,500 times the energy in the mass of a proton. The beams are made to collide at several stations around the ring, where detectors with the mass of World War II cruisers sort out the various particles created in these collisions.

Excerpted from “The Crisis of Big Science” by Steven Weinberg.

As Steven Weinberg has stated, if the SSC had been funded appropriately, we would now have better science, at the same cost, and a decade earlier.

The Laser Interferometer Gravitational-Wave Observatory (LIGO) is another example. In the early 1990s, a scientist thought that, with the technological development of atomic clocks, lasers, computers, and construction techniques, it might be possible to detect gravitational waves. This scientist convinced a board member of the National Science Foundation (NSF) that he could detect gravitational waves. The board member pushed his proposal through the NSF despite objections from all the other members of the NSF. Their objections were that it would take many years to create this observatory and that it would be very expensive, the most expensive project ever funded by the NSF; this money and time could be better spent on other scientific endeavors that were more likely to produce scientific results.

The board member was eventually able to push it through the NSF, and 260 million dollars and five years were allocated to create this observatory, the Laser Interferometer Gravitational-Wave Observatory (LIGO). As with most government contracts, it actually took seven years and 320 million dollars to complete. At the end of this development, the scientists responsible for LIGO realized that their experiment would not work, as the threshold for detecting gravitational waves was lower than the threshold of their equipment. However, they believed that advances in technology, and what they had learned from building LIGO, could help them develop an Advanced LIGO (aLIGO) that would be able to detect gravitational waves. By this time the NSF member who had pushed through the original experiment was the Chairman of the NSF, and he pushed through the funding to create Advanced LIGO, with a budget of 360 million dollars and seven years. Again, it took longer and more money to complete, but in the end they succeeded. While in stage 4 of a 5-stage calibration process, they detected a gravitational wave in September 2015. This gravitational wave was created by the merger of two black holes over a billion light-years away. Since that time, they have detected several other gravitational waves, among them the merger of two neutron stars about 130 million light-years away. Because of this success, other gravitational-wave observatories are operating, under construction, or being planned.
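Some rough numbers (my own back-of-the-envelope figures, not from the article) show why the detection threshold was such a hard target: a passing gravitational wave changes the length of a 4-kilometer interferometer arm by far less than the width of a proton.

```python
# Back-of-the-envelope sketch; the strain and proton-radius values below
# are rough, assumed figures for illustration only.

arm_length = 4000.0      # meters: length of one LIGO arm
strain = 1e-21           # order of magnitude of a detectable strain
proton_radius = 8.4e-16  # meters, approximate charge radius of a proton

delta_l = strain * arm_length   # change in arm length
print(delta_l)                  # roughly 4e-18 meters
print(delta_l / proton_radius)  # roughly 0.005 of a proton's radius
```

Measuring a displacement a few thousandths of a proton's radius is why the first-generation instrument fell short and why aLIGO needed a decade of further technology development.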

The success of aLIGO was very exciting and opens a new way to observe the universe. It has been likened to the development of sound in motion pictures. Before sound, all you could see was the movement of what was being filmed. This produced dramatic moving images but not much in the way of storytelling or understanding of what was being filmed. Once sound was added, the dramatic impact was immense, and motion picture technology and its usefulness increased dramatically. So it will be with LIGO. Before LIGO, all that astronomers could observe were images of astronomical objects in the electromagnetic spectrum. With the success of aLIGO, astronomers now have a tool with which they can, in effect, hear the universe as they never could before. This could lead to revolutionary advances in astronomy.

Because of the foresight and persistence of one person, LIGO was funded and developed, which has led to new and significant science. If Big Science, rather than a Big Person, had made the final decision, LIGO would never have been built.

There are other problems with Big Science that hamper scientific progress: the movement from basic to applied research; scientific findings that can be classified by military interests or patented by corporations; and the sharing of data, which can be impeded for any number of reasons. The requirement for ever-increasing funding also means that a large part of scientific activity is now filling out grant requests and other budgetary, bureaucratic work. And the intense connections between academic, governmental, and industrial interests have raised the question of whether scientists can be completely objective when their research contradicts the interests and intentions of their benefactors. These and other problems with Big Science have a possible negative impact on the progress of science.

And finally, as stated earlier, it is taking ever larger amounts of monies to make smaller scientific discoveries. At what point is the investment of monies not worth the return in scientific discoveries? These and other questions have led scientists, philosophers, and even politicians to question the future of scientific inquiry and discovery, and the monies to be spent on science.

Publish or Perish

"Publish or perish" is a phrase coined to describe the pressure in academia to rapidly and continually publish academic work to sustain or further one's career.

Frequent publication is one of the few methods at scholars' disposal to demonstrate academic talent. Successful publications bring attention to scholars and their sponsoring institutions, which can facilitate continued funding and an individual's progress through a chosen field. In popular academic perception, scholars who publish infrequently, or who focus on activities that do not result in publications, such as instructing undergraduates, may lose ground in competition for available tenure-track positions. The pressure to publish has been cited as a cause of poor work being submitted to academic journals. The value of published work is often determined by the prestige of the academic journal it is published in. Journals can be measured by their impact factor (IF), which is the average number of citations to articles published in a particular journal.


The earliest known use of the term in an academic context was in a 1927 journal article. The phrase appeared in a non-academic context in the 1932 book, Archibald Cary Coolidge: Life and Letters, by Harold Jefferson Coolidge. In 1938, the phrase appeared in a college-related publication. According to Eugene Garfield, the expression first appeared in an academic context in Logan Wilson's book, "The Academic Man: A Study in the Sociology of a Profession", published in 1942.


Research-oriented universities may attempt to manage the unhealthy aspects of the publish or perish practices, but their administrators often argue that some pressure to produce cutting-edge research is necessary to motivate scholars early in their careers to focus on research advancement, and learn to balance its achievement with the other responsibilities of the professorial role. The call to abolish tenure is very much a minority opinion in such settings.


This phenomenon has been strongly criticized, the most notable grounds being that the emphasis on publishing may decrease the value of resulting scholarship, as scholars must spend more time scrambling to publish whatever they can get into print, rather than spending time developing significant research agendas. Similarly, humanities scholar Camille Paglia has described the publish or perish paradigm as "tyranny" and further writes that "The [academic] profession has become obsessed with quantity rather than quality. [...] One brilliant article should outweigh one mediocre book."

The pressure to publish or perish also detracts from the time and effort professors can devote to teaching undergraduate courses and mentoring graduate students. The rewards for exceptional teaching rarely match the rewards for exceptional research, which encourages faculty to favor the latter whenever they conflict.

Many universities do not focus on teaching ability when they hire new faculty; rather, they emphasize candidates' publications list (and, especially in technology-related areas, the ability to bring in research money). This single-minded focus on the professor as researcher may cause faculty to neglect or be unable to perform some other responsibilities.

Regarding the humanities, teaching and passing on the tradition of Literae Humaniores is given secondary consideration in research universities and treated as a non-scholarly activity.

Also, publish-or-perish is linked to scientific misconduct or at least questionable ethics. It has also been argued that the quality of scientific work has suffered due to publication pressures. Physicist Peter Higgs, namesake of the Higgs boson, was quoted in 2013 as saying that academic expectations since the 1990s would likely have prevented him from both making his groundbreaking research contributions and attaining tenure. "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964," he said. "Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."

The publish or perish culture also perpetuates bias in academic institutions. Overall, women publish less frequently than men, and when they do publish their work receives fewer citations than their male counterparts, even when it is published in journals with significantly higher Impact Factors.

-          from the Wikipedia Article on Publish or Perish

Not only is this a problem in academia and science, but it has unintended governmental and social policy consequences. With so much published, and much of it contradictory, what can be taken as fact or truth in the implementation of social or governmental policy? Knowing what is important, what is unimportant, and what is misleading when reviewing studies or statistics is crucial to discovering the truth.

Studies and Statistics Show

Studies Show

Studies can show anything. For every study that shows something, there is another study that shows the opposite. This is because every study carries the inherent biases of the person or persons conducting it, or of the person or organization that commissioned it. A good researcher recognizes these biases and compensates for them to ensure that the study is as accurate as possible. Having been the recipient of many studies (and the author of a few), I can attest to this. Therefore, you should be very wary when a person says "studies show". You should always look into a study to determine who the authors are and who commissioned it, and examine the study for any inherent biases.

Statistics Show

Everything that I said about "studies show" also applies to "statistics show". However, "statistics show" requires more elaboration, as it deals with the rigorous mathematical science of statistics. Statistics is a science that requires very rigorous education and experience to get right. The methodology of gathering, processing, and analyzing the data is very intricate. Interpreting the results accurately requires that you understand this methodology and how it was applied to the statistics being interpreted. If you are not familiar with the science of statistics, and you do not carefully examine the statistics and how they were developed, you can easily be led astray. Also, many statistics are published with a policy goal in mind and should therefore be suspect. As a famous wag once said, "Figures can lie, and liars can figure". So be careful when someone presents you with statistics. Be wary of both the statistics and the statistician.

Studies and statistics often claim to be scientific and rigorous. However, most of them are not as scientific or rigorous as we may believe. Most studies are based on statistics, and most statistics become studies. But most studies based on statistics have issues with correlation, sampling, and confidence levels, not to mention risk factors and probabilities, along with a host of other issues. The best books I have read that explain these issues are "Studies Show: A Popular Guide to Understanding Scientific Studies" by John H. Fennick and "Naked Statistics: Stripping the Dread from the Data" by Charles Wheelan.
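One concrete illustration of the sampling issue (a standard textbook formula, with sample sizes I chose myself): the margin of error of a simple yes/no poll shrinks only with the square root of the sample size, which is one reason small studies can confidently contradict one another.

```python
# The 95% margin of error for an observed proportion p from n respondents.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% confidence half-width for a polled proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 question: quadrupling the sample only halves the uncertainty.
for n in (100, 400, 1000):
    points = 100 * margin_of_error(0.5, n)
    print(f"n={n}: +/- {points:.1f} percentage points")
```

With only 100 respondents the result is uncertain by nearly 10 percentage points either way, so two such "studies" can easily land on opposite sides of the same question.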

Peer Review

Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (peers). It constitutes a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review.

-          from the Wikipedia Article on Peer Review

Peer review has become a big problem in science. With the large number of publications (due to Publish or Perish), the time and effort required for peer review have become greater. Peer reviewers often do not have the time or resources to thoroughly examine the contents of a publication. They often simply examine the publication to determine whether it meets the standards of scientific investigation, leaving it to other scientists who are knowledgeable on the topic to thoroughly investigate the content, data, and methods and determine whether the publication is scientifically correct. As a result, many scientific publications are published that are correct on the surface but suspect in the details. When defects are discovered, the publication is often modified or withdrawn. But until this happens, which can take months or years, the publication stands and is utilized in science. This can lead to incorrect or false science. The question is: what percentage of scientific publications contain incorrect science? The answer is probably unknowable, but some research has put this number above 50%. This percentage is unacceptably high, and this issue needs to be thoroughly examined and rectified.

Therefore, when you hear that a scientific publication has been peer reviewed, you can assume that it has not violated scientific methods, but it may still be scientifically incorrect. This is especially true for newer publications, as the science within has not been independently verified. Until independent verification of a publication has occurred, you should be skeptical of the science in it.
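The arithmetic behind the "above 50%" claim can be sketched with a simple calculation of the kind popularized by John Ioannidis; the parameter values below are my own illustrative assumptions, not figures from any particular study.

```python
# What fraction of "positive" published findings are actually false?

def false_finding_rate(prior, power, alpha):
    """Fraction of positive results that are false.

    prior -- fraction of tested hypotheses that are really true
    power -- probability a real effect is detected
    alpha -- probability a null effect is wrongly declared positive
    """
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return false_pos / (true_pos + false_pos)

# Well-powered field: roughly a third of positives are false.
print(round(false_finding_rate(0.10, 0.80, 0.05), 2))  # 0.36
# Underpowered field chasing long-shot hypotheses: most positives are false.
print(round(false_finding_rate(0.10, 0.20, 0.05), 2))  # 0.69
```

Nothing here requires fraud: when few of the tested hypotheses are true and studies are small, ordinary statistics alone pushes the false-finding rate past 50%.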

Time to Think

This topic is also closely related to "Shoehorning", in that if you do not have the time to think you often shoehorn. A historical example of "Time to Think" is as follows:

Albert Einstein had difficulty obtaining employment as a university teacher or research assistant upon his graduation from the Swiss Federal Polytechnic in Zürich. In German-speaking academia at that time, the way you obtained these positions was through a recommendation from your professor, and Einstein could not obtain a recommendation from any of his professors, as they disliked him for always questioning their authority and knowledge. He was therefore unable to obtain a job in his chosen profession. However, his uncle and Marcel Grossmann's father were able to obtain a job for Einstein at the patent office in Bern, Switzerland. As Einstein was newly married and had a child on the way, he accepted this job to support his family. This unfortunate circumstance, however, turned out to be one of the best things that could have happened to him.

His position (Patent Clerk, 2nd Class) at the Swiss patent office in Bern (from 1902 to 1909) required him to show up punctually for work, where a stack of patent applications was waiting on his desk for review. He was responsible for reviewing patent applications for any scientific problems or inconsistencies; if he found any, the application was rejected. Otherwise, it was passed on to a Patent Clerk (1st Class), who reviewed the application to determine whether another patent conflicted with it. He was so good at this job that it took him only a few hours to go through the stack of applications assigned to him. He would therefore work on a few patent applications, then pause to read physics journals and think about what he had read, then review a few more applications, and so on throughout the day. This allowed Einstein plenty of time to keep current on what was happening in the world of physics. In 1904 he began concentrating on three subjects in physics: the existence of atoms, the photoelectric effect, and special relativity. In 1905 he had his "Annus Mirabilis" (Miracle Year), in which he published four papers on these three subjects (and a fifth paper in 1906) that resolved them. When Einstein was hired as a university professor two years later, he was given the latitude to pursue whatever interested him, a freedom he utilized to develop his theory of General Relativity.

It is this freedom to choose what and when to think that is an issue in modern science.

Today, the best and brightest new scientists are often identified during their university years. Upon graduation, they are often hired by universities as graduate assistants, or by research institutes as research assistants. As such, they are directed and supervised by their university professors or research scientists as to what topics to explore. Their workload is labor-intensive and time-consuming, so much so that they often do not have time to think about other topics that may interest them. They are also motivated to establish their credentials in the hope of becoming a university professor or research scientist. By the time they accomplish this, they are often in their mid-thirties to early forties. It is a well-known historical fact that most revolutionary scientific discoveries are made by scientists in their twenties or early thirties. So, by the time they can think about what they want to think about, they are past their prime age of discovery. Is this "time to think" problem impacting scientific discoveries, and how can science ameliorate it? In my opinion, this is a problem that needs to be addressed.

Shoehorning


A historic, long-standing problem in the study of the Solar System was that the orbit of Mercury did not behave as required by Newton's equations. This problem became observable in the 19th century, as advancements in telescopes and measuring instruments made it possible to measure Mercury's orbit accurately. As Mercury orbits the Sun, it follows an ellipse...but only approximately. It was found that the point of Mercury's closest approach to the Sun does not always occur at the same place in space but slowly moves forward along Mercury's orbit. This effect is known as precession. The anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics.

This discrepancy could not be accounted for using Newton's formalism, and many ad hoc fixes were devised to explain it. One explanation was that an undiscovered planet orbited between the Sun and Mercury, causing the perturbation of Mercury's orbit that showed up as precession. The race was then on for astronomers to discover this planet, which was even given the name "Vulcan". A few astronomers claimed to have discovered Vulcan, but these discoveries turned out to be equipment anomalies, observational errors, or very small, long-duration sunspots. No astronomer ever discovered Vulcan, for the simple reason that it did not exist.

When Einstein developed his Theory of General Relativity, he applied it to the problem of Mercury's orbit. Without any adjustments whatsoever, General Relativity correctly predicted the exact orbit of Mercury. When he did this, Einstein realized that General Relativity was correct. However, he needed an additional observation of a phenomenon that Newton's Universal Gravitation could not account for in order to prove General Relativity correct. He found this in his prediction of the deflection of starlight, and General Relativity has since become a foundation of modern science.

This historic problem is an example where the then-current scientific theories could not explain a scientific anomaly. Torturous means were devised to shoehorn the anomaly into the then-current theory, to no avail. It required a brilliant mind, Albert Einstein, to reject the then-current approaches and rethink the problem. Rethinking resolved the problem and set science on a new course.
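For readers comfortable with a single formula (a detail this article otherwise avoids, offered purely as an aside), the extra perihelion advance General Relativity predicts per orbit is

```latex
\Delta\phi \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a \,(1 - e^{2})}
```

where $M_{\odot}$ is the Sun's mass, $a$ is Mercury's semi-major axis, and $e$ is its orbital eccentricity. Plugging in Mercury's values gives about $5\times10^{-7}$ radians per orbit, which accumulates to the famous 43 arcseconds per century, precisely the anomaly left over after all Newtonian effects are accounted for.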

Today, scientific discoveries occur that do not precisely fit scientific theory. In most cases, the theory can be modified to account for these discoveries. This is a good thing for the advancement of science: just because a new discovery does not fit the current theory does not invalidate that theory. A new discovery usually requires only a minor adjustment to the theory. The question is whether this shoehorning is appropriate or whether a new scientific theory needs to be developed. Most often the answer is no. Sometimes, however, the answer is yes, and unfortunately many of today's scientists are unwilling to say yes, as it may impact their funding and perhaps their prestige. More of today's scientists need to be willing to admit they were wrong and say yes to the need to replace a scientific theory.

And in no case that I know of does a new discovery invalidate the BIG questions in science (see The End of Science?). It would take a huge new discovery to force a rethinking of the BIG questions in science!

Group Think



Groupthink occurs when a homogeneous, highly cohesive group is so concerned with maintaining unanimity that it fails to evaluate all its alternatives and options. Groupthink members see themselves as part of an in-group working against an out-group opposed to their goals. You can tell if a group suffers from groupthink if it:

  1. overestimates its invulnerability or high moral stance,
  2. collectively rationalizes the decisions it makes,
  3. demonizes or stereotypes outgroups and their leaders,
  4. has a culture of uniformity where individuals censor themselves and others so that the facade of group unanimity is maintained, and
  5. contains members who take it upon themselves to protect the group leader by keeping information, theirs or other group members', from the leader.

Groups engaged in groupthink tend to make faulty decisions when compared to the decisions that could have been reached using a fair, open, and rational decision-making process. Groupthinking groups tend to:

  1. fail to adequately determine their objectives and alternatives,
  2. fail to adequately assess the risks associated with the group's decision,
  3. fail to cycle through discarded alternatives to reexamine their worth after a majority of the group discarded the alternative,
  4. do not seek outside expert advice,
  5. select and use only information that supports their position and conclusions, and
  6. do not make contingency plans in case their decision and resulting actions fail.

Group leaders can prevent groupthink by:

  1. encouraging members to raise objections and concerns;
  2. refraining from stating their preferences at the onset of the group's activities;
  3. allowing the group to be independently evaluated by a separate group with a different leader;
  4. splitting the group into sub-groups, each with different chairpersons, to separately generate alternatives, then bringing the sub-groups together to hammer out differences;
  5. allowing group members to get feedback on the group's decisions from their own constituents;
  6. seeking input from experts outside the group;
  7. assigning one or more members to play the role of the devil's advocate;
  8. requiring the group to develop multiple scenarios of events upon which they are acting, and contingencies for each scenario; and
  9. calling a meeting after a decision consensus is reached in which all group members are expected to critically review the decision before final approval is given.

-          From the Oregon State University Summary of Groupthink.

With the rise of Big Science, and of Universities and Research Organizations pursuing grants and funding to pursue Big Science, and the need of young doctoral students to obtain positions and employment after graduation, the unfortunate tendency toward groupthink has become part of modern science (see the chapter “How Do You Fight Sociology?” in Lee Smolin’s book for a further explanation).


Consensus

Science requires “Consensus” to accept the results of science as factual. However, consensus does not always mean that the science is correct. Indeed, much of historical scientific consensus has been overturned by new scientific discoveries. The willingness of scientists to admit that they may be wrong, and to examine new discoveries and reject the old consensus if the new discovery is better science, is very important to the advancement of science. However, in today’s science consensus is often utilized to “freeze out” dissenting science, especially regarding employment, funding, and grants to explore dissenting science. Much of the funding for modern science comes from government agencies or private organizations that are reluctant to fund dissenting science, as it is quite possible that it may not achieve the intended results. But without great risk there is often no great reward, and science needs to take great risks to achieve great rewards (a perfect example of this is the aLIGO experiment). Much more needs to be done to fund dissenting science and dissenting scientists to achieve this great reward. Perhaps it is necessary to establish and fund Research Organizations and/or University Departments whose sole purpose is to explore dissenting science, with no expectation of achieving their intended goals but only the expectation of exploring dissenting science.

An example of this is from my past. A wealthy friend was funding a cutting-edge technology that I became interested in. I considered investing in this technology until another wealthy friend pointed out that the chances of success were small. He advised me that the best approach to investing in this technology was to discover others who were exploring it and invest part of my money in at least a dozen such efforts. That way, if one of them succeeded, I could recoup my investment and make a profit on the successful investment, while being able to write off my investment in those efforts that did not succeed. Not being wealthy myself I could not do any of this, but his advice was sound: invest in multiple opportunities in the hope that one of them comes to fruition. That approach is what I am proposing for funding dissenting science. Without doing this, the advancement of science may stumble or stall.
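The logic behind spreading money (or research funding) across many independent efforts can be sketched with a simple probability calculation. The 10% success rate and the counts below are hypothetical illustrations, not figures from this article:

```python
# Probability that at least one of n independent efforts succeeds,
# given each effort succeeds independently with probability p.
def chance_of_any_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Hypothetical numbers: each effort has a 10% chance of success.
p = 0.10
for n in (1, 6, 12):
    print(f"{n:2d} efforts -> {chance_of_any_success(p, n):.1%} chance of at least one success")
```

With these illustrative numbers, funding a dozen independent efforts raises the odds of at least one success from 10% to roughly 72%, which is the essence of the advice above.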

All of the above issues on the Troubles of Science have become a significant problem for the advancement of science, and all of them need to be addressed and corrected for science to advance in the 21st century.

Scientific Speculation

As I have mentioned previously, Scientific Speculation can be a positive. However, it can be a negative if utilized improperly. By improperly I mean that it is treated as “Real Science” rather than “Speculative Science”. Real Science requires “observation and experimentation” as its foundation, and “predictability and falsifiability” are necessary for any scientific theory. Without these items, you have Speculative Science, and Speculative Science should never be accepted (even by consensus) as Real Science.

Two current Speculative Science examples that I am familiar with are “String Theory” and “The Multiverse”.

String (or M Theory) Theory

A good example of Speculative Science is modern String Theory, or M Theory, in quantum physics. This theory is entirely based on mathematics. So far, no scientist has been able to observe strings or perform an experiment on strings. It is even possible that we may never be able to observe strings, due to their very nature. Therefore, they reside within the sphere of Scientific Speculation. The math for String or M Theory is very good, and seems to work, but as has been said earlier, “observation and experimentation is the foundation of science”, and as there are no observations or experiments regarding String or M Theory, it is not proven. Reproducibility of scientific experiments is also a foundation of science (“No Reproducibility – No Science”), and String or M Theory has not been reproducible, as there are no observations or experiments that can be reproduced. This is not to mention predictability and falsifiability for something that cannot be observed or experimented upon. As Richard Feynman, one of the greatest quantum physicists of the 20th century, said: “String theorists don’t make predictions, they make excuses”. Some String Theorists have even suggested that, due to the nature of String Theory, we may never be able to “prove” it, and must accept their theories based on mathematics and belief. I, however, believe it is the responsibility of physicists to explain the real universe and prove that their explanation is factual and real. Abandoning your theories and hypotheses has nothing to do with apologizing; it has to do with being willing to admit that an idea doesn’t work and moving on to something else. In science this happens all the time and requires no apology, as most scientific ideas don’t work out in the long run. Given that there have been no observations or experiments in over thirty years to prove String Theory, perhaps it is time to move on to another theory and let String Theory go into the dustbin of history.

The Multiverse (A Part of M Theory)

The Multiverse is a scientific speculation intended to answer the questions of why our universe exists, and how it is that it is compatible with life, as explained below.

The Anthropic Principle

The anthropic principle is a philosophical consideration that observations of the universe must be compatible with the conscious and sapient life that observes it. Some proponents of the anthropic principle reason that it explains why this universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that this universe has fundamental constants that happen to fall within the narrow range thought to be compatible with life. The strong anthropic principle (SAP) as explained by John D. Barrow and Frank Tipler states that this is all the case because the universe is in some sense compelled to eventually have conscious and sapient life emerge within it. Some critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe's ostensible fine tuning is the result of selection bias (specifically survivor bias): i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing and reflecting on the matter. Most often such arguments draw upon some notion of the multiverse for there to be a statistical population of universes to select from and from which selection bias (our observance of only this universe, compatible with our life) could occur.

- excerpted from the Wikipedia Article on the Anthropic Principle


The multiverse (or meta-universe) is a hypothetical group of multiple separate universes including the universe in which humans live. Together, these universes comprise everything that exists: the entirety of space, time, matter, energy, the physical laws, and the constants that describe them. The different universes within the multiverse are called the "parallel universes", "other universes", or "alternative universes".

The structure of the multiverse, the nature of each universe within it, and the relationships among these universes vary from one multiverse hypothesis to another.

Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, and literature, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternate universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "parallel realities", "quantum realities", "alternate realities", "alternate timelines", "alternate dimensions", and "dimensional planes".

The physics community continues to debate the multiverse hypotheses. Prominent physicists are divided in opinion about whether any other universes exist.

Some physicists say the multiverse is not a legitimate topic of scientific inquiry. Concerns have been raised about whether attempts to exempt the multiverse from experimental verification could erode public confidence in science and ultimately damage the study of fundamental physics. Some have argued that the multiverse is a philosophical rather than a scientific hypothesis because it cannot be falsified. The ability to disprove a theory by means of scientific experiment has always been part of the accepted scientific method. Paul Steinhardt has famously argued that no experiment can rule out a theory if the theory provides for all possible outcomes.

In 2007, Nobel laureate Steven Weinberg suggested that if the multiverse existed, "the hope of finding a rational explanation for the precise values of quark masses and other constants of the standard model that we observe in our Big Bang is doomed, for their values would be an accident of the particular part of the multiverse in which we live."

- excerpted from the Wikipedia Article on Multiverse

Some scientists believe the Multiverse is the explanation for the Universe, while other scientists believe that since there are no observations or experiments to demonstrate the existence of the Multiverse, it is not scientific. The Multiverse also utilizes the mathematical concept of infinity (∞) as a basis. But it is highly debatable whether infinity exists in the real universe or is just a mathematical construct. As such, utilizing infinity in scientific theories makes those theories suspect.

I am of the belief that the Multiverse is highly unlikely. Until science has observations or experiments that demonstrate the reality of the Multiverse it is just a belief. As a belief, you are free to accept or reject the Multiverse. However, in reaching your decision you must remember that "observation and experimentation” is the foundation of science, and “predictability and falsifiability” are required for any scientific theory.

It's Time to Rethink the Nobel Prizes

The following article is from a blog post by a distinguished scientist. It is presented here as food-for-thought.

It's Time to Rethink the Nobel Prizes

They can go to a maximum of three people, and they can't be awarded posthumously, but that wasn't part of Alfred Nobel's original vision. 

By Brian Keating on October 4, 2017 in a "Scientific American" blog post.

Each October, chemists, physicians, poets, physicists, and peacemakers delight in what has become almost a sacramental ritual for intellectuals: the annual Nobel Prize announcements. Like nature itself, the well-choreographed and publicized set of rituals surrounding the prize comes complete with its own distinct seasons: the season of “revelation,” experienced this week, and the season of “coronation”—the awards ceremony, held annually on December 10, the anniversary of Alfred Nobel’s death.

But there is a lesser-known Nobel season as well: the season of “nomination,” an epoch which closes in the dead of winter, at midnight in Stockholm on January 31 each year. This marks the date by which nominators must submit their Nobel Prize nominations. There is no grace period; it is never postponed, and there is no allowance for nominators who tarry.

This can have problematic consequences. In 2009, for example, President Barack Obama had been on the job for only 11 days when nomination season ended. Some said he hadn’t even had time to measure the proverbial drapes in the Oval Office, let alone to have reduced “the world’s standing armies”—the criterion Nobel’s will stipulated for the Peace prize. But he won the Peace Prize that year nevertheless. His nomination was perfectly in line with the Prize committee’s technical requirements; it had beaten the deadline.

The January deadline came into play with a twinge of sadness this week as the 2017 Nobel Prize in Physics was announced. The press release issued by the Nobel Prize committee states that Rainer Weiss, Kip Thorne, and Barry Barish were rewarded "for decisive contributions to the LIGO detector and the observation of gravitational waves.” It goes on to say that “On 14 September 2015, the universe's gravitational waves were observed for the very first time. The waves, which were predicted by Albert Einstein a hundred years ago, came from a collision between two black holes.” While the waves, traveling at lightspeed, took 1.3 billion years to reach LIGO’s twin detectors, it was the far briefer span of two weeks that fundamentally altered the calculus of this year’s prize.

LIGO’s detectors had fortuitously come online only weeks before catching the first gravitational wave signals on that fateful September day. After months of painstaking analysis by more than 1,000 members of the consortium, the team was finally ready to go forward and make their announcement public, which they did on 11 February 2016. Whispers of a Nobel began immediately, and eight months later, as Nobel revelation season approached, those whispers intensified.

Writing in Science in October 2016, Adrian Cho—unaware LIGO’s February announcement had missed the nomination deadline—said “Next week, the 2016 Nobel Prize in Physics will be announced, and many scientists expect it to honor the detection of ripples in space called gravitational waves, reported in February. If other prizes are a guide, the Nobel will go to the troika of physicists who 32 years ago conceived of LIGO, the duo of giant detectors responsible for the discovery: Rainer Weiss of the Massachusetts Institute of Technology (MIT) in Cambridge, and Ronald Drever and Kip Thorne of the California Institute of Technology (Caltech) in Pasadena. But some influential physicists, including previous Nobel laureates, say the prize, which can be split three ways at most, should include somebody else: Barry Barish.”

At the time, Barish, who this week shared one-quarter of the 2017 prize for his decisive role in LIGO, agreed that Weiss, Drever, and Thorne deserved science’s highest accolade. Yet, evidently equally unaware of the January 31 nomination deadline, he added a note regarding the due diligence of the committee: “If they wait a year and give it to these three guys, at least I’ll feel that they thought about it,” he says. “If they decide [to give it to them] this October, I’ll have more bad feelings because they won’t have done their homework.”

But if the committee had recognized the LIGO discovery that year, why couldn’t Barish have been included as well? Why a trio and not a quartet? This restriction comes from a stipulation that a maximum of three scientists can share a prize—a rule added by the Nobel committee years after the awards were established in Alfred Nobel’s will. In fact, the will requires that the prizes be given to “the person…”—that’s “person,” in the singular—whose discovery or invention has provided “the greatest benefit to mankind.” The committee jettisoned that requirement in 1902, the prize’s second year, according to science historian Elizabeth Crawford in her book The Beginnings of the Nobel Institution: The Science Prizes, 1901-1915.

Had LIGO beaten the 31 January 2016 deadline, even Barish seemed to agree that he might justifiably have lost a share of last year’s Nobel Prize due to the “rule of three.” Early this year, the same headaches seemed likely for the 2017 Nobel Prize committee: how to choose three of the four?

Sadly, in March 2017, their dilemma was resolved. The death of Ron Drever permanently eliminated him from Nobel consideration thanks to the Nobel Foundation’s statutes, which forbid posthumous awards. The posthumous stipulation, however, like the restriction to a maximum of three winners, is not found anywhere in Alfred’s will. It was enacted 73 years after the first prizes.

While most see the Nobel Prizes as an inspirational celebration of the human mind—the one chance for basic science to share the spotlight on a par with Hollywood celebrities, for at least a week—others feel it has some detrimental effects on the portrayal of science. The Astronomer Royal, Sir Martin Rees, said this week that the three new laureates were "outstanding individuals whose contributions were distinctive and complementary.” But he also added: "Of course LIGO’s success was owed to literally hundreds of dedicated scientists and engineers. The fact that the Nobel committee refuses to make group awards is causing them increasingly frequent problems—and giving a misleading and unfair impression of how a lot of science is actually done."

Worse yet, younger scientists are becoming disillusioned by the lack of racial and gender diversity of the winners. Some, like Matthew Francis, say it might be time to retire the Nobels entirely, saying that they are “not an adequate reflection of real science, and they reinforce the worst aspects of the culture of science….Maybe we should dump 'em and start over."

There have long been calls for Nobel Prize reform. Many are dismayed when scientists treat the rules of the Nobel Prize as inviolable as laws of nature itself. These voices, like distant signals from in-spiraling black holes, may prove too loud for the Nobel committees to continue to ignore. After all, we are living in a populist era where long-held traditions and institutions are coming under intense scrutiny. Some institutions are responding with reform, others remain steadfast, committed to maintaining their outdated rules.

No one questions the appropriateness of this week’s winners. Yet it is impossible to disregard how the Nobel committee solved one problem—forbidding awards to more than three winners—by virtue of another arbitrary prohibition: the one that forbids posthumous prizes. In an era increasingly concerned with transparency and fairness, it will be hard not to heed the clarion calls for substantive reform.

Instead of boycott or retirement, modest reforms should take place. One proposal would be to give the first Nobel Prize in physics intentionally awarded posthumously to Vera Rubin for her indisputable discovery of dark matter.

Imagine the statement that would make!

The views expressed are those of the author(s) and are not necessarily those of Scientific American.


Brian Keating is an astrophysicist and professor at the University of California San Diego. He and his collaborators have built numerous cosmic microwave background experiments at the South Pole, Antarctica and in the Atacama Desert of Chile. Keating is a Fellow of the American Physical Society and is the author of Losing the Nobel Prize, to be published by W.W. Norton in April 2018.

Miscellaneous Thoughts

What is the difference between Science and Engineering?

Generally, Science is the study of the physical world, while Engineering applies scientific knowledge to design processes, structures, or equipment. Both Engineers and Scientists will have a strong knowledge of science, mathematics, and technology, but Engineering students will learn to apply these principles to design creative solutions to Engineering challenges. Generally, a scientist attempts to be as precise as possible to prove their hypotheses and theorems, while an engineer attempts to be approximate (good enough) in order to create something from the scientific principles.

The Pecking Order of Science

Hard Science vs. Soft Science has been debated throughout the Scientific Revolution. Many considerations determine where a scientific branch sits in the pecking order. Most important is whether the results of the scientific branch can be validated - i.e., can it be proven within the constraints of the Limitations of Science (as described above). Given the above, philosophers of science can rank the pecking order of hard vs. soft sciences as follows, from more provable to less provable:

More Provable:

  1. Mathematics
  2. Physics
  3. Chemistry
  4. Geology
  5. Biology
  6. Medicine

Less Provable:

  1. Economics
  2. Psychology
  3. Sociology
  4. Anthropology
  5. Political Science
  6. The Other Sciences

You should keep this in mind when you are listening to or reading scientific information given by an expert in a scientific field. Remember that they could be right, but they could also be wrong, and the science may shift under their feet in the future.

Final Thoughts

I would leave you with two final observations on science:

Science is the best way to understand the natural processes that govern the universe and the objects within the universe.
What it cannot explain is best left to Philosophers, Moralists, Ethicists, and Theologians.
  - Mark W. Dawson

"What we know is a drop, what we don't know is an ocean."
   - Isaac Newton

I would note that since the time of Isaac Newton we know much more – but have not gained an ocean of knowledge – probably just a sea of knowledge.

Further Readings

Below are the books I would recommend that you read for more background information on these subjects. They were chosen as they are fairly easy reads for the general public and have a minimum of mathematics.

For a brief introduction to these topics I would recommend the Oxford University Press series “A Very Short Introduction” on these subjects:

The following are books that challenge the current thinking of science, and specifically Quantum Physics. While I do not always agree with the authors, I believe it is important to consider their arguments.

Some interesting websites with general scientific topics are:


Please Note - many academics, scientists, and engineers would critique what I have written here as neither accurate nor thorough. I freely acknowledge that these critiques are correct. It was not my intention to be accurate or thorough, as I am not qualified to give an accurate or thorough description. My intention was to be understandable to a layperson so that they can grasp the concepts. Academics’, scientists’, and engineers’ entire education and training is based on accuracy and thoroughness, and as such, they strive for this accuracy and thoroughness. I believe it is essential for all laypersons to grasp the concepts of this paper, so they can make more informed decisions on those areas of human endeavors that deal with this subject. As such, I did not strive for accuracy and thoroughness, only understandability.

Most academics, scientists, and engineers, when speaking or writing for the general public (and many science writers as well), strive to be understandable to the general public. However, they often fall short on understandability because of their commitment to accuracy and thoroughness, as well as some audience-awareness factors. Their two biggest problems are accuracy and the audience's knowledge of the topic.

Accuracy is a problem because academics, scientists, engineers, and science writers are loath to be inaccurate. This is because they want the audience to obtain the correct information, and because of the possible negative repercussions among their colleagues and the scientific community at large if they are inaccurate. However, because modern science is complex, this accuracy can, and often does, lead to confusion among the audience.

The audience's knowledge of the topic is important, as most modern science is complex, with its own words, terminology, and basic concepts that the audience is unfamiliar with or misinterprets. The audience becomes confused (even while smiling and lauding the academic, scientist, engineer, or science writer), and does not achieve understanding. Many times the academic, scientist, engineer, or science writer utilizes the scientific discipline's own words, terminology, and basic concepts without realizing the audience's misinterpretations, or that the audience has no comprehension of these items.

It is for this reason that I place understandability as the highest priority in my writing, and I am willing to sacrifice accuracy and thoroughness to achieve it. There are many books, websites, and videos available that are more accurate and thorough. The subchapter on “Further Readings” also contains books on various subjects that can provide more accurate and thorough information. I leave it to the reader to decide if they want more accurate or thorough information and to seek out these books, websites, and videos for this information.