I'm not sure what I'm going to do with this thread -- other than applying the Scientific-Method to Law, Politics, Religion, Journalism, and Conspiracy-Theories. This should be interesting. To start this thread off -- what is the difference between a Lawyer, Politician, Theologian, Preacher, Journalist, Conspiracy-Theorist -- and a Scientist? Is derisively calling a person a "Conspiracy-Theorist" sort of like calling an African-American the "N" Word??? Here is the Wikipedia link. http://en.wikipedia.org/wiki/Scientific_method
Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.[1] To be termed scientific, a method of inquiry must be based on gathering empirical and measurable evidence subject to specific principles of reasoning.[2] The Oxford English Dictionary says that scientific method is: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."[3]
The chief characteristic which distinguishes a scientific method of inquiry from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, and contradict their theories about it when those theories are incorrect,[4] i.e., falsifiability. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context.
Scientific inquiry is generally intended to be as objective as possible, to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.
See also: History of scientific method and Timeline of the history of scientific method
Ibn al-Haytham (Alhazen), 965–1039, Iraq. The Arab scholar who lived during the Islamic golden age is considered by some to be the father of modern scientific methodology.[5]
"Modern science owes its present flourishing state to a new scientific method which was fashioned by Galileo Galilei (1564-1642)" —Morris Kline[6]
Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." —C. S. Peirce, circa 1896, on Kepler's reasoning through explanatory hypotheses[7]
Scientific methodology has been practiced in some form for at least one thousand years.[5] There are difficulties in a formulaic statement of method, however. As William Whewell (1794–1866) noted in his History of the Inductive Sciences (1837) and in Philosophy of the Inductive Sciences (1840), "invention, sagacity, genius" are required at every step in scientific method. It is not enough to base scientific method on experience alone;[8] multiple steps are needed in scientific method, ranging from our experience to our imagination, back and forth.
In the 20th century, a hypothetico-deductive model[9] for scientific method was formulated (for a more formal discussion, see below):
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.[10]
This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of steps 1 and 4.[11] Galileo 1638 also showed the importance of step 4 (also called Experiment) in Two New Sciences.[12] One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth.
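Read as control flow, the backtracking described in this paragraph is simply a loop. Below is a minimal Python sketch of that loop, offered as an illustration only; the callables form_conjecture, deduce_predictions, and run_test are hypothetical placeholders standing in for steps 2-4, not anything from the source.

```python
# Toy skeleton of the hypothetico-deductive loop described above.
# form_conjecture, deduce_predictions, and run_test are hypothetical
# placeholders for steps 2-4; real inquiry supplies their content.

def inquire(problem, form_conjecture, deduce_predictions, run_test):
    conjecture = form_conjecture(problem)                   # step 2
    while True:
        for prediction in deduce_predictions(conjecture):   # step 3
            if not run_test(prediction):                    # step 4: seek refutation
                break  # falsified: abandon this conjecture
        else:
            # Every prediction tried so far survived: the conjecture is
            # corroborated, never proven; return it as provisional.
            return conjecture
        conjecture = form_conjecture(problem)               # back to step 2
```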
Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2.[13] (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."[14]) However, as pointed out by Carl Hempel (1905–1997), this simple view of scientific method is incomplete; the formulation of the conjecture might itself be the result of inductive reasoning. Thus the likelihood of the prior observation being true is statistical in nature[15] and would strictly require a Bayesian analysis. To overcome this uncertainty, experimental scientists must formulate a crucial experiment,[16] in order for it to corroborate a more likely hypothesis.
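The Bayesian point can be made concrete with a few lines of arithmetic. All numbers below are invented for illustration: a hypothesis held with prior credence 0.5 that predicts an observation with probability 0.9, where the observation would occur with probability 0.3 even if the hypothesis were false.

```python
# Illustrative Bayesian update; every number here is invented.
prior = 0.5             # P(H): initial credence in the hypothesis
p_e_given_h = 0.9       # P(E|H): chance of the evidence if H is true
p_e_given_not_h = 0.3   # P(E|not H): chance of the evidence otherwise

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.75: corroboration raises credence, never to 1

# A falsifying observation corresponds to P(E|H) = 0, which drives the
# posterior to exactly 0 -- the asymmetry Einstein's remark points at.
```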
In the 20th century, Ludwik Fleck (1896–1961) and others argued that scientists need to consider their experiences more carefully, because their experience may be biased, and that they need to be more exact when describing their experiences.[17]
DNA example
Four basic elements of scientific method are illustrated by the following example from the discovery of the structure of DNA:
DNA characterization: Although DNA had been identified as at least one and possibly the only genetic substance by Avery, MacLeod and McCarty at the Rockefeller Institute in 1944, the mechanism was still unclear to anyone in 1950.
DNA hypotheses: Crick and Watson hypothesized that the genetic material had a physical basis that was helical.[18]
DNA prediction: From earlier work on tobacco mosaic virus,[19] Watson was aware of the significance of Crick's formulation of the transform of a helix.[20] Thus he was primed to recognize the significance of the X-shape in Photo 51, the remarkable photograph of the X-ray diffraction image of DNA taken by Rosalind Franklin.
DNA experiment: Watson saw Photo 51.[21]
The examples are continued in "Evaluations and iterations" with DNA-iterations.[22]
Main article: Truth
In the same way that Alhazen sought truth during his pioneering studies in optics 1000 years ago, arriving at the truth is the goal of a scientific inquiry.[23]
Beliefs and biases
Flying gallop, falsified (see image below).
Belief can alter observation; human confirmation bias is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree. Researchers have noted that first observations are often somewhat imprecise, whereas the second and third are "adjusted to the facts". Eventually, factors such as openness to experience, self-esteem, time, and comfort can produce a readiness for new perception.[24]
Needham's Science and Civilization in China uses the 'flying gallop' image as an example of observation bias:[25] In these images, the legs of a galloping horse are shown splayed, while the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false. In a horse's gallop, at the moment that no hoof touches the ground, a horse's legs are gathered together—not splayed. Earlier paintings show an incorrect flying gallop observation.
This image illustrates Ludwik Fleck's suggestion that people be cautious lest they observe what is not so; people often observe what they expect to observe. Until shown otherwise, their beliefs affect their observations (and, therefore, any subsequent actions which depend on those observations, in a self-fulfilling prophecy). This is one of the reasons (mistake, confusion, inadequate instruments, etc. are others) why scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases.
Any scientific theory is closely tied to empirical findings, and always remains subject to falsification if new experimental observation incompatible with it is found. That is, no theory can ever be seriously considered certain, as new evidence falsifying it can be discovered. Most scientific theories don't result in large changes in human understanding. Improvements in theoretical scientific understanding are usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science.[26] Theories vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community.
In contrast to the always-provisional status of scientific theory, a myth can be believed and acted upon, or depended upon, irrespective of its truth.[27] Imre Lakatos has noted that once a narrative is constructed its elements become easier to believe (this is called the narrative fallacy).[28][29] That is, theories become accepted by a scientific community as evidence for the theory is presented, and as presumptions that are inconsistent with the evidence are falsified. The difference between a theory and a myth reflects a preference for a posteriori over a priori knowledge.
Thomas Brody notes that confirmed theories are subject to subsumption by other theories, as special cases of a more general theory. For example, thousands of years of scientific observations of the planets were explained by Newton's laws. Thus the body of independent, unconnected, scientific observation can diminish.[30] Yet there is a preference in the scientific community for new, surprising statements, and the search for evidence that the new is true.[1] Goldhaber and Nieto (2010, p. 941) additionally state that "If many closely neighboring subjects are described by connecting theoretical concepts, then a theoretical structure acquires a robustness which makes it increasingly hard —though certainly never impossible— to overturn."
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
Four essential elements[31][32][33] of a scientific method[34] are iterations,[35][36] recursions,[37] interleavings, or orderings of the following:
Characterizations (observations,[38] definitions, and measurements of the subject of inquiry)
Hypotheses[39][40] (theoretical, hypothetical explanations of observations and measurements of the subject)[41]
Predictions (reasoning including logical deduction[42] from the hypothesis or theory)
Experiments[43] (tests of all of the above)
Each element of a scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".[44]
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[45] In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories — all phenomena Newton could not have observed — Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work.
A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:[46]
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for new hypotheses
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step methodology runs from point 3 through point 6 and back to point 3 again.
While this schema outlines a typical hypothesis/testing method,[47] a number of philosophers, historians and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced.
The "operational" paradigm combines the concepts of operational definition, instrumentalism, and utility:
The essential elements of a scientific method are operations, observations, models, and a utility function for evaluating models.[48]
Operation - Some action done to the system being investigated
Observation - What happens when the operation is done to the system
Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment
Utility Function - A measure of the usefulness of the model to explain, predict, and control, and of the cost of use of it. One of the elements of any scientific utility function is the refutability of the model. Another is its simplicity, on the Principle of Parsimony, more commonly known as Occam's Razor (a toy sketch of such a score follows below).
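One way to make the "utility function" element concrete is as an explicit score. The sketch below is a toy under stated assumptions, not an established formula: the weights are arbitrary, parameter count stands in as a complexity proxy (Occam's Razor), and refutability enters as a bonus, as the list above requires.

```python
# Toy utility score for candidate models, in the spirit of the list above.
# The weights and the use of parameter count as a complexity proxy are
# assumptions made for illustration, not a standard formula.

def utility(fit, refutability, n_parameters,
            w_fit=1.0, w_refut=0.5, w_cost=0.1):
    """Higher is better: reward explanatory fit and testability,
    penalize complexity as a stand-in for the model's cost of use."""
    return w_fit * fit + w_refut * refutability - w_cost * n_parameters

# Two models that fit equally well: parsimony breaks the tie.
print(utility(fit=0.9, refutability=0.8, n_parameters=2))   # ~1.1
print(utility(fit=0.9, refutability=0.8, n_parameters=10))  # ~0.3
```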
Scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
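As a small concrete instance of the "statistical manipulations" just mentioned, the snippet below computes a correlation coefficient and a least-squares line for a made-up table of measurements, using Python's standard statistics module (3.10+). The data are invented.

```python
# Correlation and simple linear regression on invented measurements,
# using only the standard library (Python 3.10+).
from statistics import correlation, linear_regression

temperature = [10.0, 15.0, 20.0, 25.0, 30.0]  # controlled variable (invented)
reaction_rate = [2.1, 3.0, 3.9, 5.2, 5.9]     # measured responses (invented)

r = correlation(temperature, reaction_rate)
slope, intercept = linear_regression(temperature, reaction_rate)
print(f"r = {r:.3f}; rate = {slope:.3f} * T + {intercept:.3f}")
```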
"I am not accustomed to saying anything with certainty after only one or two observations."—Andreas Vesalius (1546) [49]
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
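A minimal version of "estimate the uncertainty by repeated measurements" is the standard error of the mean; the readings below are invented.

```python
# Mean and standard error from repeated measurements (invented readings).
from math import sqrt
from statistics import mean, stdev

readings = [9.79, 9.82, 9.81, 9.78, 9.80, 9.83]  # e.g. g in m/s^2

m = mean(readings)
sem = stdev(readings) / sqrt(len(readings))  # standard error of the mean
print(f"{m:.3f} +/- {sem:.3f}")  # report the value with its uncertainty
```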
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[50] In Crick's study of consciousness, he found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA, and before their work it would have been counterproductive to spend much time on the definition of the gene.
The history of the discovery of the structure of DNA is a classic example of the elements of scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle).[51] But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[52]
Another example: precession of Mercury
Precession of the perihelion (exaggerated).
The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to include those measurements into consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion (see diagram above), though it took quite some time to realize this. The observed difference for Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory (the difference is approximately 43 arc-seconds per century).
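The 43 arc-seconds figure can be checked with a few lines of arithmetic using the standard first-order general-relativistic formula for perihelion advance, 6πGM/(a(1−e²)c²) radians per orbit; the orbital constants below are textbook values.

```python
# Back-of-the-envelope check of Mercury's anomalous perihelion advance,
# using the first-order GR formula: 6*pi*G*M / (a*(1 - e**2)*c**2) per orbit.
from math import pi

GM_SUN = 1.32712e20   # m^3/s^2, gravitational parameter of the Sun
C = 2.99792458e8      # m/s, speed of light
A = 5.7909e10         # m, Mercury's semi-major axis
E = 0.2056            # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969  # Mercury's orbital period

per_orbit = 6 * pi * GM_SUN / (A * (1 - E**2) * C**2)  # radians per orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = per_orbit * orbits_per_century * (180 / pi) * 3600
print(f"{arcsec_per_century:.1f} arc-seconds per century")  # ~43.0
```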
Main article: Hypothesis formation
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
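The logical difference matters for testing: a single counterexample refutes a universal statement, while a single instance suffices to confirm an existential one. A toy illustration (the sightings are invented):

```python
# Universal vs. existential hypotheses against a finite record of observations.
observations = ["white", "white", "black", "white"]  # invented swan sightings

all_white = all(colour == "white" for colour in observations)
some_black = any(colour == "black" for colour in observations)

print(all_white)   # False: one black swan falsifies "all swans are white"
print(some_black)  # True: one sighting confirms "some swans are black"
```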
Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, induction, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate … bald suppositions and areas of vagueness.[53]
In general, scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
DNA hypotheses: Linus Pauling proposed that DNA might be a triple helix.[54] This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong[55] and that Pauling would soon admit his difficulties with that structure. So the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race; see the section on "DNA predictions" below).
Main article: Prediction in science
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true, but since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, a new technique might lead to an experimental test, and the speculation would then become part of accepted science.
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'X-shaped'.[56][57] This prediction followed from the work of Cochran, Crick and Vand[20] (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces X-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[58]
Another example: general relativity
Einstein's prediction (1907): light bends in a gravitational field.
Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.[59]
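That "precise way" can be checked numerically: for light grazing the Sun, general relativity predicts a deflection of 4GM/(c²R) radians, twice the naive Newtonian value. A short computation with textbook constants:

```python
# GR deflection of starlight grazing the Sun: 4*G*M / (c**2 * R) radians.
from math import pi

GM_SUN = 1.32712e20  # m^3/s^2, gravitational parameter of the Sun
C = 2.99792458e8     # m/s, speed of light
R_SUN = 6.957e8      # m, solar radius

deflection = 4 * GM_SUN / (C**2 * R_SUN)  # radians
arcsec = deflection * (180 / pi) * 3600
print(f"{arcsec:.2f} arc-seconds")  # ~1.75, the value Eddington tested
```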
Main article: Experiment
Once predictions are made, they can be tested by experiments. If test results contradict the predictions, the hypotheses which made them are called into question and become less tenable. Sometimes experiments are conducted incorrectly and are not very useful. If the results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement to help isolate what has changed. Mill's canons can then help us figure out what the important factor is.[60] Factor analysis is one technique for discovering the important factor in an effect.
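A minimal sketch of the contrast the paragraph describes: the same quantity measured under two conditions that differ in one factor, with the effect judged against its own uncertainty. All readings are invented.

```python
# Toy controlled comparison: one factor varied, everything else held fixed.
from math import sqrt
from statistics import mean, stdev

control = [4.8, 5.1, 4.9, 5.0, 5.2]    # factor unchanged (invented readings)
treatment = [5.9, 6.1, 5.8, 6.2, 6.0]  # factor varied (invented readings)

effect = mean(treatment) - mean(control)
# Crude standard error for the difference of two sample means:
se = sqrt(stdev(control)**2 / len(control)
          + stdev(treatment)**2 / len(treatment))
print(f"effect = {effect:.2f} +/- {se:.2f}")  # large relative to its error
```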
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential: it aids in reporting the experimental results and supports the effectiveness and integrity of the procedure. Such records also assist others in reproducing the experimental results. Traces of this approach can be seen in the work of Hipparchus (190-120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721-815 CE), al-Battani (853–929) and Alhazen (965-1039).[61]
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College: Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm that the structure was helical.[21][62] This rekindled Watson and Crick's model building and led to the correct structure.
The scientific process is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[63][64][65] Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it.[22][66] They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.[67]
To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.
Main article: Models of scientific inquiry
The classical model of scientific inquiry derives from Aristotle,[68] who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.
See also: Pragmatic theory of truth
In 1877,[69] Charles Sanders Peirce ( /ˈpɜrs/ like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless.[70] He outlined four methods of settling opinion, ordered from least to most successful:
The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
The method of congruity or the a priori or the dilettante or "what is agreeable to reason" — which promotes conformity less brutally but depends on taste and fashion in paradigms and can go in circles over time, along with barren disputation. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubts.
The scientific method — the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research,[71] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry.[72] The scientific method excels the others by being deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.[69][73]
For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points.[74] In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge.[75] As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.[76]
Paying special attention to the generation of explanations, Peirce outlined scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument"[77] except as otherwise noted):
1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths.[78] Coordinative method leads from abducing a plausible hypothesis to judging it for its testability[79] and for how its trial would economize inquiry itself.[80] Peirce calls his pragmatism "the logic of abduction".[81] His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object".[74] His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects — a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity.[82] One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.[80]
2. Deduction. Two stages:
i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic.
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general[74]) that the real is only the object of the final opinion to which adequate investigation would lead;[83] anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters;[84] if Quantitative, then dependent on measurements, or on statistics, or on countings.
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
Computational approaches
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute to hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; see, for example, the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.
Frequently a scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of that scientific method within such an environment.
Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may or may not recommend publication, and they may recommend publication with suggested modifications or, sometimes, publication in another journal. This serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally otherwise to improve the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.[85]
Main article: Reproducibility
Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from a scientific method (Pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.
As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.
When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.
Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.
Dimensions of practice
Further information: Rhetoric of science
The primary constraints on contemporary western science are:
Publication, i.e. Peer review
Resources (mostly funding)
It has not always been like this: in the days of the "gentleman scientist", funding (and, to a lesser extent, publication) was a far weaker constraint.
Both of these constraints indirectly bring in a scientific method: work that too obviously violates the constraints will be difficult to publish and difficult to get funded. Journals do not require submitted papers to conform to anything more specific than "good scientific practice", and this is mostly enforced by peer review. Originality, importance and interest carry more weight; see, for example, the author guidelines for Nature.
Philosophy and sociology of science
Main articles: Philosophy of science and Sociology of science
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and at the ethic that is implicit in science. There are basic assumptions, derived from philosophy, that form the base of the scientific method: namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded. Logical Positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.
Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.
Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description.[86] He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn [87] and Feyerabend [88] acknowledge the pioneering significance of his work.
Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".[89]
Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in a scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'.[90] Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
Highly controlled experimentation allows researchers to catch their mistakes, but it also makes anomalies (which no one knew to look for) easier to see.
In his 1958 book, Personal Knowledge, chemist and philosopher Michael Polanyi (1891–1976) criticized the common view that the scientific method is purely objective and generates objective knowledge. Polanyi cast this view as a misunderstanding of the scientific method and of the nature of scientific inquiry, generally. He argued that scientists do and must follow personal passions in appraising facts and in determining which scientific questions to investigate. He concluded that a structure of liberty is essential for the advancement of science: the freedom to pursue science for its own sake is a prerequisite for the production of knowledge through peer review and the scientific method.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.[91]
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[92] Louis Pasteur is credited with the famous saying that "Chance favours the prepared mind", and some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[92][93] This is what professor of economics Nassim Nicholas Taleb calls "anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough: it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[94]
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their methodology. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[92][93]
Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.[1] To be termed scientific, a method of inquiry must be based on gathering empirical and measurable evidence subject to specific principles of reasoning.[2] The Oxford English Dictionary says that scientific method is: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."[3]
The chief characteristic which distinguishes a scientific method of inquiry from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, and contradict their theories about it when those theories are incorrect,[4] i. e., falsifiability. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context.
Scientific inquiry is generally intended to be as objective as possible, to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.
See also: History of scientific method and Timeline of the history of scientific method
Ibn al-Haytham (Alhazen), 965–1039 Iraq. The Arab scholar who lived during the Islamic golden age is considered by some to be the father of modern scientific methodology.[5]
"Modern science owes its present flourishing state to a new scientific method which was fashioned by Galileo Galilei (1564-1642)" —Morris Kline[6]
Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." —C. S. Peirce, circa 1896, on Kepler's reasoning through explanatory hypotheses[7]
Scientific methodology has been practiced in some form for at least one thousand years.[5] There are difficulties in a formulaic statement of method, however. As William Whewell (1794–1866) noted in his History of Inductive Science (1837) and in Philosophy of Inductive Science (1840), "invention, sagacity, genius" are required at every step in scientific method. It is not enough to base scientific method on experience alone;[8] multiple steps are needed in scientific method, ranging from our experience to our imagination, back and forth.
In the 20th century, a hypothetico-deductive model[9] for scientific method was formulated (for a more formal discussion, see below):
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.[10]
This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of steps 1 and 4.[11] Galileo 1638 also showed the importance of step 4 (also called Experiment) in Two New Sciences.[12] One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth.
Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2.[13] (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."[14]) However, as pointed out by Carl Hempel (1905–1997) this simple view of scientific method is incomplete; the formulation of the conjecture might itself be the result of inductive reasoning. Thus the likelihood of the prior observation being true is statistical in nature[15] and would strictly require a Bayesian analysis. To overcome this uncertainty, experimental scientists must formulate a crucial experiment,[16] in order for it to corroborate a more likely hypothesis.
In the 20th century, Ludwik Fleck (1896–1961) and others argued that scientists need to consider their experiences more carefully, because their experience may be biased, and that they need to be more exact when describing their experiences.[17]
DNA example Four basic elements of scientific method are illustrated by the following example from the discovery of the structure of DNA:
DNA characterization: Although DNA had been identified as at least one and possibly the only genetic substance by Avery, Macleod and McCarty at the Rockefeller Institute in 1944, the mechanism was still unclear to anyone in 1950.
DNA hypotheses: Crick and Watson hypothesized that the genetic material had a physical basis that was helical.[18]
DNA prediction: From earlier work on tobacco mosaic virus,[19] Watson was aware of the significance of Crick's formulation of the transform of a helix.[20] Thus he was primed to recognize the significance of the X-shape in photo 51, the remarkable photograph of the X-ray diffraction image of DNA taken by Rosalind Franklin.
DNA experiment: Watson saw Photo 51.[21]
The examples are continued in "Evaluations and iterations" with DNA-iterations.[22]
Main article: Truth
In the same way that Alhazen sought truth during his pioneering studies in optics 1000 years ago, arriving at the truth is the goal of a scientific inquiry.[23]
Beliefs and biases
Belief can alter observation: human confirmation bias is a heuristic that leads a person with a particular belief to see things as reinforcing that belief, even if another observer might disagree. Researchers have noted that first observations are often somewhat imprecise, whereas the second and third are "adjusted to the facts". Eventually, factors such as openness to experience, self-esteem, time, and comfort can produce a readiness for new perception.[24]
Needham's Science and Civilization in China uses the 'flying gallop' image as an example of observation bias:[25] In these images, the legs of a galloping horse are shown splayed, while the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false. In a horse's gallop, at the moment that no hoof touches the ground, a horse's legs are gathered together—not splayed. Earlier paintings show an incorrect flying gallop observation.
The flying-gallop example illustrates Ludwik Fleck's suggestion that people be cautious lest they observe what is not so; people often observe what they expect to observe. Until shown otherwise, their beliefs affect their observations (and, therefore, any subsequent actions which depend on those observations, in a self-fulfilling prophecy). This is one of the reasons (mistake, confusion, and inadequate instruments are others) why scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases.
Any scientific theory is closely tied to empirical findings and always remains subject to falsification if new experimental observations incompatible with it are found. That is, no theory can ever be considered certain, since new evidence falsifying it may be discovered. Most scientific theories do not result in large changes in human understanding. Improvements in theoretical scientific understanding are usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science.[26] Theories vary in the extent to which they have been experimentally tested, for how long, and in their acceptance in the scientific community.
In contrast to the always-provisional status of scientific theory, a myth can be believed and acted upon, or depended upon, irrespective of its truth.[27] Imre Lakatos has noted that once a narrative is constructed its elements become easier to believe (this is called the narrative fallacy).[28][29] That is, theories become accepted by a scientific community as evidence for the theory is presented, and as presumptions that are inconsistent with the evidence are falsified. The difference between a theory and a myth reflects a preference for a posteriori versus a priori knowledge.
Thomas Brody notes that confirmed theories are subject to subsumption by other theories, as special cases of a more general theory. For example, thousands of years of scientific observations of the planets were explained by Newton's laws. Thus the body of independent, unconnected, scientific observation can diminish.[30] Yet there is a preference in the scientific community for new, surprising statements, and the search for evidence that the new is true.[1] Goldhaber & Nieto 2010, p. 941 additionally state that "If many closely neighboring subjects are described by connecting theoretical concepts, then a theoretical structure acquires a robustness which makes it increasingly hard —though certainly never impossible— to overturn."
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
Four essential elements[31][32][33] of a scientific method[34] are iterations,[35][36] recursions,[37] interleavings, or orderings of the following:
Characterizations (observations,[38] definitions, and measurements of the subject of inquiry)
Hypotheses[39][40] (theoretical, hypothetical explanations of observations and measurements of the subject)[41]
Predictions (reasoning including logical deduction[42] from the hypothesis or theory)
Experiments[43] (tests of all of the above)
Each element of a scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".[44]
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[45] In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories — all phenomena Newton could not have observed — Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work.
A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:[46]
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step methodology goes from point 3 to 6 and back to 3 again.
While this schema outlines a typical hypothesis/testing method,[47] a number of philosophers, historians, and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced.
The "operational" paradigm combines the concepts of operational definition, instrumentalism, and utility:
The essential elements of a scientific method are operations, observations, models, and a utility function for evaluating models.[48]
Operation - Some action done to the system being investigated
Observation - What happens when the operation is done to the system
Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment
Utility Function - A measure of the usefulness of the model to explain, predict, and control, and of the cost of using it. One element of any scientific utility function is the refutability of the model. Another is its simplicity, based on the principle of parsimony, more commonly known as Occam's Razor.
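As a rough illustration of how such a utility function might trade these elements off, here is a schematic sketch. The weights and inputs are invented for illustration; established statistical criteria such as AIC or BIC formalize the same fit-versus-parsimony trade-off more rigorously.

```python
# A schematic utility function in the spirit of the list above.
# The weighting scheme is invented; the point is only the trade-off:
# reward fit and refutability, penalize complexity and cost of use.

def model_utility(fit, n_parameters, n_falsifiable_predictions, cost_of_use):
    """Higher is better."""
    parsimony_penalty = 0.5 * n_parameters              # Occam's Razor term
    refutability_bonus = 0.3 * n_falsifiable_predictions
    return fit + refutability_bonus - parsimony_penalty - cost_of_use

# Two hypothetical models explaining the same data equally well (fit = 10):
simple = model_utility(fit=10.0, n_parameters=2, n_falsifiable_predictions=5, cost_of_use=1.0)
ornate = model_utility(fit=10.0, n_parameters=9, n_falsifiable_predictions=5, cost_of_use=1.0)
print(simple > ornate)  # True: parsimony breaks the tie
```

Given two models that fit equally well, the parsimony term decides, which is Occam's Razor in miniature.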
Scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
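For instance, the correlation-and-regression step can be carried out in a few lines. The paired measurements below are invented; the formulas are the standard Pearson correlation and least-squares ones.

```python
# Minimal illustration of the tabulate-then-analyze step: Pearson
# correlation and a least-squares line for invented paired measurements.
x = [1.0, 2.0, 3.0, 4.0, 5.0]          # e.g., controlled variable
y = [2.1, 3.9, 6.2, 8.1, 9.8]          # e.g., measured response (invented data)

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))

r = sxy / (sxx * syy) ** 0.5           # Pearson correlation coefficient
slope = sxy / sxx                      # least-squares regression slope
intercept = mean_y - slope * mean_x

print(f"r = {r:.4f}, fit: y = {slope:.2f}x {intercept:+.2f}")
```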
"I am not accustomed to saying anything with certainty after only one or two observations."—Andreas Vesalius (1546) [49]
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
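A minimal sketch of the repeated-measurement approach, using Python's standard statistics module; the readings are invented, and the uncertainty reported is the usual standard error of the mean.

```python
# Estimating a quantity and its uncertainty from repeated measurements:
# standard error of the mean = sample standard deviation / sqrt(n).
import statistics

readings = [9.78, 9.82, 9.81, 9.79, 9.85, 9.80]  # invented repeated measurements

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error

print(f"estimate = {mean:.3f} +/- {sem:.3f}")
```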
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
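As a sketch of how literal an operational definition can be: the historical "international ampere" was defined by just such a silver-deposition procedure, at roughly 1.118 mg of silver per coulomb. The helper function below is hypothetical, written only to show the measurement procedure serving as the definition.

```python
# The silver-voltameter definition of current sketched above, as code.
# Treating the measurement procedure itself as the definition is the
# essence of an operational definition.

SILVER_MG_PER_COULOMB = 1.118  # historical electrochemical equivalent of silver

def current_amperes(silver_deposited_mg, seconds):
    """Current, operationally defined: silver deposited per unit time."""
    return silver_deposited_mg / (SILVER_MG_PER_COULOMB * seconds)

print(current_amperes(silver_deposited_mg=67.08, seconds=60.0))  # -> 1.0 A
```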
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us, however, that when characterizing a subject it can be premature to define something while it remains ill-understood.[50] In Crick's study of consciousness, he found it easier to study awareness in the visual system than to study, for example, free will. His cautionary example was the gene: the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA, and it would have been counterproductive to spend much time on the definition of the gene before then.
The history of the discovery of the structure of DNA is a classic example of the elements of scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle).[51] But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[52]
Another example: precession of Mercury
The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Newton was able to incorporate those measurements into consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion, though it took quite some time to realize this. The discrepancy between the Newtonian prediction and the observed precession of Mercury's perihelion was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory: the difference is approximately 43 arc-seconds per century.
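The 43 arc-second figure can be checked directly from the standard general-relativistic perihelion-shift formula, 6πGM/(c²a(1−e²)) radians per orbit. The sketch below uses published values for the solar gravitational parameter and Mercury's orbit.

```python
# Checking the ~43 arcsec/century figure from the standard GR
# perihelion-shift formula: 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit.
import math

GM_SUN = 1.32712e20      # m^3 s^-2, solar gravitational parameter
C = 2.99792458e8         # m/s, speed of light
A = 5.791e10             # m, semi-major axis of Mercury's orbit
E = 0.2056               # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969     # Mercury's orbital period

shift_per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))  # radians
orbits_per_century = 36525.0 / PERIOD_DAYS
arcsec = shift_per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec:.1f} arc-seconds per century")  # ~43.0
```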
Main article: Hypothesis formation
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, induction, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate … bald suppositions and areas of vagueness.[53]
In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
DNA hypotheses
Linus Pauling proposed that DNA might be a triple helix.[54] This hypothesis was also considered by Francis Crick and James D. Watson, but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong[55] and that Pauling would soon admit his difficulties with that structure. So the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race; see the section on "DNA predictions" below).
Main article: Prediction in science
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should already have been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so remains, to that extent, unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true, but since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, some new technique might lead to an experimental test, and the speculation would then become part of accepted science.
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be X-shaped.[56][57] This prediction followed from the work of Cochran, Crick and Vand[20] (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces X-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[58]
Another example: general relativity
Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as the prediction (made in 1907) that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.[59]
Main article: Experiment
Once predictions are made, they can be sought by experiments. If test results contradict the predictions, the hypotheses which made them are called into question and become less tenable. Sometimes experiments are conducted incorrectly and are not very useful. If the results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is.[60] Factor analysis is one technique for discovering the important factor in an effect.
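Here is a minimal sketch of that contrast-under-differing-conditions logic, with invented measurements: hold everything fixed except one condition and compare the group means against the within-group spread (Mill's method of difference in its simplest form).

```python
# A minimal controlled comparison: everything held fixed except one
# condition, then contrast the outcomes. Data invented for illustration.
import statistics

control   = [4.1, 3.9, 4.0, 4.2, 3.8]  # measurements with condition absent
treatment = [5.0, 5.3, 4.9, 5.1, 5.2]  # same setup, condition present

effect = statistics.mean(treatment) - statistics.mean(control)
spread = max(statistics.stdev(control), statistics.stdev(treatment))

print(f"estimated effect = {effect:.2f} (within-group spread ~ {spread:.2f})")
# The effect (~1.1) is large relative to the spread (~0.16), so the varied
# condition, not noise, is the likely cause -- pending replication.
```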
Depending on the predictions, the experiments can take different shapes. It could be a classical experiment in a laboratory setting, a double-blind study, or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamic hypotheses used for constructing the plane.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential: it aids in recording and reporting the experimental results, supports the effectiveness and integrity of the procedure, and assists others in reproducing the results. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721–815 CE), al-Battani (853–929), and Alhazen (965–1039).[61]
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College - Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm the structure was helical.[21][62] This rekindled Watson and Crick's model building and led to the correct structure.
The scientific process is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[63][64][65] Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it.[22][66] They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.[67]
To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.
Main article: Models of scientific inquiry
The classical model of scientific inquiry derives from Aristotle,[68] who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.
See also: Pragmatic theory of truth
In 1877,[69] Charles Sanders Peirce ( /ˈpɜrs/ like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless.[70] He outlined four methods of settling opinion, ordered from least to most successful:
The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
The method of congruity or the a priori or the dilettante or "what is agreeable to reason" — which promotes conformity less brutally but depends on taste and fashion in paradigms and can go in circles over time, along with barren disputation. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubts.
The scientific method — the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research,[71] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry.[72] The scientific method excels the others by being deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.[69][73]
For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points.[74] In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge.[75] As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.[76]
Paying special attention to the generation of explanations, Peirce outlined scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument"[77] except as otherwise noted):
1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths.[78] Coordinative method leads from abducing a plausible hypothesis to judging it for its testability[79] and for how its trial would economize inquiry itself.[80] Peirce calls his pragmatism "the logic of abduction".[81] His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object".[74] His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects — a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity.[82] One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.[80]
2. Deduction. Two stages:
i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic.
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general[74]) that the real is only the object of the final opinion to which adequate investigation would lead;[83] anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters;[84] if Quantitative, then dependent on measurements, or on statistics, or on countings.
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
Computational approaches
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute to hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; see, for example, the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.
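As a toy illustration of complexity-weighted prediction in that spirit (a crude stand-in for priors such as the speed prior; the hypotheses and their description lengths below are invented):

```python
# Toy complexity-weighted prediction: shorter hypotheses get larger
# prior weight (2^-length), and the prediction is a weighted vote.
# Hypotheses and description lengths are invented for illustration.

# Each hypothesis: (description_length_bits, predicted_next_value)
hypotheses = [
    (3,  0),   # "always 0"          -- short program
    (5,  1),   # "alternate 0/1"     -- slightly longer
    (12, 7),   # ad hoc lookup table -- long program
]

weights = [2.0 ** -bits for bits, _ in hypotheses]
total = sum(weights)
prediction = sum(w / total * value for w, (_, value) in zip(weights, hypotheses))
print(f"complexity-weighted prediction: {prediction:.3f}")
```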
Frequently a scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of that scientific method within such an environment.
Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may recommend publication, publication with suggested modifications, rejection, or, sometimes, publication in another journal. This serves to keep the scientific literature free of unscientific or pseudoscientific work, helps cut down on obvious errors, and generally improves the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.[85]
Main article: Reproducibility
Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from a scientific method (Pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.
As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.
When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.
Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.
Dimensions of practice
Further information: Rhetoric of science
The primary constraints on contemporary western science are:
Publication, i.e. Peer review
Resources (mostly funding)
It has not always been like this: in the old days of the "gentleman scientist", funding (and, to a lesser extent, publication) were far weaker constraints.
Both of these constraints indirectly bring in a scientific method: work that too obviously violates the constraints will be difficult to publish and difficult to get funded. Journals do not require submitted papers to conform to anything more specific than "good scientific practice", and this is mostly enforced by peer review. Originality, importance, and interest are more important; see, for example, the author guidelines for Nature.
Philosophy and sociology of science
Main articles: Philosophy of science and Sociology of science
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and at the ethic that is implicit in science. There are basic assumptions, derived from philosophy, that form the base of the scientific method - namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded. Logical positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.
Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.
Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description.[86] He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn[87] and Feyerabend[88] acknowledge the pioneering significance of his work.
Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".[89]
Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in a scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'.[90] Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
Highly controlled experimentation allows researchers to catch their mistakes, but it also makes anomalies (which no one knew to look for) easier to see.
In his 1958 book Personal Knowledge, chemist and philosopher Michael Polanyi (1891–1976) criticized the common view that the scientific method is purely objective and generates objective knowledge. Polanyi cast this view as a misunderstanding of the scientific method and of the nature of scientific inquiry generally. He argued that scientists do and must follow personal passions in appraising facts and in determining which scientific questions to investigate. He concluded that a structure of liberty is essential for the advancement of science: the freedom to pursue science for its own sake is a prerequisite for the production of knowledge through peer review and the scientific method.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.[91]
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[92] Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", and some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[92][93] This is what Nassim Nicholas Taleb calls "antifragility": while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough; it actually benefits from such randomness in many ways. Taleb believes that the more antifragile the system, the more it will flourish in the real world.[94]
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try and fix what they think is an error in their methodology. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[92][93]