sansahansan Wrote:
-------------------------------------------------------
...
>
> The question you've raised in my mind however is
> that there must be a tipping point in the
> probabilities...
>
> At what point of 'probability' does research begin
> on a given topic to investigate a subject or
> theory?
>
This is a really important question, and it touches on a point about science (defining "science" very broadly here) that people sometimes have a hard time grasping.
One difference in judging probabilities lies in the datasets. A pseudoscientist is only responsible for gathering anomalies, and doesn't have to ride herd on all the other data and work that has been done. Making high-probability straight-line connections is easy when you only have a dozen or so data points. It is not so easy when you have hundreds or thousands, plus an awareness of overall patterns that are not easily quantified.
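To illustrate with a quick simulation (my own sketch, not anything from the original discussion, and the |r| > 0.5 cutoff is an arbitrary choice): with pure noise and only a dozen points, an apparently strong straight-line relationship shows up surprisingly often; with hundreds of points it essentially never does.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def chance_of_apparent_pattern(n_points, trials=5_000, threshold=0.5):
    """How often pure noise produces |r| above the threshold."""
    hits = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n_points)]
        ys = [random.random() for _ in range(n_points)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / trials

print(chance_of_apparent_pattern(12))    # roughly 0.1: noise "connects" fairly often
print(chance_of_apparent_pattern(500))   # effectively 0: big datasets don't oblige
```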
Research doesn't proceed solely by a series of individual hypothesis-testing experiments. Probably a better way of thinking about it is that hypotheses are essentially tested against all previous data, not just the data in the one experiment. So you not only need to look at the probability of the data given the hypothesis (as in classic hypothesis testing), but also the probability of the hypothesis given the previous (or even subsequent) data (the "prior probability").
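In symbols, that is just Bayes' theorem (a standard formulation, not anything specific to this discussion), with H the hypothesis and D the data:

```latex
% Posterior = likelihood x prior, normalized.
% Classic hypothesis testing looks only at the likelihood P(D|H);
% the prior P(H) is where all the previous data comes in.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```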
On the hypothesis-confirming rather than hypothesis-generating end of things, a second problem is that nonspecialists (not just pseudoscientists) are often unaware of just how easy it is for error to creep into research. Even the most controlled hard-science experiments (which your average archaeological sites are not) have a considerable possibility for error, or more precisely, a lack of control for all variables. And there is also the unfortunate possibility of fraud. That is why solid results, peer review (which doesn't really happen properly until after publication), and reproducibility are so important, as is the estimate of prior probability.
So when archaeologists look at these very iffy isolated bits of evidence presented by pseudo- or alternative-archaeology aficionados, their thought processes are a bit different--it comes down to some choices about probabilities ("What is higher probability--that everything we know is wrong, or that something is wrong with this one site?"; "Do we dismiss the anomaly, or take a wait-and-see position?"; "Does lousy evidence have additive properties?" etc.). This probabilistic "data-herding" is something that alternative types can find frustrating and opaque, which I think is a big reason for rantings about various archaeological and other scientific conspiracy theories.
To take a now non-controversial example, one can see this process with Piltdown Man. Piltdown became more and more anomalous as other data accumulated, and there was an increasing feeling among scientists that something was wrong with the Piltdown data. It started dropping out of the literature before it was conclusively tested; the probabilities were stacking against it. But I am sure pseudoarchaeologists would have looked at Piltdown as an anomaly that scientists were ignoring (and no doubt would have invoked various conspiracies to explain this).
If you are interested, this probabilistic notion of research is formalized in the philosophy of science as Bayesianism (after the statistical approach). It's a rough literature, but it got quite a bit of play recently with some psi research that yielded a positive effect. Many of the same issues come up, like how many resources to throw at hypotheses with vanishingly small prior probabilities.
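As a rough illustration of that last question (a minimal sketch with made-up numbers, not a claim about any actual study):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.

    prior: P(H) before seeing the new data.
    likelihood_ratio: P(D | H) / P(D | not H) for the new data.
    Returns P(H | D).
    """
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

# Hypothetical numbers: a psi-style hypothesis with a very small prior,
# updated on an experiment whose data are 20x more likely under the hypothesis.
print(posterior(prior=1e-6, likelihood_ratio=20))   # ~2e-05: still tiny
# The same evidence applied to a mundane hypothesis with a modest prior:
print(posterior(prior=0.3, likelihood_ratio=20))    # ~0.90
```

The point is just that the same experimental evidence barely moves a vanishingly small prior, which is why prior probability bears on where research resources go.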