- Newsgroups: sci.space
- Path: sparky!uunet!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!cis.ohio-state.edu!news.sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!roberts@cmr.ncsl.nist.gov
- From: roberts@cmr.ncsl.nist.gov (John Roberts)
- Subject: Re: Scientific method
- Message-ID: <By2xE4.BnH.1@cs.cmu.edu>
- X-Added: Forwarded by Space Digest
- Sender: news+@cs.cmu.edu
- Organization: National Institute of Standards and Technology, formerly National Bureau of Standards
- Original-Sender: isu@VACATION.VENARI.CS.CMU.EDU
- Distribution: sci
- Date: Sat, 21 Nov 1992 18:43:30 GMT
- Approved: bboard-news_gateway
- Lines: 53
-
-
- -From: henry@zoo.toronto.edu (Henry Spencer)
- -Subject: Re: Scientific method
- -Date: 20 Nov 92 18:38:27 GMT
-
- -In article <By059r.p8.1@cs.cmu.edu> roberts@cmr.ncsl.nist.gov (John Roberts) writes:
- ->... for instance, the Earth-impact model of the formation
- ->of the moon has risen from obscurity to the "most favored model", with
- ->(as far as I know) little or no new input of information - it's based on
- ->mathematical models and old Apollo and Voyager data...
-
- -There is no problem testing a new theory quite rigorously using old data,
- -if you do it carefully. The trick is simply to get some testable predictions
- -out of the theory before you look (closely) at the data, and then see if it
- -checks. This does get more difficult if the new theory has to be calibrated
- -using the same data, but sometimes it can still be done. It is more
- -satisfying to have prediction precede experiment, because that *guarantees*
- -that the theory was not custom-cooked to match the results,
-
- I agree it can be done, but there are some potential difficulties that
- investigators have to watch out for. Most obvious is "cheating" - theorists
- making use of information that they pretend not to have when constructing
- their models. To prevent that, somebody might get the idea of keeping
- significant portions of the observational data "secret" - to be released
- only after theories have been formulated. This of course would inhibit
- the distribution of information (slowing the development of better theories),
- and bring about the risk of "insider trading" of information ("so *that's*
- why Fred Smith keeps coming up with the best theories"). I'm also not sure
- how modifications of theories should be regarded in this context - if you
- come up with a theory based on partial information, and analysis of more of
- the data shows that some aspect of the theory needs to be modified, does
- that mean that the theory is now considered questionable until it can be
- checked against still more data? If so, there has to be some established
- mechanism of releasing the data in small portions, so that a theory can be
- taken through several iterations of evolution. But how can we be sure that
- the decision on what data to release each time doesn't introduce biases into
- the direction of development of the theory?
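- 
- To make the staged-release idea concrete, here's a toy sketch (purely my
- own illustration, with made-up numbers and a hypothetical curator - not
- any scheme that actually exists). The curator releases the archive in
- tranches; after each release the "theory" (here just a fitted line) is
- refit on everything public so far, and each revision is scored against
- the next, still-secret tranche before that tranche is released:
- 
-     import numpy as np
- 
-     rng = np.random.default_rng(1)
- 
-     # Hypothetical archive of 400 measurements, held by a curator.
-     x = rng.uniform(0, 10, size=400)
-     y = 3.0 * x + 2.0 + rng.normal(scale=1.5, size=400)
-     tranches = [(x[i:i+100], y[i:i+100]) for i in range(0, 400, 100)]
- 
-     public_x, public_y = tranches[0]
-     for secret_x, secret_y in tranches[1:]:
-         # Refit the theory on the public record so far.
-         slope, intercept = np.polyfit(public_x, public_y, deg=1)
-         # Score the revised theory on the still-secret tranche.
-         rms = np.sqrt(np.mean((slope*secret_x + intercept - secret_y)**2))
-         print(f"RMS error on unreleased tranche: {rms:.2f}")
-         # Only now is the tranche released and folded into the record.
-         public_x = np.concatenate([public_x, secret_x])
-         public_y = np.concatenate([public_y, secret_y])
- 
- Even in this toy version, whoever orders the tranches controls what each
- revision gets tested against - which is exactly the bias question above.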
-
- -but having a
- -theory derived from general principles precisely explain measured phenomena
- -in detail is a valid test, and often a fairly good one.
-
- That seems to be the most straightforward approach in this context. I would
- guess that in *most* cases, doing it this way rather than trying to force
- it into the traditional mold would give the best results. (Spinoffs from
- other research may provide additional confirmation anyway.) Of course, as you
- say, the reason for the traditional method is to guard against researchers
- inadvertently convincing themselves that events *had* to take place a certain
- way because that's what was observed.
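- 
- (As a toy illustration of prediction preceding the check - again my own
- sketch, with made-up numbers, not the lunar data: calibrate on part of
- an existing data set, register predictions for the withheld part, and
- only then compare.)
- 
-     import numpy as np
- 
-     rng = np.random.default_rng(0)
- 
-     # Synthetic stand-in for "old" archival measurements.
-     x = rng.uniform(0, 10, size=200)
-     y = 3.0 * x + 2.0 + rng.normal(scale=1.5, size=200)
- 
-     # Calibrate the model on half of the archive only.
-     slope, intercept = np.polyfit(x[:100], y[:100], deg=1)
- 
-     # Register predictions for the withheld half *before* examining it.
-     predicted = slope * x[100:] + intercept
- 
-     # Only now compare against the withheld observations.
-     rms = np.sqrt(np.mean((predicted - y[100:])**2))
-     print(f"RMS prediction error on withheld data: {rms:.2f}")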
-
- John Roberts
- roberts@cmr.ncsl.nist.gov