Old 01-24-2011, 01:40 PM   #1 (permalink)
 
roachboy
 
Super Moderator
Location: essex ma
Is there something wrong with the scientific method?

from time to time in philo threads here---typically when the topic is one or another version of that barnes-and-noble favorite religion and why x is stupid to believe/not believe---there are various confessions of faith in Science and Scientific Method as if this pretty basic protocol was somehow a gateway to True Stuff.

this article from the new yorker poses some interesting problems for this sort of faith in scientific method.
i am curious as to what you make of it:

Quote:
Annals of Science
The Truth Wears Off
Is there something wrong with the scientific method?
by Jonah Lehrer December 13, 2010
Many results that are rigorously proved and accepted start shrinking in later studies.


On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties. The drugs, sold under brand names such as Abilify, Seroquel, and Zyprexa, had been tested on schizophrenics in several large clinical trials, all of which had demonstrated a dramatic decrease in the subjects’ psychiatric symptoms. As a result, second-generation antipsychotics had become one of the fastest-growing and most profitable pharmaceutical classes. By 2001, Eli Lilly’s Zyprexa was generating more revenue than Prozac. It remains the company’s top-selling drug.

But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.

Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.

Jonathan Schooler was a young graduate student at the University of Washington in the nineteen-eighties when he discovered a surprising new fact about language and memory. At the time, it was widely believed that the act of describing our memories improved them. But, in a series of clever experiments, Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”

The study turned him into an academic star. Since its initial publication, in 1990, it has been cited more than four hundred times. Before long, Schooler had extended the model to a variety of other tasks, such as remembering the taste of a wine, identifying the best strawberry jam, and solving difficult creative puzzles. In each instance, asking people to put their perceptions into words led to dramatic decreases in performance.

But while Schooler was publishing these results in highly reputable journals, a secret worry gnawed at him: it was proving difficult to replicate his earlier findings. “I’d often still see an effect, but the effect just wouldn’t be as strong,” he told me. “It was as if verbal overshadowing, my big new idea, was getting weaker.” At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”

Schooler tried to put the problem out of his mind; his colleagues assured him that such things happened all the time. Over the next few years, he found new research questions, got married and had kids. But his replication problem kept on getting worse. His first attempt at replicating the 1990 study, in 1995, resulted in an effect that was thirty per cent smaller. The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”

Schooler is now a tenured professor at the University of California at Santa Barbara. He has curly black hair, pale-green eyes, and the relaxed demeanor of someone who lives five minutes away from his favorite beach. When he speaks, he tends to get distracted by his own digressions. He might begin with a point about memory, which reminds him of a favorite William James quote, which inspires a long soliloquy on the importance of introspection. Before long, we’re looking at pictures from Burning Man on his iPhone, which leads us back to the fragile nature of memory.

Although verbal overshadowing remains a widely accepted theory—it’s often invoked in the context of eyewitness testimony, for instance—Schooler is still a little peeved at the cosmos. “I know I should just move on already,” he says. “I really should stop talking about this. But I can’t.” That’s because he is convinced that he has stumbled on a serious problem, one that afflicts many of the most exciting new ideas in psychology.

One of the first demonstrations of this mysterious phenomenon came in the early nineteen-thirties. Joseph Banks Rhine, a psychologist at Duke, had developed an interest in the possibility of extrasensory perception, or E.S.P. Rhine devised an experiment featuring Zener cards, a special deck of twenty-five cards printed with one of five different symbols: a card was drawn from the deck and the subject was asked to guess the symbol. Most of Rhine’s subjects guessed about twenty per cent of the cards correctly, as you’d expect, but an undergraduate named Adam Linzmayer averaged nearly fifty per cent during his initial sessions, and pulled off several uncanny streaks, such as guessing nine cards in a row. The odds of this happening by chance are about one in two million. Linzmayer did it three times.

Rhine documented these stunning results in his notebook and prepared several papers for publication. But then, just as he began to believe in the possibility of extrasensory perception, the student lost his spooky talent. Between 1931 and 1933, Linzmayer guessed at the identity of another several thousand cards, but his success rate was now barely above chance. Rhine was forced to conclude that the student’s “extra-sensory perception ability has gone through a marked decline.” And Linzmayer wasn’t the only subject to experience such a drop-off: in nearly every case in which Rhine and others documented E.S.P. the effect dramatically diminished over time. Rhine called this trend the “decline effect.”

Schooler was fascinated by Rhine’s experimental struggles. Here was a scientist who had repeatedly documented the decline of his data; he seemed to have a talent for finding results that fell apart. In 2004, Schooler embarked on an ironic imitation of Rhine’s research: he tried to replicate this failure to replicate. In homage to Rhine’s interests, he decided to test for a parapsychological phenomenon known as precognition. The experiment itself was straightforward: he flashed a set of images to a subject and asked him or her to identify each one. Most of the time, the response was negative—the images were displayed too quickly to register. Then Schooler randomly selected half of the images to be shown again. What he wanted to know was whether the images that got a second showing were more likely to have been identified the first time around. Could subsequent exposure have somehow influenced the initial results? Could the effect become the cause?

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”

The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time! Hell, it’s happened to me multiple times.” And this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”

In 1991, the Danish zoologist Anders Møller, at Uppsala University, in Sweden, made a remarkable discovery about sex, barn swallows, and symmetry. It had long been known that the asymmetrical appearance of a creature was directly linked to the amount of mutation in its genome, so that more mutations led to more “fluctuating asymmetry.” (An easy way to measure asymmetry in humans is to compare the length of the fingers on each hand.) What Møller discovered is that female barn swallows were far more likely to mate with male birds that had long, symmetrical feathers. This suggested that the picky females were using symmetry as a proxy for the quality of male genes. Møller’s paper, which was published in Nature, set off a frenzy of research. Here was an easily measured, widely applicable indicator of genetic quality, and females could be shown to gravitate toward it. Aesthetics was really about genetics.

In the three years following, there were ten independent tests of the role of fluctuating asymmetry in sexual selection, and nine of them found a relationship between symmetry and male reproductive success. It didn’t matter if scientists were looking at the hairs on fruit flies or replicating the swallow studies—females seemed to prefer males with mirrored halves. Before long, the theory was applied to humans. Researchers found, for instance, that women preferred the smell of symmetrical men, but only during the fertile phase of the menstrual cycle. Other studies claimed that females had more orgasms when their partners were symmetrical, while a paper by anthropologists at Rutgers analyzed forty Jamaican dance routines and discovered that symmetrical men were consistently rated as better dancers.

Then the theory started to fall apart. In 1994, there were fourteen published tests of symmetry and sexual selection, and only eight found a correlation. In 1995, there were eight papers on the subject, and only four got a positive result. By 1998, when there were twelve additional investigations of fluctuating asymmetry, only a third of them confirmed the theory. Worse still, even the studies that yielded some positive result showed a steadily declining effect size. Between 1992 and 1997, the average effect size shrank by eighty per cent.

And it’s not just fluctuating asymmetry. In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”

What happened? Leigh Simmons, a biologist at the University of Western Australia, suggested one explanation when he told me about his initial enthusiasm for the theory: “I was really excited by fluctuating asymmetry. The early studies made the effect look very robust.” He decided to conduct a few experiments of his own, investigating symmetry in male horned beetles. “Unfortunately, I couldn’t find the effect,” he said. “But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.” For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.

While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts. Richard Palmer, a biologist at the University of Alberta, who has studied the problems surrounding fluctuating asymmetry, suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.

The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”

Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”

One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.

John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.” In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.

The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”

According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”

The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”

That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.) “I’ve learned the hard way to be exceedingly careful,” Schooler says. “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”

In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”

Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.

In the late nineteen-nineties, John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.

The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.

This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.

Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
The decline effect and the scientific method : The New Yorker

there seems to me a fair range of questions that come up here, some of which are more familiar to me than others (as someone more conversant with the philosophy of science than with actually doing experimental work)....

maybe it'd be interesting to simply ask what you make of this piece rather than try to direct discussion toward particular points at first. if this is of interest, attention can be directed toward different aspects of it as the thread unfolds.

do these problems (e.g. the non-repeatability of experimental results/routine non-testing of research from others, publication bias, preferences amongst practicing researchers for results that confirm preconceptions, epistemological loops) surprise you?

what do you think they imply about ordinary science in various fields?
do you see all sciences as potentially impacted in these ways?
__________________
a gramophone its corrugated trumpet silver handle
spinning dog. such faithfulness it hear

it make you sick.

-kamau brathwaite

Last edited by roachboy; 01-24-2011 at 01:52 PM..
Old 01-24-2011, 02:05 PM   #2 (permalink)
Banned
 
Zeraph
 
Location: The Cosmos
That is one big wall of text. Could you summarize the high points? I have memory problems and often forget the stuff I just read if I had to read that much.

Also, as usual, we answer our own questions when we start a thread.
Old 01-24-2011, 02:21 PM   #3 (permalink)
 
roachboy
 
Super Moderator
Location: essex ma
it's worth plowing through the article.
but the font is small in the pasted version---click on the link for an easier page to read.
i considered taking down the pasted text after i saw it, but decided that i wouldn't because maybe folk don't want to click away to get to it.
an aesthetic choice.
__________________
a gramophone its corrugated trumpet silver handle
spinning dog. such faithfulness it hear

it make you sick.

-kamau brathwaite
Old 01-24-2011, 02:49 PM   #4 (permalink)
Lover - Protector - Teacher
 
Jinn
 
Location: Seattle, WA
I plowed (aptly) through this stinking pile of garbage at great pain. This article boils down to the common plea *OF* pseudoscience: that the scientific method itself, that empiricism or materialism, are somehow inherently dirty, that they just don't know how to measure the natural world and all of its dimensions and chakras and kis and "molecule vibrations." It reaches for some sort of lukewarm epistemology where we really can't Know Anything and evokes a false equivalence in which peer-reviewed science has the same chance as religion or homeopathy of getting things right.

Now I'll grant the philosophical argument about the limitations of materialism, but if we provisionally conclude that things CAN be known about the material world and that they are objectively true or false relative to the frame of the (shared) observers, then science and the scientific method are the best human invention and the absolute best way of acquiring useful knowledge.

One raving lunatic (or even thousands) questioning the efficacy of the paradigm itself because a few drugs aren't as effective as they were thought to be is foolish and ridiculous, and I have a hard time even reading a portion of this article after the first reading. Science fails, results are falsified, results are omitted, good results are hyped and bad results ignored, etc., etc.; it all happens because we're all human. But science has the best checks and balances, and it all gets ironed out over time. If it weren't for science we'd still be geocentric cavemen. I'm sure ten years from now our understanding of the supposed failure of these drugs will be so far advanced as to make our current understanding laughable. The same can't be said for the archaic bullshit of snake-oil peddlers and pontificating clergy.
__________________
"I'm typing on a computer of science, which is being sent by science wires to a little science server where you can access it. I'm not typing on a computer of philosophy or religion or whatever other thing you think can be used to understand the universe because they're a poor substitute in the role of understanding the universe which exists independent from ourselves." - Willravel

Last edited by Jinn; 01-24-2011 at 02:52 PM..
Old 01-24-2011, 05:10 PM   #5 (permalink)
Junkie
 
filtherton
 
Location: In the land of ice and snow.
I think the article is spot on. Good read, roach.

Jinn, the article doesn't speak of the inadequacy of the scientific method; it speaks of the inherent limitations of currently employed analytical methods, along with problems that stem entirely from the fact that scientists are fallible.

I hope to have time to write more later.
Old 01-24-2011, 07:31 PM   #6 (permalink)
 
roachboy
 
Super Moderator
Location: essex ma
there is a sleight of hand in the writing, particularly in the subtitle---scientific method refers both to the ideal-typical form of experiment as a basis for scientific investigation and to the methods that the sciences actually employ in their normal operations, their everyday practices. by the end of the piece this sleight of hand is explained, and the point is pretty clear: there is a contrast at the least--a contradiction at worst---between the ideal-typical notion of the scientific method and the methods with which normal science operates.

so the article isn't a simple-minded science-is-hooey thing...it's doing something else that's a lot more interesting. i mean, conceptually it's not surprising to read that there's a problem with researchers finding what they're looking for and tending to discount dissonant information---from the viewpoint of a history or philosophy of science that's in any way informed by thomas kuhn (or anyone who's written since in that historically oriented mode, using the language game of paradigm/normal science or a variant) this is not surprising. what *is* surprising are the specific cases that the article talks about, and the *ways* in which questions of epistemological loops arise within those cases, because they come framed in the approaches of practitioners within various areas of the sciences and not from historians or philosophers of science.

the opening gambit of mine about those quaint professions of Faith in Science that seem to proliferate in threads about religion---i simply find those professions naive, unaware of even the most rudimentary problems that attend the philosophy of language--problems which also generate epistemological problems---and which are in no way addressed by the experimental method, because by the structure of experiment research is predisposed to find what it is looking for----research is basically the generating and tracking of variations within a general frame that is set in advance.

none of this goes in the direction of "therefore creation science"....rather the opposite. there's abundant research out there that argues that the separation of, say, philosophy in the more language oriented mode from science---which *is* a form of philosophizing about the world---operates to the detriment of both. no-one benefits from naivete.

i'm interested in what other folk think, you included, jinn, even though i think you got a little thrown by the way i framed the article...
__________________
a gramophone its corrugated trumpet silver handle
spinning dog. such faithfulness it hear

it make you sick.

-kamau brathwaite
Old 01-24-2011, 09:43 PM   #7 (permalink)
Junkie
 
filtherton
 
Location: In the land of ice and snow.
A few thoughts. Maybe tangential.

One thing that is commonly glossed over in discussions about the nature of scientific certainty is the role that uncertainty plays. Uncertainty is everywhere. Typically, study designs seek to minimize the effects of likely sources of uncertainty, and then, after the data are collected, statistical analysis is used to gauge the extent of residual uncertainty and compensate for it.

This introduces at least two sets of problems. With respect to study design, investigators can only design to mitigate known sources of uncertainty and bias. There is no shortage of examples of clinical studies which seemed well designed, but ultimately failed because bias wasn't accounted for in the design of the study (at least my professors never seemed to run out of them) resulting in crippling levels of uncertainty. Obviously, if attempts to replicate flawed research themselves contain the same flaws, the newer results can either agree or disagree with the original results and still not accurately describe reality.

The second set of problems comes from the nature of statistical significance. Long story short, research generates data which is then analyzed using appropriate (hopefully) statistical methods. Each method has its own set of assumptions about the nature of the underlying data. Also, methods differ with respect to how accurate the results they generate are when their underlying assumptions are violated.

The basic strategy is this: gather data, look at it, determine the appropriate statistical test, use that test to generate a test statistic (basically a number generated from the data via a test-specific method), and compare this test statistic to what you'd expect it to be if your assumptions about the nature of the data are correct. If your test statistic is outside the range it should fall into 95% of the time, you say "Our results are significant (i.e., outside the 95% range) and they are ___________"
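
To make that concrete, here's a minimal sketch with invented numbers, using a plain one-sample t-test as the "appropriate test" (the cutoff is the textbook two-sided 5% value for 7 degrees of freedom; none of this is from a real study):

Code:
# Sketch of the strategy above: made-up data, one-sample t-test.
# Null hypothesis: the true mean of the measured quantity is 5.0.
import math

data = [5.1, 4.9, 6.2, 5.8, 5.5, 4.7, 6.0, 5.3]   # hypothetical measurements
mu0 = 5.0

n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)   # sample variance
t_stat = (mean - mu0) / math.sqrt(var / n)           # the test statistic

# Range the statistic should fall in 95% of the time if the null is true:
# for n - 1 = 7 degrees of freedom the two-sided cutoff is about 2.365
# (scipy.stats.t.ppf(0.975, 7) gives the same number).
t_crit = 2.365

print(f"t = {t_stat:.2f}, 95% cutoff = +/-{t_crit}")
print("significant at the 5% level" if abs(t_stat) > t_crit else "not significant")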

95% is arbitrary. Each time one of these tests is done, it's like someone is flipping a lopsided coin that comes up heads 95% of the time. Assuming the correct test is performed on each set of data, one should expect statistical significance to be erroneously found at most about 50 times for every 1,000 tests of effects that aren't actually there. I say at most, because many papers report a greater than 95% confidence level, say >99% or >99.9%. Even so, the sheer number of published results ensures that there will be many that find effects that aren't true.

The waters are further muddied by the fact that it is really easy to manipulate results using statistics. Your first analysis doesn't give you significant results? Try reformulating the age ranges in your analysis. Try limiting your analysis to a subset of your subjects. Repeat your analysis enough times and you're likely to stumble onto statistically significant results by sheer chance, never mind that they'll be illusory.
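
A quick simulation of that last point (purely invented data; the "subgroups" are just random labels): a single pre-specified test on a treatment that does nothing comes up "significant" about 5% of the time, while keeping the best p-value from a handful of subgroup re-analyses inflates that considerably.

Code:
# Simulated "studies" in which the treatment has no effect at all, analyzed
# once as pre-specified and then re-analyzed across invented subgroups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_arm, alpha = 1000, 100, 0.05
hits_prespecified = 0
hits_best_subgroup = 0

for _ in range(n_studies):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # no true effect
    control = rng.normal(0.0, 1.0, n_per_arm)
    sex_t, sex_c = rng.integers(0, 2, n_per_arm), rng.integers(0, 2, n_per_arm)
    age_t, age_c = rng.integers(0, 3, n_per_arm), rng.integers(0, 3, n_per_arm)

    # Honest analysis: one pre-specified test.
    if ttest_ind(treated, control).pvalue < alpha:
        hits_prespecified += 1

    # "Significance chasing": also test every subgroup and keep the best p-value.
    pvals = [ttest_ind(treated, control).pvalue]
    for s in (0, 1):
        pvals.append(ttest_ind(treated[sex_t == s], control[sex_c == s]).pvalue)
    for a in (0, 1, 2):
        pvals.append(ttest_ind(treated[age_t == a], control[age_c == a]).pvalue)
    if min(pvals) < alpha:
        hits_best_subgroup += 1

print("false-positive rate, pre-specified test:", hits_prespecified / n_studies)
print("false-positive rate, best of subgroups :", hits_best_subgroup / n_studies)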

Further problems come from the fact that most consumers of scientific literature don't get beyond the press release or the abstract because they either don't have the time, don't want to pay to get past the pay wall or they lack the expertise to understand the paper.

None of this is to say that metaphysical alternatives are more compelling, or provide a more evidence-based foundation for understanding the world. However, I agree with roach that in certain types of discussions, the level of certainty generated by science is often given a level of reverence that is wholly unjustifiable in light of the amount of uncertainty inherent in actual research.
Old 03-14-2011, 07:09 PM   #8 (permalink)
I change
 
ARTelevision
 
Location: USA
The scientific method has given us the power to destroy ourselves at a very alarming rate. As an explanation for what is called "the real world" it fails to come to terms with the most real phenomenon we can experience - our conscious awareness (the "qualia" of our experience, or the hard problem of neurological research). The scientific method, reductionistic and materialistic to an absurd degree, turns out to be as phantasmagoric as the electrons and quarks it proffers...

Science has done more damage than all other simplistic systems for working with the world that humans have yet devised. The problem, of course, is with us. We seem to require dangerously simple explanations of the universe and our experience within it. This causes terrible problems.
__________________
create evolution
Old 03-14-2011, 07:40 PM   #9 (permalink)
... a sort of licensed troubleshooter.
 
Willravel
 
Is there something wrong with the scientific method? Nope. The scientific method is our compass, something which exists outside of ourselves which we can rely on to always be the best tool available to point us in the direction of reality. It's not 100% perfect, in part because it's wielded by imperfect people, but so far nothing has been found that even holds a candle to its success. I'm typing on a computer of science, which is being sent by science wires to a little science server where you can access it. I'm not typing on a computer of philosophy or religion or whatever other thing you think can be used to understand the universe because they're a poor substitute in the role of understanding the universe which exists independent from ourselves.
Old 03-14-2011, 07:49 PM   #10 (permalink)
I change
 
ARTelevision
 
Location: USA
Yes. The material world has some very entertaining aspects to it. And this method has allowed some small percentage of the world's population to enjoy them for brief moments between terribly stressful and unrewarding lives lived unconsciously, for the most part. I'm sure you have noticed these things. I use these machines, too. But I see no need to bow before the plumbing or the wires that have been strung up between the dead bodies of the industrial and post-industrial revolutions and which continue serving up daily misery. Religious reverence of this sort is not the kind of approach we need right now.
__________________
create evolution
Old 03-14-2011, 07:51 PM   #11 (permalink)
warrior bodhisattva
 
Baraka_Guru
 
Super Moderator
Location: East-central Canada
Don't shoot the messenger. Take aim at the masters.
__________________
Knowing that death is certain and that the time of death is uncertain, what's the most important thing?
—Bhikkhuni Pema Chödrön

Humankind cannot bear very much reality.
—From "Burnt Norton," Four Quartets (1936), T. S. Eliot
Old 03-14-2011, 08:04 PM   #12 (permalink)
Sober
 
GreyWolf
 
Location: Eastern Canada
The article fails to adequately differentiate between the scientific method... hypothesize, test, accept/reject... and the peer review of scientific studies. There is nothing wrong with the scientific method, or the use of 95/5 as an objective measure for rejecting a null hypothesis (although Bayesian statistics generally is preferable for hypothesis testing in the medical area).
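
A toy contrast between the two, with invented numbers (14 of 20 patients improving against an assumed 50% background rate, and a flat prior on the Bayesian side; a sketch, not a claim about how real trials are analyzed):

Code:
# Invented trial: 14 of 20 patients improve; assume 50% would improve anyway.
from math import comb
from scipy.stats import beta

n, k, p0 = 20, 14, 0.5

# Frequentist: one-sided exact p-value, P(14 or more successes | true rate 50%).
p_value = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Bayesian: flat Beta(1, 1) prior, so the posterior is Beta(k + 1, n - k + 1);
# report the posterior probability that the true rate exceeds 50%.
posterior = beta.sf(p0, k + 1, n - k + 1)

print(f"one-sided p-value    = {p_value:.3f}  (just misses the 95/5 cutoff)")
print(f"P(rate > 0.5 | data) = {posterior:.3f}  (posterior probability)")

The same data read quite differently depending on which summary you ask for, which is roughly why the choice of framework matters for borderline medical results.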

A big problem today is the media and societal constraints on the acceptability of scientific research. This leads to the above-mentioned issue of selective reporting... a major issue with peer review. For professors at universities, there is generally still the "publish or perish" imperative. This means, at the researcher level, there is a strong bias towards finding publishable results. This bias is supposed to be eliminated through the peer review process. Unfortunately, even in the world of science journals, social media control rears its ugly head.

Two areas in particular highlight the impact of media control on scientific research. First would be the area of interpersonal violence. Everyone knows that men are more violent, abuse their spouses/partners more often, and are more likely to harm their children. Unfortunately, this is rarely borne out by the statistics. Study after study shows women are more violent, hit first more often, and are more likely to use a weapon. The safest possible situation for children in terms of physical/sexual abuse is actually the 2 gay men combination. The least safe is actually a single mother, but only because her non-parental male partner is the most likely to abuse the children.

The second area is climate change. The scandalous behaviour vis-à-vis data manipulation at the CRU at the University of East Anglia is only symptomatic of the fact that any data or study NOT conforming to the now accepted concept of anthropogenic global warming (AGW) simply cannot get published, in peer reviewed journals or the mainstream press.

Even if it does get published, it's like the front-page headline announcing you're a rapist/mass murderer followed by the page 14 retraction the next day. The accepted view is page 1. Anything else is page 14 or under oddities in the news.
__________________
The secret to great marksmanship is deciding what the target was AFTER you've shot.
Old 03-14-2011, 08:32 PM   #13 (permalink)
immoral minority
 
ASU2003
 
Location: Back in Ohio
I think there is a problem. We mix 'real' science facts with theories too often. And we believe things to be true before they have been repeated by multiple scientists or have occurred multiple times in nature (gravity is pretty consistent, the molecular makeup of water isn't changing overnight).

I have a special hatred for the medical 'research' field. They are trying to do good, but they are also trying to make lots of money and become famous. Or they are trying to push their own agenda and set up their experiments to prove themselves right.

There needs to be a serious review of what constitutes medical and health-related science. They need to agree that every person is a little bit different, and what might work for one person won't work for another. Or there are also multiple variables in any test, and they need to have a matrix-style result...

    | A | B | C
  A | _ | _ | _
  B | _ | _ | _
  C | _ | _ | _

And science isn't always right. It can evolve, yet it seems like once some scientists announce a big claim*, thousands of us go to work trying to discredit them.

*(sometimes prematurely, in order to be First, because nobody remembers the second person to figure something out)

And the field of science is too disorganized. How much time should a scientist put into reading all the journals from all over the world, attending conferences, and talking to other researchers in the field who are working on the same thing? It sounds like a full-time job, yet I'm not sure that position exists...

And then there are the money and funding issues. Results matter, not how many papers you get published proving that your 'theory' didn't pan out and you didn't find anything worthwhile.

Yet there is also the theory of climate change, which, even though it is right and the evidence is there as predicted, people will still not believe now because it would mean that the 'left' was right. But that isn't 'science'. Science is saying that if there are two enclosures, one filled with normal clean air and one filled with more CO2, and they are placed in the Sun for a day, the CO2 one will be hotter...

Statistics isn't science, unless it is 100%.

Last edited by ASU2003; 03-14-2011 at 08:39 PM..
Old 03-14-2011, 09:11 PM   #14 (permalink)
Lover - Protector - Teacher
 
Jinn
 
Location: Seattle, WA
I stole part of your post for my signature, will..
__________________
"I'm typing on a computer of science, which is being sent by science wires to a little science server where you can access it. I'm not typing on a computer of philosophy or religion or whatever other thing you think can be used to understand the universe because they're a poor substitute in the role of understanding the universe which exists independent from ourselves." - Willravel
Old 03-14-2011, 09:42 PM   #15 (permalink)
Crazy, indeed
dippin
Location: the ether
Quote:
Originally Posted by GreyWolf
The second area is climate change. The scandalous behaviour vis-à-vis data manipulation at the CRU at the University of East Anglia is only symptomatic of the fact that any data or study NOT conforming to the now accepted concept of anthropogenic global warming (AGW) simply cannot get published, in peer reviewed journals or the mainstream press.

Even if it does get published, it's like the front-page headline announcing you're a rapist/mass murderer followed by the page 14 retraction the next day. The accepted view is page 1. Anything else is page 14 or under oddities in the news.
Bullshit.
There have been 5 inquiries into the matter of the data at East Anglia. 5. Penn State. The House of Commons. The Royal Society. East Anglia itself. And last, but not least, the Inspector General of the Department of Commerce of the USA (a Bush appointee), at the request of Senator James Inhofe (a Republican). Not one of them found any evidence of academic wrongdoing. I don't want to derail the thread, but bullshit needs to stop once it has been so thoroughly debunked. So some guys were reluctant to release information they had, and an email mentioned using different data sources to create a graph where no single data source could cover the whole period. This was all that was found after looking through thousands of email messages. To read this and then read that last line is ironic, to say the least, as the East Anglia case is a perfect example of the sensational headline followed by the page 14 retraction.

And I've yet to see the great scientific research that disproves global warming but can't get published because of the vast existing conspiracy against it.

Last edited by dippin; 03-14-2011 at 09:49 PM..
Old 03-14-2011, 09:42 PM   #16 (permalink)
... a sort of licensed troubleshooter.
 
Willravel
 
Quote:
Originally Posted by Jinn
I stole part of your post for my signature, will..
I'm honored!
Old 03-15-2011, 02:27 AM   #17 (permalink)
Sober
 
GreyWolf
 
Location: Eastern Canada
Dippin:
As always, people (and scientists are people) see what they want to see. So do you. And that is not a problem with the scientific method, but with the peer review mechanism.

I have never denied global warming. Most evidence suggests it is happening. I have yet to see any conclusive evidence of AGW, because the data is inconclusive and extremely difficult to come by because of the extremely short time-span over which we have reliable data in geologic terms. As for East Anglia, I read the leaked documents myself. I stand by my own conclusions as a well-trained and reasonable person. Five inquiries coming to the wrong conclusion does not make them right, only five.

As for the issues about getting published, for heaven's sake READ the CRU e-mails and their comments on blackballing journals that would consider publishing dissenting views/studies. These were serious comments by senior researchers!! Or just ask Tom Tripp (one of the original lead authors on the IPCC, albeit a metallurgist, believe it or not) how easy it was to get his opinions/concerns published. His letters of dissent were regularly rejected by journals and magazines. Letters of opinion or criticism of the process! Or Richard Tol and the 1000+ scientists who have dissented from the IPCC AGW conclusion. Their extensive critique of the IPCC report has been widely ignored by the media because it flies in the face of the non-existent consensus on AGW.
__________________
The secret to great marksmanship is deciding what the target was AFTER you've shot.
Old 03-15-2011, 05:47 AM   #18 (permalink)
Crazy, indeed
dippin
Location: the ether
Quote:
Originally Posted by GreyWolf
Dippin:
As always, people (and scientists are people) see what they want to see. So do you. And that is not a problem with the scientific method, but with the peer review mechanism.

I have never denied global warming. Most evidence suggests it is happening. I have yet to see any conclusive evidence of AGW, because the data is inconclusive and extremely difficult to come by because of the extremely short time-span over which we have reliable data in geologic terms. As for East Anglia, I read the leaked documents myself. I stand by my own conclusions as a well-trained and reasonable person. Five inquiries coming to the wrong conclusion does not make them right, only five.

As for the issues about getting published, for heaven's sake READ the CRU e-mails and their comments on blackballing journals that would consider publishing dissenting views/studies. These were serious comments by senior researchers!! Or just ask Tom Tripp (one of the original lead authors on the IPCC, albeit a metallurgist, believe it or not) how easy it was to get his opinions/concerns published. His letters of dissent were regularly rejected by journals and magazines. Letters of opinion or criticism of the process! Or Richard Tol and the 1000+ scientists who have dissented from the IPCC AGW conclusion. Their extensive critique of the IPCC report has been widely ignored by the media because it flies in the face of the non-existent consensus on AGW.
I have read the emails. In context. And so have the 5 inquiries above.

The idea that the media somehow tried to cover up the whole "climategate" thing is absurd. A good chunk of the media does nothing but exaggerate anything that even remotely questions climate science. There is a reason "climategate" was covered so extensively while almost no one knows the results of the inquiries.

The specific exchanges about "black-balling" journals refer to a case where a publication decided to publish previously rejected papers that were highly critical of Michael Mann's research without informing him or allowing him to respond, as is normally done when a paper to be published specifically criticizes past published work. The fact that Mann wasn't allowed to respond is more serious than his talking with others about the quality of that specific journal and its editorial practices.

Similarly, it is telling that the people you mention are a metallurgist (who participated in the IPCC only in the section estimating how much greenhouse gas is produced in magnesium production, and as such is as qualified as any other non-climatologist to comment on the issue), an economist (Tol, who by the way thinks AGW is real and only overstated in its economic impacts, in research where he openly assumed away part of the costs - AND who was invited for the next report but complains that expenses aren't paid, using that as his evidence of a conspiracy against him), and so on.
dippin is offline  
Old 03-15-2011, 07:24 AM   #19 (permalink)
Junkie
 
filtherton's Avatar
 
Location: In the land of ice and snow.
Quote:
Originally Posted by GreyWolf View Post
The article fails to adequately differentiate between the scientific method... hypothesize, test, accept/reject... and the peer review of scientific studies. There is nothing wrong with the scientific method, or the use of 95/5 as an objective measure for rejecting a null hypothesis (although Bayesian statistics generally is preferable for hypothesis testing in the medical area).
There are plenty of things wrong with the 95/5 hypothesis-testing paradigm. One of the primary problems is that it can shift the focus from clinical significance to statistical significance. So you'll have papers that get press for finding a statistically significant correlation between an exposure and a disease, with little attention paid to the magnitude of the correlation. This is why many journals require more than boilerplate statements about accepting or rejecting a hypothesis test - they want p-values and/or confidence intervals. Any discerning reader of medical research should be fairly skeptical of any study that doesn't report an effect size along with a p-value or confidence interval. Another problem with the 95/5 approach is that it doesn't take multiple tests into account. These problems are widely known and accepted, and various workarounds exist. Finally, 95/5 is completely arbitrary, and if you think about it, being wrong one time in twenty isn't really that discerning a criterion in light of the sheer number of papers published every year.
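To make the first two points concrete, here is a rough sketch in Python (the sample sizes and effect sizes are invented purely for illustration): with a large enough sample, a clinically trivial effect comes out "statistically significant," and twenty comparisons run on pure noise will, on average, hand back about one false positive at the 95/5 level.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) Statistical vs. clinical significance: a tiny shift of 0.02 standard
#    deviations becomes "significant" once the sample is large enough.
n = 100_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)  # clinically trivial difference
print("tiny effect, huge n, p =", stats.ttest_ind(treated, control).pvalue)  # usually < 0.05

# 2) Multiple testing: twenty comparisons on pure noise at alpha = 0.05
#    yield roughly one "significant" result by chance alone.
false_positives = sum(
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue < 0.05
    for _ in range(20)
)
print("false positives out of 20 null comparisons:", false_positives)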

With regard to Bayesian analysis, there is no small amount of controversy between Frequentists and Bayesians about which method is "better". That being said, medical research - i.e., the randomized, controlled clinical trial - is one of the few places where the underlying assumptions of Frequentist methods actually hold reasonably well, so Bayesian statistics aren't necessarily preferable for hypothesis testing in medical research (at least, I have yet to see much Bayesian analysis in any of the literature searches I've done).

Bayesian methods are useful for epidemiologic research, like observational or case-control studies, where one can't really make the assumptions required by Frequentist methods. That Frequentist methods are typically used there anyway is possibly one cause of the problems mentioned in the OP.
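For anyone wondering what the Bayesian flavor looks like in practice, here is a toy conjugate-prior sketch in Python (the trial counts are made up): instead of a single accept/reject verdict at the 95/5 cutoff, you end up with a full posterior distribution over the quantity of interest.

Code:
from scipy import stats

# Hypothetical trial: 18 responders out of 40 patients.
responders, n = 18, 40

# Beta(1, 1) is a flat prior on the response rate; Beta-Binomial conjugacy
# makes the posterior Beta(1 + responders, 1 + non-responders).
posterior = stats.beta(1 + responders, 1 + (n - responders))

print("posterior mean response rate:", posterior.mean())
print("95% credible interval:", posterior.ppf([0.025, 0.975]))
print("P(response rate > 0.5):", 1 - posterior.cdf(0.5))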
filtherton is offline  
Old 03-18-2011, 09:51 PM   #20 (permalink)
still, wondering.
 
Ourcrazymodern?'s Avatar
 
Location: South Minneapolis, somewhere near the gorgeous gorge
...so the only thing wrong with the scientific method is what we make of it?


Since we made it up, that follows. Analyzing results is tricky.

I think the methods we use are progressing.
__________________
BE JUST AND FEAR NOT
Ourcrazymodern? is offline  
Old 03-19-2011, 02:23 PM   #21 (permalink)
More Than You Expect
 
Manic_Skafe's Avatar
 
Location: Queens
I really don't have too much to add other than that I found Lehrer's Proust Was a Neuroscientist to be exactly what I needed after having spent far too long turning my brain into soup with atheist, psych, and philosophy books. I suggest that anyone even mildly intrigued by the stuff brought up in the OP get their hands on a copy.

Science is a useful enough means of making sense of what can sensibly be reduced to scientific terms, but sadly much of life (or reality as also entailing what existence feels like, qualia, etc.) cannot be so easily reduced. I've come to find it rather ironic how much faith is required to see science as a means to the end of Ultimate Understanding of our Purpose when, so far in the history of our species, it hasn't offered a single drop of relief from this or any of those other major problems.
__________________
"Porn is a zoo of exotic animals that becomes boring upon ownership." -Nersesian
Manic_Skafe is offline  
Old 03-19-2011, 03:22 PM   #22 (permalink)
still, wondering.
 
Ourcrazymodern?'s Avatar
 
Location: South Minneapolis, somewhere near the gorgeous gorge
Happily, "much of life...cannot be so easily reduced"! What has been figured out has provided relief from quite a few major problems and created a few others. On the whole, I think it's been a net gain, thanks in no small part to the scientific method.
__________________
BE JUST AND FEAR NOT
Ourcrazymodern? is offline  
Old 03-19-2011, 04:27 PM   #23 (permalink)
More Than You Expect
 
Manic_Skafe's Avatar
 
Location: Queens
Yes, I do agree with you, OCM. They're all just different tools for different jobs, not nearly as opposed as some believe. All just as useless for picking apart the absurd.
__________________
"Porn is a zoo of exotic animals that becomes boring upon ownership." -Nersesian

Last edited by Manic_Skafe; 03-19-2011 at 04:32 PM..
Manic_Skafe is offline  
Old 03-22-2011, 06:01 AM   #24 (permalink)
Upright
 
kowalskil's Avatar
 
Location: Fort Lee, New Jersey, USA
Quote:
Originally Posted by Willravel View Post
Is there something wrong with the scientific method? Nope. The scientific method is our compass, something which exists outside of ourselves which we can rely on to always be the best tool available to point us in the direction of reality. It's not 100% perfect, in part because it's wielded by imperfect people, but so far nothing has been found that even holds a candle to its success. I'm typing on a computer of science, which is being sent by science wires to a little science server where you can access it. I'm not typing on a computer of philosophy or religion or whatever other thing you think can be used to understand the universe, because they're a poor substitute in the role of understanding the universe which exists independent from ourselves.
Yes, the scientific method is a protocol for validating claims about our material world. It is not designed to deal with claims about our spiritual world. And vice versa.

Ludwik Kowalski
Professor Emeritus
Montclair State University
__________________
Ludwik Kowalski, author of a free ON-LINE book entitled “Diary of a Former Communist: Thoughts, Feelings, Reality.”
It is a testimony based on a diary kept between 1946 and 2004 (in the USSR, Poland, France and the USA).
kowalskil is offline  
Old 03-22-2011, 06:55 AM   #25 (permalink)
I change
 
ARTelevision's Avatar
 
Location: USA
"Don't shoot the messenger. Take aim at the masters."


True enough, baraka_guru. A large part of what I do involves science. I must admit to a deep anti-religious bias - mostly against the kind of blind faith that supports religion. My issue with science, in general, is that it has become the new blind-faith religion of our time.

There are serious flaws with the scientific method, to be sure: ontological/epistemological flaws, such as those hiding in the OP regarding the repeatability of conditions, and others - especially what is called "peer review," where a religion-of-science congregation heavily invested in the status quo reflexively throws stones at unorthodox approaches.

You know, the first 40 millennia of human technological progress, the great migrations, civilizations, the pyramids, the Gothic cathedrals, and so much more, were not products of the scientific method. Without the acknowledgement of mind, as well as the inclusion of many-valued logic, indeterminacy, relativity, and non-repeatability into the very fabric of the current method, we will surely be brought headlong into the Matrix by the current crop of mind-blind scientists.
__________________
create evolution
ARTelevision is offline  
Old 03-22-2011, 10:55 AM   #26 (permalink)
Lennonite Priest
 
pan6467's Avatar
 
Location: Mansfield, Ohio USA
To quote Steve Martin from his album "A Wild and Crazy Guy":

Quote:
Science is pure empiricism, and by virtue of its method it completely excludes metaphysics
__________________
I just love people who use the excuse "I use/do this because I LOVE the feeling/joy/happiness it brings me" and expect you to be ok with that as you watch them destroy their life blindly following. My response is, "I like to put forks in an electrical socket, just LOVE that feeling, can't ever get enough of it, so will you let me put this copper fork in that electric socket?"
pan6467 is offline  
Old 03-22-2011, 11:54 AM   #27 (permalink)
... a sort of licensed troubleshooter.
 
Willravel's Avatar
 
That which can explain what is cannot explain what isn't? That's fair.
Willravel is offline  
Old 03-22-2011, 12:12 PM   #28 (permalink)
I change
 
ARTelevision's Avatar
 
Location: USA
What is, is not in the process of being explained by science. It is being defined by a set of assumptions, which are validated by a methodology predicated upon those assumptions. What is, is not explained.
__________________
create evolution
ARTelevision is offline  
Old 03-23-2011, 04:52 PM   #29 (permalink)
MSD
The sky calls to us ...
 
MSD's Avatar
 
Super Moderator
Location: CT
I've read about a quarter of the way into the article, but I don't have much time and want to make a quick comment before I have to go. In the studies discussed so far, and the difficulty of replicating them, what I see looks like a rediscovery of regression to the mean - a known and acknowledged phenomenon - rather than a wholesale discrediting of the scientific method itself.
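A rough simulation makes the point (all the numbers here are invented): give a large batch of hypothetical drugs the same modest true effect, keep only the ones whose noisy first trial looked impressive, and their average on retest falls back toward the true effect - which reads exactly like an effect "wearing off."

Code:
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.3      # every hypothetical drug has the same modest benefit
noise = 0.5            # measurement error in any single trial
n_drugs = 10_000

first_trial = true_effect + rng.normal(0.0, noise, n_drugs)

# Only the drugs that looked impressive the first time around get attention.
impressive = first_trial > 1.0
second_trial = true_effect + rng.normal(0.0, noise, impressive.sum())

print("mean effect in the impressive first trials:", first_trial[impressive].mean())
print("mean effect when those same drugs are retested:", second_trial.mean())
print("true effect all along:", true_effect)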
MSD is offline  
Old 04-04-2011, 10:40 AM   #30 (permalink)
I change
 
ARTelevision's Avatar
 
Location: USA


This addresses the essential issues, I think.
__________________
create evolution
ARTelevision is offline  
Old 04-04-2011, 11:27 AM   #31 (permalink)
 
ring's Avatar
 
Location: ❤
Indeed. The cure for a Nicolas Cage headache is a large bowl of coconut ice cream.

Thanks, Art. Nice.
ring is offline  
Old 04-29-2011, 06:00 PM   #32 (permalink)
Upright
 
Location: FL
The major problem I see with the scientific method is that it is based on perception - mainly observation and interpretation - and therefore it will be biased, misinterpreted, and ignored by different people with different views.

You can, in fact, get very different and branching propositions from the same observable event.
Orogun01 is offline  
Old 05-10-2011, 07:41 PM   #33 (permalink)
Psycho
 
albania's Avatar
 
I guess I shouldn't really respond to an old thread now, but I feel invested and inclined to repeat (hehe) what people said a few months ago.

There might be a problem with scientists, but not with the method. Socially, it is disconcerting that there can be drastic shifts in consensus in fields that directly impact health care and well-being. That one day a drug is good for you and the next not so much doesn't seem palatable. But to me it's just an expression of our imperfection. Eventually these flaws will be corrected, new ones will pop up, but I think they too will be examined and corrected using the scientific method, ad infinitum.

The truth of the matter is that these types of problems are possible in all scientific fields. I don't think that has anything to do with the scientific method, though. I'm biased, but I do think physics is probably less susceptible to such things simply because the application is much more rigorous: experiments can be constructed that are highly controlled and repeatable.
albania is offline  
Old 05-11-2011, 04:30 PM   #34 (permalink)
still, wondering.
 
Ourcrazymodern?'s Avatar
 
Location: South Minneapolis, somewhere near the gorgeous gorge
Absolutely, it depends upon the subject matter. I'm biased, as well. Some things can be proven & some can't, as yet, because we still have to pretend our ideas are more important than what we observe (IRL). (IRL) might not really be there. I believe it is. The scientific method works because it needs no belief that what can be observed has no actual basis. The flaws in this post will never be corrected. I love chemistry & I eat butter.
__________________
BE JUST AND FEAR NOT
Ourcrazymodern? is offline  
Old 05-11-2011, 05:06 PM   #35 (permalink)
Psycho
 
EventHorizon's Avatar
 
Location: The Aluminum Womb
Quote:
Originally Posted by ARTelevision View Post
What is, is not in process of being explained by science. It is being defined by a set of assumptions, which are validated by a methodology predicated upon those assumptions. What is, is not explained.
but everything is based off of assumptions, isn't it? i can't think of anything that is completely assumption-free
__________________
Does Marcellus Wallace have the appearance of a female canine? Then for what reason did you attempt to copulate with him as if he were a female canine?
Quote:
Originally Posted by canuckguy View Post
Pretty simple really, do your own thing as long as it does not fuck with anyone's enjoyment of life.
EventHorizon is offline  
Old 05-11-2011, 05:42 PM   #36 (permalink)
still, wondering.
 
Ourcrazymodern?'s Avatar
 
Location: South Minneapolis, somewhere near the gorgeous gorge
It's possible that Mary was a virgin. 1=1 is not an assumption.
__________________
BE JUST AND FEAR NOT
Ourcrazymodern? is offline  
Old 05-11-2011, 06:56 PM   #37 (permalink)
Psycho
 
EventHorizon's Avatar
 
Location: The Aluminum Womb
Quote:
Originally Posted by Ourcrazymodern? View Post
1=1 is not an assumption.
what if reality is a continuum instead of a quantum and things never actually end, they just get forever smaller and smaller and smaller, but then they never really begin either. i want to make reference to the story about how an arrow will never reach its target, but i don't know how. also, doesn't statistics say that nothing is impossible, it just hasn't happened yet? so maybe 1 = 1 might be true someday even though it hasn't happened yet.

everything is based off an assumption
__________________
Does Marcellus Wallace have the appearance of a female canine? Then for what reason did you attempt to copulate with him as if he were a female canine?
Quote:
Originally Posted by canuckguy View Post
Pretty simple really, do your own thing as long as it does not fuck with anyone's enjoyment of life.
EventHorizon is offline  
Old 05-12-2011, 12:01 AM   #38 (permalink)
Junkie
 
filtherton's Avatar
 
Location: In the land of ice and snow.
Quote:
Originally Posted by EventHorizon View Post
what if reality is a continuum instead of a quantum and things never actually end, they just get forever smaller and smaller and smaller, but then they never really begin either. i want to make reference to the story about how an arrow will never reach its target, but i don't know how. also, doesn't statistics say that nothing is impossible, it just hasn't happened yet? so maybe 1 = 1 might be true someday even though it hasn't happened yet.

everything is based off an assumption
I suspect you mean Zeno's Paradox with your allusion to the arrow. It is important to note that while mathematics is useful for representing reality in a surprisingly large number of ways, the fact that something is mathematically true doesn't necessarily make it actually true.

Statistics doesn't say that nothing is impossible. It's just a formalized method for dealing with uncertainty. Depending on how you've chosen to model a situation statistically, it does seem to imply that rare events will occur with probability approaching 1 if given enough opportunities, but that doesn't mean that every conceivable event qualifies as a rare event. Things still have to be possible in order to happen.
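To put a number on that last point (a minimal sketch, with an arbitrary p): the chance of seeing an event at least once in n independent trials is 1 - (1 - p)^n, which climbs toward 1 as n grows only when p is strictly greater than zero; when p = 0, no amount of opportunity helps.

Code:
# Chance of seeing an event at least once in n independent trials: 1 - (1 - p)**n.
# It approaches 1 as n grows only when p is strictly greater than zero.
for p in (1e-6, 0.0):
    for n in (10**3, 10**6, 10**9):
        print(f"p = {p}, n = {n}: P(at least once) = {1 - (1 - p)**n:.6f}")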
filtherton is offline  
Old 05-12-2011, 10:14 AM   #39 (permalink)
I change
 
ARTelevision's Avatar
 
Location: USA
As for the based-upon-undemonstrable-assumptions problem: it is a real problem, not to be ignored, swept under the rug, or minimized. It is at the heart of the great mess we have made of the world and that we continue to make of ourselves. The scientists I collaborate with are cognizant of this and do admit, finally, that their proofs are no more valid than religious belief. Just because they give us a great deal of blunt, brute-force power to push matter and energy around doesn't mean that they even nearly approximate significant truths. And it looks more and more as if the mistakes of a reductionistic materialism are killing us all, as well as the planet we inhabit.
__________________
create evolution
ARTelevision is offline  
Old 05-12-2011, 03:34 PM   #40 (permalink)
still, wondering.
 
Ourcrazymodern?'s Avatar
 
Location: South Minneapolis, somewhere near the gorgeous gorge
So one does not equal one? God help us all.
__________________
BE JUST AND FEAR NOT
Ourcrazymodern? is offline  
 
