Fingerprinting is not a panacea, and neither is DNA.
Two of my professors study exactly how these processes are fallible, and how their failures stay hidden behind our fascination with technological advancement.
First of all, evidence is studied after a suspect is already in mind. While this may seem logical, it opens the door to a few problems:
DNA and fingerprint analysis ultimately rely on human corroboration. Computers run the initial scan, but experts decide what matches and what doesn't. And states still vary on the minimum requirements for a "match" before it becomes evidence, which leaves room for one of those supposedly astronomical anomalies of snagging the wrong person.
If you look at an ink blot, do you think the process would be more reliable if you were asked "Do you see a butterfly in there?" or "What do you see?" (and the person replies, "a butterfly")?
That is, you have a suspect. You think in your mind that the person committed the crime. The detective lays the crime-scene DNA profile next to the suspect's and matches them. Things that don't match are "noise," and things that don't quite match are "likelies." Once some agreed-upon number of matching points is reached (which differs by state and municipality, remember, before it even goes to the DA for indictment), the noise is tossed (explained away) and the likelies "become" apparent or are scrubbed.
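The thresholded point-matching described above can be sketched in a few lines. Everything here is hypothetical on my part (the feature values, the threshold, the tolerance for a "likely"); the point is only to show how tossing noise and waving through near-matches inflates the count once a suspect is already in mind:

```python
def count_matches(scene, suspect, accept_likely):
    """Count agreeing comparison points between two profiles.

    scene, suspect: parallel lists of feature values; None marks an
    unreadable point ("noise", which simply gets tossed).
    accept_likely: whether near-matches are counted too, standing in
    for the examiner's confirmation bias toward the suspect.
    """
    matches = 0
    for s, t in zip(scene, suspect):
        if s is None or t is None:
            continue  # noise is discarded, not held against the match
        if s == t:
            matches += 1  # a clear point of match
        elif accept_likely and abs(s - t) <= 1:
            matches += 1  # a "likely" becomes apparent
    return matches

# Hypothetical profiles and a hypothetical minimum (real minimums vary
# by jurisdiction, as noted above).
scene = [3, 7, None, 5, 9, 2, 8, 4]
suspect = [3, 6, 1, 5, 9, 2, 8, 5]
THRESHOLD = 6

strict = count_matches(scene, suspect, accept_likely=False)  # 5 points
biased = count_matches(scene, suspect, accept_likely=True)   # 7 points
print(strict >= THRESHOLD, biased >= THRESHOLD)  # no match vs. "match"
```

The same two profiles fall below the threshold under a strict reading and clear it once the likelies are counted, which is the whole worry.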
I hope this isn't starting to look too inconceivable that someone could be inadvertently prosecuted for a crime he or she didn't commit. Add to this equation the fact that once a match is found, the police don't scrub the scene for more suspects, and they don't keep finding more people in the pool of 'applicants' either. I suggest (and the evidence corroborates this) that the amount of time and care that goes into a case varies inversely with the class level of the victim (usually low class, by the way; it doesn't hold so strongly when an upper-class citizen is harmed, which is actually very rare, and rarer still at the hands of someone outside one's own class) and of the suspect.
This is one of the most pressing issues with both techno-identification processes. I haven't even entered contamination into this equation (which is very real: at least one person was released just recently due to lab conditions in Texas; my prof was the guy who broke that case wide open, after the lab techs were found to be cutting every kind of corner and the lab was leaking chemicals all over the place).
But the problems I identified more closely are very persuasive precisely because they can be dealt with, and they lay no blame on the officers: they result from normal psychological prompting processes, which researchers and academics (and detectives, by the way, in their interview techniques) know and document copiously.
The single best way to address this that I can think of is to have one person identify what he or she thinks are the points of match, and have an independent observer do the same. Then you allow the two DNA strips to be brought together, and you run a statistic (the name of which escapes me right now, but it might be a chi; hopefully another stats-minded member can supply the one used to verify agreement between two or more qualitative observers) to determine how statistically close the two examiners were to one another.
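The author leaves the statistic unnamed, so naming one is my assumption: the usual measure of agreement between two qualitative raters is Cohen's kappa, which compares observed agreement against the agreement you'd expect by chance. A minimal sketch, with made-up per-point labels from two hypothetical independent examiners:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning labels to the same items.

    rater_a, rater_b: parallel lists of labels (e.g. "match"/"no" per
    candidate point, assigned independently by each examiner).
    Returns 1.0 for perfect agreement, ~0.0 for chance-level agreement.
    """
    n = len(rater_a)
    # Observed proportion of items the two raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical calls on eight candidate points by two examiners.
examiner_1 = ["match", "match", "no", "match", "no", "no", "match", "no"]
examiner_2 = ["match", "no", "no", "match", "no", "match", "match", "no"]
print(cohens_kappa(examiner_1, examiner_2))  # 0.5: only moderate agreement
```

Here the examiners agree on six of eight points (75%), but since chance alone predicts 50% agreement, kappa comes out at 0.5, which is exactly the kind of gap between raw and chance-corrected agreement that this procedure would surface.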
But that's not done. And until it is, DNA testing and claims about it should be viewed with skepticism. And all this is true for fingerprinting, as well.
__________________
"The theory of a free press is that truth will emerge from free discussion, not that it will be presented perfectly and instantly in any one account." -- Walter Lippmann
"You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists." -- Abbie Hoffman