Discussion about Inconclusives
In other articles on this website, I have briefly discussed inconclusive results in the field of firearm comparisons. This article dives deeper into what an inconclusive conclusion means and why it has become the object of intense discussion.
AFTE Range of Conclusions
AFTE (The Association of Firearm and Tool Mark Examiners) allows examiners to report three different inconclusive results: Inconclusive A, Inconclusive B, and Inconclusive C.
Inconclusive A
Inconclusive A would be chosen if there is some agreement of individual characteristics and agreement of all discernible class characteristics, but the agreement is insufficient for an identification. This conclusion would mean the examiner was leaning toward an identification but did not have enough information to conclude one.
Inconclusive B
Inconclusive B would be chosen if there is agreement of all discernible class characteristics without agreement or disagreement of individual characteristics due to an absence, insufficiency, or lack of reproducibility. This conclusion is the middle ground, where the examiner is not leaning toward either an identification or an elimination.
Inconclusive C
Inconclusive C would be chosen if there is agreement of all discernible class characteristics and disagreement of individual characteristics, but the disagreement is insufficient for an elimination. This conclusion would mean the examiner was leaning toward an elimination, but there was just enough information to keep them from eliminating the two items from each other.
Opinion on Reporting Inconclusive
Although AFTE allows examiners to report three different types of inconclusives, I believe that examiners should report only “inconclusive” as their result, regardless of which way they are leaning. This not only prevents bias but also more accurately conveys the intention of the examiner. For example, if an examiner were to testify and tell the jury that they reached an Inconclusive A conclusion, they would have to explain that there was not enough information to report the comparison as an identification or an elimination, but that the comparison was leaning toward an identification. I feel that this explanation takes the inconclusive conclusion and adds a wink and a nod to the jury that the comparison could have been an identification. This confuses the jury members and also skews the results of the examination.
The examiner who reports “inconclusive” will be more transparent to the jury, and the jury will have an easier time understanding the conclusion of the examination. The examiner would explain that during the comparison there was not enough information for an identification and just enough information to reject an elimination. If the markings were not sufficient for an identification, then the conclusion should be reported as inconclusive and not Inconclusive A, because no matter the circumstances it was never enough for an identification.
The Rise of the “Problem”
Their Argument
After the release of the PCAST report, the field went to work producing studies that would satisfy the PCAST recommendation for black box studies. The Ames I (Baldwin) study and the Ames II (Monson) study were conducted to satisfy that recommendation. These studies were very informative and contained a great deal of data for anyone in the field to review. Although the recommendation was satisfied, people outside the field moved the goalposts and stated that the problem with the field and the studies was the inconclusive conclusions. This has now become the main focus of the field, and once again examiners are trying to satisfy this challenge.
One of the first times inconclusive results were called problematic was by Dr. Scurich, who argued that inconclusives should be counted as false positives. He demonstrated this idea by taking the Baldwin study and recalculating the error rates under that treatment. With inconclusives counted as errors, the error rate went from 1.01% to 35%.
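To make the arithmetic behind that jump concrete, here is a minimal Python sketch. The counts below are hypothetical, not the actual Baldwin data; they are chosen only so the two rates land near the figures quoted above, and they illustrate how moving inconclusives into the error column inflates the rate.

    # Hypothetical counts for different-source comparisons -- NOT the actual
    # Baldwin (Ames I) data, just an illustration of the two scoring choices.
    eliminations = 693     # correct eliminations (hypothetical)
    false_positives = 7    # identifications on different-source pairs (hypothetical)
    inconclusives = 366    # inconclusive calls (hypothetical)

    # Traditional treatment: inconclusives are set aside, and the error rate
    # is computed over conclusive answers only.
    conclusive = eliminations + false_positives
    traditional_rate = false_positives / conclusive

    # Scurich-style treatment: inconclusives on different-source comparisons
    # are counted as false positives and added to the denominator.
    recalculated_rate = (false_positives + inconclusives) / (conclusive + inconclusives)

    print(f"Error rate excluding inconclusives: {traditional_rate:.2%}")   # ~1.00%
    print(f"Error rate counting inconclusives as errors: {recalculated_rate:.2%}")  # ~35%

The same set of answers produces a very different error rate depending purely on the scoring choice, which is the heart of the disagreement.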
Dr. Scurich also recommended that a majority approach be used when determining the ground truth for a particular comparison. For example, if a preliminary comparison were given to a certain number of examiners and the majority concluded that it was inconclusive, then when the comparison is given to the actual participants in a study, their conclusions would only be counted as correct if they also reported inconclusive. Likewise, if the majority ruled that a comparison was an identification or an elimination, any participant who reported inconclusive would be marked wrong. No matter the actual ground truth, the "correct" answer would always have to coincide with the majority, as shown in the sketch below.
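The following is a minimal Python sketch of how such a majority-rule grading key would work. The panel responses and conclusion labels are hypothetical and are not drawn from any real study; the point is only to show that a participant can match the true source of the evidence and still be marked wrong.

    from collections import Counter

    # Hypothetical responses from a preliminary panel used to set the
    # "majority" answer key for one comparison -- purely illustrative.
    pilot_panel = ["Inconclusive", "Inconclusive", "Identification",
                   "Inconclusive", "Elimination"]
    majority_key = Counter(pilot_panel).most_common(1)[0][0]  # -> "Inconclusive"

    # Under a majority-rule scheme, a study participant is graded against the
    # panel's most common answer, not against the true source of the evidence.
    def grade(participant_answer, key):
        return participant_answer == key

    true_source = "Identification"          # ground truth: same firearm (hypothetical)
    participant_answer = "Identification"   # participant finds the correct answer

    print(grade(participant_answer, majority_key))  # False -- marked wrong
    print(grade(participant_answer, true_source))   # True  -- correct against ground truth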
My Argument
We will first look at folding the inconclusives into the error rate of a study. I believe that inconclusives should be treated separately from identifications and eliminations and should not be factored into the error rate. As discussed above, an inconclusive carries a lot of meaning. It shows the reader that the examiner did not have enough information for an identification and/or had just enough information keeping them from concluding an elimination. Forcing an examiner to pick either an identification or an elimination would mean the examiner would have to disregard what they see in the evidentiary material. Sometimes the evidence may be deformed, damaged, poorly marked, or changed from one firing to the next due to rust or other factors. These conditions increase the difficulty of the examination and may obscure the markings that would drive an examiner to either an identification or an elimination. No matter the experience level of the examiner, if the markings are not present or are insufficient, an inconclusive conclusion is appropriate. This is how other sections in crime laboratories, such as latent prints and DNA, use inconclusive results. Although these conclusions are used in other sections, they only appear to be considered problematic in the field of firearm comparisons.
Lastly, the majority-rule solution would not work and would create an artificial error rate. As discussed in the previous section, if the majority chooses inconclusive, then to be marked correct the participants would have to report an inconclusive result. Let us say in this scenario the two pieces of evidence were fired from the same firearm. A few participants had enough experience with such evidence that they examined areas other examiners would normally miss, or used lighting techniques other examiners are not proficient in. By using their experience and techniques, they found markings that led them to conclude an identification, which would be the correct answer. However, since the majority ruled that the conclusion should be inconclusive, these examiners would be marked wrong.
A real-life example of the previous scenario would be a survey of 100 people asking what the southernmost state of the United States is. Assume the majority of a preliminary panel picked either Texas or Florida; that answer would become the ground truth. The researcher would then give the same survey to the actual participants of the study and grade their answers against that majority ground truth. Some participants who are better versed in geography would answer that Hawaii is the southernmost state. These participants would be right in the real world, but in the context of the study they would be marked wrong and would contribute to the error rate.
The majority-rule approach also causes test-taking bias. Examiners may feel forced to report a conclusion as an identification or an elimination. For example, if many markings were drawing an examiner toward an identification, but the quality and quantity of those markings did not meet their threshold, they would normally report inconclusive. Knowing that the study is graded by majority rule, however, they may now conclude an identification because they suspect that the majority of participants would use those markings to make one.
Before finishing this article, I would like to add one more data point seen in multiple studies: examiners who used inconclusives more often were considered more trustworthy than examiners who used them rarely. This can be seen in the Ames study, where one examiner contributed significantly to the error rate yet did not report any inconclusives in their results.
Final Thought
Examiners should start using one inconclusive result rather than three in order to be more transparent and to eliminate any bias presented to the jury. Outside organizations should accept inconclusive conclusions as valid, just as they do for other disciplines. It seems that these outside organizations are using inconclusives to artificially raise error rates and create conflict within the field. People have to recognize the true meaning of an inconclusive result and not treat it as an easy way out or as something used to easily pass a study. It is a valid and important conclusion that allows the examiner to appropriately speak for the evidence.