Daubert Hearing on Firearm Comparisons
Introduction
I recently watched a video of a Daubert hearing on firearm comparisons. The hearing took place in a Maryland appeals court, and the video can be found here. The video only contains the closing arguments from the defense and the prosecution, although past witnesses and research studies were mentioned during those arguments. Instead of summarizing the video, which would make this post longer than I would like, I will focus only on my response to it.
Overall, the Defense brought up many points that were either exaggerated or misconstrued. Even so, the Defense came across as more believable and knowledgeable than the Prosecution because of their confidence, organization, and ability to turn different studies to their advantage. The Prosecution fumbled through generic explanations of the science and failed to take guidance from the judges.
Error Rates
The Defense states that the error rate of the science is 50%, but they fail to explain where that number comes from. In the published studies the error rate is usually around 1%, and even that figure applies only to the examiners who took the test, not to the science as a whole. It cannot be applied to the science as a whole because, in some studies, most of the errors can be attributed to a few examiners out of the pool of participants. In addition, many of these studies prevent examiners from fully utilizing their Quality Assurance (QA) system, which would otherwise catch some of the errors seen in the studies. In real casework, examiners have full use of the QA system, which acts as a check and balance on their work.
In another part of the closing argument, the Defense asserts that the AMES II study shows a high error rate for comparisons. During the explanation, we get to see how the Defense analyzed the data from the study, which may explain how they arrived at the 50% figure from earlier. They picked the error rate in which the authors of the study combined the inconclusive results with either the identifications or the eliminations: Inconclusive-A responses were combined with identifications, and Inconclusive-C responses with eliminations. Counting inconclusives this way inflates the error rate to roughly 10%, rather than the roughly 1% error rate (excluding inconclusives) that the authors report in the conclusion section of the study.
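To make the arithmetic concrete, here is a small sketch of how the same set of responses can yield very different "error rates" depending on how inconclusives are scored. The counts below are hypothetical, chosen only to illustrate a roughly 1% versus 10% gap; they are not the actual AMES II data.

```python
# Hypothetical counts for a set of true-match comparisons.
# Illustrative only -- NOT the actual AMES II data.
identifications = 900  # correct calls on true matches
inconclusives = 90     # Inconclusive-A/B/C responses
eliminations = 10      # hard errors (false eliminations)

total = identifications + inconclusives + eliminations

# Scoring 1: inconclusives excluded from the error calculation.
error_rate_excluded = eliminations / (identifications + eliminations)

# Scoring 2: inconclusives counted as errors against the ground truth.
error_rate_included = (eliminations + inconclusives) / total

print(f"excluding inconclusives: {error_rate_excluded:.1%}")       # ~1.1%
print(f"counting inconclusives as errors: {error_rate_included:.1%}")  # 10.0%
```

The underlying comparisons are identical in both calculations; only the decision to score an inconclusive as a miss changes the headline number.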
Combining inconclusives with identifications or eliminations does not properly reflect the error rate of the study, because examiners chose an inconclusive result for a reason. Inconclusive results are usually chosen because the markings on the bullets/casings are not sufficient for an identification but are still present enough to prevent an elimination. In addition, some laboratories do not allow their examiners to report an elimination when the class characteristics of the bullets/casings agree. Folding the inconclusives into the hard conclusions discards this information and either forces the examiner's answer against the evidence provided or marks an examiner as wrong for simply following laboratory procedure. Inconclusive results are not a free ticket out of a hard examination, but an honest description of the evidence.
Black Box Studies
The Defense also states that a true black box study has never been performed, because all examiners know they are being tested. This awareness, they argue, makes examiners more cautious and leads them to report more inconclusive results, thus "taking the easy way out." The AMES II study, which is considered a black box study, includes a section publishing some of the participating examiners' comments, and these comments shed light on why inconclusive results are chosen in these studies. Some examiners stated that they reported inconclusive results because they lacked the firearm for examination. In real casework with test bullets, the firearm is also available for examination, which allows the examiner to assess for subclass characteristics. Without being able to properly evaluate for subclass characteristics, examiners will often feel the safest conclusion is inconclusive, especially with poorly marked bullets/casings. Another examiner commented that some of the comparison samples may have contained test fires taken further along in the firing sequence from the unknown. In actual casework the unknown sample is close in the firing sequence to the test fires created by the submitted firearm. Greater separation in the firing sequence dilutes the markings, which can lead more examiners to an inconclusive result.
Consecutively Manufactured Studies
The Prosecution’s closing argument focused on the importance of consecutively manufactured studies. The Prosecutor stated that these closed-set consecutively manufactured studies can be more important than the black box studies the Defense leaned on so heavily. One of the judges commented that consecutively manufactured studies would be less important, since examiners should already be looking for subclass characteristics, and the consecutively manufactured samples would share subclass characteristics, making them easier to identify. With the subclass markings more easily identified, the examiner could focus on the individual markings within the samples. The judge then said that black box studies would be more beneficial, because examiners would have a harder time separating subclass characteristics from individual markings without a consecutively manufactured reference. In response, the Prosecution fumbled and failed to properly convey the importance of consecutively manufactured studies.
Consecutively manufactured studies are just as important as black box studies. They help examiners establish their best-known non-match: because the samples come from consecutively manufactured tools, non-matches are more likely to share marks with one another, which sharpens the examiner’s understanding of their threshold for identifications and eliminations. A consecutively manufactured study also guarantees the presence of subclass characteristics, which helps examiners study their patterns. Long continuous marks, gross marks, and rhythmic marks may all be subclass characteristics, and examiners can use these studies to learn the indications of subclass. Lastly, these studies are not created with the sole purpose of showing that consecutively manufactured firearms can produce identifiable samples that can be linked back to the firearm; they are used to validate that different tools, whether cast, broached, double broached, or hammer forged, to name a few, still create distinct markings from one part to the next. Knowing that these tools create identifiable marks allows an examiner to apply that knowledge to any firearm made with the tools examined in the study. Therefore, consecutively manufactured studies add to the science, while black box studies establish an error rate for the examiners who participate in them.
Data Manipulation
Before finishing this post, I would like to highlight how the Defense manipulated the data of the AMES II study. As discussed earlier, the Defense combined the inconclusive results into the identification or elimination conclusions to establish their error rates. But when discussing the error rates for the repeatability and reproducibility portions of the study, they chose the data that did not combine the inconclusive results with the identifications or eliminations. In that part of the study, the error rates were higher when the inconclusives were not combined, because of the study’s three-tier inconclusive scale: if an examiner switched an initial Inconclusive-A answer to an identification (the ground truth), or from Inconclusive-A to Inconclusive-B, it counted as a changed answer even though the examiner would still be scored correct against the ground truth. Here, combining the inconclusive categories more accurately represents examiners who changed their answers but remained right according to the ground truth. Since the combined results produced a lower error rate, the Defense chose not to use them for this part of their argument.
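A small sketch of how this scoring choice plays out in a repeatability count. The response pairs and category labels below are hypothetical, for illustration only; they are not taken from the AMES II data.

```python
# Hypothetical first- and second-pass responses by one examiner on the same
# five true-match pairs. Illustrative only -- not actual AMES II data.
first_pass  = ["ID", "Inc-A", "Inc-A", "ID", "Inc-B"]
second_pass = ["ID", "Inc-B", "ID",    "ID", "Inc-A"]

def changed(a, b, collapse_tiers):
    """Return True if the response pair counts as a changed answer."""
    if collapse_tiers:
        # Fold Inc-A/B/C into a single 'Inc' category before comparing.
        a = "Inc" if a.startswith("Inc") else a
        b = "Inc" if b.startswith("Inc") else b
    return a != b

pairs = list(zip(first_pass, second_pass))
strict = sum(changed(a, b, collapse_tiers=False) for a, b in pairs)
collapsed = sum(changed(a, b, collapse_tiers=True) for a, b in pairs)

print(strict, "changed answers under strict three-tier scoring")   # 3
print(collapsed, "changed answers with inconclusive tiers merged")  # 1
```

Under strict scoring, an Inconclusive-A to Inconclusive-B shift counts against repeatability even though both responses are scored correct; merging the tiers removes that penalty, which is why the combined data produce the lower rate the Defense avoided.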
Concluding Thoughts
Overall, I felt the Prosecutor should have come better prepared and used the studies on record to his advantage. Instead, the Prosecutor stumbled and failed to make solid arguments, which may hurt the science. Hopefully, in future Daubert/Frye hearings, the information I have provided can be used to make better use of existing studies and to better protect the science. It is important to become acquainted with and knowledgeable about the studies that exist in our science so that they can be used effectively. For example, if the Prosecutor had known the AMES II study more thoroughly, he could have exposed the Defense’s data manipulation during their argument.