Literature Review

Daubert Hearing on Firearm Comparisons

Introduction

I recently watched a video of a Daubert hearing on firearm comparisons. The hearing was held in a Maryland appeals court, and the video can be found here. The video only contains the closing arguments from the defense and the prosecution, although past witnesses and research studies were referenced during those arguments. Instead of giving a summary of the video, which would make this post longer than I would like, this post will focus only on my response to it.

            Overall, the Defense brought up many points that were either exaggerated or misconstrued. Even so, the Defense came across as more believable and knowledgeable than the Prosecution because of their confidence, organization, and ability to turn different studies to their advantage. The Prosecution fumbled through generic explanations of the science and was unable to take guidance from the judges.

Error Rates

            The Defense states that the error rate of the science is 50%, but they fail to explain where that figure came from. In the studies that have been published, the error rate is usually around 1%, and even that rate can only be applied to the examiners who took the test, not to the science as a whole. It cannot be applied to the science as a whole because, in some studies, most of the errors can be attributed to a few examiners out of the pool of participants. Also, many of these studies prevent the examiners from fully utilizing their Quality Assurance (QA) system, which would otherwise reduce the errors seen in the studies. In real casework examiners have full use of the QA system, which acts as a check and balance on their work.

            In another part of the closing argument, the Defense claims that the AMES II study shows a high error rate for comparisons. During this explanation we get to see how the Defense analyzed the data from the study, which may point to how they arrived at the 50% error rate mentioned earlier. They picked the error rate from the portion of the study where the authors combined the inconclusive results with either the identification or elimination results: an inconclusive A was combined with identifications and an inconclusive C was combined with eliminations. Counting the inconclusives this way inflates the error rate to roughly 10%, rather than the error rate of around 1% (excluding inconclusives) that the authors list in the conclusion section of the study.
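
To make the mechanics concrete, here is a minimal sketch (in Python) of how folding inconclusive responses into definitive conclusions changes a computed error rate. The counts below are hypothetical stand-ins rather than the AMES II data, and the function names are my own:

```python
# Hypothetical counts for true same-source comparisons (NOT the AMES II data).
same_source = {"id": 900, "inc_a": 40, "inc_b": 30, "inc_c": 25, "elim": 5}

def authors_rate(c):
    """Rate as reported in the study's conclusion: false eliminations only,
    with all inconclusive responses set aside."""
    return c["elim"] / (c["id"] + c["elim"])

def collapsed_rate(c):
    """Defense-style rate: inconclusive A folded into identifications and
    inconclusive C folded into eliminations (inconclusive B set aside).
    For same-source sets, anything folded into 'elimination' becomes an error."""
    ids = c["id"] + c["inc_a"]
    elims = c["elim"] + c["inc_c"]
    return elims / (ids + elims)

print(f"inconclusives excluded:  {authors_rate(same_source):.2%}")   # ~0.6%
print(f"inconclusives collapsed: {collapsed_rate(same_source):.2%}") # ~3.1%
```

Even with these made-up numbers, the direction of the effect is the same as in the hearing: collapsing inconclusives into definitive conclusions multiplies the apparent error rate without any change in the underlying comparisons.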

            Combining inconclusives with positive identifications or eliminations does not properly reflect the error rate of the study, because examiners choose an inconclusive result for a reason. An inconclusive is usually chosen because the markings on the bullets/casings are not sufficient for an identification but are still present enough to prevent an elimination. In addition, some laboratories do not allow their examiners to conclude an elimination when the class characteristics of the bullets/casings agree. Combining the inconclusives throws away this information and either forces a conclusion that goes against the evidence or marks an examiner as being in error when they are simply following their laboratory's procedure. Inconclusive results are not a free ticket out of a hard examination, but a genuine description of the evidence.

Black Box Studies

The Defense also states that an actual black box study has never been performed because all examiners are aware that they are being tested. This awareness supposedly causes the examiners to be more cautious and report more inconclusive results, thus “taking the easy way out”. The AMES II study, which is considered a black box study, includes a section publishing some of the comments from the participating examiners, and these comments bring to light why inconclusive results are chosen in these studies. Some examiners stated that they reported inconclusive results because they did not have the firearm available for examination. In real casework with test bullets, the firearm is also available for examination, which allows the examiner to assess for subclass characteristics. Without being able to properly evaluate for subclass characteristics, examiners will feel that the safest conclusion is inconclusive, especially with poorly marked bullets/casings. Another examiner commented that some of the samples provided for comparison may have contained test fires collected much further along in the firing sequence than the unknown. In actual casework the unknown is typically fired close in sequence to the test fires created from the submitted firearm; the larger separation in the study dilutes the markings, which can lead more examiners to an inconclusive result.

Consecutively Manufactured Studies

The Prosecution's closing argument focused on the importance of consecutively manufactured studies. The Prosecutor states that closed-set consecutively manufactured studies can be more important than the black box studies that the Defense relied on so heavily. One of the judges comments that consecutively manufactured studies would be less important, since examiners should already be looking for subclass characteristics, and the consecutively manufactured samples would share subclass characteristics, making those characteristics easier to identify; with the subclass markings more easily identifiable, the examiner could focus more on the individual markings within the samples. The judge then says that black box studies would be more beneficial because examiners would have a harder time separating subclass characteristics from individual markings without a consecutively manufactured reference. In response, the Prosecutor fumbles and does not properly convey the importance of consecutively manufactured studies.

            Consecutively manufactured studies are just as important as black box studies. They help examiners establish their best-known non-match: since the samples come from consecutively manufactured tools, the non-matches are more likely to share marks with one another, which sharpens the examiner's understanding of their threshold for identifications and eliminations. A consecutively manufactured study also forces subclass characteristics to appear, which helps examiners study their patterns. Long continuous marks, gross marks, and rhythmic marks may all be subclass characteristics, and examiners can use these studies to learn the indications of subclass. Lastly, these studies are not created with the sole purpose of showing that consecutively manufactured firearms can create identifiable samples that can be linked back to the firearm. They are used to validate different tools as being able to create different markings from one part to the next. These tools can be cast, broached, double broached, or hammer forged, to name a few. Knowing that these tools create identifiable marks allows an examiner to apply that knowledge to any firearm produced by the tools examined in the study. Therefore, consecutively manufactured studies add to the science, and black box studies establish an error rate for the examiners who participate in them.

Data Manipulation

Before finishing this post, I would like to bring to light how the Defense manipulated the data of the AMES II study. As discussed previously, the Defense combined inconclusive results into either the identification or elimination conclusions to establish their error rates. But when talking about the error rates for the repeatability and reproducibility portions of the study, they chose the data that did not combine the inconclusive results with the identification or elimination conclusions. In that part of the study, the error rates were higher when the inconclusives were not combined because of the study's three-tier inconclusive scale: if an examiner changed an initial inconclusive A to an identification (the ground truth), or an inconclusive A to an inconclusive B, it counted as a changed answer even though the response was still scored as correct. Here, combining the inconclusive results would more accurately represent examiners who changed their answers but remained right according to the ground truth. Since the combined results produced a lower error rate, the Defense decided not to use them for this part of their argument.

Concluding Thoughts

Overall, I felt that the Prosecutor should have come better prepared and used the studies on record to his advantage. Instead, the Prosecutor stumbled and failed to make solid arguments, which may hurt the science. Hopefully, in future Daubert/Frye hearings, the information provided here can be used to make better use of existing studies and provide a stronger hearing that protects the science. It is important to become acquainted and knowledgeable with the studies that exist for our science so that they can be used effectively. For example, if the Prosecutor had known the AMES II study more thoroughly, he could have exposed the Defense's data manipulation during their argument.

Literature Review

Part I: Ames Study

Finding the Article

            I was able to find a copy of a study titled “A Study of False-Positive and False-Negative Error Rates in Cartridge Case Comparisons” written by David P. Baldwin, Stanley J. Bajic, Max Morris, and Daniel Zamzow. This is Part I of a two-part study done by the Ames Laboratory. Part I can still be found in obscure places, but Part II has been removed from most sources. Defense attorneys and academic critics reference these studies heavily, but when they do, the citations tend to be quick, sloppy, and cherry-picked. I hope to share the main findings of these studies to help anyone in the field who encounters people using them. This post will focus on Part I of the study, and at a later date I will post a discussion of Part II.

Introduction/Experiment

            The authors designed the study to better understand the error rates associated with the comparison of fired cartridge casings. They stated that the problem with previous studies is that they did not include independent sample sets that would allow an unbiased determination of the false-positive and/or false-negative rates, so this study set out to resolve that issue.

            Two hundred and eighty-four (284) participants were given fifteen (15) test sets to examine. Twenty-five (25) Ruger SR9s were used to create the samples for the test sets, and each firearm fired 200 cartridges to break it in before sample collection. Each handgun fired 800 cartridges in total for the test sets. No source firearm was repeated within a single test packet, except when a test set was meant to be a same-source comparison. Each set included three (3) knowns to compare against a single questioned casing. For every participant, five (5) of the test sets were known same-source comparisons and ten (10) were known different-source comparisons. In addition to their conclusions, the participants had to record the quality of the known samples, which allowed the authors to calculate a poor-mark production rate. This rate was tracked to show that well-marked samples were not cherry-picked for the test sets, a practice that usually draws criticism for making the test too easy. The authors also asked the participants not to use their laboratory peer-review process, so that the error rates would reflect the individual examiner.

Results

False Negative

            Out of the two hundred and eighty-four (284) participants, only two hundred and eighteen (218) returned completed responses, and 3% of the completed responses came from self-employed examiners. In total, one thousand and ninety (1090) true same-source comparisons were made, of which only four (4) were labeled eliminations and eleven (11) were labeled inconclusive. The false elimination rate was calculated to be 0.3670%, with a Clopper-Pearson exact 95% confidence interval of 0.1001%-0.9369%. Two (2) of the four (4) false eliminations were made by the same examiner, so 215 out of 218 examiners did not make a false elimination. When the inconclusives are counted with the false eliminations, the error rate increases to 1.376%, with a corresponding 95% confidence interval of 0.7722%-2.260%.
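
For anyone who wants to verify these figures, here is a minimal sketch of the Clopper-Pearson exact interval using scipy; the counts are the ones quoted above, and the helper function is my own:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for k successes in n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

n = 1090  # true same-source comparisons
for label, k in [("false eliminations", 4),
                 ("false eliminations + inconclusives", 15)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: {k / n:.4%} (95% CI {lo:.4%}-{hi:.4%})")
```

Running this should reproduce, to rounding, the 0.3670% (0.1001%-0.9369%) and 1.376% (0.7722%-2.260%) values quoted from the paper.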

            A number to take into consideration is the poor-mark production rate discussed above. Two hundred and twenty-five (225) of the nine thousand seven hundred and two (9702) known samples were considered poor quality and inappropriate for inclusion in the comparison, which works out to 2.319% of the samples, with a corresponding 95% confidence interval of 2.174%-2.827%. This percentage is greater than the false elimination rate, which means there is a real possibility that some of the false eliminations can be attributed to the poor quality of the knowns used for comparison. Also, all four (4) of the false eliminations were made by examiners who did not use inconclusive for any response, which could be attributed to their agency requirements.

False Positive

            Out of the two thousand one hundred and eighty (2180) true different-source comparisons, twenty-two (22) were labeled identifications and seven hundred and thirty-five (735) were labeled inconclusive. The false identification rate was calculated to be 1.010% (note: two (2) responses were left blank and were subtracted from the total number of responses). All but two of the false identifications were made by five (5) of the two hundred and eighteen (218) examiners. Since a small number of examiners made most of the errors, it suggests that the error probability is not consistent across examiners, which is the point raised at the beginning of this post. A beta-binomial model was used to estimate the false identification probability because it cannot be assumed that the probability is uniform across examiners; that probability was calculated to be 0.939%, with a likelihood-based 95% confidence interval of 0.360%-2.261%.
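
The paper's beta-binomial estimate cannot be reproduced exactly without the per-examiner data, but the sketch below shows the general approach under an assumed, purely hypothetical distribution of errors across examiners; the data array and function names are mine, not the study's:

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

# Hypothetical per-examiner false-identification counts out of 10 different-source
# comparisons each (NOT the study's raw data): a handful of examiners account for
# nearly all of the 22 errors, the rest make none.
k = np.array([4, 4, 4, 4, 4, 1, 1] + [0] * 211)
n = np.full_like(k, 10)

def neg_log_lik(params):
    """Negative log-likelihood of a beta-binomial model with parameters (a, b)."""
    a, b = np.exp(params)  # optimize on the log scale to keep a, b positive
    log_pmf = (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
               + betaln(k + a, n - k + b) - betaln(a, b))
    return -log_pmf.sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 50.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"estimated mean false-identification probability: {a_hat / (a_hat + b_hat):.3%}")
```

The point of the model is that it lets the error probability vary from examiner to examiner instead of assuming one shared rate, which is exactly why the authors preferred it over a plain binomial calculation.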

            The inconclusive responses also proved to be heterogeneous. Out of the two hundred and eighteen (218) examiners, ninety-six (96) labeled none of the ten different-source comparisons inconclusive, forty-five (45) labeled all ten inconclusive, and the remaining seventy-seven (77) fell somewhere between the two extremes.

My Discussion

            The authors state that the false elimination error rate is in doubt because the poor-quality rate is higher than the false elimination rate, even with the inconclusive results factored in. I agree that the error rate should be questioned, because it can be affected by the poor quality of the samples, which can keep an examiner from reaching a positive conclusion. But there is another factor in play as well. Some laboratories do not allow their examiners to report inconclusive results and require the conclusion to be either an identification or an elimination, which is something the statistics community has been pushing for. This factor is hard to evaluate, though, because the authors did not require the participants to disclose their laboratory practices. It can only be assumed that this might be the case, since all the false eliminations were made by examiners who did not report an inconclusive in any of their comparisons.

            The false positive rate is a percentage that should be applied not to the science but to an examiner. The roughly 1% error rate is representative of the examiners participating in this specific study, as can be seen from the fact that most of the false identifications were produced by five (5) of the two hundred and eighteen (218) participants. The study design also deliberately excluded the laboratory review process so that the individual examiners could be evaluated. It is my belief that if the review process had been allowed in this experiment, the error rate would have been smaller or close to 0%. So the error rate can be used to advocate for examiners to be well trained and for laboratories to have a well-established QA system in place.

            The study also addresses the higher number of inconclusive responses received for the different-source comparisons. The seven hundred and thirty-five (735) inconclusive results, set against the one thousand four hundred and twenty-one (1421) reported eliminations, are too many to be attributed to the poor-quality percentage. Just like the false elimination results, the inconclusives can be attributed to laboratory policy: a laboratory may require the examiner to report an inconclusive result if the class characteristics agree between the known and unknown samples. Since the same model of firearm was used to create the known and unknown samples in this study, the samples would share the same class characteristics. If the authors had included a section where the participants could disclose their laboratory policy, we would be able to better understand the number of inconclusive results seen in the study.

            Hopefully, my post will help bring to light the first part of the Ames study and provide more transparency around the error rates published in the paper. Please use this post as a reference or a quick summary, but seek out a copy of the original paper for a more in-depth look at the study design. The authors were very detailed, and it would be well worth reading the paper for yourself: they go into much greater depth on the design of the study and the creation of the samples than I have included here, and they have a large discussion section that dives deeper into the statistics they applied and why those statistics were selected to properly represent the data. In a future post, I will summarize and discuss the second part of the Ames study so that more examiners will have access to what some critics of the science use as a reference.

Literature Review

Firearms and Toolmark Error Rates

Introduction

            On January 3, 2022, four statisticians issued a statement entitled “Firearms and Toolmark Error Rates”. The four statisticians were Alicia Carriquiry, Heike Hofmann, Kori Khan, and Susan Vanderplas. All of them, except Kori Khan, are part of the Center for Statistics and Applications in Forensic Evidence (CSAFE). The purpose of the statement is to offer the opinion that, for the firearm and toolmark discipline, “error rates established from studies with sampling flaws, methodological flaws, non-response and attrition bias, and inconclusive results are not sufficiently sound to be used in criminal proceedings.” I reject this position; in this article I will summarize the statement and provide my own opinion.

Participant Sampling

            They first argue that there is a sampling problem within the studies conducted for the discipline. They state that having examiners volunteer for participation in a study will bias the study and produce lower error rates, because examiners who volunteer are more involved in the discipline and tend to have more experience. The announcements for these studies are usually posted on the Association of Firearm and Toolmark Examiners (AFTE) forum, whose members derive most of their income from being firearm examiners, and the statisticians assume that examiners in this organization are more involved in the field and more experienced. I disagree, because a study has to be announced somewhere that gives the relevant scientific community the opportunity to volunteer. AFTE members span all experience levels, and it cannot be assumed that membership excludes examiners with only a few years of experience. In my case, I have only 2 years of experience in this field, and I am an AFTE member with access to the AFTE forum. There are also plenty of published studies whose volunteers had only a couple of years of experience, including a consecutively manufactured Ruger slide study performed by the Miami-Dade Crime Laboratory. I also disagree that volunteering undermines the validity of the results, because it would be impossible for researchers to randomly select participants and then have their laboratories present the study as actual casework: most laboratories' evidence-intake procedures make this hard to accomplish, and it would be difficult to replicate all the evidence and paperwork needed to make the study appear to be a real case. All other scientific disciplines, including the medical field, rely on volunteers for their studies, so this alone should not be used to invalidate firearm and toolmark studies.

Material Sampling

            The group then argues that the discipline has material sampling problems. Studies in the discipline tend to focus on consecutively manufactured parts, which the statisticians find problematic, stating that such studies lose the ability to make broad, sweeping claims about the discipline. Instead, they recommend a black box study with a large number of firearms and ammunition types so that the study encompasses more of what is found in actual casework. I disagree, because consecutively manufactured studies create the worst-case scenario for examiners and therefore give the highest theoretical error rate. Consecutive studies have been done on almost every part of the firearm (for example, barrels, extractors, ejectors, and breech faces) and across multiple machining methods (for example, double-broached and hammer-forged rifling). Taken together, these studies isolate the different parts of a firearm and the different manufacturing methods. They focus on the machining method rather than a mass of firearms because there is only a limited number of machining methods available to manufacture a firearm, so examining the machining method is more beneficial than examining random makes and models. I also believe that a large study examining many firearms, as the statement suggests, would not be useful, because examiners would be able to eliminate many samples early on from differences in class characteristics, which would prevent the individual characteristics from ever being examined.

Non-Response Bias

            They then turn to the problem of missing data and non-response bias. They claim that most studies never disclose their raw data or their drop-out rate, and they suggest that the dropout rate should be factored into the error rate. They claim that a dropout rate of 20% should be enough to invalidate a study's results and that a rate of 5% is sufficient to cause concern. When the dropout rate reaches these levels, they recommend that the dropped participants' answers be included and counted as 100% incorrect, reasoning that participants can be assumed to have quit because of the difficulty of the study or their own poor time management, and that their answers would therefore have been largely incorrect. Applying this adjustment could raise low error rates to as much as 16.56%, which would serve as an upper bound for the error rate. This argument does not hold up well, because many people drop out of a study due to caseload at the laboratory or other responsibilities. A dropout should not automatically be taken to mean that the examiner found the study too hard, especially since the statisticians' earlier assumption was that all volunteers are experienced. Assuming an error rate of 100% for dropouts also assumes complete incompetence on the part of the examiner, and ignores the scientific backing of the discipline and the quality assurance measures of the laboratory. Most laboratories require a second examiner to reach the same conclusion before it can be reported, so this assumption would require the second examiner to have an error rate of 100% as well.
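
To see how strongly that 100%-error assumption drives the number, here is a small illustrative calculation of my own (not the statement's exact computation), using the Ames Part I participation figures and a hypothetical count of observed errors:

```python
# Illustrative only: not the statement's exact computation.
invited = 284             # examiners who received test sets (Ames Part I)
completed = 218           # examiners who returned completed responses
sets_per_examiner = 15    # test sets per examiner
observed_errors = 30      # hypothetical number of errors among completers

completed_comparisons = completed * sets_per_examiner
dropout_comparisons = (invited - completed) * sets_per_examiner

observed_rate = observed_errors / completed_comparisons
# The statement's proposal: score every comparison from a dropout as an error.
penalized_rate = (observed_errors + dropout_comparisons) / (
    completed_comparisons + dropout_comparisons)

print(f"observed error rate:              {observed_rate:.2%}")   # ~0.9%
print(f"with dropouts counted as errors:  {penalized_rate:.2%}")  # ~24%
```

A roughly 23% dropout rate turns a sub-1% observed error rate into one over 20%, which illustrates how much of the inflated figure comes from the scoring rule rather than from anything the examiners actually did.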

Inconclusive

            Their next argument concerns the AFTE Theory of Identification's use of inconclusive. The AFTE Theory allows the examiner to conclude identification, inconclusive, or elimination, and it allows three different levels of inconclusive, ranging from close to an identification to close to an elimination. Although AFTE allows these three levels of inconclusive, they are seldom used in laboratories. The group of statisticians believes that the inconclusive conclusion is used when the decision is hard and the examiner wants to be right. Because of their disagreement with the inconclusive conclusion, they want it to be counted as an error, rather than the common practice of omitting inconclusives from error rates. When an inconclusive is counted as an error, the error rate can be pushed up to around 50%, making the conclusion a “coin toss”. The field is seeing a lot of “professionals” speaking out against the inconclusive conclusion, but I disagree with their statements. Inconclusive is a valid conclusion because of the nature of the evidence that is normally received in the laboratory. For example, many expended bullets that come through the laboratory are damaged, which can cause foreshortening and damage to the underlying toolmarks; this leaves some areas unusable and leaves the examiner with a limited number of markings. Those markings may not meet the examiner's threshold for an identification, but their presence will prevent the examiner from excluding the bullet, so the only option left is to report an inconclusive result. Another situation is when the pressure inside a firearm prevents the head of the casing from making good contact with the breech face, so the primer takes only limited marks from the breech face. This situation is similar to the damaged bullet, and in no way suggests that the examiner wants to take the easy way out. The examiner reports the conclusion only to speak accurately for the evidence and to avoid misleading anyone reading the report.
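
Using the different-source numbers from Ames Part I quoted earlier in this post, a quick calculation (my arithmetic, not a figure from the statement itself) shows how much this single scoring choice moves the rate:

```python
# Different-source comparisons from Ames Part I, as quoted earlier in this post.
comparisons = 2178   # 2180 comparisons minus the two blank responses
false_ids = 22
inconclusives = 735

rate_excluding = false_ids / comparisons
rate_as_errors = (false_ids + inconclusives) / comparisons

print(f"inconclusives not counted as errors: {rate_excluding:.2%}")  # ~1.0%
print(f"inconclusives counted as errors:     {rate_as_errors:.2%}")  # ~34.8%
```

Nothing about the comparisons changes between the two lines; the roughly thirty-fold jump comes entirely from redefining a cautious conclusion as a mistake.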

Conclusion

            Based on the arguments listed above, the group of statisticians concludes that they cannot support firearm and toolmark examination as evidence in criminal proceedings. They base most of their findings on the studies conducted in the field rather than on the specific examiners in the field, and they take a strict stand against the discipline while failing to recognize the complexity and uniqueness of this comparative science, for example in their misunderstanding of inconclusive results and their importance. Their recommendations are extreme and appear designed mainly to raise the error rate of a study, for example counting dropouts as 100% errors or treating inconclusive results as errors. The courts should not accept their statement, given their lack of understanding and their extreme views on how firearm-related studies should be conducted. They offer little evidence to support their claims and provide very few references. This statement also prompted the FBI to post its own response on May 3, 2022; that response will be reviewed in another literature review post.