General Laboratory

Paving Your Way to Publication

Why Should You Start a Research Project?

            In this article, I will take you through the general process of starting a research project and publishing your work in a recognized scientific journal. The task might feel daunting at first, but it is attainable; you just have to take it one step at a time. Being a good scientist is important, and that includes being able to follow procedures and understand the theory behind what we do. But there is another level that is crucial to the scientific field: starting or participating in a research project. Our science is based on research, and it only moves forward, or solidifies its validity, through the constant publication of scientific articles.

            If you're a student, you may think of a research project as something you just have to do to pass a class or receive your degree. But it's so much more than that. As a forensic scientist, you may think that your idea doesn't matter or that the field has already been fully developed. This is not the case: the field will always have a question to answer, or a technique or procedure that can be refined or given more validity. And if you are part of a scientific organization, this is exactly what such organizations were created to do: aid researchers and spread ideas.

            As you read through this article, I hope to give you a vision or a path to start your own research project. You are the one who can move the science forward in your own discipline. Take that next-level initiative and become a contributor to the field.  

Finding the Next Great Idea

            The most important part of publishing your research is having a research idea. Hopefully, you have one in mind; if not, I will show you some places that may spark one. If you're a college student, your best option may be to join a research project already underway at your institution. Alternatively, you can find a mentor to help you with your new research. The mentor you pick should be working in or teaching the relevant subject area so that they can better assist you.

            If you are not part of an institution, or you are looking for your own idea, then internet research will be your best friend. A great place to start is within a scientific organization that you are a part of or one that aligns with your area of research. For example, AFTE has a committee that helps examiners find research projects that were either abandoned or are needed to fill a gap in the science. Most likely, your professional organization will have something similar. OSAC, which can be found on the NIST website, posts areas of research needed by the scientific community. These areas come from research projects that raised new questions to be answered, or from areas under scrutiny that need more research to flesh them out.

            I would like to share a quick success story. A college student used the AFTE forums to find a research idea and acquire a mentor. He then conducted his research under his mentor's supervision and guidance. When the AFTE Annual Meeting came around, he presented his work, and everyone was amazed by this college student's dedication. Because of this, supervisors from different labs walked up to the stage to give him their business cards so he could apply to their laboratories. This story shows the importance of using the resources that these scientific organizations offer. They are often great resources, but they are seldom used.

            After your search, you will arrive at an idea that you are passionate about, and then the literature review process starts. This is where you will learn what your predecessors accomplished and see whether there are any missing pieces in the research, or places where it can be expanded upon. At this point you may find that your idea has already been covered exhaustively, and you may need to either tweak it or change it altogether. Otherwise, you have secured your idea and are now laser-focused on it. The next step is figuring out how you will fund this new idea.

Funding Your Great Idea

Below is a small summary of some grant opportunities. If you would like more information on a specific grant, feel free to click on the hyperlinks. This list is only meant to point you in the right direction; many other grants can be found, especially ones specific to your research area.

AFTE

Research and Development (Members): This is a research grant given to regular members of AFTE to conduct research in the field of firearm and toolmark identification. I have applied for this grant twice and was accepted both times. Since I have been through this process, I will provide more detail about this grant.

This grant requires an application with the following attachments: a project proposal abstract, a signed rules and assurance form, a budget worksheet, a small literature review of reference material, and a current CV/resume. The completed application packet is sent to the Research and Development (R&D) committee for consideration. If the committee approves the application, it is then sent to the Board of Directors for their blessing. Once you have it, you will receive a check for the seed money, which is 10%-20% of the projected budget. When the research is completed, the R&D committee will reimburse you the remaining amount. By accepting the money, you agree that the research will be published and presented at the Annual Conference.

Research and Development (Students/Trainees) – This grant is similar to the grant above but is only offered to students and trainees in the field of firearm and toolmark identification. This grant also requires that you seek a mentor who is an AFTE member to guide you in your research.

Before moving on to the next grant opportunities, I want to share something I heard from the R&D committee when I attended the last AFTE Annual Conference. The committee said they would like to receive more applications for grant money because they do not receive many during the budget year. This echoes what I said earlier: a lot of these organizations provide valuable resources that are seldom used, which, to your benefit, increases your chance of being awarded a grant.

NEAFS

Carol De Forest Forensic Science Research Grant – This grant awards $2,500 for research projects conducted by full-time undergraduate and graduate students, with two awards given per year. The student must be located in one of the following areas: New England, New Jersey, New York, or Pennsylvania. The student must also be majoring in a natural, allied, or forensic science program. Lastly, the student must have a research advisor from the academic institution they attend.

The student will have to submit an application with the following attachments: a 3-6 page research proposal, a 1-2 page statement of qualifications, a CV, a transcript, and a recommendation from the chosen research advisor.

AAFS

Field and Lucas Research Grant – The Field Grant will fund up to $1,500 and the Lucas Grant will fund $1,501-$6,000. This grant is for researchers who initiate original in-depth, problem-oriented research. To receive this grant you must be a member, of any level, of AAFS.

The following is needed to apply for the grant: an abstract, a 1-2 page literature review with no more than 10 references, a detailed budget, a timetable and specific plan for dissemination of results, a disclosure of any current or previous FSF research grants, and the CVs of all researchers involved.

Jan Bashinski Criminalistics Graduate Thesis Assistance Grant – The grant is for full/part-time students completing their graduate degree requirements by conducting a research project at an educational institution. The grant awards $1,850, and funds up to $1,400 to cover travel expenses for the student to present at an AAFS conference.

The student has to fill out an application with the following attachments: a 3-6 page proposal, a 1-2 page statement of qualifications, a CV, a letter of recommendation from your research advisor, and your transcript.

NIJ

Every year, NIJ awards grants and cooperative agreements for research, development, evaluation, and training across the spectrum of criminal justice. The site offers a list of all projects that have been awarded, so you can get a feel for what types of projects get funded. In the field of firearms, a company by the name of Cadre Research Labs receives a significant amount of funding from NIJ to expand the technology of 3D microscopy. NIJ will usually fund projects in its current areas of interest, and unsolicited proposals may not be accepted as readily as responses to competitive solicitations.

Grantforward.com

This website is a valuable resource: a search engine for grant opportunities. The website collects data about you and your research, matches that information against its database, and connects you to the applicable grants. The search engine does require a membership, but John Jay students may log in using their institution's credentials. Other academic institutions may offer the same benefit, so check with your institution for details.

Elsevier Funding Solutions

Elsevier works with researchers to provide grant money for their research, similar to the Grant Forward search engine. To use their services, they ask that you contact them once you have selected one of their specific product offerings. Review the website to see whether this may be a fit for you.

Office of Justice Programs

The Office of Justice Programs awards specific grants each year through solicitations that support forensic science topics.

What Scientific Journal Fits You?

Once you have completed your research, it's time to publish it in a scientific journal. If you applied for a grant, you may be required to publish in the organization's journal. Otherwise, your choice should be based on the focus of your research. A journal like the Journal of Forensic Science covers multiple disciplines in forensic science, while a journal like the AFTE Journal focuses on a niche group such as the firearm and toolmark community. You want to pick the most appropriate journal so that you have the greatest probability of publication and so that your article reaches the right group of people. Below are some journals commonly used by forensic scientists. This list is not exhaustive, so please look into other journals that may be better suited to your research. If you choose a journal that is not on this list, make sure it is peer-reviewed. Having an article peer-reviewed adds validity to your published work; it's one of the first things defense lawyers may ask about to ensure that your article can be "trusted".

AFTE Journal: This Journal focuses on firearms and toolmarks.

Journal of Forensic Science: This Journal publishes a wide variety of disciplines including the following: Anthropology, Criminalistics, Forensic Nursing Science, Odontology, Pathology/Biology, Psychiatry & Behavioral Science, Questioned Documents, Toxicology, Digital & Multimedia Sciences, and General. This journal will cover a lot of research ideas but is one of the toughest journals to get into.

Journal of Forensic Identification: This Journal publishes articles related to forensics. This includes anything from crime scene processing to footwear comparisons. Similar to the Journal of Forensic Science, this Journal publishes a wide range of disciplines.

Forensic Science International: This Journal is another multidiscipline publication but they publish research from around the world.

International Journal of Toxicology: This Journal is specific to toxicologists.

The Publication Process

So far, I have published twice in the AFTE Journal, so I will go into some detail about that process, which will hopefully help you through your journey. The first thing I did, and the first thing you should do when preparing a submission, is look at the publication's website and review the submission criteria. Ensure that your submission follows all the rules so that the article can be accepted and published in a timely manner.

            In my case, after submission I went through two phases of peer review. My first peer reviewer was given my article without my name so that they would not know the identity of the author; this mitigates bias for well-known (or barely known) authors. The peer reviewer sent their corrections to the Journal Editor, who sent them back to me. The peer reviewer was also unknown to me, which allowed the reviewer to be completely honest and avoid any bias. At that point, I had the option to accept or reject the corrections. I then sent my corrections and response to the Journal Editor to pass back to the peer reviewer. This correspondence happened a couple of times until the peer reviewer and I were both happy. Once we came to an agreement, the corrected article was sent to a second peer reviewer to repeat a similar process. After the second peer review, the article was evaluated by the Journal Editor. The article was then formatted, and a draft of the formatted version was sent to me for approval. This was my chance to make any final corrections and approve the article for publication. Finally, the article was published for all to enjoy.

Good Luck

I hope that you learned a lot from this article; please feel free to use the contact button at the top of the page if you have any questions. Just remember that the lists provided here are only a small fraction of what is available. Now go out there and contribute to the field of forensic science.

General Laboratory

Discussion about Inconclusives

            In other articles on this website, I have briefly discussed inconclusive results in the field of firearm comparisons. This article will dive deeper into what this conclusion means and why it has become the object of intense discussion.

AFTE Range of Conclusions

            AFTE (The Association of Firearm and Tool Mark Examiners) allows examiners to report three different inconclusive results: Inconclusive A, Inconclusive B, and Inconclusive C.

Inconclusive A

            Inconclusive A would be chosen if there is some agreement of individual characteristics and agreement of all discernible class characteristics, but the agreement is insufficient for an identification. This conclusion means the examiner was leaning toward an identification but did not have enough information to conclude one.

Inconclusive B

            Inconclusive B would be chosen if there is agreement of all discernible class characteristics without agreement or disagreement of individual characteristics due to an absence, insufficiency, or lack of reproducibility. This conclusion is the middle ground, where the examiner is leaning toward neither an identification nor an elimination.

Inconclusive C

Inconclusive C would be chosen if there is agreement of all discernible class characteristics and disagreement of individual characteristics, but the disagreement is insufficient for an elimination. This conclusion means the examiner was leaning toward an elimination, but there was just enough information to keep them from eliminating the two items from each other.

Opinion on Reporting Inconclusive

            Although AFTE allows examiners to report three different types of inconclusives, I believe that examiners should report only "inconclusive" as their result, regardless of which way they are leaning. This not only prevents bias but also more accurately conveys the intention of the examiner. For example, if examiners testify that they reached an Inconclusive A conclusion, they would have to tell the jury that there wasn't enough information to report the comparison as an identification or elimination, but that the comparison was leaning toward an identification. To me, this explanation takes the inconclusive conclusion and adds a wink and a nod to the jury that the comparison could have been an identification. This confuses the jury members and also skews the results of the examination.

            The examiner who reports "inconclusive" is more transparent with the jury, and the jury will have an easier time understanding the conclusion of the examination. The examiner would explain that, during the comparison, there was not enough information for an identification and just enough information to reject an elimination. If the markings were not sufficient for an identification, then the conclusion should be inconclusive, not Inconclusive A, because no matter the circumstances it was never enough for an identification.

The Rise of the “Problem”

Their Argument

            After the release of the PCAST report, the field went to work producing studies that would satisfy the PCAST recommendation for black box studies. The Ames I (Baldwin) study and the Ames II (Monson) study were conducted to satisfy that recommendation. These studies were very informative and contained a lot of data for anyone in the field to review. Yet even with the recommendation satisfied, people outside the field moved the goalposts and declared that the real problem with the field and the studies was the inconclusive conclusions. This has now become the main focus of the field, and once again examiners are trying to satisfy this challenge.

            One of the first times inconclusive results were called problematic was by Dr. Scurich, who argued that inconclusives should be scored as false positives. He demonstrated this by taking the Baldwin study and recalculating the error rates under that treatment. With inconclusives counted as errors, the error rate went from 1.01% to 35%.
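
            To make the arithmetic concrete, here is a minimal sketch of that recalculation, assuming it simply folds the inconclusive responses into the false positives. The counts come from the Baldwin (Ames I) study as summarized in my Ames Part I review: (2180) true different-source comparisons, (22) false identifications, (735) inconclusives, and (2) blank responses; how the blanks are handled in each denominator is my assumption.

```python
# Illustrative arithmetic only, not code from Dr. Scurich's paper.
false_ids = 22        # false identifications on true different-source comparisons
inconclusives = 735   # inconclusive responses on the same comparisons
blanks = 2            # blank responses (the study dropped these from its denominator)
total = 2180          # true different-source comparisons

conventional = false_ids / (total - blanks)       # inconclusives not counted as errors
as_errors = (false_ids + inconclusives) / total   # inconclusives scored as false positives

print(f"conventional: {conventional:.2%}")  # ~1.01%
print(f"as errors:    {as_errors:.2%}")     # ~34.7%, i.e. roughly 35%
```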

            Dr. Scurich also recommended a majority approach to determining the "ground truth" for a particular comparison. For example, if a preliminary comparison were given to a certain number of examiners and the majority concluded that it was inconclusive, then when the comparison is given to the actual participants in a study, their conclusions would be graded as correct only if they also reported inconclusive. Likewise, if the majority ruled that a comparison was an identification or elimination, any participant who concluded inconclusive would be marked wrong. No matter the actual ground truth, the "correct" answer would always have to coincide with the majority.
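
            Below is a minimal sketch of how such majority-rule grading would work; the function names and panel data are hypothetical illustrations, not taken from the proposal itself.

```python
from collections import Counter

def majority_label(panel_conclusions):
    """The consensus 'ground truth' is simply the most common panel conclusion."""
    return Counter(panel_conclusions).most_common(1)[0][0]

def graded_correct(participant_conclusion, panel_conclusions):
    """A participant is marked correct only if they match the panel majority."""
    return participant_conclusion == majority_label(panel_conclusions)

# Even if the samples truly came from the same firearm, an examiner who
# (correctly) reports an identification is marked wrong here:
panel = ["inconclusive", "inconclusive", "identification", "inconclusive"]
print(graded_correct("identification", panel))  # False
```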

My Argument

            We will first look at folding the inconclusives into the error rate of a study. I believe that inconclusives should be treated separately from identifications and eliminations and should not be factored into the error rate. As discussed above, an inconclusive carries a lot of meaning. It tells the reader that the examiner did not have enough information for an identification and/or had just enough information to keep them from concluding an elimination. Forcing an examiner to pick either an identification or an elimination would mean the examiner has to disregard what they see in the evidentiary material. Sometimes the evidence may be deformed, damaged, poorly marked, or changed from one firing to the next due to rust or other factors. These conditions increase the difficulty of the examination and may obscure the markings that would drive an examiner to either an identification or an elimination. No matter the experience level of the examiner, if the markings are not present or are insufficient, an inconclusive conclusion is appropriate. This is how other sections in crime laboratories, for example latent prints and DNA, use inconclusive results. Yet although the conclusion is used in other sections, it only appears to be considered problematic in the field of firearm comparisons.

            Lastly, the majority-rules solution would not work and would create an artificial error rate. As discussed in the previous section, if the majority chooses inconclusive, then to be "right" the participants would have to conclude inconclusive as well. Let's say that, in this scenario, the two pieces of evidence were actually from the same firearm. A couple of participants had enough experience with such evidence to look at areas other examiners would normally miss, or to use lighting techniques other examiners are not proficient in. By using their experience and techniques, they found markings that drew them to conclude an identification, which is the correct answer. However, since the majority ruled the comparison inconclusive, these examiners would be marked wrong.

            For a real-life analogue of the previous scenario, take a survey of 100 people and ask them what the southernmost state of the United States is. Assume the majority of the panel picked Texas or Florida; that answer would then become the "ground truth." The researcher would then give the same question to the actual participants of the study and grade their answers against the majority ground truth. Some participants may be more versed in geography and answer that Hawaii is the southernmost state. These participants would be right in the real world, but in the context of the study they would be marked wrong and would contribute to the error rate.

            Majority rules also causes test-taking bias. Examiners may feel forced to report a conclusion as an identification or an elimination. For example, suppose there were many markings drawing an examiner toward an identification, but the quality and quantity of the markings did not meet their threshold for one; they would normally conclude inconclusive. Knowing that the study is graded by majority rule, they may now conclude an identification, reasoning that the majority of participants would use those markings to make an identification.

            Before finishing this article, I would like to add one more data point that was seen in multiple studies. It was found that examiners who used inconclusives more often were more trustworthy than examiners who did not use them as often. This can be seen in the Ames study, where one examiner contributed significantly to the error rate but did not report a single inconclusive.

Final Thought

            Examiners should start using one inconclusive result rather than three, to be more transparent and to eliminate any bias presented to the jury. Outside organizations should accept inconclusive conclusions as valid, just as they do for other disciplines. It seems these outside organizations are using inconclusives to artificially raise the error rates and stir conflict within the field. People have to recognize the true meaning of an inconclusive result and not equate the conclusion with an easy way out, or with something used to easily pass a study. It is a valid and important conclusion that allows the examiner to appropriately speak for the evidence.

Literature Review

AMES II Study

Validation Study of the Accuracy, Repeatability, and Reproducibility of Firearm Comparisons

            This post is a summary of the second part of the AMES Study. The AMES study was undertaken as a direct response to the PCAST report's call for black box studies that would validate the comparison of components from expended cartridges. The AMES II study added repeatability and reproducibility, which were missing from AMES I, and incorporated expended bullets along with harder-than-usual samples for comparison.

Materials and Methods

Participation

            Recruitment was done through the AFTE website, announcements by FBI personnel at forensic meetings, and emails through an email list. The participants were told that they would remain anonymous to protect them from any risk. Overall, (270) examiners responded, but it was later decided that FBI employees could not participate, to eliminate any bias, which brought the total to (256) participants. By the end of the study, only (173) examiners returned their evaluations and were active in the study, and only (80) examiners returned all six mailings of test packets. The dropout rate was attributed to examiners reporting that they had inadequate time to complete the study alongside their casework.

Sample Creation

            For expended casings, (10) Jimenez, (1) Bryco (which replaced a failed Jimenez), and (27) Beretta firearms were used. (23) of the Berettas were new and were selected in groups of 4 or 5 that had been consecutively produced using the same broach at different periods in the life of the broach. All firearms had a break-in period and were cleaned throughout the testing. Steel Wolf Polyformance 9mm ammunition was used because it reproduces individual characteristics poorly, which increased the difficulty of the study. The expended bullet samples were created using (11) Ruger and (27) Beretta firearms.

Test Set Creation

            Each test packet consisted of (30) comparison sample sets: (15) comparisons of (2) knowns to (1) questioned expended cartridge case and (15) comparisons of (2) knowns to (1) questioned expended bullet. The expended cartridge case comparisons consisted of (5) sets of Jimenez and (10) sets of Beretta produced expended casings. The expended bullet comparison sets consisted of (5) sets of Ruger and (10) sets of Beretta expended bullets. The ratio of known same-source to known different-source sets was approximately 1:2 for both expended casings and bullets but varied among test packets. Each set was an independent examination unrelated to the other sets. The sets were open in the sense that not every questioned sample had a match.

            The samples were designated so that the researchers would know whether they were fired early, middle, or late in the test-firing sequence, which made it possible to look at the effect of firing order on the error rate. Samples from different manufacturing intervals were marked for the same reason.

            When a packet was returned with its results, it was randomized before being redistributed. A test packet would be sent back to the same examiner to test repeatability and then sent out to a different examiner to test reproducibility. Randomizing the same packet helped ensure that examiners could not identify any trends when receiving it.

Results

Accuracy

            A total of (4320) expended bullet set examinations and (4320) expended casing set examinations were performed. For expended bullet comparisons, (20) were false identifications (IDs), an error rate of 0.70%, and (41) were false eliminations, an error rate of 2.92%. For expended casings, (26) were false IDs, an error rate of 0.92%, and (25) were false eliminations, an error rate of 1.76%. Out of (173) examiners, (34) made a hard error in expended bullet comparisons and (36) made a hard error in expended casing comparisons. A chi-square test determined that the probabilities associated with each conclusion are not the same for each examiner. The point estimates of the error rates, at 95% confidence, were calculated as follows: expended casings, false positive 0.933% and false negative 1.87%; expended bullets, false positive 0.656% and false negative 2.87%.
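
            For readers unfamiliar with this kind of test, the sketch below shows the general form of a chi-square test of homogeneity across examiners. The contingency table is invented for illustration; it is not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are examiners, columns are counts of
# (correct, inconclusive, hard error) conclusions across their comparison sets.
counts = [
    [28, 2, 0],
    [20, 9, 1],
    [12, 14, 4],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
# A small p-value suggests conclusion probabilities differ across examiners.
```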

Repeatability

            The repeatability results are laid out to show any change from an examiner's initial answer: whether the examiner changed from one inconclusive option to another, or between an inconclusive option and the ground truth, it is counted as a disagreement. Below, I summarize the charts while neglecting those switches and including only hard errors, meaning answers that switched from an elimination to an identification or vice versa. For expended bullet matching sets, (8) comparisons went from ground truth ID to false elimination and (8) went from false elimination to ground truth ID. For expended bullet nonmatching sets, (1) comparison went from ground truth elimination to a false ID and (6) went from a false ID to ground truth elimination. For expended casing matching sets, (5) comparisons went from ground truth ID to a false elimination and (1) went from a false elimination to a ground truth ID. For expended casing nonmatching sets, (2) comparisons went from ground truth elimination to a false ID and (2) went from a false ID to a ground truth elimination.

            The proportion of paired disagreements was calculated two ways: by pooling the inconclusives together, and by combining ID with Inconclusive A and elimination with Inconclusive C. The first percentage below reflects the former and the second the latter. For expended bullets, the matching sets showed 16.6%/14.5% and the nonmatching sets 16.4%/28.7%. For expended casings, the matching sets showed 19.1%/14.6% and the nonmatching sets 21.1%/27.5%. The authors also calculated better-than-chance repeatability from the results.
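
            As a minimal sketch of those two collapsing schemes (with hypothetical labels and data, since the study's raw responses are not reproduced here):

```python
def collapse(conclusion, scheme):
    """Map a conclusion onto one of the two pooling schemes described above."""
    if scheme == "pooled":  # all three inconclusives become one category
        return "inconclusive" if conclusion.startswith("inc") else conclusion
    # "combined": Inconclusive A joins ID, Inconclusive C joins elimination
    return {"inc_a": "identification", "inc_c": "elimination"}.get(conclusion, conclusion)

def paired_disagreement(first_round, second_round, scheme):
    pairs = list(zip(first_round, second_round))
    changed = sum(collapse(a, scheme) != collapse(b, scheme) for a, b in pairs)
    return changed / len(pairs)

r1 = ["identification", "inc_a", "inc_b", "elimination"]
r2 = ["identification", "identification", "inc_c", "inc_c"]
print(paired_disagreement(r1, r2, "pooled"))    # 0.5
print(paired_disagreement(r1, r2, "combined"))  # 0.25
```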

Reproducibility

            The results are laid out the same way as in the repeatability portion of the study, and again I list only the hard errors in this summary. For expended bullet matching sets, (12) comparisons went from ground truth ID to false elimination and (13) went from false elimination to ground truth ID. For expended bullet nonmatching sets, (1) comparison went from ground truth elimination to a false ID and (1) went from a false ID to ground truth elimination. For expended casing matching sets, (5) comparisons went from ground truth ID to a false elimination and (15) went from a false elimination to a ground truth ID. For expended casing nonmatching sets, (1) comparison went from ground truth elimination to a false ID and (5) went from a false identification to a ground truth elimination.

            As before, the proportion of paired disagreements was calculated both by pooling the inconclusives together and by combining ID with Inconclusive A and elimination with Inconclusive C, with the first percentage reflecting the former and the second the latter. For expended bullets, the matching sets showed 27.6%/22.6% and the nonmatching sets 45.4%/51.0%. For expended casings, the matching sets showed 29.7%/23.6% and the nonmatching sets 45.1%/40.5%. The authors also calculated better-than-chance reproducibility from the results.

Other Examined Areas

            The paper also examines other areas of interest that may prove useful to some examiners, but the results would not be used in court testimony. I will briefly summarize them here to ensure that this post remains a complete summary of the study.

            The effects related to firearm type and wear were examined, and it was found that Beretta-produced expended bullet samples yielded a larger proportion of correct conclusions. Ruger firearms produced more inconclusive results than the Beretta samples. For expended casings, Beretta firearms produced a larger proportion of correct conclusions than the Jimenez firearms. Regarding firing sequence in matching sets, the rate of correct conclusions relative to inconclusives was higher when the samples came from the same third of the firing sequence. Although a difference was observed, a chi-square test showed it was not significant for Early-Late and Late-Early comparisons.

            The proportion of unsuitable evaluations was also examined. Fewer expended bullet sets produced with Berettas were recorded as unsuitable, while more expended casing sets produced with Beretta weapons were recorded as unsuitable. The effects associated with manufacturing were examined as well, and the authors found strong support for a difference between conclusions for same-group and different-group examinations of expended casings: more eliminations were seen with expended casings from different production runs than with ones from the same production run. A chi-square test also found that the effect of tool wear within a production run was not significant.

            The study also asked examiners to rate the difficulty of their comparisons, record the time spent on their evaluations, state whether they used consecutively matching striae (CMS), and identify the areas they relied on for their conclusions. Examiner experience was also looked at by the authors. Those results can be found in the study and will not be discussed here, though I would like to share one result from the CMS portion: the study showed that examiners who used CMS were more likely to reach false negative conclusions. This result was significant only for matching expended bullets and nonmatching expended casings.

Discussion

Accuracy

            The error rates found in the accuracy portion of this study were close to the ones found in Part I of the AMES study. The false positive rates match extremely well, but this study shows a higher false negative rate. The difference in the false negative error rate can be attributed to the steel Wolf Polyformance cartridges, which proved more difficult for comparisons. That factor, combined with the poorly marking Jimenez firearms and the differences in firing order, can cause more false negatives; together these would be considered a worst-case scenario for examiners. Some examiners recorded comments on the study, and some were concerned that they could not examine the firearms to determine whether certain markings were subclass. In normal casework, if known samples are generated, the examiner has access to the firearm. Another complaint from examiners concerned the spacing of test fires: in casework, the test fires generated from a known source should be close in sequence to the samples collected at a crime scene with the firearm. It was also noted that the errors were attributable to only a few examiners, which was likewise seen in Part I of this study. The article concludes this section by stating that, for both expended bullets and casings, the probability of a false positive is half that of a false negative, possibly because examiners are trained to lean toward the cautious side.

            It should also be noted that the (6) most error-prone examiners accounted for almost 30% of the total errors, and (13) examiners accounted for almost half of all the hard errors seen in the study. These results are consistent with those seen in Part I. Considering that most of the errors came from a small group of examiners, it can be argued that the error rates really apply to individual examiners rather than to the overall science. If these examiners had been randomly swapped with other examiners during the selection process, the overall error rate could have decreased. Also, had the study allowed examiners to use their laboratory QA systems, some errors might have been prevented in the first place.

Repeatability/Reproducibility

            In the article, the authors created multiple scatter plots of observed versus expected agreement for repeated examinations by the same examiner. These plots show that examiners score high in repeatability; in other words, their observed performance generally exceeds the statistically expected agreement by a wide margin. This holds whether the inconclusives are treated separately or combined. Some examiners stated that they would not be surprised to conclude Inconclusive C and then conclude elimination in the second round; another stated that they would not be surprised if their "flip-flops" were concentrated around the three inconclusive categories. As for reproducibility, the observed agreement generally matched the expected agreement, and the trends are not as dramatic as those seen in repeatability. This is because reproducibility involves multiple examiners rather than the single examiner involved in repeatability.

Inconclusives

            I feel that it is important not to include inconclusives in the error rate, because they should be used when an examiner cannot make an identification or elimination based on the evidence provided. Forcing an examiner into an identification or an elimination would be asking them to conclude against what they are observing. I also believe the study should have offered only one inconclusive option; allowing three creates difficulty when trying to determine error rates. For example, in repeatability and reproducibility, the three inconclusives can inflate the disagreement rate even though the examiners choosing them are all concluding inconclusive. When examiners choose inconclusive, they should not be biasing themselves toward an identification or an elimination; they should be stating that the markings are not sufficient for either. The only use I find for the three inconclusives is in academic research, and as seen here they can still cause problems even there.

Final Thoughts

            In the conclusion of the study, the authors state that some comparison sets produced errors from more than one examiner, and one set was marked as an error in every part of the study. The authors state that these comparison sets would be evaluated by trained forensic examiners at the FBI to determine the cause of the errors. Since publication, I have not been able to find any follow-up on this.

            As always, I recommend you read the full study because they examined a lot of variables not seen in Part I and they included a lot of statistics to back up their claims. Also, a better understanding of this study will help you combat its use in court. See my article on a Daubert hearing where the defense attorney used this study to manipulate the data for their own gain.

Literature Review

Part I: Ames Study

Finding the Article

            I was able to find a copy of a study titled "A Study of False-Positive and False-Negative Error Rates in Cartridge Case Comparisons," written by David P. Baldwin, Stanley J. Bajic, Max Morris, and Daniel Zamzow. This is Part I of a two-part study done by the Ames Laboratory. Part I can still be found in obscure places, but Part II has been wiped from most sources. Defense attorneys and academic opponents reference these studies heavily, but when they do, the citations are often quick, sloppy, and cherry-picked. I hope to share the main findings of these studies to help anyone in the field who will encounter people using them. This post will focus on Part I; at a later date, I will post a discussion of Part II.

Introduction/Experiment

            The authors designed the study to better understand the error rates associated with comparing fired cartridge casings. They stated that the problem with previous studies was that they did not include independent sample sets allowing unbiased determination of the false-positive and false-negative rates, so this study set out to resolve that issue.

            Two hundred and eighty-four (284) participants were given fifteen (15) test sets to examine. Twenty-five (25) Ruger SR9s were used to create the samples, and each firearm fired 200 cartridges for break-in before sample collection. Each handgun fired 800 cartridges in total for the test sets. No source firearm was repeated within a single test packet, except when a test set was meant to be a same-source comparison. Each set included 3 knowns to compare to a single questioned casing. For all participants, five (5) of the test sets were from known same-source firearms and ten (10) were from known different-source firearms. In addition to their results, participants had to record the quality of the known samples, which allowed the authors to calculate a poor mark production rate. This rate was examined to avoid cherry-picking well-marked samples for the test sets, which usually draws criticism for making the test sets too easy. The authors also asked participants not to use their laboratory peer review process, so that the error rates would reflect the individual examiner.

Results

False Negative

            Out of the two hundred and eighty-four (284) participants, only two hundred and eighteen (218) returned completed responses. Of the completed responses, 3% came from self-employed individuals. In total, one thousand and ninety (1090) true same-source comparisons were made, of which only four (4) were labeled eliminations and eleven (11) were labeled inconclusive. The false elimination rate was calculated to be 0.3670%, with a Clopper-Pearson exact 95% confidence interval of 0.1001%-0.9369%. Two (2) of the four (4) false eliminations were made by the same examiner, so 215 out of 218 examiners made no false eliminations. When inconclusives are counted with false eliminations, the error rate increases to 1.376%, with a corresponding 95% confidence interval of 0.7722%-2.260%.
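
            The reported Clopper-Pearson interval can be reproduced from the counts alone. Below is a minimal sketch using the standard beta-distribution formulation of the exact interval in SciPy; this is my own illustration, not code from the study.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for k successes in n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 4 false eliminations out of 1090 true same-source comparisons
lo, hi = clopper_pearson(4, 1090)
print(f"rate = {4/1090:.4%}, 95% CI = ({lo:.4%}, {hi:.4%})")  # ~0.10% to ~0.94%
```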

            A number to take into consideration is the poor mark production rate discussed above. Two hundred and twenty-five (225) known samples out of nine thousand seven hundred and two (9702) were considered poor quality and inappropriate for inclusion in the comparisons, which comes to 2.319% of the samples, with a corresponding 95% confidence interval of 2.174%-2.827%. This percentage is greater than the false elimination rate, which means there is a high probability that some of the false eliminations can be attributed to the poor quality of the knowns used for comparison. Also, all four (4) of the false eliminations were made by examiners who did not report inconclusive for any response, which could be attributable to their agency requirements.

False Positive

            Out of the two thousand one hundred and eighty (2180) true different-source comparisons, twenty-two (22) were labeled identifications and seven hundred and thirty-five (735) were labeled inconclusive. The false identification error rate was calculated to be 1.010%. (Note: two (2) responses were left blank and were subtracted from the total number of responses.) All but two of the false identifications were made by five (5) of the two hundred and eighteen (218) examiners. Since a small number of examiners made most of the errors, this suggests that the error probability is not consistent across examiners, the idea stated at the beginning of this post. A beta-binomial model was therefore used to estimate the false identification probability, because it cannot be assumed to be uniform across examiners. The probability was calculated to be 0.939%, with a likelihood-based 95% confidence interval of 0.360%-2.261%.

            The inconclusive results also proved to be heterogeneous. Of the two hundred and eighteen (218) examiners, ninety-six (96) labeled none of the different-source comparisons inconclusive, forty-five (45) labeled all 10 of them inconclusive, and seventy-seven (77) fell somewhere between the extremes.

My Discussion

            The authors state that the false elimination error rate is in doubt because the poor-quality rate is higher than the false elimination rate, even with the inconclusive results factored in. I agree that the error rate should be questioned, because it can be affected by the poor quality of the samples, which can keep an examiner from concluding a positive comparison. But there is another factor at play as well. Some laboratories do not allow their examiners to report inconclusive results and require the conclusion to be either an identification or an elimination, something the statistics community has been pushing for. This factor is hard to assess, however, because the authors did not require participants to disclose their laboratory practices. It can be surmised that this might be the case here, because all the false eliminations were made by examiners who did not report inconclusive in any of their comparisons.

            The false positive rate is a percentage that should be applied not to the science but to the examiner. The 1% error rate is representative of the examiners participating in this specific study, as can be seen from the fact that most of the false identifications were produced by five (5) of the two hundred and eighteen (218) participants. The study design also deliberately excluded the laboratory review process so that the authors could examine individual examiners. It is my belief that if the review process had been allowed in this experiment, the error rate would have been smaller, possibly close to 0%. So the error rate can be used to advocate for examiners to be well trained and to have a well-established QA system in place.

            The study also addresses the higher number of inconclusive responses received in the different-source comparisons. The seven hundred and thirty-five (735) inconclusive results, set against the one thousand four hundred and twenty-one (1421) reported eliminations, are too many to be attributed to the poor-quality percentage. Just like the false eliminations, the inconclusives can be attributed to laboratory policy: a laboratory may require the examiner to report an inconclusive result if the class characteristics agree between the known and unknown samples. Since the same model of firearm was used to create the known and unknown samples in this study, the samples generated would share the same class characteristics. If the authors had included a section where participants could disclose their laboratory policy, we would be able to better understand the number of inconclusive results seen in the study.

            Hopefully, my post helps bring to light the first part of the Ames study and provides more transparency around the error rates published in the paper. Please use this post as a reference or a quick summary, but seek out a copy of the original paper for a more in-depth look at the study design. The authors were very detailed, and it is well worth reading the paper for yourself. They go into greater depth on the design of the study and the creation of the samples than I have included here, and they have a large discussion section that dives deeper into the statistics they applied and why those methods properly represent the data. In a future post, I will summarize and discuss the second part of the Ames study so that more examiners have access to what some critics of the science use as a reference.

Experience

Internal Auditor Training

            Recently, I took an Internal Auditor training course provided by Seaglass Training and taught by Anja Einseln, who worked for ASCLD/LAB (now merged into ANAB) from 2006 to 2017. The training covered the main documents used during internal audits of crime laboratories: ISO/IEC 17025:2017, the ANAB AR3125 accreditation requirements, and the ISO 19011 auditing guidelines. The course was attended online, with the instructor's video and voice overlaid on a PowerPoint presentation. It ran for 5 days at 5 hours per day, with two 15-minute breaks each day, for those readers wondering about breaks. Although I took the course online, it has previously been taught in person, so your experience may differ if it converts back to in-person training.

Background

            The course was structured so that the instructor went through each document section by section. Ms. Einseln read most of each section aloud while the students took notes. We were provided with a PDF copy of the PowerPoint so we could take notes and read along. The PDF had fill-in-the-blank spaces every couple of slides, which meant we had to constantly cross-check her PowerPoint against our own PDF to make sure we had all the information; this kept us engaged throughout the course. Before starting the class, she explained her teaching methods: she structures the course to help students who learn in different ways, whether kinesthetic, visual, or auditory. Kinesthetic learners benefit from the fill-in-the-blanks, visual learners from the PowerPoint slides, and auditory learners from the audible reading of each section of the documents. She also asked us questions from time to time and required us to write the answers in the chat. They were usually simple questions, but they kept us engaged, and she monitored the chat to make sure everyone in the class was participating. The last day included exercises that further engaged us and allowed her to test our knowledge; these are explained in the next section.

Day-to-Day Breakdown

Day 1

            The first day focused on background information, including the background of ISO, accrediting bodies, and other organizations such as OSAC, ILAC, IAAC, and NIST. She explained the importance of these organizations and how they relate to and interact with one another. She also discussed how the overall auditing process is layered: general requirements, accrediting body requirements, community-adopted standards, agency orders/manuals, and laboratory policies and procedures. Another topic heavily covered on the first day was Deming's Wheel, which covers the basics of auditing. The wheel, created by W. Edwards Deming, consists of four concepts: plan, do, check, and act.

Rest of the week

            The second and third days were allocated to the ISO/IEC 17025:2017 document, covered using the method explained above, with each section read one by one while we filled in our PowerPoints. The instructor allowed us to ask questions at any time and was good at catching questions asked in the chat portion of our virtual meeting. All questions were answered thoroughly, and she made sure everyone understood the answer before moving on. The fourth and fifth days were dedicated to the ANAB AR3125 document and were structured the same way.

On the fifth day, we were given two exercises. One required us to make a checklist based on a procedure; she then showed us the final product of a couple of cases, and we had to check each final product against our checklist. If the end result did not satisfy the checklist, we had to say that the case was not compliant and identify the areas that led to that determination. For the second exercise, we were given a procedure and a video of someone performing it, and we had to take notes whenever a deviation occurred. This exercise had us practice casework observations. At the end of the video, everyone wrote the deviations they found in the chat. Some people went overboard with their findings, but most found what was supposed to be found.

Final Thoughts

            Overall, this class was helpful but very dry. But the dryness cannot be helped; there is no way to make reading these documents entertaining. She did her best and created multiple ways to engage the students so that we could absorb the information while staying stimulated. One of the best parts of the class was the question-and-answer dynamic, which let us clarify anything in these informationally dense documents. I wish I had known about this dynamic beforehand, because I could have asked my laboratory whether they needed clarification on anything in the documents; the instructor was very knowledgeable about everything within them. There were times when she did not have an answer right away, but either after a break or the next day she would have a detailed answer ready for the person who asked. She also made herself available during breaks and after class for any questions, and I would usually stay after class just to hear what other people were asking and how she answered. I recommend that anyone taking her class listen carefully and ask their own questions; the answers she gives can help immensely during an internal audit. Lastly, keep your notes, marked-up documents, and PowerPoints in a safe place, because they will be very useful as references in the future.

Literature Review

Response: The Field of Firearms Forensics is Flawed

Introduction

            An article entitled "The Field of Firearms Forensics is Flawed" was published by David L. Faigman, Nicholas Scurich, and Thomas D. Albright. The authors open by referencing Donald Kennedy's 2003 editorial "Forensic Science: Oxymoron?," which argues that forensic science is an oxymoron, and they agree that the statement made in 2003 is still relevant today. They state, "Forensic experts continue to employ unproven techniques, and courts continue to accept their testimony largely unchecked." They claim that the field of firearm examination is built on smoke and mirrors. I would like to respond to this article using the knowledge I have as a firearm examiner.

Quantity of Studies

            Their first argument is that only a few studies exist to validate the field, and that the ones that do exist indicate that examiners cannot reliably determine whether bullets or cartridges were fired by a particular gun. This statement is problematic in that the authors never cite the article(s) that supposedly show an examiner cannot reliably determine the origin of an expended component. During my training as an examiner, I read hundreds of articles supporting the field, all of which produced low error rates. For example, the Hamby and Brundage study examined bullets from ten consecutively manufactured Ruger pistol barrels, the Fadul study examined 10 consecutively manufactured Ruger slides, and the Cazes study examined 10 consecutively manufactured Hi-Point slides. The Hamby and Brundage test, which incorporated 502 examiners, had a 0% error rate. The Fadul study established an error rate of 0.000636% and a durability error rate of 0.0017699%, and both were determined not to be significantly higher than zero. The durability portion of the Fadul study consisted of giving the participants casings fired later in the sequence than the casings they originally received.

The durability portion of the Fadul study was created to see whether an examiner's conclusions would change based on the wear of the markings on the breech face caused by the previous test fires. Studies like these focus on consecutively manufactured parts because this creates the hardest scenario for examiners, and it also ensures the examiners are using individual markings rather than class characteristics for their conclusions.

These studies also focus on the manufacturing method rather than the make and model of the firearm used, because a manufacturer can use only a few manufacturing methods to produce a firearm. If the overall method is proven to produce individual markings, that proof can be applied to all firearms produced with the same method. Many other foundational studies, along with their summaries, can be found on the AFTE SWGGUN ARK.

Anti-Experts Experts??

            The authors of the article suggest the need to create "anti-expert" experts to combat experts in court. These experts would consist of research scientists, which does not make sense: the people doing research in this field are publishing in the Journal of Forensic Science and the AFTE Journal, the very journals the authors just argued against. These journals are peer-reviewed and published for anyone to view, and they allow anyone to retest the conclusions made. Since these articles are peer-reviewed and part of the scientific record, I am not sure who the research scientists serving as anti-expert experts would be; they would just be the people in the field publishing the work.

Inconclusive Results

            As with many critiques of the science, the authors argue that inconclusive results should not be used in research studies, characterizing them as an “I don’t know” answer. As explained in a previous post, the inconclusive conclusion is used to speak for the evidence, not to excuse the examiner from making a conclusion. Depending on the condition of the evidence and the quality of the toolmarks, an inconclusive result may be the only conclusion the examiner can properly report: the markings present can be enough to prevent the examiner from eliminating the expended evidence, while their poor quality and quantity prevent an identification. If the examiner were forced to report an identification or an elimination in this scenario, the basis for that conclusion would be weak. The examiner must therefore report an inconclusive result to properly speak for that particular piece of evidence.

Subjective vs Objective

            The authors argue that an examiner’s subjective experience should not be treated as reliable and that a quantitative standard needs to be established. However, they fail to address the vast body of articles and scientific background that supports the validity of the field. Some of those studies were discussed above, but it is also well documented in toolmark science that tools leave unique markings on surfaces as they perform work. This is due in part to the crystalline structure of the material and other factors, which can be observed at the microscopic scale, showing the observer chip formation and its effects on toolmarks. Backed by this foundational knowledge, the examiner is able to make their conclusion.

An analogy further illustrates that subjectivity does not automatically discount the validity of the science. The house you live in is unique, whether by the way it was built, the area around it, or the personal touches you have added. Based on these features, you can walk up to the house that belongs to you. That selection is subjective, but it is supported by the many factors just described. Show the homeowner a picture of another house of the same design alongside a picture of their own, and, using those same features, they will still be able to subjectively select their house from the pictures.

AMES Study

            The authors also reference the AMES Part II study. In this study, participants from the first part were given the same evidence without their knowledge and told to reach a conclusion. The authors claim that Part II showed that the same examiners looking at the same bullet reached the same conclusion one-third of the time, and that different examiners looking at the same bullets reached the same conclusion less than one-third of the time. That is all the information the authors provide, with no references. I tried looking up the article, could not find it published anywhere, and later discovered that the FBI had removed it from distribution. I contacted the laboratory that originally produced the article; they stated that the study had error rates of only 1% and that they are frustrated the FBI took down Part II. I am currently in the process of getting access to the second part of the study.

The reason I would like to review the second study before submitting this portion of my response is that the authors’ use of the data can be misleading. Their statement could mean that an examiner originally reported an inconclusive and then, in Part II, reported the actual ground truth. Alternatively, the examiner could have originally reported the ground truth and changed the answer to an inconclusive in Part II. Neither scenario is the kind of error that would destroy someone’s life, as the authors suggest. These changes can be due to many factors, such as the quality and quantity of the markings found on the expended evidence, as explained above. An examiner who originally reported an inconclusive may later find more markings, perhaps due to lighting angles or a small spot on the evidence not seen before, that provide enough information to meet their threshold for an identification or elimination. Alternatively, an examiner who originally reported an identification or elimination may now report an inconclusive because they cannot find the small spots that originally supported their conclusion or cannot achieve the angle of light originally used to properly illuminate the markings.

Conclusion

            Overall, the authors’ view of the science lacks support, and they provide little to no references for their claims. I have provided sources and explanations that counter their claims and show the foundation Firearm Examiners use for their conclusions. Had the authors provided sources for their argument, I would be able to understand their position better, dissect those sources, and provide additional sources if needed. Their use of the AMES study, for example, lacks both a reference and an explanation of the data, especially since the source cannot be viewed and analyzed by the reader. They would also need to elaborate on who would serve as an anti-expert expert, so the reader can understand where these experts would get their information and why they would make a difference. Finally, the authors need a better understanding of the inconclusive conclusion and its use in the field of firearm examination before proposing to omit it from research studies.

Literature Review

Firearms and Toolmark Error Rates

Introduction

            On January 3, 2022, four statisticians issued a statement entitled “Firearms and Toolmark Error Rates.” The four statisticians were Alicia Carriquiry, Heike Hofmann, Kori Khan, and Susan Vanderplas. All of them, except Kori Khan, are part of the Center for Statistics and Applications in Forensic Evidence (CSAFE). The purpose of the statement is to offer the opinion that, for the Firearm and Toolmark discipline, “error rates established from studies with sampling flaws, methodological flaws, non-response and attrition bias, and inconclusive results are not sufficiently sound to be used in criminal proceedings.” I reject the statements made; in this article I will summarize the statement and provide my own opinion.

Participant Sampling

            They first argue that there is a sampling problem within the studies conducted for the discipline: having examiners volunteer for participation will bias a study and create lower error rates, because examiners who volunteer are more involved in the discipline and tend to have more experience. The announcements for these studies are usually posted on the Association of Firearm and Toolmark Examiners (AFTE) forum, whose members derive most of their income from firearm examination, and the statisticians assume that examiners in this organization are more involved in the field and more experienced. I disagree, because a study has to be announced somewhere the relevant scientific community has the opportunity to volunteer. Examiners who are part of AFTE span all experience levels, and membership includes examiners with only a few years of experience. In my case, I have only two years of experience in this field, and I am an AFTE member with access to the AFTE forum. There are also plenty of published studies whose volunteers had only a couple of years of experience, including a consecutively manufactured Ruger slide study performed by the Miami-Dade Crime Laboratory. I also disagree that volunteering invalidates the results, because it would be impossible for a researcher to randomly select participants and then have their laboratories present the study as actual casework. Most laboratories’ evidence-intake processes make this hard to accomplish, and it would be difficult to replicate all the evidence and paperwork needed to make the study appear to be a real case. All other scientific disciplines, including the medical field, rely on volunteers for their studies, so this alone should not be used to invalidate Firearm and Toolmark studies.

Material Sampling

            The group then argues that the discipline has material sampling problems. Studies in the discipline tend to focus on consecutively manufactured parts, which the statisticians find problematic, stating that such studies lose the ability to make broad, sweeping claims about the discipline. They recommend instead a black box study with a large number of firearms and ammunition types, so that the study encompasses more of what is found in actual casework. I disagree, because consecutively manufactured studies create the worst-case scenario for examiners, thus giving the highest theoretical error rate. Consecutive studies have been done on almost every part of the firearm (for example, barrels, extractors, ejectors, and breech faces) and across multiple machining methods (for example, double-broached rifling and hammer-forged rifling). Combined, these studies isolate the different parts of a firearm and the different manufacturing methods. They focus on the machining method rather than a mass of firearms because there are only a limited number of machining methods manufacturers can use. Examining the machining method is therefore more informative for the examiner than examining random firearm makes and models. I also believe that a big study examining multiple firearms, as the statement suggests, would not be useful, because examiners would be able to eliminate samples early on due to differences in class characteristics, which would prevent the individual characteristics from being examined.

Non-Response Bias

            They then address missing data and non-response bias, claiming that most studies never disclose their data or their dropout rates. Their suggestion is that the dropout rate should be factored into the error rate: a dropout rate of 20% should be enough to invalidate a study’s results, and a 5% rate should be sufficient to cause concern. When the dropout rate reaches these percentages, they recommend that those participants’ answers be included and counted as 100% incorrect, reasoning that participants can be assumed to have quit the study because of its difficulty or their own poor time management, and that their answers would therefore have been largely incorrect. Applying this could raise low error rates to as much as 16.56%, which they present as an upper bound for the error rate. This argument does not hold up well, because many people drop out of a study due to caseload at the laboratory or other responsibilities. A dropout should not automatically be read as the examiner finding the study too hard, especially since the statisticians’ earlier assumption was that all volunteers are experienced. Also, assuming a 100% error rate for dropouts assumes complete incompetence on the part of the examiner and ignores the scientific backing of the discipline and the quality assurance measures of the laboratory. Most laboratories require a second examiner to reach the same conclusion before it can be published, so this assumption would require the second examiner to have a 100% error rate as well.
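To see how this worst-case scoring inflates a reported error rate, here is a minimal sketch of the arithmetic. The counts are hypothetical, chosen only to illustrate the effect of a large dropout rate; they are not from any specific study.

```python
def worst_case_error_rate(errors: int, completed: int, dropouts: int) -> float:
    """Error rate if every dropout is scored as 100% incorrect,
    as the statisticians' statement proposes."""
    return (errors + dropouts) / (completed + dropouts)

# Hypothetical counts for illustration only:
errors, completed, dropouts = 5, 400, 80   # 80/480 = ~17% dropout rate

print(f"Observed error rate:   {errors / completed:.2%}")                                  # 1.25%
print(f"Worst-case error rate: {worst_case_error_rate(errors, completed, dropouts):.2%}")  # ~17.71%
```

A roughly 1% observed rate balloons to nearly 18% under this scoring, even though nothing about the completed answers changed, which is why the assumption behind it matters so much.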

Inconclusive

            Their next argument concerns the AFTE Theory of Identification’s use of inconclusive results. The AFTE Theory allows the examiner to conclude identification, inconclusive, or elimination. AFTE also allows three levels of inconclusive, ranging from close to an identification to close to an elimination, although these three levels are seldom used in laboratories. The statisticians believe the inconclusive conclusion is used when a decision is hard and the examiner wants to be right. Because of this disagreement, they want inconclusive results counted as errors, rather than the common practice of omitting them from error rates. Counting inconclusives as errors can bring the error rate up to around 50%, making the conclusion a “coin toss.” The field is seeing many “professionals” speak out against the inconclusive conclusion, but I disagree with them. Inconclusive is a valid conclusion because of the nature of the evidence normally received in the laboratory. For example, many expended bullets that come through the laboratory are damaged, which can cause foreshortening and damage to the underlying toolmarks. Some areas become unusable, leaving the examiner a limited number of markings. Those markings may not meet the examiner’s threshold for an identification, but their presence prevents the examiner from excluding the bullet. The only option left is to report an inconclusive result. Another situation arises when the pressure inside a firearm prevents the head of the casing from making good contact with the breech face, so the primer takes limited marks from the breech face. This situation is similar to the damaged bullet, and in no way suggests that the examiner wants to take the easy way out. The examiner lists the conclusion only to properly speak for the evidence and to avoid misleading anyone reading the report.
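The arithmetic behind the “coin toss” framing is worth making explicit. Here is a minimal sketch, with hypothetical counts chosen only to show how the two scoring rules diverge:

```python
# Hypothetical comparison counts for illustration only:
correct, inconclusive, errors = 50, 45, 5

# Common practice: inconclusives are omitted from the error rate.
omitted = errors / (correct + errors)

# The statisticians' proposal: inconclusives are counted as errors.
as_errors = (errors + inconclusive) / (correct + inconclusive + errors)

print(f"Inconclusives omitted:   {omitted:.1%}")    # ~9.1%
print(f"Inconclusives as errors: {as_errors:.1%}")  # 50.0%
```

The underlying comparisons are identical in both rows; only the scoring rule changes. Treating every inconclusive as an error therefore says more about the rule than about the examiners.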

Conclusion

            Based on the arguments above, the group of statisticians concludes that they cannot support Firearm and Toolmark examination as evidence in criminal proceedings. They base most of their findings on the studies conducted in the field rather than on the specific examiners in it. They take a strict stand against the discipline but fail to recognize the complexity and uniqueness of this comparative science, as seen in their misunderstanding of inconclusive results and their importance. Their recommendations are extreme and seem designed simply to raise the error rate of a study, such as counting dropouts as 100% errors or counting inconclusive results as errors. The courts should not accept their statement, given their lack of understanding and extreme views on how firearm-related studies should be conducted. They have little evidence to support their claims and provide very few references. This statement also prompted the FBI to post its own response on May 3, 2022, which will be reviewed in another literature review post.

General Laboratory

ANAB Accreditation Explained

Important Terminology

ANAB is a name you will have to know when working for a New York State crime laboratory, because of the accreditation requirement mandated by the Commission on Forensic Science, which is part of the New York Division of Criminal Justice Services (NYDCJS). ANAB stands for the ANSI National Accreditation Board, and its goal is to accredit laboratories that follow the accreditation standards the board sets.

The ANAB standards are based on ISO/IEC 17025: General Requirements for the Competence of Testing and Calibration Laboratories. The calibration portion of that document is not relevant to forensic laboratories, so it will be ignored for the rest of this article. ANAB takes the ISO standards and adds the AR 3125 document, a supplement with more specific requirements for forensic service providers. Below, we will dissect the ISO requirements, which are separated into numbered sections. These sections are usually mirrored by the Quality Assurance (QA) section’s Standard Operating Procedures (SOPs) to ensure the laboratory follows all the standards. The ISO document is generally vague about requirements; it is up to the laboratory to get more specific in its QA SOP and then ensure that all the requirements are followed at all times.

Beginning Sections

The opening sections are more introductory than actual standards a laboratory must follow. Section 1 is the “Scope,” which explains the purpose of the document. Section 2 is “Normative References,” and Section 3 is “Terms and Definitions.” These sections are self-explanatory and do not contain any specific requirements.

Section 4 is the first section that creates requirements for the laboratory, and it is appropriately titled “General Requirements.” It focuses mainly on impartiality and confidentiality. The next section, Section 5: “Structural Requirements,” covers management and personnel responsibilities, along with laboratory activities and their requirements. Sections 4 and 5 are small compared to the next two sections, which make up the majority of a laboratory’s QA manual.

Section 6

Section 6, “Resource Requirements,” is one of the biggest sections and one the QA section will focus much of its manual on. Its sub-sections are: General (6.1), Personnel (6.2), Facilities and Environmental Conditions (6.3), Equipment (6.4), Metrological Traceability (6.5), and Externally Provided Products and Services (6.6). Below I explain some of the sub-sections for clarity, skipping 6.1 because it adds little to a general overview of the document.

(6.2): This section covers the education, qualifications, training, and experience of personnel. The agency needs to document its requirements for these categories and keep records supporting them. Personnel must also demonstrate competency in their laboratory activities.

(6.3): Requires the laboratory to list and record all environmental factors that can affect the validity of results.

(6.4): Requires a record of the equipment used in testing and assurance that the equipment conforms to specified requirements, including measurement accuracy and measurement uncertainty. There must also be records of calibrations and checks on the instruments.

(6.5): Requires documentation of metrological traceability; measurements must be traceable to SI units.

(6.6): Requires records of how externally provided products and services are approved and verified to conform to requirements.

Section 7

Next is Section 7, which covers the review of requests, tenders, and contracts. It is split into 11 sub-sections and affects report writing and the way analysts must conduct their examinations. The sub-sections cover the following:

(7.1): Procedure for the review of requests, tenders, and contracts

(7.2): Selection, verification, and validation of methods, which must be appropriate and correct. There must be a manual for the methods, standards if applicable, and a verification process.

(7.3): This section goes into depth about sampling. There must be a sampling plan, and it must address any factors that could affect the validity of the results.

(7.4): This section covers the handling of test or calibration items. It also requires procedures for the storage, transportation, receipt, etc., of these items.

(7.5): Testing records must be complete enough that the testing can be replicated, and the original observations, data, and calculations must be recorded at the time they were made.

(7.6): Evaluation of measurement uncertainty. Another article on measurement uncertainty and its importance will be posted at a later date.

(7.7): Requires monitoring of results so that trends are detectable. This can be done through functional checks, retesting of retained items, replicate testing, etc.

(7.8): This sub-section dives deeply into the reporting of results, which must be accurate, clear, unambiguous, and objective. It also provides guidance on what the report should include: a title, the method used, the data, etc. Beyond report writing, it addresses calibration certificates and what they should include. To keep this brief: most of the guidance on the final product is found in this sub-section, making it very important to the QA section and the other laboratory sections when crafting their own SOPs.

(7.9): This sub-section covers the process for handling customer complaints.

(7.10): Anything dealing with non-conforming work is found in this sub-section.

(7.11): Lastly, this sub-section finishes with the control of data and information management.

Section 8

The last section of the document, Section 8, covers management system requirements. It is split into an “A” option and a “B” option. The “A” option is usually taken by crime laboratories, while the “B” option is taken by laboratories that use ISO 9001. The “A” option covers control of laboratory documentation, corrective actions, internal audits, identification of areas for improvement, and the policies and objectives needed to fulfill the document.

Conclusion

This article is meant only to teach you the basic idea of the ISO/IEC 17025:2017 document and summarize it. Overall, the document consists of general requirements that any laboratory can take and mold into its own SOPs. The ANAB Accreditation Requirements (AR 3125) document has all the same sections as ISO 17025, but its information is geared toward forensic laboratories. Although AR 3125 is still fairly general, it does add more forensic-specific requirements. These documents become specific only once they are incorporated into a QA manual and the other sections’ SOPs. At that point, the laboratory is responsible for following its own manuals, in addition to the ISO and AR documents, when it is assessed.

This information can be used by the scientist who is preparing to help their laboratory with accreditation; they can also use this article as a quick reference or to better understand the ISO standards. It can also prove very useful to the student looking for a job or internship. This knowledge is sought after in hiring because accreditation is hugely important to the laboratory, yet it is usually eclipsed by casework.

General Laboratory

How to Secure Your Forensic Internship

Introduction

Below are some tips I have learned that will help you secure an internship in a forensic science laboratory. The first step is scouting your area for laboratories that have the discipline you want to work in, but be open to other disciplines. Working in another discipline will broaden your horizons and may help you more than the discipline you were originally searching for.

Research Project

As a student, you should be preparing for your internship by your second year as an undergraduate. By then, you have settled into college and should be aware that an internship requirement must be fulfilled by your senior year. This is the time to find a professor to work with, either to conduct your own research or to assist in an ongoing research project. The more time you put into this research before applying to internships, the better; presenting and/or publishing this research helps even more. The research you pick should relate to the forensic discipline you are most interested in.

Now that you are conducting research and working in a research lab, you can apply this experience to your application. The cover letter, or the conversation you have with the laboratory, should focus mainly on your research. This shows the laboratory that you have a good knowledge of the field and are dedicated to what you do, and it will put you above the other students applying with no research experience.

Internship Research Project

By now you should already be conducting your own research; in addition, plan another area of research that can only be pursued through an internship, using the laboratory’s knowledge and resources. Write a proposal for this research and have it on hand. Add a short description of it to your cover letter, stating your intention to research this topic at the laboratory if you are selected to intern. This shows your drive and that you will use the internship to its full potential. Laboratories want to select students who will make the most of the internship, not just fulfill a requirement.

What to do after submitting your application

After submitting your application, make sure you follow up with the laboratory. If you had an interview, let them know soon after that you are thankful for the opportunity, that you appreciated meeting the panel, and that you enjoyed seeing the laboratory. Also state that you look forward to the next steps in the process. This shows your drive and how much the position means to you, which should place you higher on their list of candidates.

If the process only required submitting an application, call the laboratory a week or two after the submission to ask what the next steps will be and what you should expect. This, too, demonstrates how important the internship is to you.

My Experience Securing an Internship

In college, I conducted two research projects: one dealt with firearms and the other with physics. Both projects were the main focus of my cover letter. I also explained that I wanted to intern at a laboratory that served my local community. I expressed my interest in firearms and talked a little about my research experience, but also stated that I would be willing to intern in any section to broaden my knowledge of the crime laboratory. I added that I would like to conduct research at the laboratory for a capstone project required by my honors course. At the time, I didn’t have a specific capstone idea in mind, but I would now recommend that anyone applying have some idea of what they would like to do, to make their cover letter more appealing.

Two weeks later, I called the laboratory to ask about the next steps of the process and what I should expect. During this phone call, the person in charge of accepting interns retrieved my file and told me my application looked good and that interns would be selected in the next couple of weeks. I believe that moment moved my file to the top of the list and made it stand out from the rest. Because of my openness to joining any section, I was picked to intern under the Quality Assurance (QA) section. I was a little disappointed that I wasn’t selected for Firearms, but I gladly accepted the offer. After speaking with the laboratory, I learned that while my internship was with the QA section, the research I mentioned in my cover letter and phone call would still be fulfilled. So I split my time between learning and helping in the QA section and researching in the Firearms section. Thankfully, I got my internship in the QA section, because, little did I know, this section is central to the operation of the laboratory. Knowledge of QA actually helped me land my first forensic science position, and my second. My research also allowed me to continue my internship after the summer and through my senior year, which let me grow and gain recognition at the laboratory while the other interns left by August.

Concluding Note

Researching at your internship is so important; I cannot stress that enough. It will satisfy any college research requirement and strengthen the job applications you submit during your senior year. Laboratories that are hiring want to know about your internship experience, and nothing sounds better than doing research while learning how the laboratory functions. Do not do an internship just to satisfy your school’s requirements! Use it to push yourself forward in the job market. Every forensic scientist has taken an internship, but only a few have reaped its full benefits.

General Laboratory

Why is Forensic Science Important?

The Definition of Forensic Science

            NIST (the National Institute of Standards and Technology) defines forensic science as the use of scientific methods or expertise to investigate crimes or examine evidence that might be presented in a court of law.

The Importance of Forensics

            I would like to put forensic science under the microscope. Forensic science is so important because it is unbiased and strictly follows the scientific method. It is the sole voice of the evidence; it is not the voice of the defense or the prosecution. This is the most important part to remember.

Speaking for the Evidence

            Physical evidence cannot speak to the jury, and some evidence requires expertise to extract its information and convert it into layman’s terms. This may be a fingerprint examiner decoding a fingerprint, comparing it to other prints, and giving a conclusion/opinion based on their experience. It can also be a drug chemist who takes a white powder, determines its makeup, and concludes whether a controlled substance is present or absent. When analyzing evidence or testifying to a conclusion, remember that you are the voice of the evidence, and you do not care whose “side” it benefits. The only aim is conveying correct information to the jury.

Can be Used to Prove Innocence or Guilt

            A forensic scientist’s conclusion/opinion may lead the jury to establish the guilt or innocence of a subject. Many people believe that a forensic scientist’s conclusion is mostly used to establish guilt, but just as often it proves someone’s innocence. Proving innocence can give the subject their freedom back and allow the investigation to move on to find the correct person, if applicable. Note that establishing guilt or innocence is for the jury to decide and for the lawyers to argue; the scientist’s goal is only to give the information the evidence provides, no more and no less.