Washington, DC - On May 5 and 6, 2016, NIST hosted a technical colloquium on an important question facing virtually every branch of forensic science today: How should forensic examiners quantify the weight of evidence they find in a case? This was the first technical colloquium in the United States to focus specifically on this issue, and it generated an animated and much-needed exchange of ideas.

The weight-of-evidence question comes up anytime a forensic examiner assesses the likelihood that a piece of evidence left at a crime scene originated from a particular source. For instance, a ballistics examiner compares a bullet found at a crime scene to a bullet that was test-fired from a suspect’s gun. After comparing the surface patterns on the two bullets, the expert might testify in court on the likelihood that they were or were not fired from the same weapon.

But how should the expert quantify that likelihood? How should they account for the possibility that the pattern on the bullet found at the scene was actually caused by a different but similar weapon? The chances might be extremely remote, but when a person’s liberty hangs in the balance, quantifying the weight of evidence is a critical step toward a just outcome.

This step is important in virtually every domain of forensic science—whether analyzing a bullet, a fingerprint, a DNA profile, a voice recording, or any other pattern-based evidence.  

An Important Conversation 

The need to quantify the weight of evidence was highlighted in the 2009 National Research Council report that urgently called for strengthening the practice of forensic science in the United States. NIST, along with the Department of Justice, is leading an effort to do just that.  

But what is the best way to quantify the weight of evidence? People have divergent and strongly held opinions on the subject, and the purpose of the colloquium, held at the NIST campus in Gaithersburg, Maryland, was to open a dialogue that might eventually lead to a consensus. Participants included forensic practitioners, forensic science researchers, and members of the legal and law enforcement communities.

“We didn’t want to direct the conversation, we just wanted to start it,” said Elham Tabassi, an expert in fingerprint analysis and one of the NIST scientists who organized the event. “We wanted a free flow of ideas, and that definitely took place.”  

“Published papers and rebuttals can go on forever,” said Reva Schwartz, another NIST scientist who organized the event. “But having so many people in the same room really moved the conversation forward.” 

Among other things, participants discussed likelihood ratios, which have emerged as a leading method for quantifying the weight of evidence. Likelihood ratios compare the probabilities of two events—for instance, the probability of observing the features of a latent print left at a crime scene if the print was made by the suspect compared to the probability of observing those features if the print was made by some other, unknown, individual. A likelihood ratio much greater than one indicates that the evidence strongly supports the first proposition over the second.
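To make the idea concrete, a likelihood ratio can be written as a ratio of two conditional probabilities. The symbols and numbers below are illustrative only; they are not drawn from the colloquium or from any real case.

\[
\mathrm{LR} = \frac{P(E \mid H_p)}{P(E \mid H_d)}
\]

Here \(E\) is the observed evidence (for example, the features of the latent print), \(H_p\) is the proposition that the print was made by the suspect, and \(H_d\) is the proposition that it was made by some other, unknown person. If, hypothetically, \(P(E \mid H_p) = 0.9\) and \(P(E \mid H_d) = 0.001\), then \(\mathrm{LR} = 900\), meaning the observed features would be 900 times more probable if the print came from the suspect than if it came from someone else.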

Next Steps 

But there is no firm consensus yet on how exactly to calculate a likelihood ratio, or even on whether that is the best approach to quantifying the weight of evidence.

“I think it’s fair to say that there were intense, though always respectful, discussions on quantifying the weight of evidence using likelihood ratios, including their limitations and different statistical approaches to implementing them,” Tabassi said.  

One of the problems, she noted, is that certain words mean different things to different people, and an important next step will be to settle on common language.  

Schwartz and Tabassi plan to publish a technical review of the colloquium in the coming months. They also hope to establish an online forum where the conversation can continue. And they are already planning the next installment of this colloquium, which will take place at the NIST Maryland campus in July of 2017.