Friends of the Forensic Science Club, this week we present the paper “Testing an Evaluation Tool to Facilitate Police Officers’ Peer Review of Child Interview”, by Danby, M. C.; Sharman, S. J. and Guadagno, B. (2022), in which the authors propose a new approach to facilitating good practices when it comes to interviewing children who may have suffered sexual abuse.

When a case of child abuse comes up, one of the first steps the police take is conducting an interview with the child.

Since physical evidence often does not exist in child abuse cases (because, for example, the abuse happened some time ago), the ability of police interviewers to obtain detailed and accurate information from the child may be vital to the outcome of the investigation.

Over the last few decades, many studies have focused on examining interview techniques for obtaining children’s reports, and as a result, there are now a good number of evidence-based guidelines to improve interviews with children, and make it easier for children to provide complete, accurate and unbiased information.

There is a certain consensus on the key elements that constitute good interviewing practices, which are generally structured in separate phases.

We have, first of all, the opening phase, where the interviewers introduce themselves to the child and build rapport with him/her, explain the need to tell the truth, and describe how the interview will work.

Then, we move on to the transition phase, where the conversation shifts to substantive topics. This should occur in a non-directive way, and interviewers should avoid introducing details about the alleged abuse themselves.

Once the topic of concern is established, the interviewers begin with the substantive phase, in which the alleged abuse is thoroughly explored, using open-ended questions to allow the child to freely narrate what happened.

Despite expert agreement on the core elements of good interviewing practices, interviewers have difficulty adhering to them when push comes to shove. They sometimes ask suggestive questions to bring up the topic of abuse in the conversation, ask leading questions instead of open-ended ones, or fail to sufficiently isolate and label incidents of alleged abuse…

For this reason, specific courses and training are recommended for those who must deal with this type of situation in their working life, so that their practice aligns as closely as possible with the recommended guidelines.

One strategy that helps interviewers improve and maintain their use of open-ended questions after training is ongoing feedback from experts. However, this is an intensive task that requires a lot of time.

A more feasible alternative is peer feedback, which is more readily and more frequently available to interviewers than feedback from experts and superiors, and does not pose much of a workload challenge, as a second interviewer is often present during these types of interviews. The problem is that peers do not always have the same level of expertise as experts.

In the current study, the goal was to test the accuracy of forensic interviewers when evaluating simulated transcripts of a child interview. To do this, the authors conducted two studies: the first with 56 police officers from one jurisdiction; the second with 37 police officers from a different jurisdiction.

All participants had recently completed a 10-day training program in child forensic interviewing.

Participants were provided with a transcript of a child forensic interview and were asked to classify each question posed in the substantive phase (open-ended, facilitating, main, in-depth…). They were also provided with a set of best practice guidelines against which to compare the transcript.

In the first experiment, participants were less accurate in evaluating transcripts that showed mixed adherence to best practices. Some previous studies note that mixed transcripts are especially difficult to assess. Participants were also less accurate than expected when reviewing the transition phase.

Participants in the second study were also less accurate when reviewing the initial and substantive phases of transcripts showing mixed adherence to best practice.

One reason participants may have obtained these results, the authors propose, is the rigid nature of the checklist that is often used as an assessment tool. This list forced participants to select dichotomous responses, and reality may not be as simple as choosing black or white.

The authors therefore propose that research should focus on developing and testing tools that are more flexible than a dichotomous checklist.

If you want to know more about the criminal mind, criminal profiling, and forensic science, don’t miss our Certificate in Criminal Profiling, a 100% online program certified by Heritage University (USA), with special grants for the Forensic Science Club readers.
