Psychological Assessment in Legal Contexts: Are Courts Keeping “Junk Science” Out of the Courtroom?

Psychological Science in the Public Interest (Volume 20, Number 3)
Read the Full Text (PDF, HTML)

Psychological tests, tools, and instruments are widely used in legal contexts to help determine the outcome of legal cases. These tools can aid in assessing parental fitness for child custody purposes, can affect the results of disability proceedings, and can even help judges determine whether an offender should go to prison, remain incarcerated, or be exempt from the death penalty.

In this issue of Psychological Science in the Public Interest (Volume 20, Issue 3), Tess M. S. Neal, Christopher Slobogin, Michael J. Saks, David Faigman, and Kurt F. Geisinger present a systematic review of the psychological assessment tools reported to have been used in legal cases across surveys of experienced forensic mental health practitioners. Besides assessing the characteristics and validity of these tools against legal and scientific standards, Neal and colleagues analyze the legal challenges to the admission of evidence derived from these tools, to determine whether the results of these assessments are questioned as evidence in court and, if so, when and how often. The report thus evaluates both the scientific basis of psychological assessment tools and the courts’ scrutiny of them.

Expert Evidence: The (Unfulfilled) Promise of Daubert

by D. DeMatteo, S. Fishel, and A. Tansey, Drexel University

Read the Full Text (PDF, HTML)

Analyzing the psychological assessment tools used in court

The psychological tools Neal and colleagues assessed included aptitude tests (e.g., general cognitive and ability tests), achievement tests (e.g., tests of knowledge or skills), and personality tests. They analyzed measures designed to assess adults and youth that could be used to address questions such as competence to stand trial, violence risk, sexual-offender risk, mental state at the time of the offense, sentencing, disability, child custody, civil commitment, child protection, civil tort, guardianship, competency to consent to treatment, juvenile transfer to adult court, fitness for duty, and capacity to waive Miranda rights (the right to remain silent). A team of coders classified each tool for its general acceptance in the field (i.e., whether, on the basis of published surveys, experienced mental-health experts frequently use and endorse the tool), whether it had been subjected to testing, whether that testing had been peer reviewed, and for its technical and psychometric quality. The overall evaluation of each tool’s technical and psychometric quality relied on information about the tool’s performance in forensic contexts and its psychometric qualities (e.g., validity), as reported in the Mental Measurements Yearbook (MMY), Strauss and colleagues’ compendium of neuropsychological tests, and Grisso’s (2018) forensic competencies compendium.
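To make this coding scheme concrete, here is a minimal sketch in Python of how such a rubric could be represented and tallied. The field names and summary proportions are illustrative assumptions, not the authors’ actual codebook:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolRecord:
    # One assessment tool, coded on the review's dimensions.
    # Field names are illustrative, not the authors' actual codebook.
    name: str
    subjected_to_testing: bool
    peer_reviewed: bool
    generally_accepted: Optional[bool]   # None = no survey data available
    favorable_review: Optional[bool]     # e.g., per MMY reviews

def summarize(tools):
    # Tally the kinds of proportions reported in the review.
    n = len(tools)
    with_data = [t for t in tools if t.generally_accepted is not None]
    return {
        "tested": sum(t.subjected_to_testing for t in tools) / n,
        "acceptance_data_available": len(with_data) / n,
        "accepted_given_data": (
            sum(t.generally_accepted for t in with_data) / len(with_data)
            if with_data else None
        ),
        "favorably_reviewed": sum(bool(t.favorable_review) for t in tools) / n,
    }

Representing “unknown” as None matters here: as the results below show, general-acceptance data were available for only about half the tools, so acceptance rates must be computed over the subset with data rather than over all tools.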

Most of the tools used in courts have been subjected to testing, but information about general acceptance was available for only about half of them. Of the tools for which general-acceptance data were available, only about two-thirds could be considered generally accepted by the psychological community at large, and a third were clearly not accepted. Moreover, only 40% had favorable reviews of their psychometric and technical properties in authorities such as the MMY. These findings indicate that although many psychometrically strong tests are used by psychologists in forensic practice, not all of the tests in use are generally accepted or have been evaluated as having high technical and psychometric quality.

Courts’ scrutiny of psychological assessment evidence

Judges aim to apply admissibility criteria to the psychological assessment tools used in court, but they appear to struggle to do so, which may explain why assessments are rarely challenged or scrutinized in court, even when they should be. Neal and colleagues focused their analysis on whether the psychological tools studied earlier tended to be discussed and challenged by the courts. They screened a legal database of court cases and identified those that had involved the use of at least one of the 30 tools of interest. For each case, they determined whether the tool’s admissibility had been challenged and, if so, on what grounds and with what result. Only a small minority of cases involved a challenge to a tool’s admissibility or to the admissibility of testimony relying on the tool, and in only 6 cases was the psychological-assessment evidence ruled inadmissible. Most challenges focused on fit (i.e., does the tool inform about the type of problem at issue?) or validity (i.e., does the tool measure what it purports to measure?), and the former resulted in more rejections of testimony than the latter. Also, there was little relation between a tool’s quality and the likelihood of its being challenged: The three tools reviewed most unfavorably and not generally accepted were not challenged at all.
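As a rough illustration of the kind of tally this case analysis involves, here is a short Python sketch over made-up records (not the study’s data); the “fit” and “validity” labels mirror the two grounds of challenge the review distinguishes:

from collections import Counter

# Hypothetical challenge records, for illustration only.
challenges = [
    {"tool": "Tool A", "ground": "fit",      "excluded": True},
    {"tool": "Tool B", "ground": "validity", "excluded": False},
    {"tool": "Tool C", "ground": "fit",      "excluded": False},
]

raised = Counter(c["ground"] for c in challenges)
succeeded = Counter(c["ground"] for c in challenges if c["excluded"])

for ground, n in raised.items():
    print(f"{ground}: {n} challenge(s), {succeeded[ground]} led to exclusion")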

Suggestions for psychologists, law practitioners, and members of the public

Given the mixed quality of the assessment tools used in court and the low rate of challenges they face, the authors suggest that psychological scientists should create stronger measures and encourage experts to use tools that are valid and suitable for the task at hand. Specifically, Neal and colleagues point out that practitioners should be aware that a tool might be valid only for specific purposes (i.e., context-relevant validity). The authors also note that attorneys and judges have access to low-cost or free online resources that can help them obtain basic information about the different tools. For example, the MMY provides information about purpose, appropriate population, score ranges, and quality for more than 3,500 tests. Law practitioners would thus be in a better position to evaluate the foundations of an expert’s testimony and whether the information provided by a tool is relevant to the case. Similarly, members of the public interacting with psychologists in the legal system (e.g., litigants) could procure information about psychological tools so that they can discuss them with their attorneys during the legal process. Overall, Neal and colleagues hope that their findings encourage psychological scientists, psychologists serving as experts in legal contexts, attorneys and judges, and members of the public to improve their own and others’ knowledge about psychological assessment and to question these tools more often. In this way, Neal and colleagues suggest, psychological experts involved in legal cases might deliver the highest quality of practice.

Criteria for admissibility of scientific evidence in court – Daubert

In an accompanying commentary, David DeMatteo, Sarah Fishel, and Aislinn Tansey examine in detail the criteria for admissibility of expert testimony and, in light of Neal et al.’s article, how well those criteria are keeping “junk science” out of the courts. In 1993, in Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court of the United States articulated four criteria for the admissibility of scientific evidence: the evidence should (a) be derived from methodology that has been or can be tested empirically, (b) have been subjected to peer review and publication, (c) have a known or potential rate of error, and (d) be generally accepted in its relevant scientific community. Daubert has since been extended to all forms of expert evidence and is the admissibility standard in all federal courts. However, as Neal et al. showed, judges and attorneys seem to struggle to apply Daubert because they lack the knowledge and training to fully understand the tools used by forensic scientists. As a result, inaccurate data might be regularly admitted into court proceedings, with dire results. In line with Neal et al.’s suggestions, DeMatteo and colleagues propose that judges and attorneys become more informed about scientific matters (e.g., law schools could offer a basic science course) and that psychologists receive better forensic training and select appropriate assessment tools.
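For readers who want the four Daubert factors at a glance, here is a minimal Python sketch encoding them as a checklist. The pass/fail framing is a deliberate simplification; in practice, courts weigh these factors flexibly rather than applying them as a gate:

# The four Daubert factors as a simple checklist. The boolean framing is a
# simplification for illustration; courts weigh the factors flexibly.
DAUBERT_FACTORS = {
    "testable":           "methodology has been or can be tested empirically",
    "peer_reviewed":      "subjected to peer review and publication",
    "known_error_rate":   "known or potential rate of error",
    "generally_accepted": "general acceptance in the relevant scientific community",
}

def unmet_factors(answers):
    # Return descriptions of the factors that weigh against admissibility.
    return [desc for key, desc in DAUBERT_FACTORS.items() if not answers.get(key)]

# Example: a tool that is tested and peer reviewed but lacks error-rate data
# and evidence of general acceptance.
print(unmet_factors({"testable": True, "peer_reviewed": True}))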

About the Authors (PDF, HTML)

See related news release.

              
