Section 540I of the Fiscal Year 2020 National Defense Authorization Act (FY 2020 NDAA) required the secretary of defense, in consultation with the secretaries of the military departments and the secretary of homeland security, to:
- Issue guidance that establishes criteria to determine when data indicating possible racial, ethnic, or gender (REG) disparities in the military justice process should be further reviewed and describes how such a review should be conducted.
- Conduct an evaluation to identify the causes of any REG disparities identified in the military justice system (MJS) and take steps to address the identified causes, as appropriate.
The Office of the Executive Director for Force Resiliency within the Office of the Under Secretary of Defense for Personnel and Readiness asked CNA to provide analytic support to fulfill these requirements. CNA addressed four research questions:
- What data elements should be tracked, and what disparity indicators should the Department of Defense (DOD) use to monitor trends in MJS outcomes and take appropriate policy actions?
- How much of the required data currently exist and to what extent are they standardized across the services?
- Do the existing MJS data reveal any differences in military justice outcomes by REG?
- Can we identify any specific factors (including bias) that contributed to observed outcome disparities?
The results for the first research question, which support fulfilling the first NDAA requirement, are reported in the companion document, *How to Use Administrative Data to Measure and Interpret Racial, Ethnic, and Gender Disparities in Military Justice Outcomes*. This report describes how we addressed the remaining three research questions to support fulfilling the second NDAA requirement.
To manage the scope of this effort within the study resources, we limit our analyses to the regular, active duty enlisted forces of each service. To execute the analyses, we constructed multiple datasets for each service, with each dataset comprising records of MJS incidents reported and resolved over the seven years from fiscal year (FY) 2014 through FY 2020. Each incident record includes descriptive features of the incident, including the REG of the accused servicemember. The constructed datasets follow each incident record through various steps in the MJS, but no dataset follows incidents seamlessly from initial reporting to final resolution, and the datasets vary in their level of detail. Thus, our dataset construction also served as a check on data completeness and allowed us to determine which REG disparities in incident outcomes can currently be tracked for each service.
We then applied quantitative methods (primarily regression analysis) to calculate unconditional and conditional service-specific REG disparity measures for as many MJS outcomes as the data allowed, controlling for other descriptive features of the offender and the incident. Unconditional disparities are measured for the first-observed outcome in each dataset and are based on comparisons between those experiencing the outcome and those in the service’s entire enlisted population. Conditional disparities are measured for outcomes that occur later in the MJS process and are based on comparisons between the servicemembers who experienced the outcome and those who experienced the outcome associated with the previous observed step in the MJS process. For example, for some services, we calculate REG disparities in guilty findings conditional on having completed nonjudicial punishment (NJP) or court-martial (CM) proceedings. This allows us to determine accurately where REG disparities first appear in the MJS and how long they persist.
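The distinction between unconditional and conditional disparity measures can be illustrated with a toy computation. The sketch below uses invented group labels, population counts, and incident records (none are from the report, whose actual measures come from regression models with controls); it shows only the arithmetic structure: unconditional rates use the whole enlisted population as the denominator, while conditional rates use only those who reached the previous step in the MJS process.

```python
# Toy illustration of unconditional vs. conditional disparity measures.
# All numbers and group labels ("A", "B") are hypothetical; the report's
# actual measures are estimated with regression models, not raw rates.

# Hypothetical enlisted population counts by group.
population = {"A": 8000, "B": 2000}

# Hypothetical incident records: (group, adjudicated, found_guilty),
# with counts chosen arbitrarily for the example.
incidents = (
    [("A", True, True)] * 300
    + [("A", True, False)] * 100
    + [("B", True, True)] * 120
    + [("B", True, False)] * 80
)

def unconditional_rate(group):
    """Share of the group's entire population with an adjudicated case."""
    n = sum(1 for g, adjudicated, _ in incidents if g == group and adjudicated)
    return n / population[group]

def conditional_guilty_rate(group):
    """Among the group's adjudicated cases only, the share found guilty."""
    adjudicated = [rec for rec in incidents if rec[0] == group and rec[1]]
    return sum(1 for _, _, guilty in adjudicated if guilty) / len(adjudicated)

# Unconditionally, group B enters adjudication at twice group A's rate...
print(unconditional_rate("A"))       # 400 / 8000 = 0.05
print(unconditional_rate("B"))       # 200 / 2000 = 0.10
# ...but conditional on reaching adjudication, group B's guilty rate is lower.
print(conditional_guilty_rate("A"))  # 300 / 400 = 0.75
print(conditional_guilty_rate("B"))  # 120 / 200 = 0.60
```

In this toy pattern, a disparity appears at entry into the process but does not persist through adjudication, which is why measuring each step conditionally on the previous one matters for locating where disparities first arise.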
The report presents many detailed results. Here, we summarize them by the research questions they address.
Research question #2: How much of the required data currently exist and to what extent are they standardized across the services?
Most of the MJS data exist, and the services generally collect the same data elements. However, the ways the data are collected and stored produce data elements and structures that do not always support quantitative analysis and are not consistent across services. Specifically, despite recent service efforts to improve data collection and storage, the data are still spread across multiple data systems and multiple commands within each service. Thus, it remains cumbersome to follow incidents through the MJS and to prepare the data needed to compute REG disparities. This, in turn, prevents REG disparity analysis from covering all MJS incidents and means that the set of outcomes that can be analyzed varies by service.
Research question #3: Do the existing MJS data reveal any differences in military justice outcomes by REG?
Our data analysis confirms that there were significant racial and gender disparities in MJS outcomes during the study period.
Across services and outcomes, we found positive racial disparities: in every service, Black enlisted personnel were more likely than White enlisted personnel to be investigated, be involved in NJP in some way, and be involved in CMs in some way, even after controlling for the other factors included in the regression models. Yet, conditional on a case progressing far enough in the MJS to have an adjudicated outcome, Black enlisted personnel were no more likely—and, in many cases, were less likely—than their White counterparts to be found guilty.
In contrast, across services and outcomes, we found negative gender disparities: in every service, female enlisted personnel were less likely than male enlisted personnel to enter the MJS and, conditional on the case progressing to an adjudication point, they were less likely to be found guilty.
Finally, we found few significant ethnic disparities in MJS outcomes. Across services and for most outcomes, Hispanic and non-Hispanic enlisted personnel experienced the modeled outcomes at similar rates.
Research question #4: Can we identify any specific factors (including bias) that contributed to observed outcome disparities?
It is impossible to determine definitively whether bias exists in the MJS solely based on statistical analysis of administrative data records such as those we used in this study. The analysis did, however, allow us to draw two sets of conclusions regarding causes of MJS disparities.
First, controlling for offender-, incident-, and MJS process-related factors did not eliminate REG disparities, and no specific factor emerged as a leading determinant of MJS disparities. Thus, bias cannot be ruled out as a potential cause.
Second, by using the data to show where in the MJS disparities occur, we provide information to help the services decide where to investigate further. Specifically, the largest positive racial disparities were associated with the first-observed outcomes. This suggests that it is important to get more clarity on how and why Black enlisted servicemembers enter the MJS. It would be especially valuable to better understand how outcomes differ depending on whether the initial investigation is conducted by a professional military law enforcement agency (LEA) or by the command and how commanding officers (COs) make their disposition decisions, and to evaluate the relative strengths of cases brought against Black versus White servicemembers.
Recommendations related to data collection and analysis
We make the following recommendations to improve data collection and analytical processes.
- Provide the services with sufficient funding and support to ensure that MJS incident and case data are collected, stored, and made usable for conditional REG disparity analysis at each step in the MJS.
- For future data assessments, follow the two key steps recommended in the companion document: support service-specific studies and provide the time and structure for effective collaboration between researchers and MJS experts in each service.
- Continue efforts to collect complete NJP information.
- Include common case control numbers in all MJS data systems so that datasets associated with different parts of the MJS can be merged and cases can be followed from investigation through initial disposition to final resolution.
- Populate variables related to offender characteristics, especially REG, by pulling data from authoritative personnel records.
- Ensure that all relevant dates are populated.
- Define all data fields to include all potential outcomes or values, including indicators that a variable is not applicable for a given incident or that the incident has not yet proceeded far enough through the MJS for the variable to apply.
- Use dropdown menus to minimize data error and inconsistency due to hand entry.
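The common-case-control-number recommendation can be sketched in a few lines. The record layouts, field names (`ccn`, `investigating_org`, `disposition`, `finding`), and values below are hypothetical, not the services' actual schemas; the sketch shows only the mechanism: with a shared key, investigation, disposition, and resolution records can be joined into a single case history, and a missing downstream record is itself visible as a gap.

```python
# Hypothetical record lists from three separate MJS data systems.
# Field names and values are invented for illustration only.
investigations = [
    {"ccn": "2020-0001", "investigating_org": "LEA"},
    {"ccn": "2020-0002", "investigating_org": "command"},
]
dispositions = [
    {"ccn": "2020-0001", "disposition": "NJP"},
]
resolutions = [
    {"ccn": "2020-0001", "finding": "guilty"},
]

def merge_on_ccn(*tables):
    """Outer-join record lists on a shared case control number (ccn)."""
    cases = {}
    for table in tables:
        for rec in table:
            # Merge each system's fields into one record per case.
            cases.setdefault(rec["ccn"], {}).update(rec)
    return cases

merged = merge_on_ccn(investigations, dispositions, resolutions)
# Case 2020-0001 can now be followed from investigation to finding;
# case 2020-0002 visibly stops after investigation, flagging either a
# data gap or a case that did not proceed.
print(merged["2020-0001"])
print(merged["2020-0002"])
```

Without a common key, the same join requires fuzzy matching on names and dates, which is exactly the cumbersome data preparation the recommendations aim to eliminate.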
Recommendations related to REG disparities
To address the identified MJS outcome disparities, we make the following recommendations that range from specific to general:
- Seek to address disparities, not bias per se. As reported in the companion document, regardless of their causes, disparities may create perceptions of bias, and perceptions of bias have negative effects not only on the effectiveness of the MJS but also on readiness.
- Begin by studying how outcomes differ depending on whether the initial investigation is conducted by a professional military LEA or by the command, how COs make their disposition decisions, and the relative strengths of cases brought against Black versus White servicemembers.
- Follow additional steps recommended in the companion document. Specifically, conduct assessments and report results on a regular basis. Do not wait until negative publicity occurs and do not respond only to disparities identified in raw data.
- Develop procedures and systems for holding leaders accountable for the proper use of discretion across the full range of MJS outcomes. Discretion is a necessary part of law enforcement and justice, but it is also where bias (implicit or explicit) can enter. It is leadership’s job to think more broadly about the role of discretion in the MJS.
DISTRIBUTION STATEMENT A. Approved for Public Release; distribution unlimited.
- Pages: 158
- Document Number: DRM-2022-U-032798-1Rev
- Publication Date: 6/23/2023