Research on the Armed Services Vocational Aptitude Battery

February 1, 1990
The Department of Defense is developing a computerized adaptive-testing version of the Armed Services Vocational Aptitude Battery (ASVAB). This or some other version of the ASVAB may be enhanced by the addition of new, computerized subtests. A cost/benefit analysis has estimated a benefit of $450 million per year from such an enhancement. This paper questions the operational relevance of any such estimate, describes ways in which the validation study of new tests needs to be expanded, and discusses the pros and cons of adaptive testing.
November 1, 1989
This paper examines three aspects of the Marine Corps validation research effort that may have implications for Marine Corps manpower issues: the validity of the Armed Services Vocational Aptitude Battery (ASVAB) in predicting job performance, the differential validity of ASVAB aptitude composites in predicting job performance across infantry occupational specialties, and the interaction of aptitude and experience in predicting job performance.
July 1, 1989
The evaluation of aptitude standards used to qualify recruits for military specialties must address both the minimum qualifying score and the appropriate aptitude distribution above that minimum. This research memorandum is an initial effort that focuses on identifying the minimum qualifying aptitude score for assigning recruits to occupational specialties. Hands-on job performance tests developed for the Marine Corps infantry occupational field provide the context for the analysis. Subsequent research will address the evaluation of the necessary aptitude distributions.
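One simple way to frame the minimum-score question is to regress hands-on performance on the aptitude composite and take the lowest aptitude score whose predicted performance meets a chosen benchmark. The sketch below illustrates that framing with simulated data; the data, benchmark, and linear model are all assumptions, not the memorandum's procedure.

```python
import numpy as np

# Hypothetical data: aptitude composite scores and hands-on test results.
rng = np.random.default_rng(0)
aptitude = rng.normal(100, 20, 2000)
hands_on = 0.4 * aptitude + rng.normal(0, 10, 2000)

# Fit a simple linear prediction of hands-on performance from aptitude.
slope, intercept = np.polyfit(aptitude, hands_on, 1)

# Minimum qualifying score: the lowest aptitude score whose predicted
# performance reaches an assumed benchmark.
benchmark = 40.0
min_score = (benchmark - intercept) / slope
print(f"estimated minimum qualifying score: {min_score:.1f}")
```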
March 1, 1989
All large-scale data collection efforts must contend with the issue of data quality. This research memorandum examines the quality of the data collected for the infantry portion of the Marine Corps Job Performance Measurement Project, with particular attention to data inconsistencies and the imputation of missing data.
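As a toy illustration of the two issues named here, with entirely hypothetical values, one might flag logically impossible records as missing and then fill the gaps with a simple mean imputation:

```python
import numpy as np

# Hypothetical hands-on station scores: one missing value and one
# impossible value that exceeds the station maximum of 10 points.
scores = np.array([8.0, np.nan, 9.5, 12.0, 7.0])

# Inconsistency check: treat out-of-range scores as missing.
scores[scores > 10.0] = np.nan

# Mean imputation (one of many possible rules for filling gaps).
scores = np.where(np.isnan(scores), np.nanmean(scores), scores)
```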
February 1, 1989
Because the ability scale in item-response theory is arbitrary, parameter estimates from two item pools calibrated in different samples must be placed on a common metric using items administered in both calibrations. This memorandum illustrates a maximum-likelihood procedure for doing so.
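The memorandum's maximum-likelihood procedure is not spelled out in this abstract; as a rough sketch of the underlying idea, the simpler mean/sigma linking method uses the anchor items' difficulty estimates from both calibrations to determine a linear transformation of the ability scale. The names and numbers below are illustrative.

```python
import numpy as np

def mean_sigma_link(b_old, b_new):
    """Linear transformation theta_old = A * theta_new + B estimated
    from difficulties of anchor items present in both calibrations."""
    b_old, b_new = np.asarray(b_old), np.asarray(b_new)
    A = b_old.std(ddof=1) / b_new.std(ddof=1)  # slope: ratio of spreads
    B = b_old.mean() - A * b_new.mean()        # intercept: align the means
    return A, B

# Example with made-up anchor-item difficulties from the two calibrations.
A, B = mean_sigma_link(b_old=[-1.2, 0.1, 0.9], b_new=[-1.0, 0.3, 1.1])
# On the common metric, difficulties transform as b' = A*b + B and
# discriminations as a' = a / A.
```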
December 1, 1988
Scores on new forms of a test are equated to those on an old form, most commonly by linear or equipercentile procedures. Cross-validation shows that, with sample sizes of 6,500 and above, equipercentile equating is preferable to linear equating for the Armed Services Vocational Aptitude Battery.
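As a rough illustration of the two procedures (not the study's implementation), linear equating matches the new form's mean and standard deviation to the old form's, while equipercentile equating maps each new-form score to the old-form score with the same percentile rank. All variable names here are hypothetical.

```python
import numpy as np

def linear_equate(x_new, old_scores, new_scores):
    """Map new-form scores so their mean and SD match the old form's."""
    x_new = np.asarray(x_new, dtype=float)
    a = old_scores.std(ddof=1) / new_scores.std(ddof=1)
    return old_scores.mean() + a * (x_new - new_scores.mean())

def equipercentile_equate(x_new, old_scores, new_scores):
    """Map each new-form score to the old-form score at the same
    percentile rank (a bare empirical version, with no smoothing)."""
    x_new = np.asarray(x_new, dtype=float)
    pr = np.mean(new_scores[:, None] <= x_new, axis=0)  # percentile ranks
    return np.quantile(old_scores, pr)

# Example with simulated score samples of the size the study discusses.
rng = np.random.default_rng(0)
old = rng.normal(50, 10, 6500)
new = rng.normal(48, 9, 6500)
print(equipercentile_equate([40, 48, 60], old, new))
```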
December 1, 1988
Scores on new forms of the Armed Services Vocational Aptitude Battery are equated to those on form 8a, using samples of about 2,500 recruits per form. Three equating procedures are compared in terms of how well their results cross-validate in large applicant samples.
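A cross-validation criterion of the kind described might, as a sketch, apply each candidate conversion to a held-out applicant sample and measure how far the converted new-form score distribution sits from the old-form distribution. The percentile-grid criterion below is an assumption for illustration, not necessarily the report's.

```python
import numpy as np

def crossval_rmse(convert, new_applicants, old_applicants):
    """Score an equating function on held-out applicant data by comparing
    the converted new-form and old-form score distributions at a grid of
    percentiles (illustrative criterion)."""
    probs = np.linspace(0.01, 0.99, 99)
    diff = (np.quantile(convert(new_applicants), probs)
            - np.quantile(old_applicants, probs))
    return np.sqrt(np.mean(diff ** 2))
```

Each competing procedure would be fitted on the roughly 2,500-recruit calibration samples and then scored with a criterion like this on the larger applicant samples.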
October 1, 1988
Replacing a paper-and-pencil test battery with a computerized adaptive version is likely to increase the reliabilities of the subtests. This in turn increases the variances of composite scores and lowers the mean scores of subgroups whose averages already fall below those of the general population. These results are illustrated with a computer simulation.
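A minimal version of such a simulation, with all numbers assumed for illustration, shows the mechanism: when subtest scores are rescaled to a fixed standard-score metric, higher reliability raises the correlations among subtests (inflating the composite variance) and weakens the regression toward the population mean that otherwise props up a low-ability subgroup's observed average.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, rho = 100_000, 4, 0.6   # examinees, subtests, true-score correlation

def simulate(reliability):
    cov = np.full((k, k), rho)
    np.fill_diagonal(cov, 1.0)
    true = rng.multivariate_normal(np.zeros(k), cov, size=n)
    true[: n // 5] -= 0.5     # a subgroup with lower average true ability
    # Observed = true + error, with error variance implied by reliability.
    err_sd = np.sqrt((1 - reliability) / reliability)
    obs = true + rng.normal(0.0, err_sd, size=(n, k))
    obs /= obs.std(axis=0)    # fixed observed SD, as with standard scores
    comp = obs.sum(axis=1)
    return comp.var(), comp[: n // 5].mean()

for rel in (0.80, 0.95):      # e.g., paper-and-pencil vs. adaptive
    var, sub_mean = simulate(rel)
    print(f"reliability {rel}: composite variance {var:.2f}, "
          f"subgroup mean {sub_mean:.2f}")
```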
January 1, 1988
An experimental computerized adaptive testing (CAT) version of the Armed Services Vocational Aptitude Battery (ASVAB) has been developed and administered, and a new version is in preparation. It is important that each CAT-ASVAB subtest be at least as reliable as its paper-and-pencil counterpart. This report presents two methods for estimating subtest reliabilities of the CAT version of the ASVAB and illustrates them with data from the experimental version. The methods can be used with later versions as well.
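The abstract does not specify the two methods; one standard IRT-based quantity that serves a similar purpose, sketched here purely for illustration, is the marginal reliability computed from each examinee's ability estimate and its reported standard error.

```python
import numpy as np

def marginal_reliability(theta_hat, se):
    """Empirical marginal reliability of adaptive-test scores: the share
    of observed-score variance attributable to true ability.

    theta_hat : ability estimate for each examinee
    se        : standard error reported with each estimate
    """
    theta_hat, se = np.asarray(theta_hat), np.asarray(se)
    obs_var = theta_hat.var(ddof=1)       # variance of observed estimates
    err_var = np.mean(se ** 2)            # average error variance
    return (obs_var - err_var) / obs_var
```

A value at or above the paper-and-pencil subtest's reliability would satisfy the criterion the report sets out.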
October 1, 1987
In any ongoing testing program, new forms of a test are developed and equated to an earlier form. Linear equating is often used when the new form is nearly parallel to the old one, but it can lead to substantial systematic errors. This research contribution proposes and evaluates a new method for test equating that combines the stability of linear equating with the small bias of equipercentile equating. See also 02 057100.00.
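The abstract does not give the method's form; one plausible blend, sketched below as an assumption rather than the report's actual procedure, is a weighted average of the two conversions, with the weight shifted toward the stabler linear line in small samples.

```python
import numpy as np

def blended_equate(x_new, old_scores, new_scores, weight=0.5):
    """Weighted blend of linear and equipercentile conversions
    (illustrative only; not necessarily the proposed method).

    weight : share given to the equipercentile conversion.
    """
    x_new = np.asarray(x_new, dtype=float)
    # Linear component: match the old form's mean and SD.
    a = old_scores.std(ddof=1) / new_scores.std(ddof=1)
    lin = old_scores.mean() + a * (x_new - new_scores.mean())
    # Equipercentile component: match percentile ranks.
    pr = np.mean(new_scores[:, None] <= x_new, axis=0)
    eqp = np.quantile(old_scores, pr)
    return weight * eqp + (1 - weight) * lin
```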