Research for ASVAB

July 1, 1989
The evaluation of aptitude standards used to determine qualification for military specialties must address both the minimum qualifying score and the appropriate aptitude distribution above that minimum. This research memorandum is an initial effort that focuses on identifying the minimum qualifying aptitude score for assigning recruits to occupational specialties. Hands-on job performance tests developed for the Marine Corps infantry occupational field provide the context for the analysis. Subsequent research will address the evaluation of the necessary aptitude distributions.
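As a rough illustration of the cut-score idea only (not the memorandum's procedure, and with simulated data and an assumed performance benchmark), a minimum qualifying score can be read off a regression of hands-on performance on aptitude:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the real data: aptitude composite scores and
# hands-on performance test scores for a sample of infantrymen.
aptitude = rng.normal(100, 20, 2_000)
hands_on = 40 + 0.3 * aptitude + rng.normal(0, 8, aptitude.size)

# Linear regression of hands-on performance on aptitude.
slope, intercept = np.polyfit(aptitude, hands_on, deg=1)

# Assumed minimum acceptable performance level (illustrative number only).
performance_benchmark = 65.0

# Lowest aptitude score whose predicted performance meets the benchmark.
min_qualifying_score = (performance_benchmark - intercept) / slope
print(f"illustrative minimum qualifying aptitude score: {min_qualifying_score:.1f}")
```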
March 1, 1989
All large-scale data collection efforts must contend with the issue of data quality. This research memorandum examines the quality of data collected for the infantry portion of the Marine Corps Job Performance Measurement Project. Particular attention is focused on data inconsistencies and imputation of missing data.
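The memorandum's specific checks are not described in this abstract; a minimal sketch of the general workflow, with hypothetical column names and a simple group-mean imputation, might look like this:

```python
import numpy as np
import pandas as pd

# Hypothetical extract of hands-on test records; column names are illustrative.
df = pd.DataFrame({
    "station": ["rifle", "rifle", "land_nav", "land_nav", "rifle"],
    "score":   [87.0, np.nan, 105.0, 58.0, -4.0],   # scores assumed to lie in 0-100
})

# Flag inconsistencies: recorded scores outside the permissible range.
out_of_range = df["score"].notna() & ~df["score"].between(0, 100)
df.loc[out_of_range, "score"] = np.nan
print(f"records flagged as inconsistent: {out_of_range.sum()}")

# Simple imputation: replace missing scores with the mean for the same station.
df["score"] = df.groupby("station")["score"].transform(lambda s: s.fillna(s.mean()))
print(df)
```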
February 1, 1989
Because the ability scale in item-response theory is arbitrary, if two item pools are calibrated in two different samples, their parameter estimates must be placed on a common metric using items administered in both calibrations. In this memorandum, a maximum-likelihood procedure for doing so is illustrated.
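As a concrete illustration of the linking idea (using the simpler mean/sigma method rather than the memorandum's maximum-likelihood procedure, and with made-up parameter values), the sketch below rescales new-sample item parameters onto the old scale through the common items:

```python
import numpy as np

# Difficulty (b) and discrimination (a) estimates for the common (anchor)
# items, i.e. items administered in both calibrations (values illustrative).
b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
a_old = np.array([1.1, 0.9, 1.3, 1.0, 0.8])
b_new = np.array([-0.9, -0.1, 0.5, 1.1, 1.9])   # same items, new-sample metric
a_new = np.array([1.0, 0.8, 1.2, 0.9, 0.7])

# Mean/sigma linking: find the linear transformation of the new-sample
# ability scale that matches the anchor-item difficulties to the old scale.
A = b_old.std() / b_new.std()
B = b_old.mean() - A * b_new.mean()

# Apply the transformation to every item calibrated in the new sample.
b_new_linked = A * b_new + B
a_new_linked = a_new / A
print(f"linking constants: A = {A:.3f}, B = {B:.3f}")
```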
December 1, 1988
Scores on new forms of a test are equated to those on an old form. Two common equating procedures are linear and equipercentile. Cross-validation is used to show that, with sample sizes of 6500 and above, equipercentile equating is preferable to linear for the Armed Services Vocational Aptitude Battery.
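The two procedures are easy to sketch. The fragment below uses simulated equivalent-group scores, not the ASVAB data analyzed in the report, and is meant only to show how the linear and equipercentile conversions differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative raw scores on the old and new forms for equivalent groups.
old_form = rng.normal(50, 10, 6_500)
new_form = rng.normal(48, 11, 6_500)

def linear_equate(x, new, old):
    # Linear equating: match the mean and standard deviation of the old form.
    return old.mean() + old.std() / new.std() * (x - new.mean())

def equipercentile_equate(x, new, old):
    # Equipercentile equating: map a new-form score to the old-form score
    # with the same percentile rank.
    pct = np.searchsorted(np.sort(new), x, side="right") / len(new)
    return np.quantile(old, pct)

for score in (30, 50, 70):
    lin = linear_equate(score, new_form, old_form)
    eqp = equipercentile_equate(score, new_form, old_form)
    print(f"new-form {score}: linear -> {lin:.1f}, equipercentile -> {eqp:.1f}")
```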
December 1, 1988
Scores on new forms of the Armed Services Vocational Aptitude Battery are equated to those on Form 8A, using samples of about 2,500 recruits per form. Three equating procedures are compared in terms of how well their results are cross-validated in large applicant samples.
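One plausible cross-validation criterion (an assumption for illustration, not necessarily the one used in the report) is the discrepancy between the equated new-form score distribution and the old-form distribution in a holdout applicant sample:

```python
import numpy as np

def crossvalidation_discrepancy(equated_new_scores, old_scores, n_points=99):
    # Root-mean-square difference between the percentiles of equated
    # new-form scores and old-form scores in a holdout (applicant) sample.
    # This criterion is an illustrative assumption, not the report's.
    qs = np.linspace(1, 99, n_points)
    diff = np.percentile(equated_new_scores, qs) - np.percentile(old_scores, qs)
    return float(np.sqrt(np.mean(diff ** 2)))
```

A smaller value indicates that the equating transformation carries over better to the applicant population; the procedure with the smallest discrepancy would be preferred under this criterion.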
October 1, 1988
Replacement of a paper-and-pencil test battery with a computerized adaptive version is likely to increase the reliabilities of the subtests. This leads to an increase in the variances of composite scores and to lower mean scores for subgroups whose average scores are already below those of the general population. These results are illustrated with a computer simulation.
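A simulation of this kind is straightforward to reproduce. The sketch below, with assumed reliabilities, subtest correlations, and subgroup gap, standardizes each subtest against a reference population and shows both effects: the composite variance rises with subtest reliability, and the normed mean of a below-average subgroup falls:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(reliability, true_gap=-0.5, n_subtests=4, true_corr=0.6):
    # True scores for a reference population and for a subgroup whose true
    # mean sits `true_gap` SDs below the reference mean (values illustrative).
    cov = true_corr + (1 - true_corr) * np.eye(n_subtests)
    t_pop = rng.multivariate_normal(np.zeros(n_subtests), cov, n)
    t_sub = rng.multivariate_normal(np.full(n_subtests, true_gap), cov, n)
    # Measurement error consistent with the chosen subtest reliability.
    err_sd = np.sqrt((1 - reliability) / reliability)
    x_pop = t_pop + rng.normal(0, err_sd, t_pop.shape)
    x_sub = t_sub + rng.normal(0, err_sd, t_sub.shape)
    # Norm each subtest on the reference population's observed distribution,
    # then sum the standardized subtests into a composite.
    mu, sd = x_pop.mean(axis=0), x_pop.std(axis=0)
    comp_pop = ((x_pop - mu) / sd).sum(axis=1)
    comp_sub = ((x_sub - mu) / sd).sum(axis=1)
    return comp_pop.var(), comp_sub.mean()

for rel in (0.80, 0.95):   # e.g. paper-and-pencil vs. a more reliable adaptive test
    comp_var, sub_mean = simulate(rel)
    print(f"reliability {rel:.2f}: composite variance {comp_var:.2f}, "
          f"subgroup composite mean {sub_mean:.2f}")
```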
January 1, 1988
An experimental computerized adaptive testing (CAT) version of the Armed Services Vocational Aptitude Battery (ASVAB) has been developed and administered, and a new version is in preparation. It is important that each CAT-ASVAB subtest be at least as reliable as its paper-and-pencil counterpart. This report presents two methods for estimating subtest reliabilities of the CAT version of the ASVAB and illustrates them using data from the experimental version. These methods can be used with later versions.
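The abstract does not spell out the two methods; as a generic illustration of the idea, one common IRT-based estimate ("empirical reliability") compares the variance of the ability estimates with the average squared standard error:

```python
import numpy as np

def empirical_reliability(theta_hat, se):
    # One common IRT-based reliability estimate (an illustration, not
    # necessarily either of the report's methods): the share of the variance
    # of ability estimates not attributable to measurement error, assuming
    # maximum-likelihood-style estimates whose variance includes error variance.
    theta_hat = np.asarray(theta_hat, dtype=float)
    error_var = np.mean(np.asarray(se, dtype=float) ** 2)
    return (theta_hat.var() - error_var) / theta_hat.var()

# Illustrative use with simulated ability estimates and standard errors.
rng = np.random.default_rng(0)
theta_hat = rng.normal(0, 1.05, 5_000)
se = np.full(5_000, 0.30)
print(f"estimated reliability: {empirical_reliability(theta_hat, se):.2f}")
```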
October 1, 1987
In any ongoing testing program, new forms of a test are developed and equated to an earlier form. Linear equating is often used when the new form is nearly parallel to the old one, but it can lead to substantial systematic errors. This research contribution proposes and evaluates a new method for test equating. The method combines the stability of linear equating with the small bias of equipercentile equating. See also 02 057100.00.
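The report's method is not reproduced here; purely as an illustration of the compromise being described, one could imagine blending the two conversions, for example by a weighted average (hypothetical, not the proposed method):

```python
import numpy as np

def linear_equate(x, new, old):
    # Linear equating: match the old form's mean and standard deviation.
    return old.mean() + old.std() / new.std() * (x - new.mean())

def equipercentile_equate(x, new, old):
    # Equipercentile equating: match percentile ranks.
    pct = np.searchsorted(np.sort(new), x, side="right") / len(new)
    return np.quantile(old, pct)

def blended_equate(x, new, old, w=0.5):
    # Hypothetical compromise (NOT the method proposed in the report):
    # a weighted average of the two conversions, trading the stability of
    # the linear conversion against the flexibility of the equipercentile one.
    return w * linear_equate(x, new, old) + (1 - w) * equipercentile_equate(x, new, old)
```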
August 1, 1987
The theory underlying computerized adaptive tests assumes that all items for a given subtest measure a single dimension. This assumption was examined for the math knowledge items in the item pool developed for the Armed Services Vocational Aptitude Battery. Departures from the assumption were found to be minor.
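A rough, generic way to screen for such departures (not necessarily the analysis used in the report) is to examine the eigenvalues of the inter-item correlation matrix, as in the simulated sketch below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 responses to 20 "math knowledge" items driven by a single
# ability factor -- a stand-in for the real item pool.
n_people, n_items = 2_000, 20
ability = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.8, 1.5, size=(1, n_items))     # item discriminations
thresholds = rng.normal(0.0, 1.0, size=(1, n_items))    # item difficulties
logits = loadings * (ability - thresholds)
responses = (rng.random((n_people, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

# Rough unidimensionality screen: eigenvalues of the inter-item correlation
# matrix.  A dominant first eigenvalue is consistent with a single dimension.
eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print(f"first/second eigenvalue ratio: {eigvals[0] / eigvals[1]:.1f}")
```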
August 1, 1987
The computerized adaptive version of the Armed Services Vocational Aptitude Battery will use a Bayesian procedure for computing test scores. Properties of three common Bayesian procedures are examined in this research memorandum. The results show that the procedures are almost equally reliable and that reliability drops if item parameters change between paper-and-pencil and computerized administration.
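The three procedures are not named in this abstract; as a generic illustration of Bayesian scoring, the sketch below computes an expected a posteriori (EAP) ability estimate for a two-parameter logistic response pattern under a standard normal prior, with illustrative item parameters:

```python
import numpy as np

def eap_estimate(responses, a, b, n_quad=61):
    # Expected a posteriori (EAP) ability estimate for a 2PL response pattern
    # under a standard normal prior -- one common Bayesian scoring procedure,
    # shown only to illustrate the general approach.
    theta = np.linspace(-4.0, 4.0, n_quad)                  # quadrature points
    prior = np.exp(-0.5 * theta ** 2)                       # proportional to N(0, 1)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))     # P(correct | theta)
    like = np.prod(np.where(responses == 1, p, 1.0 - p), axis=1)
    posterior = prior * like
    return float(np.sum(theta * posterior) / np.sum(posterior))

# Illustrative item parameters (discrimination a, difficulty b) and responses.
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.4, 1.1])
print(f"EAP ability estimate: {eap_estimate(np.array([1, 1, 0, 1]), a, b):.2f}")
```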