ERIC Identifier: ED315425
Publication Date: 1989-02-00
Author: Grist, Susan - And Others
Source: ERIC Clearinghouse on Tests, Measurement, and Evaluation, Washington, DC; American Institutes for Research, Washington, DC.

Computerized Adaptive Tests. ERIC Digest No. 107.

Paper-and-pencil tests are "fixed-item" tests in which all students answer the same questions. Fixed-item tests waste students' time because each student must work through many items that are either too easy or too difficult, and such items reveal little about that student's particular level of ability. With recent advances in measurement theory and the increasing availability of microcomputers in schools, this practice may change: computerized tests may replace paper-and-pencil tests in some instances.

With computerized tests, each student's ability level can be estimated DURING the testing process and items can be tailored to this estimate of ability. Consequently, students can take different versions of the same test. These tests are called computerized adaptive tests or CATs. This digest offers some insights into the advantages and disadvantages of CATs.
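The cycle described above -- estimate ability, select a matching item, administer it, and re-estimate -- can be sketched in a few lines. This is an illustrative simplification only, not any vendor's algorithm: the simple up/down scoring rule, the step size, and the item bank are all invented for the example.

```python
# Illustrative sketch of an adaptive testing loop. Ability and item
# difficulties share an arbitrary common scale; higher means harder.

def run_adaptive_test(item_bank, answer_fn, n_items=5):
    """item_bank: dict mapping item id -> difficulty.
    answer_fn(item_id) -> True if the student answers that item correctly."""
    ability = 0.0          # start from an average ability estimate
    step = 1.0             # size of each adjustment, halved after every item
    remaining = dict(item_bank)
    administered = []
    for _ in range(n_items):
        # Select the unused item whose difficulty is closest to the estimate.
        item = min(remaining, key=lambda i: abs(remaining[i] - ability))
        del remaining[item]
        administered.append(item)
        # Move the estimate up after a right answer, down after a wrong one.
        ability += step if answer_fn(item) else -step
        step /= 2          # shrink adjustments as evidence accumulates
    return ability, administered
```

Simulating a student who answers correctly whenever an item's difficulty is at or below 1.5 shows the test homing in on that level: each item's difficulty is chosen near the running estimate, so easy and hard items alike are skipped once they stop being informative.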


In general, computerized testing greatly increases the flexibility of test management:

-- Tests are given "on demand," and scores are available immediately.

-- Neither answer sheets nor trained test administrators are needed.

-- Administration is consistent; differences among test administrators are eliminated as a factor in measurement error.

-- Tests are individually paced, so a student does not have to wait for others to finish before going on to the next section. Self-paced administration also offers extra time for students who need it, potentially reducing one source of test anxiety.

-- Test security is increased because hardcopy test booklets are never compromised.

Computerized testing also offers a number of options for timing and formatting. Timing options range from self-paced administration to item-by-item timing. Also, different formats can be developed to take advantage of graphics and timing capabilities. For example, perceptual and psychomotor skills that are nearly impossible to assess with a paper-and-pencil test can be readily tested on a computer.

In addition to having the advantages of computerized testing, CATs increase efficiency. Significantly less time is needed to administer CATs than fixed-item tests, since fewer items are needed to achieve acceptable accuracy. CATs can reduce testing time by more than 50% while maintaining the same level of reliability. Shorter testing times also reduce fatigue, which can be a significant factor in students' test results.

CATs can also provide accurate scores over a wide range of abilities. Traditional tests are usually most accurate for students of average ability, whereas CATs can maintain a high level of accuracy for all students: by including more relatively easy and more relatively difficult items in the item pool, CATs can accommodate high- and low-ability students alike.


CATs are not appropriate for every subject or skill. Most CATs are based on an item response theory (IRT) model, which assumes that all the information needed to select items can be summarized in one to three parameters describing an item's difficulty for students of different abilities. Many tests, however, cover a number of different skills or topics, and specifications for traditional tests seek to ensure even coverage across those skills or topics. Most common CAT strategies do not accommodate such additional considerations.
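The item response model referred to above can be made concrete. A widely used version is the three-parameter logistic (3PL) model, which gives the probability that a student of ability theta answers an item correctly from the item's discrimination (a), difficulty (b), and pseudo-guessing (c) parameters. The numeric values used below are invented for illustration.

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability that a
    student of ability theta answers the item correctly.
    a: discrimination (how sharply the item separates ability levels)
    b: difficulty (ability at which P is midway between c and 1)
    c: pseudo-guessing (chance of a correct answer by guessing alone)"""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

For example, a student whose ability exactly matches an item's difficulty (theta = b) has probability (1 + c) / 2 of answering correctly -- 0.6 when c = 0.2 -- and the probability rises toward 1 as ability increases and falls toward c (never below the guessing floor) as it decreases.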

Hardware limitations further restrict the types of items that can be administered by computer. Items involving detailed art work and graphs or extensive reading passages, for example, are hard to present using the types of computers found in most schools.

Another limitation of CATs stems from the need for careful item calibration. Since each student takes a different set of items, comparable scores depend heavily on precise estimates of item characteristics. Therefore, relatively large calibration samples must be used: 1,000 students is a minimum, and 2,000 is more common. Such sample size requirements are prohibitive for most locally developed tests.

Finally, for CATs to be manageable, a facility must have enough computers for a large number of students and the students must be at least partially computer-literate. While the number of computers in schools continues to grow, many schools simply do not have the resources to use CATs as a standard practice.


CATs are new and the number of companies and organizations using them is small. However, several prominent organizations are already using CATs.

For example, for the past decade, the U.S. military has pioneered basic and applied research in CATs. One step in this research program is the development of a computerized version of the Armed Services Vocational Aptitude Battery (ASVAB), headed by the Navy Personnel Research and Development Center in San Diego. Administered to roughly a half million applicants each year, the paper-and-pencil version of the ASVAB takes three hours to complete, while the experimental CAT version takes about 90 minutes. With the computerized version, an examinee's qualifying scores can be immediately compared with requirements for all available positions.

Another test developed by military research laboratories -- the Computerized Adaptive Screening Test (CAST) -- was implemented in 1984. CAST was the first nationwide use of CAT. This 15-minute screening test gives prospects a quick but accurate estimate of their chances of passing the full ASVAB and of qualifying for enlistment bonuses.

As another example, two public school systems are forerunners in using CATs in the educational arena. In Portland (OR) Public Schools, CATs have been well received by examinees, test administrators, and test users. Montgomery County (MD) Public Schools has asked for approval from the State Board of Education to make its mathematics and reading CATs available to students as an alternative to the state-sponsored high school graduation examinations.


The following six organizations are now involved in computerized adaptive testing:

Assessment Systems Corporation markets the MicroCAT system, which runs on IBM-PCs and compatibles. MicroCAT is a complete authoring and administration system and includes routines for item analysis and item-pool development. The Montgomery County Schools CAT program is based on MicroCAT.

WICAT markets software to support CAT developers, a battery of 45 tests, and custom CAT computer systems. Schools use WICAT's battery of CATs to screen and identify gifted and talented students.

The Psychological Corporation markets a CAT version of the popular Differential Aptitude Test (DAT) to junior and senior high schools. It has versions for the IBM-PC and Apple II computers.

American College Testing Program (ACT) is working on several computerized adaptive tests. ACT is developing training CATs for the Marine Corps and for college placement mathematics. It is also researching the development of a multidimensional CAT.

The Educational Testing Service is working with the College Entrance Examination Board to develop and refine a CAT to aid in college placement. An initial version of the system is being used by about 20 colleges across the country.

The American Institutes for Research recently completed a major revision of the Army's Computerized Adaptive Screening Test (CAST). The CAST item pool was expanded, fairness analyses were conducted, item selection procedures were modified to increase accuracy at key points, and the feedback provided to examinees and recruiters was significantly improved.


Green, Bert F., et al. "Technical Guidelines for Assessing Computerized Adaptive Tests," Journal of Educational Measurement. 1984, 21, 4, pp. 347-360.

Kreitzberg, Charles, et al. "Computerized Adaptive Testing: Principles and Directions," Computers and Education. 1978, 2, 4, pp. 319-329.

Wainer, Howard. "On Item Response Theory and Computerized Adaptive Tests: The Coming Technological Revolution in Testing," Journal of College Admissions. 1983, 28, 4, pp. 9-16.

Weiss, David J. "Adaptive Testing by Computer," Journal of Consulting and Clinical Psychology. 1985, 53, 6, pp. 774-789.
