Chapter 1 ACER ConQuest: An Introduction

ACER ConQuest is currently at Version 5. To cite this version:

  • Adams, R. J., Wu, M. L., Cloney, D., Berezner, A., & Wilson, M. (2020). ACER ConQuest: Generalised Item Response Modelling Software (Version 5.29) [Computer software]. Australian Council for Educational Research. https://www.acer.org/au/conquest

This chapter provides a brief survey of the models that ACER ConQuest can fit and of some applications for which these models can be used.

1.1 What is ACER ConQuest?

ACER ConQuest is a computer program for fitting item response and latent regression models. It provides analysts with a comprehensive and flexible range of item response models, allowing them to examine the properties of performance assessments, traditional assessments and rating scales. ACER ConQuest also makes available to the wider measurement and research community up-to-date psychometric methods, including multifaceted item response models, multidimensional item response models, latent regression models and the drawing of plausible values.

1.2 What are the models that ACER ConQuest can fit?

ACER ConQuest brings together in a single program a wide variety of item response models (including multidimensional models) and provides an integration of item response and regression analysis.

1.2.1 Rasch’s Simple Logistic Model

Rasch’s simple logistic model for dichotomies (Rasch, 1980) is the simplest of all commonly used item response models. This model is applicable for data that are scored into two categories, generally representing correct and incorrect answers. ACER ConQuest can fit this model to multiple choice and other dichotomously scored items.
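A minimal command file for fitting the simple logistic model might look like the sketch below. The data file name, column positions and answer key are illustrative only and would be replaced by the details of your own data set.

  /* Fit Rasch's simple logistic model to 12 dichotomously scored items */
  datafile mytest.dat;              /* hypothetical fixed-format response file        */
  format id 1-5 responses 12-23;    /* student id and the 12 response columns         */
  key acddbcebbacc ! 1;             /* responses matching the (hypothetical) key -> 1 */
  model item;                       /* one difficulty parameter per item              */
  estimate;                         /* marginal maximum likelihood estimation         */
  show >> mytest.shw;               /* write parameter estimates and fit to a file    */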

1.2.2 Rating Scale Model

Andrich’s extension of the simple logistic model (Andrich, 1978) allows the analysis of sets of rating items that have a common, multiple-category response format. The rating scale model is of particular value when examining the properties of the Likert-type items that are commonly used in attitude scales.
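In ACER ConQuest the rating scale model is requested by adding a single set of step parameters shared by all items. A sketch of the relevant statements follows; the number of items, response codes and column positions are illustrative.

  /* Rating scale model for Likert-type items with categories 0-3 */
  format id 1-5 responses 10-29;    /* 20 hypothetical attitude items          */
  codes 0,1,2,3;                    /* valid response categories               */
  model item + step;                /* item difficulties plus one common set of
                                       step parameters shared by all items     */
  estimate;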

1.2.3 Partial Credit Model

Masters’ extension of the simple logistic model (Masters, 1982) to the partial credit model allows the analysis of a collection of cognitive or attitudinal items that can have more than two levels of response. This model is now widely used with performance assessments that yield richer data than the dichotomous data that are typically generated by traditional assessment practices.
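The partial credit model differs from the rating scale model only in the model statement: each item receives its own set of step parameters. A minimal, illustrative specification follows.

  /* Partial credit model: separate step parameters for each item */
  format id 1-5 responses 10-29;    /* hypothetical column layout              */
  codes 0,1,2,3;
  model item + item*step;           /* item*step gives each item its own steps */
  estimate;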

1.2.4 Ordered Partition Model

Wilson’s extension of the partial credit model (Wilson, 1992) to the ordered partition model allows a many-to-one correspondence between item response categories and scores. Most item response models require a one-to-one correspondence between the categories of response to items and the level of performance that is attributed to those categories. For example, the dichotomous Rasch model, as its name implies, models two categories of performance on an item. These categories are usually identified with the digits 0 and 1, which serve both as category labels and as score levels. Similarly, for the partial credit model, the digits 0, 1, 2 and so on serve both as category labels and as score levels. In each case, there is a one-to-one correspondence between the category label and the score level.

ACER ConQuest allows this correspondence between category labels and score levels to be broken by permitting items to have any number of categories assigned to the same score level, while the categories are still modelled separately. For example, an item that taps students’ conceptual understanding of a science concept may elicit responses that reflect four different types of conceptual understanding. One of the responses may be considered very naive and scored as level zero, a second type of response may be regarded as indicative of a sophisticated understanding and be scored as level two, and the two remaining categories may both indicate partially correct, but qualitatively different, misconceptions that can each be reasonably scored as level one. Through the ordered partition model, ACER ConQuest can analyse this as a four-category item with three score levels.
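The many-to-one mapping in the science example above can be expressed with a score statement. The sketch below assumes the item is item 5 of a test and uses the category codes 0, 1, 2 and 3, with codes 1 and 2 (the two qualitatively different misconceptions) both scored one and code 3 (the sophisticated response) scored two; all of these details are illustrative.

  /* Ordered partition scoring: four categories, three score levels */
  codes 0,1,2,3;
  score (0,1,2,3) (0,1,1,2) ! items(5);   /* map codes to scores for hypothetical item 5 */
  model item + item*step;
  estimate;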

1.2.5 Linear Logistic Test Model

Fischer (1983) developed a form of Rasch’s simple logistic model that allows the item difficulty parameters of items to be specified as linear combinations of more fundamental elements, such as the difficulties of cognitive subtasks that might be required by an item. ACER ConQuest is able to fit the linear logistic model to both dichotomous and polytomous response items.
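In the linear logistic test model the difficulty of an item is not estimated freely but is written as a weighted sum of more basic parameters. In one common (illustrative) notation,

  \delta_i = \sum_{k=1}^{K} q_{ik}\,\eta_k ,

where q_ik is the known weight with which basic component k (for example, a cognitive subtask) enters item i, and eta_k is the difficulty contributed by that component.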

1.2.6 Multifaceted Models

Linacre’s multifaceted model (Linacre, 1994) is an extension of the linear logistic model to partial credit items. Standard item response models have assumed that the response data that are modelled result from the interaction between an object of measurement (a student, say) and an agent of measurement (an item, say). Linacre (1994) has labelled this two-faceted measurement, one facet being the object of measurement and the other the agent of measurement. In a range of circumstances, however, additional players, or facets, are involved in the production of the response. For example, in performance assessment, a judge or rater observes a student’s performance on tasks and then allocates it to a response category. Here we have three-faceted measurement, where the response is determined by the characteristics of the student, the task and the rater. The general class of models that admit additional facets are now called multifaceted item response models.
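For the three-faceted example described above, a rater facet is read from the data file and added to the model statement. The variable name and column positions below are illustrative.

  /* Three-faceted model: students, items and raters */
  format rater 1-2 id 4-8 responses 10-19;   /* rater code read alongside each record    */
  model item + rater + item*step;            /* rater harshness as an additional facet   */
  estimate;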

1.2.7 Generalised Unidimensional Models

ACER ConQuest’s flexibility, which enables it to fit all of the unidimensional models described above, derives from the fact that the underlying ACER ConQuest model is a repeated-measures, multinomial, logistic-regression model that allows the arbitrary specification of a linear design for the item parameters. ACER ConQuest can automatically generate the linear designs needed to fit models like those described above, or it can import user-specified designs that allow the fit of a myriad of other models to be explored (see section 2.10). Imported designs can be used to fit mixtures of two-faceted and multifaceted responses, to impose equality constraints on the parameters of different items, and to mix rating scales with different formats, to name just a few possibilities.
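In the unidimensional case this underlying model can be sketched as a multinomial logistic model in which the parameter for category j of item i is a linear combination, specified through a design vector a_ij, of a set of basic parameters ξ. The notation below is illustrative and follows the spirit of Adams & Wilson (1996):

  P(X_{ij}=1 \mid \theta) = \frac{\exp\left(b_{ij}\theta + \mathbf{a}_{ij}'\boldsymbol{\xi}\right)}{\sum_{k=1}^{K_i}\exp\left(b_{ik}\theta + \mathbf{a}_{ik}'\boldsymbol{\xi}\right)} ,

where b_ij is the score assigned to category j of item i. Choosing the design vectors appropriately yields each of the models described above; importing a user-specified design matrix simply replaces the automatically generated vectors.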

1.2.8 Multidimensional Item Response Models

ACER ConQuest analyses are not restricted to models that involve a single latent dimension. ACER ConQuest can be used to analyse sets of items that are designed to produce measures on up to ten latent dimensions. Wang (1995) and Adams, Wilson, & Wang (1997) have described two types of multidimensional tests: multidimensional between-item tests and multidimensional within-item tests. Multidimensional between-item tests are made up of subsets of items that are mutually exclusive and measure different latent variables. That is, each item on the test serves as an indicator for a single latent dimension. In multidimensional within-item tests, each of the items can be an indicator of multiple latent dimensions. ACER ConQuest can fit multidimensional forms of all of the above-listed models, allowing confirmatory analyses of either multidimensional within-item or multidimensional between-item tests.1
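A multidimensional between-item analysis is specified with score statements that allocate each item to a single dimension. The sketch below assumes a hypothetical 12-item dichotomous test in which items 1 to 6 measure the first dimension and items 7 to 12 the second; the empty parentheses indicate that an item does not score on that dimension.

  /* Two-dimensional between-item model */
  score (0,1) (0,1) ( )   ! items(1-6);    /* items 1-6 score on dimension 1 only  */
  score (0,1) ( )   (0,1) ! items(7-12);   /* items 7-12 score on dimension 2 only */
  model item;
  estimate;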

1.2.9 Latent Regression Models

The term latent regression refers to the direct estimation of regression models from item response data. To illustrate the use of latent regression, consider the following typical situation. We have two groups of students, group A and group B, and we are interested in estimating the difference in the mean achievement of the two groups. If we follow standard practice, we will administer a common test to the students and then use this test to produce achievement scores for all of the students. We would then follow a standard procedure, such as regression (which, in this simple case, becomes identical to a t‑test), to examine the difference in the means. Depending upon the model that is used to construct the achievement scores, this approach can result in misleading inferences about the differences in the means. Using the latent regression methods described by Adams, Wilson, & Wang (1997), ACER ConQuest avoids such problems by directly estimating the difference in the achievement of the groups from the response data.
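For the two-group example above, the group membership variable is read from the data file and named in a regression statement, so that the group difference is estimated directly from the response data rather than from constructed scores. The variable name and column positions below are illustrative.

  /* Latent regression of achievement on group membership */
  format group 1 id 3-7 responses 9-28;   /* group coded 0 (A) or 1 (B); hypothetical layout */
  model item;
  regression group;                       /* mean difference estimated directly              */
  estimate;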

1.3 How does ACER ConQuest fit these models?

ACER ConQuest produces marginal maximum likelihood estimates for the parameters of the models summarised above. The estimation algorithms used are adaptations of the quadrature method described by Bock & Aitkin (1981), Gauss-Hermite quadrature, and the Monte Carlo method of Volodin & Adams (1995). The fit of the models is ascertained by generalisations of the Wright & Masters (1982) residual-based methods that were developed by Wu (1997). A summary of these procedures is provided in Estimation in Chapter 3, Technical matters.
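The estimation method is selected through options on the estimate statement. The sketch below assumes the method and nodes options; the node counts are illustrative and should be chosen to suit the size of the model being fitted, and the available option values should be checked against the estimate statement documentation for your version.

  estimate ! method=quadrature, nodes=15;    /* Bock-Aitkin quadrature              */
  estimate ! method=gauss, nodes=15;         /* Gauss-Hermite quadrature            */
  estimate ! method=montecarlo, nodes=2000;  /* Monte Carlo method (Volodin & Adams) */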

1.4 Some applications of ACER ConQuest

1.4.1 Performing item analysis

With each of the models that it fits, ACER ConQuest provides parameter estimates, errors for those estimates, and diagnostic indices of fit. These are the basic components of an item analysis based on item response modelling. In addition to producing item response modelling-based information, ACER ConQuest produces an array of traditional item statistics, such as KR-20 and Cronbach’s alpha coefficients of reliability, distractor analyses for multiple choice questions, and category analyses for multicategory items.
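The traditional statistics are produced with the itanal statement, usually alongside a show statement for the item response model output. The file names below are illustrative.

  itanal >> mytest.itn;   /* traditional statistics: reliability, item-total correlations,
                             distractor and category analyses                              */
  show   >> mytest.shw;   /* item response model estimates, standard errors and fit        */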

1.4.2 Examining Differential Item Functioning

ACER ConQuest provides powerful tools for examining differential item functioning. ACER ConQuest’s facility for fitting multifaceted models and imposing linear constraints on item parameters allows convenient but rigorous testing of the equality of item parameter estimates in multiple groups.
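A typical DIF analysis includes the grouping variable and its interaction with item as facets in the model; large or misfitting item-by-group interaction terms indicate differential item functioning. The variable name and column positions below are illustrative.

  /* DIF with respect to a hypothetical gender variable */
  format gender 1 id 3-7 responses 9-28;
  model item + gender + item*gender;   /* item*gender terms capture DIF */
  estimate;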

1.4.3 Exploring Rater Effects

The exploration of rater effects is an important application of multifaceted models implemented in ACER ConQuest. Multifaceted models can be used to examine variation in the harshness or leniency of raters, they can be used to examine the propensity of raters to favour different response categories, and they can be used to examine the fit (or consistency) of individual raters with other raters.
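Each of these rater questions corresponds to a term in the model statement. The sketch below assumes a rater facet has been defined in the format statement: a rater main effect captures harshness or leniency, while adding a rater-by-step interaction allows raters to differ in how they use the response categories; rater fit statistics are reported with the other parameter estimates.

  /* Rater harshness only */
  model item + rater + item*step;
  /* Alternatively, add rater*step to allow rater-specific category use */
  model item + rater + rater*step + item*step;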

1.4.4 Estimating Latent Correlations and Testing Dimensionality

By providing the opportunity to fit multidimensional item response models, ACER ConQuest allows the correlations between latent variables to be estimated. Estimating the correlations in this fashion avoids the attenuation and bias that measurement error introduces when correlations are computed from constructed variables. Fitting alternative posited dimensionality structures in ACER ConQuest and comparing the fit of the resulting models also provides a powerful mechanism for formally checking dimensionality assumptions.
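A dimensionality check usually involves fitting competing models to the same items and comparing their deviances, which are reported in the show output; the estimated latent correlations appear in the population covariance/correlation matrix reported for the multidimensional run. For the hypothetical 12-item test used in section 1.2.8, the unidimensional competitor simply scores every item on a single dimension:

  /* Unidimensional competitor for comparison with the two-dimensional model */
  score (0,1) (0,1) ! items(1-12);
  model item;
  estimate;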

1.4.5 Drawing Plausible Values

The combination of item response modelling techniques and methods for dealing with missing-response data through multiple imputation has resulted in the so-called plausible values methodology (Mislevy, 1991) that is now widely used in sophisticated measurement contexts. Through the use of plausible values, secondary analysts are able to use standard software and techniques to analyse data that have been collected using complex matrix sampling designs. A particularly powerful feature of ACER ConQuest is its ability to draw plausible values for each of the models that it fits.
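Plausible values are written out with the show statement after a model has been estimated. The form below is a sketch; the option value and file name are illustrative and should be checked against the show statement documentation for your version, and the number of plausible values drawn per case is controlled by program settings.

  show cases ! estimates=latent >> mytest.pls;   /* writes plausible values for each case */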

1.5 Where to find more information

ACER ConQuest is able to fit a large range of statistically sophisticated models, and it is not possible for either the measurement or statistical theory underpinning those models to be adequately discussed in this manual. Nor is it possible for the manual to do anything but touch on the range of applications to which the models can be applied. For those interested in further information on the ACER ConQuest models and their application, we refer you to the following papers: Adams & Wilson (1996); Adams, Wilson, & Wang (1997); Adams, Wilson, & Wu (1997); Mislevy et al. (1992); Wright & Masters (1982); and Wright & Stone (1979).

1.6 Installing ACER ConQuest

ACER ConQuest is available for both Windows and Mac OS and must be installed before it can be used. On Windows, double-click the installer file (ACER ConQuest.msi) to be guided through the installation.

On Mac OS, open the installer disk image (ConQuest_X_YY_Z.dmg, where X_YY_Z is the version number) and drag the ConQuest folder to the Applications folder (or to any other location you would like to install to).

To open ACER ConQuest, double click the icon in the install location, or type the location of the executable in a console window. The default install locations are:

  • Windows
    • %ProgramFiles%\ACER ConQuest\ConQuestConsole.exe
    • %ProgramFiles%\ACER ConQuest\ConQuestGUI.exe
    • NOTE: A deprecated 32-bit version of ConQuest is installed to %ProgramFiles(x86)% instead.
  • Mac OS x86
    • ~/Applications/ConQuest/ConQuest

1.6.1 Licence key instructions

Once you have installed ACER ConQuest, you will need to activate your licence. The licence key activates both console and GUI versions (the GUI is Windows only).

1.6.1.1 Console Mode

Start the program, type the following ACER ConQuest set command (using the key provided to you instead of ‘std-999-0000’) and press Enter.

set softkey=std-999-0000;

Then close and restart the program.

1.6.1.2 GUI Mode

Start the program and select File, then New. In the Input Window, type the following ACER ConQuest set command (using the key provided to you instead of ‘std-999-0000’).

set softkey=std-999-0000;

Now select Run, then Run All. Then close and restart the program.


  1. If ACER ConQuest is being used to estimate a model that has within-item multidimensionality, the set command argument lconstraints=cases is normally required. A within-item multidimensional model can be estimated without lconstraints=cases, but this requires the user to define and import an appropriate design matrix.