Psychological terminology from A to Z

Adaptive testing
In adaptive testing the presentation of items to a respondent taking an ability test is individualized. Respondents first receive an item of medium difficulty. If they answer the item correctly, a more difficult one is presented; if they do not, an easier item follows. This process is repeated until a sufficiently good estimate of the respondent’s ability has been obtained. The basis for adaptive testing is the Rasch model.

Alertness
A sub-function of attention; describes an individual’s general alertness or readiness to respond.

Assessment Center
An Assessment Center is an elaborate assessment process in which the individuals to be assessed are required to complete a number of exercises (e.g. group discussion, presentation) in front of a panel of observers. The advantage of the Assessment Center is that behavior can be directly observed in a simulated work environment.

Assessment process
The systematic collection, evaluation and processing of specific information in order to be able to derive particular conclusions, prognoses and interventions from it.

Base rate
The relative frequency with which a particular characteristic occurs in a particular population; e.g. the percentage of suitable applicants among all applicants.

Big Five personality model
A model that assumes that there are five basic dimensions of human personality. Has been repeatedly confirmed and is used worldwide. The dimensions are emotional stability (neuroticism), extraversion, openness to experience, agreeableness and conscientiousness.

Branched testing
A form of adaptive testing. On the basis of the number of correctly worked items in a starting block, other items are selected and presented in predefined blocks. This enables adaptive testing to be carried out even without the use of a computer.

Classical test theory
Also termed measurement error theory. The basic assumption of classical test theory is that each measured score is comprised of a true score and an error score (measurement error). There are also other assumptions (axioms). These axioms state that measurement error occurs randomly, has a mean (expected value) of 0 and does not correlate with the true score on the test in question or on another test. Classical test theory focuses on the overall result of the test. The quality criterion of reliability is strongly anchored in classical test theory.

Cognitive
Related to processes of perception, thought, memory or visualization.

Confidence interval
This states a score range around a measured score within which the true (measurement-error-free) score can be expected to lie with a particular probability. This means that the greater the quoted degree of certainty, the larger the score range within which the true score may lie. It is calculated on the basis of reliability.
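A sketch of how such an interval is typically computed (a standard textbook formula, not specific to any particular test; the example figures are invented):

```python
import math

def confidence_interval(score, sd, reliability, z=1.96):
    """95% confidence interval around an observed test score.

    The standard measurement error is SD * sqrt(1 - reliability);
    the interval is score +/- z * SEM (z = 1.96 for 95% certainty).
    """
    sem = sd * math.sqrt(1 - reliability)
    return score - z * sem, score + z * sem

# Example: observed IQ of 110 on a test with reliability .90 (IQ scale SD = 15).
low, high = confidence_interval(110, 15, 0.90)
```

A higher degree of certainty (larger z) widens the interval, just as the definition states.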

Construct
Constructs are mental or theoretical concepts that are not directly observable or measurable (e.g. intelligence). The concept in question must therefore be investigated by means of other, measurable concepts (indicators), such as test items. The process of “investigation” is called operationalization.

Correlation
Statistical relationship between two (or more) characteristics. Both the direction of the correlation and its strength are described. The correlation coefficient that is calculated can assume values between –1 and +1. The +/- sign indicates the direction of the correlation, while the number describes its size. The larger the absolute value (regardless of the +/- sign), the stronger the correlation.
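The coefficient can be computed directly from two samples of paired scores (a minimal sketch with invented data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfect negative linear relationship (y = 10 - 2x) yields r = -1.
r = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```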

Crystallized intelligence
Knowledge (e.g. factual knowledge) and abilities (e.g. arithmetic) built up in the course of a lifetime. Crystallized intelligence depends on fluid intelligence and is also determined by environmental conditions (e.g. cultural background).

Divided attention
The ability to divide attention between two or more stimuli.

Economy
Quality criterion of a test. Indicates whether the effort and resources involved in administering a test (e.g. time, materials) are justifiably proportional to the expected information gain (benefit).

Empirical
Based on objective facts; empirical research is distinguished from everyday perception by the systematic nature of the process – the systematic acquisition of data – and by the requirement, not normally applied to everyday experience, for observations to be objective and replicable.

Evaluation
Evaluation encompasses all the methods that are used in the empirical investigation of the quality/effectiveness of a measure (such as a training program). Evaluation provides the basis for reviewing existing measures for the purpose of improving them.

Executive functions
Higher mental processes that are needed for the planning, execution and monitoring of actions and thought processes.

Expressive
Relating to expression; for example, expressive speech disorders are disorders of verbal expression or of speech production.

External criterion
An external criterion is a comparison value that is extremely important in the development of psychological assessment procedures since it can be used to check a statistical correlation. By this means it is possible to identify whether two tests measure the same thing or different things. External criteria may be similar tests, assessments by teachers/parents, etc.

Fairness
A test quality criterion; it is met if an individual’s test result does not lead to any systematic discrimination on the basis of ethnic or cultural background, socio-economic circumstances or gender.

Factor analysis
Statistical procedure for identifying common aspects in sets of data. The aim is to reduce the quantity of data so that the material is easier to interpret. For example, factor analysis is frequently used with personality questionnaires in order to identify questions that can be combined into issue blocks representing personality traits. This enables a large number of questions to be reduced to a small number of more easily interpretable personality characteristics.

Fluid intelligence
The ability to solve new types of problem and to adapt to new situations and conditions. Fluid intelligence is independent of existing knowledge. According to Cattell, fluid intelligence involves the ability areas of logical reasoning, attention and memory. It is independent of language and culture and is a precondition for crystallized intelligence.

Flynn effect
States that average performance on ability tests (intelligence tests) rises over time – by roughly 3 IQ points per decade in Flynn’s data. This means that norm data must be regularly revised. The effect was documented in large-scale studies published by Flynn in 1984 and 1987.

Focused attention
The ability to restrict attention to a particular portion of reality, in order to perceive that portion more accurately.

General factor (g factor)
Spearman assumes in his theory of intelligence that there is a basic dimension of intelligence that influences or forms the basis of all the different ability areas. He terms this dimension the g factor (general factor).

Glia cells
Various types of cells in the brain and nervous system that serve to protect, support and supply the nerve cells. Roughly comparable to the insulation of an electrical cable (where the wire corresponds to the nerve cell).

Homogeneity
Describes the degree of similarity between individuals, items or characteristics. For example, a group of items in an ability test must be homogeneous if all the items in the group are supposed to measure the same ability.

Hypothesis
A statement about correlations, differences or changes that are to be scientifically investigated. Hypotheses are derived from theories and tested by means of statistical tests. It is not possible to prove (verify) a hypothesis; it can only be refuted (falsified).

Indicator
A directly measurable (manifest) variable used to measure a latent construct. Items in a test are indicators for the construct that the test is designed to measure.

Intelligence quotient (IQ)
Measure of a person’s intellectual ability. The IQ scale is defined as having a mean of 100 and a standard deviation of 15 points; this means that anyone with an IQ between 85 and 115 can be said to be of average intelligence. People with extreme IQ scores are classified as having an intellectual disability (IQ < 70) or as highly gifted (IQ >= 130).
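Because the scale is defined via a normal distribution, the population shares behind these cut-offs follow directly (a small sketch using Python’s standard library):

```python
from statistics import NormalDist

# IQ scale: normal distribution with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Share of the population within one SD of the mean (IQ 85-115),
# i.e. "average intelligence" as defined above: about 68%.
average_share = iq.cdf(115) - iq.cdf(85)

# Share at or above the giftedness cut-off of IQ 130 (two SDs above
# the mean): about 2.3%.
gifted_share = 1 - iq.cdf(130)
```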

Intelligence theory
There is at present no authoritative, generally accepted theory of intelligence. Various models exist, sometimes with major differences between them. Common to most models is the idea that intelligence is not a construct in itself but rather a system of different abilities that guide intelligent behavior.

Intelligence theory according to Cattell
An expansion of the general-factor model of Spearman. Assumes that the g factor is made up of two components, fluid (gf) and crystallized (gc) intelligence.

Intelligence theory according to Cattell-Horn
An expansion of the model of Cattell. Postulates further factors in addition to fluid and crystallized intelligence.

Item parameter
Characteristic value of a test item that expresses its difficulty. In classical terms, it indicates what percentage of a respondent sample work the item correctly.

Item-response theory
A test theory that investigates how conclusions about underlying latent variables (e.g. the respondent’s spatial visualization ability) can be drawn from manifest data (e.g. responses to test items). Assumes that an item’s solution probability depends on an individual’s ability (person parameter) and the item difficulty (item parameter). The best-known model is that of Rasch. Item-response theory focuses on how the answers to individual items arise.

Interview method
Form of data-gathering. The interview can be more or less structured, depending on the type and quantity of information to be collected. Examples of interview methods are taking case histories and recording life data (anamnesis) in a clinical context, and interviews in a wide range of situations.

Latent variable
Variable that cannot be directly observed or measured and can only be deduced from other, measurable circumstances; e.g. personality traits such as extraversion and other constructs.

Learning effect
Undesirable effects in the context of testing that arise through repeated presentation of identical or similar items. As a result of learning effects, respondents who are tested on a second occasion may obtain results that are better than they would be expected to achieve on the basis of their actual ability, because they have remembered the answers or become familiar with the type of item. Their results are thus affected by memory factors.

Local stochastic independence
Means that answering an item correctly does not influence the probability of answering another item correctly – in other words, the order of the items is unimportant. An item’s solution probability must depend only on the respondent’s ability (person parameter) and the difficulty of the item (item parameter).

Manifest variable
Directly observable and directly measurable variable, e.g. body weight.

Measure of spread
Statistical parameter that indicates the spread or dispersion of the values in a distribution. Commonly encountered measures of spread are standard deviation and variance.

Measurement error
Any measurement is associated with a degree of error, irrespective of the test used. The smaller the error, the more reliable the measurement. The size of the measurement error is expressed by the reliability. The measurement error forms the basis for calculation of the confidence interval.

Mental disorder
Severe impairment of the normal functioning of mental capacities in the area of ability and/or personality (thinking, feeling, acting).

Neuroimaging
Assessment of the brain or nervous system using imaging techniques such as computed tomography (CT) or functional magnetic resonance imaging (fMRI).

Neuronal plasticity
The brain’s ability to renew and/or restructure connections (synapses) between nerve cells as a means of adapting to changed conditions (learning processes, environmental conditions, injury, etc.). The basis of all learning processes and of rehabilitation after brain injury.

Normal distribution
Specific form of statistical distribution, which in graph form resembles a bell curve (Gaussian bell curve). A normal distribution is a basic requirement of many statistical tests and parameters.

Norming
Quality criterion of a test. Norming involves compiling a standard of comparison for a test (calibration). It can be compared to calibrating a weighing scale, e.g. in kg or lb.

Objective personality tests
Objective tests for measuring personality are tools that measure an individual’s behavior in a standardized situation directly, usually without requiring the individual to assess himself. To respondents these tests look like ability tests; their measurement purpose is not transparent.

Objectivity
Quality criterion of a test. Objectivity describes the extent to which administration, scoring and interpretation of a test always produce the same results, irrespective of the test user. Objectivity can be attained by standardizing a test.

Operationalization
Operationalization is the way in which a construct is made measurable. It involves measuring the latent trait (construct) by means of manifest indicators. Test items are a type of manifest indicator for the latent construct that the test is designed to measure. The test (e.g. INSBAT) is then the operationalization of the trait (e.g. intelligence).

Orienting of attention
Directing attention towards a particular direction in space or a particular time.

Parameter
(1) In statistics, a characteristic value of a distribution (mean, standard deviation, etc.);
(2) General term in mathematics for undefined, constant or variable quantities and auxiliary quantities.

Person parameter
A value from item-response theory models. The person parameter is a measure of the latent trait that underlies the manifest test behavior.

Personality theory
Personality theories attempt to integrate psychological knowledge about human individuality and explain the internal correlations between personality traits. In addition, they try to describe and explain inter- and intra-individual differences in personality. Personality theories thus contain systems for describing, explaining and predicting individual human characteristics.

Personality theory according to H.J. Eysenck
Assumes that human personality is mainly genetically/biologically determined. The theory describes three factors of personality (extraversion/introversion, neuroticism, psychoticism) and assigns biological correlates to them.

Primary factors theory
Developed by Thurstone. Describes seven basic abilities of human intelligence (primary mental abilities).

Probabilistic test theory
See item-response theory.

Projective tests
Tests that claim to identify the individual’s basic personality structure and motives. The way the individual explains or responds to material or stimuli that can be interpreted in a number of different ways is designed to reveal unconscious aspects of the personality from which inferences about the personality can then be drawn. The Rorschach test is a well-known example of this type of test.

Quality criterion
Qualitative information about the effectiveness of a test. There are 3 main quality criteria (objectivity, reliability and validity) and 7 secondary quality criteria. Purpose: assessment of the quality of psychological tests.

Rasch model
Statistical model in the context of item-response theory that is concerned with the solution probability of an item. Whether an item is solved depends on the difficulty of the item (item parameter) and the ability of the individual (person parameter). The Rasch model is the basis for adaptive testing.
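The Rasch model’s solution probability can be written out directly (person parameter theta, item parameter b; the example values are invented):

```python
import math

def rasch_probability(theta, b):
    """Solution probability under the Rasch model:
    P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is the person parameter and b the item parameter (difficulty).
    """
    return 1 / (1 + math.exp(-(theta - b)))

# When ability exactly matches item difficulty, the solution probability is 0.5.
p_equal = rasch_probability(0.0, 0.0)
# A more able person solves the same item with a higher probability.
p_abler = rasch_probability(1.0, 0.0)
```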

Raw score
Within classical test theory, a score obtained in a test is termed a raw score. Raw scores are usually points scores obtained by applying the test’s scoring rules.

Reasonableness
Quality criterion of a test. The criterion is met if the person to be tested is not put under undue mental or physical stress or required to invest an unreasonable amount of time.

Receptive
Concerned with reception; e.g. receptive language disorders involve impairment of the understanding of spoken language.

Reliability
Quality criterion of a test. A test’s reliability describes the degree of precision with which the test measures a particular characteristic – in other words, the extent to which the test result is free of measurement error.

Reliability coefficient
A measure of reliability (measurement precision) that can vary between 0 and 1. To ensure sufficient precision of measurement, the value of the reliability coefficient should not be less than 0.70.
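A standard result of classical test theory connected to this coefficient is the Spearman-Brown prophecy formula, which projects how reliability changes when a test is lengthened (the figures below are illustrative):

```python
def spearman_brown(reliability, k):
    """Projected reliability of a test lengthened by factor k
    (Spearman-Brown prophecy formula from classical test theory)."""
    return k * reliability / (1 + (k - 1) * reliability)

# Doubling a test with reliability .60 projects a reliability of .75 --
# enough to clear the 0.70 threshold mentioned above.
r_doubled = spearman_brown(0.60, 2)
```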

Requirements profile
The requirements profile describes what level on particular ability and personality dimensions is necessary and/or desirable for a particular position.

Resistance to falsification
Quality criterion of a test. Also known as resistance to faking. The criterion is met if respondents cannot influence or distort their test scores at will, or can do so only to an insignificant extent.

Response inhibition
The ability to consciously suppress the response to a stimulus for a certain period of time.

Scale
Term for the measurement system on which measurement is based. The scales of a psychological test usually consist of a grouping of test items.

Scale level
Relates to the properties of a scale. Usually five scale levels are distinguished (nominal, ordinal, interval, ratio and absolute scale), each of which conveys more information than the preceding one.

Scaling
Quality criterion of a test. The criterion of scaling is met if respondents’ answers are transformed by defined rules (scoring rules) in such a way that the differences between two levels of a trait are in exact proportion to the numbers in which the levels are expressed. If one person is twice as intelligent as another and the first has an IQ score of 100, the second should have an IQ score of 50.

Screening test
Tests that classify people roughly as abnormal vs. normal.

Selection rate
The proportion of people to be selected from the total number, e.g. the number of vacant jobs as a proportion of the total number of applicants.

Selective attention
The ability to perceive only stimuli that are important in the current situation and to ignore others.

Selectivity
Parameter of a test item that expresses how well the overall scale result can be predicted from the way the individual test item is worked. Selectivity is defined as the correlation between the item and the scale result.

Semi-projective test
A combination of personality questionnaires and projective tests. The respondent must imagine himself being involved in scenes that are presented. He must then assess what it felt like to be “in” these pictures.

Sensitivity
In test assessment, sensitivity describes the reliability with which a screening process actually identifies a certain conspicuous group of individuals as conspicuous (ratio of correctly positively classified individuals to all truly positive individuals). Closely connected to specificity.

Significance
A term used in statistics for the probability of an observed result under the assumption of the hypothesis that is to be rejected. This hypothesis (the null hypothesis) states that no correlation, difference or change exists. A result is termed significant if its probability under the null hypothesis falls below a conventional threshold, usually 5% or 1%.
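A minimal worked example (an invented coin-tossing experiment, computed exactly with the binomial distribution):

```python
from math import comb

# Null hypothesis: the coin is fair (p = 0.5). Observed: 9 heads in 10 tosses.
# One-sided p-value: probability of a result at least this extreme under H0.
n, observed = 10, 9
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n

# p is about 0.011, i.e. below the usual 5% threshold.
significant = p_value < 0.05
```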

Specificity
In test assessment, specificity describes the reliability with which a screening process actually identifies a certain inconspicuous group of individuals as inconspicuous (ratio of correctly negatively classified individuals to all truly negative individuals). For the quality of a screening test to be evaluated, both the sensitivity and the specificity must be stated.
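Both quantities follow directly from the four cell counts of a screening outcome (a sketch with invented counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of truly conspicuous (positive) cases the test flags;
    specificity: share of truly inconspicuous (negative) cases it clears.
    tp/fn/tn/fp = true positives, false negatives, true negatives, false positives."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative screening outcome: 90 of 100 conspicuous cases detected,
# 160 of 200 inconspicuous cases correctly cleared.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=160, fp=40)
```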

Standard deviation
Measure of spread; statistical parameter of a trait distribution that expresses the dispersion of the values around the (arithmetic) mean. The standard deviation is obtained by squaring the deviations of all scores from the mean, summing the squares, dividing the sum by the number of scores minus one, and taking the square root of the result.
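A minimal computational sketch (invented scores): square the deviations from the mean, sum them, divide by n − 1, take the square root. The result matches the standard library’s `statistics.stdev`:

```python
import math
import statistics

def sample_sd(scores):
    """Sample standard deviation: sqrt(sum of squared deviations / (n - 1))."""
    mean = sum(scores) / len(scores)
    squared_deviations = sum((x - mean) ** 2 for x in scores)
    return math.sqrt(squared_deviations / (len(scores) - 1))

scores = [96, 104, 110, 90, 100]
sd = sample_sd(scores)   # agrees with statistics.stdev(scores)
```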

Standard measurement error
Statistical parameter that describes the extent of measurement error. It can generally be assumed that any test is affected by measurement error. The standard measurement error can be used to determine the confidence interval.

Standardized score
A transformed raw score that enables better comparability of the results obtained. Standardization is based on the average value (mean) and the spread (standard deviation) of the scores of a reference sample. The most commonly encountered standardized scales are the C scale, the IQ scale, the standard scale, the T scale, the Z scale and the stanine scale.
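The transformation itself is a two-step affair: convert the raw score to a z score using the reference sample, then map it onto the target scale (all figures below are invented):

```python
def standardize(raw, norm_mean, norm_sd, scale_mean=100, scale_sd=15):
    """Transform a raw score into a standardized score using the mean and
    standard deviation of a reference sample (default target: IQ scale 100/15)."""
    z = (raw - norm_mean) / norm_sd
    return scale_mean + z * scale_sd

# Raw score 34 in a reference sample with mean 30 and SD 4 gives z = 1,
# i.e. IQ 115 -- or, on the T scale (mean 50, SD 10), a T score of 60.
iq_score = standardize(34, 30, 4)
t_score = standardize(34, 30, 4, scale_mean=50, scale_sd=10)
```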

Standardization
A measurement instrument may be more or less standardized. Standardization describes the extent to which administration, scoring and interpretation are clearly defined. The highest possible degree of standardization is desirable so that everyone who takes the test experiences the same conditions and maximum comparability is ensured.

Stochastic
Influenced by chance.

Stochastic independence
A basic concept in probability theory that assumes that two chance events do not influence each other. If two events are stochastically independent, it is irrelevant for the occurrence of one event whether the other event occurs or not.

Synaptic connections
Connections between nerve cells. They are essential for the transfer of information between nerve cells. Each nerve cell has between several hundred and several thousand synaptic connections to other nerve cells. Synaptic connections are not innate but are continuously being restructured and/or newly created through experience and learning processes (see neuronal plasticity).

Tailored testing
A form of adaptive testing. Respondents are presented with an item on a computer. If they answer the item correctly, the computer presents a more difficult one; if they do not, an easier item follows. This process is repeated until the respondent’s ability can be sufficiently well estimated.
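A schematic sketch of such a loop (deliberately simplified: a step-halving ability update rather than a real estimation algorithm; item difficulties and the simulated respondent’s ability are invented):

```python
import math
import random

random.seed(7)  # reproducible simulated answers

def rasch_p(theta, b):
    """Rasch solution probability for ability theta and item difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

item_bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # item difficulties
true_theta = 0.8                                      # simulated respondent
theta_hat, step = 0.0, 1.0                            # start at medium difficulty

for _ in range(6):
    # Present the unused item whose difficulty is closest to the estimate.
    b = min(item_bank, key=lambda d: abs(d - theta_hat))
    item_bank.remove(b)
    # Simulate the answer, then nudge the estimate up or down by a shrinking step.
    correct = random.random() < rasch_p(true_theta, b)
    theta_hat += step if correct else -step
    step /= 2
```

Real computerized adaptive tests replace the step-halving update with a statistical ability estimator, but the present-answer-update cycle is the same.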

Taylor-Russell tables
Tables that make it possible to estimate, from the base rate, the selection rate and the validity of a test, how many of the selected individuals are actually suitable for the position for which they have been selected.
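The idea behind the tables can be illustrated with a Monte Carlo sketch (all figures invented): simulate applicants whose test scores correlate with their true suitability, select the top scorers, and count how many of the selected are actually suitable.

```python
import random

random.seed(42)  # reproducible illustration

# Validity r = .50, base rate 50% (suitability > 0), selection rate 20%.
r, n = 0.50, 100_000
pairs = []
for _ in range(n):
    suitability = random.gauss(0, 1)
    # Test score correlated r with true suitability:
    test = r * suitability + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    pairs.append((test, suitability))

pairs.sort(reverse=True)            # best test scores first
selected = pairs[: n // 5]          # selection rate 20%
success = sum(s > 0 for _, s in selected) / len(selected)
# success exceeds the 50% base rate because the test has predictive validity
```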

Test battery
A combination of several individual tests (subtests) that together are designed to measure a complex characteristic (e.g. intelligence). Each individual test measures a sub-aspect of the characteristic; taken together the subtests aim to provide a valid test outcome.

Test theory
Test theory describes the mathematical assumptions on which psychological measurement is based. It thus has a crucial influence on test development. The two most commonly used test theories are classical test theory and item-response theory.

Tests
In the narrow sense, assessment instruments that can be used to measure performance. In the wider sense the term is also applied to all scientifically based psychological measurement instruments (such as personality questionnaires, etc.).

Three-stratum theory of intelligence
Developed by Carroll. Based on factor analysis of large data sets. The theory distinguishes three hierarchy levels that become increasingly abstract. Level I consists of 69 specific abilities; at Level II these are combined into eight basic functions, and Level III corresponds in essence to the g factor.

Triarchic theory of intelligence
Developed by Sternberg. He views intelligence as the interaction between various aspects that are divided into the areas of analytic, experiential and contextual/practical intelligence.

Usefulness
Quality criterion of a test. Also known as utility. A test fulfills the quality criterion of usefulness if the characteristic that it measures has practical relevance and if the psychological decisions made and/or measures taken on the basis of the test are likely to be more beneficial than harmful.

Utility analysis
Asks whether a test is suitable for a particular investigative purpose. Utility exists only if use of the test provides more information than could be obtained using other decision-making strategies.

Validity
Quality criterion of a test. Indicates the extent to which a measurement instrument actually measures what it purports to measure.

Validity coefficient
Expresses a test’s validity as a numerical value. Values can fluctuate between –1 and +1. The higher the absolute number, the higher the validity. In no circumstances should the validity of an individual test be less than 0.30.

Variance
Measure of spread; statistical parameter of a trait distribution that expresses the dispersion of the values around the (arithmetic) mean. The variance is obtained by squaring the deviations of all scores from the mean, summing the squares and dividing the sum by the number of scores minus one (it corresponds to the square of the standard deviation).

Vigilance
Sub-function of attention. Describes the ability to maintain attention over a relatively long period of time in (monotonous) situations.