Mental chronometry is the study of reaction time (RT; also referred to as "response time") in perceptual-motor tasks to infer the content, duration, and temporal sequencing of mental operations. Mental chronometry is one of the core methodological paradigms of human experimental and cognitive psychology, but is also commonly analyzed in psychophysiology, cognitive neuroscience, and behavioral neuroscience to help elucidate the biological mechanisms underlying perception, attention, and decision-making across species.
Mental chronometry uses measurements of elapsed time between sensory stimulus onsets and subsequent behavioral responses. It is considered an index of processing speed and efficiency, indicating how fast an individual can execute task-relevant mental operations. Behavioral responses are typically button presses, but eye movements, vocal responses, and other observable behaviors can be used. RT is constrained by the speed of signal transmission in white matter as well as the processing efficiency of neocortical gray matter. Conclusions about information processing drawn from RT are often made with consideration of experimental task design, limitations in measurement technology, and mathematical modeling.
Reaction time ("RT") is the time that elapses between a person being presented with a stimulus and the person initiating a motor response to the stimulus. It is usually on the order of 200 ms. The processes that occur during this brief time enable the brain to perceive the surrounding environment, identify an object of interest, decide on an action in response to the object, and issue a motor command to execute the movement. These processes span the domains of perception and movement, and involve perceptual decision making and motor planning.
Several paradigms are commonly used for measuring RT, including simple RT tasks (a single response to a single stimulus), recognition or go/no-go tasks (responding to one class of stimuli while withholding responses to another), and choice RT tasks (a distinct response for each class of stimulus).
Due to momentary attentional lapses and other sources of trial-to-trial noise, there is a considerable amount of variability in an individual's response time, which tends to follow a positively skewed rather than normal (Gaussian) distribution. To control for this, researchers typically require a subject to perform multiple trials, from which a measure of the 'typical' or baseline response time can be calculated. Taking the mean of the raw response times is rarely an effective way of characterizing the typical response time, and alternative approaches (such as modeling the entire response time distribution) are often more appropriate.
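The effect of the skewed distribution on the mean can be illustrated with a short simulation. The sketch below draws response times from an ex-Gaussian distribution, a common (though here purely illustrative) model that adds an exponential tail of occasional slow responses to a Gaussian baseline; all parameter values are assumptions for demonstration, not estimates from any study.

```python
import random
import statistics

random.seed(42)

# Simulate 500 trials: a Gaussian "baseline" response time plus an
# exponential tail of occasional slow responses (e.g., attentional lapses).
# Parameter values (300 ms mean, 30 ms SD, 100 ms tail) are illustrative.
rts = [random.gauss(300, 30) + random.expovariate(1 / 100) for _ in range(500)]

mean_rt = statistics.mean(rts)
median_rt = statistics.median(rts)

# The long right tail pulls the mean above the median, which is one reason
# the raw mean is a poor summary of a "typical" response time.
print(f"mean:   {mean_rt:.1f} ms")
print(f"median: {median_rt:.1f} ms")
```

Because the simulated distribution is right-skewed, the mean lands well above the median, so summaries based on the median or on fitting the full distribution characterize typical performance better.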
Sir Francis Galton is typically credited as the founder of differential psychology, which seeks to determine and explain the mental differences between individuals. He was the first to use rigorous RT tests with the express intention of determining averages and ranges of individual differences in mental and behavioral traits in humans. Galton hypothesized that differences in intelligence would be reflected in variation of sensory discrimination and speed of response to stimuli, and he built various machines to test different measures of this, including RT to visual and auditory stimuli. His tests involved a selection of over 10,000 men, women and children from the London public.
Donders also devised a subtraction method to analyze the time it took for mental operations to take place. By subtracting simple RT from choice RT, for example, it is possible to estimate how much time the added stages of stimulus discrimination and response selection require.
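The subtraction logic can be sketched in a few lines. The RT values below are hypothetical placeholders chosen for illustration, not Donders' own measurements.

```python
# A minimal sketch of Donders' subtraction method with illustrative numbers.
simple_rt = 220   # ms: detect a stimulus and make the one prepared response
choice_rt = 285   # ms: also discriminate stimuli and select among responses

# Under the pure-insertion assumption, the extra stages simply add their
# durations, so subtracting isolates their combined time.
extra_processing = choice_rt - simple_rt
print(f"time attributed to discrimination + response selection: {extra_processing} ms")
```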
This method provides a way to investigate the cognitive processes underlying simple perceptual-motor tasks, and formed the basis of subsequent developments.
Although Donders' work paved the way for future research in mental chronometry, it was not without drawbacks. His insertion method, often referred to as "pure insertion", rested on the assumption that inserting a particular complicating requirement into an RT paradigm would not affect the other components of the test. This assumption—that the incremental effect on RT was strictly additive—did not hold up to later experimental tests, which showed that inserted stages could interact with other portions of the RT paradigm. Despite this, Donders' theories are still of interest and his ideas are still used in certain areas of psychology, which now have the statistical tools to apply them more accurately.
W. E. Hick (1952) devised a CRT experiment which presented a series of nine tests in which there are n equally probable choices. The experiment measured the subject's RT based on the number of possible choices during any given trial. Hick showed that the individual's RT increased by a constant amount as a function of available choices, or the "uncertainty" involved in which reaction stimulus would appear next. Uncertainty is measured in "bits", defined in information theory as the quantity of information that reduces uncertainty by half. In Hick's experiment, RT is found to be a function of the binary logarithm of the number of available choices (n). This phenomenon is called "Hick's law" and is said to be a measure of the "rate of gain of information". The law is usually expressed by the formula RT = a + b log2(n), where a and b are constants representing the intercept and slope of the function, and n is the number of alternatives. The Jensen Box is a more recent application of Hick's law. Hick's law has interesting modern applications in marketing, where restaurant menus and web interfaces (among other things) take advantage of its principles in striving to achieve speed and ease of use for the consumer.
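The logarithmic form of the law can be made concrete with a short sketch. The intercept and slope values below are illustrative assumptions, not Hick's fitted estimates.

```python
import math

# Hick's law: RT = a + b * log2(n), where n is the number of equally
# probable alternatives. Intercept a and slope b are illustrative values.
a, b = 200.0, 150.0  # ms intercept, ms per bit of uncertainty

def predicted_rt(n_choices: int) -> float:
    """Predicted mean RT (ms) for a choice among n equally likely options."""
    return a + b * math.log2(n_choices)

for n in (1, 2, 4, 8):
    print(f"n={n}: {predicted_rt(n):.0f} ms")
# Each doubling of the alternatives adds exactly one bit, hence a constant
# b-millisecond increment to the predicted RT.
```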
Saul Sternberg (1966) devised an experiment wherein subjects were told to remember a set of unique digits in short-term memory. Subjects were then given a probe stimulus in the form of a digit from 0–9. The subject then answered as quickly as possible whether the probe was in the previous set of digits or not. The size of the initial set of digits determined the RT of the subject. The idea is that as the size of the set of digits increases the number of processes that need to be completed before a decision can be made increases as well. So if the subject has 4 items in short-term memory (STM), then after encoding the information from the probe stimulus the subject needs to compare the probe to each of the 4 items in memory and then make a decision. If there were only 2 items in the initial set of digits, then only 2 processes would be needed. The data from this study found that for each additional item added to the set of digits, about 38 milliseconds were added to the response time of the subject. This supported the idea that a subject did a serial exhaustive search through memory rather than a serial self-terminating search. Sternberg (1969) developed a much-improved method for dividing RT into successive or serial stages, called the additive factor method.
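The serial exhaustive search account predicts a straight-line relation between set size and RT. The sketch below uses the roughly 38 ms/item slope reported in the study; the intercept is an illustrative assumption standing in for encoding, decision, and response time.

```python
# Sternberg's serial exhaustive search: RT grows linearly with memory-set
# size. Slope is the ~38 ms/item from Sternberg (1966); the intercept is a
# hypothetical value covering encoding, decision, and motor response.
intercept_ms = 400.0
slope_ms = 38.0

def predicted_rt(set_size: int) -> float:
    # Exhaustive search: every stored item is compared to the probe,
    # regardless of whether a match has already been found.
    return intercept_ms + slope_ms * set_size

for k in (1, 2, 4, 6):
    print(f"set size {k}: {predicted_rt(k):.0f} ms")
```

A self-terminating search would instead predict a shallower slope on match trials (about half, since the matching item is found midway through the list on average), which is how the two hypotheses were distinguished.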
Shepard and Metzler (1971) presented pairs of three-dimensional shapes that were identical or mirror-image versions of one another. RT to determine whether they were identical was a linear function of the angular difference between their orientations, whether in the picture plane or in depth. They concluded that the observers performed a constant-rate mental rotation to align the two objects so they could be compared. Cooper and Shepard (1973) presented a letter or digit that was either normal or mirror-reversed, shown either upright or rotated in 60-degree steps. The subject had to identify whether the stimulus was normal or mirror-reversed. Response time increased roughly linearly as the orientation of the letter deviated from upright (0 degrees) to inverted (180 degrees), and then decreased again as the orientation approached 360 degrees. The authors concluded that the subjects mentally rotate the image the shortest distance to upright, and then judge whether it is normal or mirror-reversed.
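The shortest-distance account predicts RT that rises to a peak at 180 degrees and falls back toward 360. A minimal sketch, assuming a constant rotation rate and illustrative intercept and per-degree values (not the published estimates):

```python
# Cooper and Shepard (1973) prediction: subjects rotate the stimulus the
# *shortest* way back to upright, so the effective angle is
# min(theta, 360 - theta). Intercept and rate below are illustrative.
intercept_ms = 500.0
ms_per_degree = 3.0  # hypothetical constant mental-rotation cost

def predicted_rt(orientation_deg: float) -> float:
    theta = orientation_deg % 360
    effective = min(theta, 360 - theta)
    return intercept_ms + ms_per_degree * effective

for angle in (0, 60, 120, 180, 240, 300, 360):
    print(f"{angle:3d} deg: {predicted_rt(angle):.0f} ms")
# Predicted RT peaks at 180 degrees and is symmetric about it, matching
# the rise-then-fall pattern described in the text.
```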
Mental chronometry has been used to identify some of the processes associated with understanding a sentence. This type of research typically revolves around the differences in processing four types of sentences: true affirmative (TA), false affirmative (FA), false negative (FN), and true negative (TN). A picture can be presented with an associated sentence that falls into one of these four categories. The subject then decides whether the sentence matches the picture. The type of sentence determines how many processes need to be performed before a decision can be made. According to the data from Clark and Chase (1972) and Just and Carpenter (1971), TA sentences are the simplest and take the least time, followed by FA, FN, and TN sentences.
Hierarchical network models of memory were largely discarded due to some findings related to mental chronometry. The TLC model proposed by Collins and Quillian (1969) had a hierarchical structure indicating that recall speed in memory should be based on the number of levels in memory traversed in order to find the necessary information. But the experimental results did not agree. For example, a subject will reliably answer that a robin is a bird more quickly than he will answer that an ostrich is a bird despite these questions accessing the same two levels in memory. This led to the development of spreading activation models of memory (e.g., Collins & Loftus, 1975), wherein links in memory are not organized hierarchically but by importance instead.
Michael Posner (1978) used a series of letter-matching studies to measure the mental processing time of several tasks associated with recognition of a pair of letters. The simplest task was the physical match task, in which subjects were shown a pair of letters and had to identify whether the two letters were physically identical or not. The next task was the name match task, where subjects had to identify whether two letters had the same name. The task involving the most cognitive processes was the rule match task, in which subjects had to determine whether both of the presented letters were vowels or both were not.
The physical match task was the simplest: subjects had to encode the letters, compare them to each other, and make a decision. In the name match task, subjects were forced to add a cognitive step before making a decision: they had to search memory for the names of the letters and then compare those before deciding. In the rule match task they had to also categorize the letters as either vowels or consonants before making their choice. The time taken to perform the rule match task was longer than for the name match task, which in turn was longer than for the physical match task. Using the subtraction method, experimenters were able to determine the approximate amount of time it took subjects to perform each of the cognitive processes associated with these tasks.
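Because the three tasks form a chain where each adds one stage, successive subtractions isolate each stage's duration. The RT values below are hypothetical placeholders for illustration, not Posner's measurements.

```python
# Subtraction logic applied to Posner's letter-matching task chain.
# All RT values are hypothetical, chosen only to illustrate the arithmetic.
physical_match_rt = 450  # ms: encode, compare visual forms, respond
name_match_rt = 520      # ms: also retrieve letter names from memory
rule_match_rt = 580      # ms: also categorize as vowel vs. consonant

# Each subtraction strips away the shared stages, leaving the added one.
name_retrieval_time = name_match_rt - physical_match_rt
categorization_time = rule_match_rt - name_match_rt
print(f"name retrieval       ~ {name_retrieval_time} ms")
print(f"rule categorization  ~ {categorization_time} ms")
```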
There is extensive recent research using mental chronometry to study cognitive development. Specifically, various measures of speed of processing have been used to examine changes in the speed of information processing as a function of age. Kail (1991) showed that speed of processing increases exponentially from early childhood to early adulthood. Studies of RTs in young children of various ages are consistent with common observations of children engaged in activities not typically associated with chronometry, including speed of counting, reaching for things, repeating words, and other vocal and motor skills that develop quickly in growing children. Once early maturity is reached, there is a long period of stability until speed of processing begins to decline from middle age onward (Salthouse, 2000). In fact, cognitive slowing is considered a good index of broader changes in the functioning of the brain and intelligence. Demetriou and colleagues, using various methods of measuring speed of processing, showed that it is closely associated with changes in working memory and thought (Demetriou, Mouyi, & Spanoudis, 2009). These relations are extensively discussed in the neo-Piagetian theories of cognitive development.
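Kail's exponential account can be sketched as processing time decaying toward an adult asymptote with age. The functional form below follows that idea; the parameter values are purely illustrative assumptions, not Kail's fitted estimates.

```python
import math

# Exponential developmental change in processing time: children's RT is
# modeled as an exponentially decaying multiple of adult RT. The adult
# asymptote, amplitude b, and decay rate c are hypothetical values.
adult_rt = 300.0  # ms on some hypothetical task
b, c = 2.0, 0.2   # amplitude and decay rate (per year of age)

def predicted_rt(age_years: float) -> float:
    # RT approaches the adult asymptote as age increases.
    return adult_rt * (1 + b * math.exp(-c * age_years))

for age in (5, 8, 12, 16, 20):
    print(f"age {age:2d}: {predicted_rt(age):.0f} ms")
```

The steep early decline and later flattening of this curve mirror the rapid childhood gains and adult stability described above.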
During senescence, RT deteriorates (as does fluid intelligence), and this deterioration is systematically associated with changes in many other cognitive processes, such as executive functions, working memory, and inferential processes. In the theory of Andreas Demetriou, one of the neo-Piagetian theories of cognitive development, change in speed of processing with age, as indicated by decreasing RT, is one of the pivotal factors of cognitive development.
Research into this link between mental speed and general intelligence (perhaps first proposed by Charles Spearman) was re-popularized by Arthur Jensen, and the "choice reaction apparatus" associated with his name became a common standard tool in RT–IQ research.
The strength of the RT–IQ association is a subject of research. Several studies have reported an association between simple RT and intelligence of around r = −.31, with a tendency for larger associations between choice RT and intelligence (around r = −.49). Much of the theoretical interest in RT was driven by Hick's law, relating the slope of RT increases to the complexity of the decision required (measured in units of uncertainty popularized by Claude Shannon as the basis of information theory). This promised to link intelligence directly to the resolution of information even in very basic information tasks. There is some support for a link between the slope of the RT curve and intelligence, as long as reaction time is tightly controlled.
Standard deviations of RTs have been found to be more strongly correlated with measures of general intelligence (g) than mean RTs. The RTs of low-g individuals are more spread-out than those of high-g individuals.
The cause of the relationship is unclear. It may reflect more efficient information processing, better attentional control, or the integrity of neuronal processes.
Performance on simple and choice reaction time tasks is associated with a variety of health-related outcomes, including general, objective health composites as well as specific measures like cardiorespiratory integrity. The association between IQ and earlier all-cause mortality has been found to be chiefly mediated by a measure of reaction time. These studies generally find that faster and more accurate responses to reaction time tasks are associated with better health outcomes and longer lifespan.
The drift-diffusion model (DDM) is a well-defined mathematical formulation to explain observed variance in response times and accuracy across trials in a (typically two-choice) reaction time task. This model and its variants account for these distributional features by partitioning a reaction time trial into a non-decision residual stage and a stochastic "diffusion" stage, where the actual response decision is generated. The distribution of reaction times across trials is determined by the rate at which evidence accumulates in neurons with an underlying "random walk" component. The drift rate (v) is the average rate at which this evidence accumulates in the presence of this random noise. The decision threshold (a) represents the width of the decision boundary, or the amount of evidence needed before a response is made. The trial terminates when the accumulating evidence reaches either the correct or the incorrect boundary.
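The accumulation process described above can be illustrated with a minimal simulation, assuming illustrative parameter values in arbitrary units (this is a sketch of the basic mechanism, not a faithful implementation of any published DDM variant).

```python
import random

random.seed(0)

# Minimal drift-diffusion sketch: evidence starts midway between two
# boundaries and accumulates with mean drift v plus Gaussian noise until it
# hits the upper (correct) or lower (error) boundary. Values are illustrative.
v = 0.3        # drift rate: mean evidence gained per time step
a = 20.0       # boundary separation: evidence needed for a decision
t_nd = 150     # non-decision time (encoding + motor output), in time steps
noise_sd = 1.0 # within-trial noise (the "random walk" component)

def simulate_trial():
    """Return (rt, correct) for one simulated two-choice trial."""
    evidence = a / 2  # unbiased starting point
    t = 0
    while 0 < evidence < a:
        evidence += v + random.gauss(0, noise_sd)
        t += 1
    return t_nd + t, evidence >= a

trials = [simulate_trial() for _ in range(2000)]
rts = [rt for rt, _ in trials]
accuracy = sum(correct for _, correct in trials) / len(trials)
print(f"mean RT: {sum(rts) / len(rts):.0f}  accuracy: {accuracy:.2f}")
```

Raising the threshold a in this sketch slows responses but makes them more accurate, while raising the drift rate v makes them both faster and more accurate, which is how the model separates cautiousness from processing efficiency.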
With the advent of the functional neuroimaging techniques of PET and fMRI, psychologists started to modify their mental chronometry paradigms for functional imaging. Although psycho(physio)logists have been using electroencephalographic measurements for decades, the images obtained with PET have attracted great interest from other branches of neuroscience, popularizing mental chronometry among a wider range of scientists in recent years. Mental chronometry is applied by having subjects perform RT-based tasks while neuroimaging reveals which parts of the brain are involved in the cognitive process.
Functional magnetic resonance imaging (fMRI) and measurements of electrical event-related potentials were used in a study in which subjects were asked to identify whether a presented digit was above or below five. According to Sternberg's additive theory, performing this task involves the stages of encoding, comparing against the stored representation of five, selecting a response, and then checking for errors in the response. The fMRI image shows the specific locations in the brain where these stages occur during this simple mental chronometry task.
In the 1980s, neuroimaging experiments allowed researchers to detect activity in localized brain areas by injecting radionuclides and using positron emission tomography (PET) to detect them. fMRI has also been used to identify the precise brain areas that are active during mental chronometry tasks. Many studies have shown that a small number of widely distributed brain areas are involved in performing these cognitive tasks.
Current medical reviews indicate that signaling through the dopamine pathways originating in the ventral tegmental area is strongly positively correlated with improved (shortened) RT; e.g., dopaminergic pharmaceuticals like amphetamine have been shown to expedite responses during interval timing, while dopamine antagonists (specifically, for D2-type receptors) produce the opposite effect. Similarly, age-related loss of dopamine from the striatum, as measured by SPECT imaging of the dopamine transporter, strongly correlates with slowed RT.
The neurotransmitter dopamine is released from projections originating in the midbrain. Manipulations of dopaminergic signaling profoundly influence interval timing, leading to the hypothesis that dopamine influences internal pacemaker, or "clock," activity (Maricq and Church, 1983; Buhusi and Meck, 2005, 2009; Lake and Meck, 2013). For instance, amphetamine, which increases concentrations of dopamine at the synaptic cleft (Maricq and Church, 1983; Zetterström et al., 1983) advances the start of responding during interval timing (Taylor et al., 2007), whereas antagonists of D2 type dopamine receptors typically slow timing (Drew et al., 2003; Lake and Meck, 2013). ... Depletion of dopamine in healthy volunteers impairs timing (Coull et al., 2012), while amphetamine releases synaptic dopamine and speeds up timing (Taylor et al., 2007).