Single-Case Methodology in Neuropsychology

This research is conducted by Prof John R Crawford (School of Psychology, University of Aberdeen) in collaboration with:

For computer programs that implement the statistical methods described on this page, click here

Click here for a PowerPoint file of Professor Crawford's Investigator Workshop, delivered at the 34th International Neuropsychological Society Annual Meeting in Boston in February 2006

"Dissociation is the key word in neuropsychology"

(Rossetti & Revonsuo, 2000, p. 1)

(A) Background: The Single-Case Approach

The single-case approach in neuropsychology has made a significant contribution to our understanding of the architecture of human cognition.  However, as Caramazza and McCloskey (1988) note, if advances in theory are to be sustainable they “… must be based on unimpeachable methodological foundations” (p. 619).  The statistical treatment of single-case study data is one area of methodology that has been relatively neglected. In general terms, the motivation behind the work described below is to provide methods for single-case research that more closely match the standards demanded in group studies. 

There are three basic approaches to inference in single-case studies:

  1. The patient is administered fully standardized neuropsychological tests and performance is compared to large sample normative data
  2. At the other extreme, the patient’s performance is not referenced to normative data or control performance; i.e., analysis is limited to intra-individual comparisons
  3. The patient is compared to a (modestly sized) matched control sample

(1) The fully standardized approach:

Very useful and elegant methods have been devised for drawing inferences using approach (1); e.g. see Capitani (1997), De Renzi, Faglioni, Grossi, & Nichelli (1997), and Willmes (1985).  However, because new constructs are constantly emerging in neuropsychology and the collection of large-scale normative data is a time-consuming and arduous process (Crawford, 2004), the prototypical single-case study remains one in which a patient is compared to a modestly sized (matched) control sample.

(2) The intra-individual approach:

There are numerous single-case studies in the literature (some of which have been very influential) in which a patient’s performance is not referenced to a control sample; i.e. approach (2) is employed.  Typically in these studies within-individual inferential methods (most commonly chi-square tests) are used to compare a patient’s performance on Task X with their performance on Task Y.  For example, chi-square tests have been used in attempts to demonstrate a dissociation between naming of living things and non-living things.  It is clear that the chi-square test’s assumption of independence is violated in these circumstances.

Moreover, in a recent collaboration on category specificity in Alzheimer’s disease with Keith Laws and colleagues (Laws, Gale, Leeson & Crawford, 2005) (reprint as pdf), we have demonstrated how these intra-individual analyses can be very misleading.  For example, patients can show a significant difference between living and non-living naming on chi-square tests (i.e. they are classified as exhibiting a dissociation) but such raw differences are not unusual when standardized against control performance (i.e. the within-individual method yields a false positive). Conversely, patients whose chi-square results are not significant can show strong evidence of a dissociation between living and non-living naming when referenced to control performance.  Laws et al. even found a case in which a putative dissociation in one direction was reversed when performance was referenced to control performance.  It can be concluded that single-case studies should never rely on within-individual analysis alone; a patient’s performance should always be referenced to control performance.

(3) The matched control sample approach (i.e., the case-controls design):

As noted above, in the third approach to inference, a patient’s performance is referenced to a matched control sample.  This approach is very widely employed in single-case research and is the focus of the methods outlined below.  By far the most common approach to the statistical analysis of such data uses z.  That is, the patient’s performance is converted to a z score based on the mean and SD of the control sample and this z is referred to a table of areas under the normal curve.  With this approach the control sample is treated as though it were a population (i.e. the sample statistics are treated as parameters).  However, as the size of the control samples in most single-case studies is modest (i.e. N is often < 10 and can even be < 5), this is not appropriate. The upshot is that these methods are associated with an inflated Type I error rate and overestimate the abnormality of the patient’s score.

(B) Our Basic Methods

Testing for a deficit:

Methods have been developed for comparing an individual patient's score with a control sample: these address the question of whether or not a patient exhibits a statistically significant deficit (Crawford & Howell, 1998) (Reprint as pdf).  In contrast to the use of z, Crawford & Howell’s (1998) method treats the control sample statistics as statistics rather than as parameters.  Recent work using Monte Carlo simulations (Crawford & Garthwaite, 2005a) confirms that this test controls the Type I error rate regardless of the size of the control sample.
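The test itself is compact. Below is a minimal stdlib-Python sketch: the formula follows Crawford & Howell (1998), while the numerical t-distribution CDF (a plain Simpson integration) and the function names are illustrative additions, included only to keep the sketch dependency-free.

```python
import math

def t_cdf(t, df, lo=-60.0, steps=40001):
    """P(T <= t) for Student's t on `df` degrees of freedom, by Simpson
    integration of the density (accurate enough for illustration)."""
    if t <= lo:
        return 0.0
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: c * (1.0 + x * x / df) ** (-(df + 1) / 2)
    h = (t - lo) / (steps - 1)
    xs = [lo + i * h for i in range(steps)]
    s = pdf(xs[0]) + pdf(xs[-1])
    s += 4 * sum(pdf(x) for x in xs[1:-1:2])
    s += 2 * sum(pdf(x) for x in xs[2:-1:2])
    return s * h / 3

def crawford_howell(case, controls):
    """Crawford & Howell (1998): treat the control mean and SD as sample
    statistics, not parameters, and refer the result to t on n - 1 df."""
    n = len(controls)
    mean = sum(controls) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in controls) / (n - 1))
    t = (case - mean) / (sd * math.sqrt((n + 1) / n))
    p = t_cdf(t, n - 1)       # one-tailed p; also the point estimate of
    return t, p, 100 * p      # abnormality, expressed here as a percentage
```

For illustration, a case scoring 70 against controls scoring 90, 95, 100, 105 and 110 gives t of about -3.46 on 4 df and a one-tailed p of about .013, i.e. an estimate that roughly 1.3% of the control population would score lower. Treating the same data with z (referring approximately -3.8 to the normal curve) would give a far smaller p, illustrating the inflation discussed above.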

Furthermore, it is very common for the control data in single-case studies to exhibit severe departures from normality. Simulation studies have shown that Crawford & Howell’s method is surprisingly robust even in the face of very severe skew and/or leptokurtosis (Crawford et al., 2006) (reprint). This latter study also evaluates potential non-parametric alternatives to our methods.

Crawford and Howell's (1998) method performs two roles simultaneously: (1) it tests whether a patient's score is significantly below controls, and (2) it provides a point estimate of the abnormality of the score; i.e. it estimates the percentage of the control population that would obtain a score lower than the patient (a formal proof that the p value for the significance test is also a point estimate of abnormality can be found in Crawford & Garthwaite, 2006b) (reprint).

A method for setting confidence limits on the abnormality of a patient’s score has also been developed (Crawford & Garthwaite, 2002) (reprint as pdf); this method makes use of non-central t-distributions.

The program singlims.exe implements these methods; that is, it tests whether a patient's score is significantly below controls, provides a point estimate of the abnormality of the patient's score (i.e. it estimates the percentage of the control population exhibiting a lower score), and provides accompanying confidence limits on this quantity. An upgraded version of singlims is now available, Singlims_ES.exe, which supplements the foregoing results with point and interval estimates of effect sizes, as described in Crawford, Garthwaite & Porter (2010) (reprint).
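For readers who want to see the mechanics of the interval estimate, it can be sketched with SciPy's non-central t distribution. The function below is illustrative only: its name, the root-search bracket of plus or minus 50, and the demo values in the usage note are assumptions, not part of the published programs.

```python
import math
from scipy import stats
from scipy.optimize import brentq

def abnormality_ci(case, mean, sd, n, conf=0.95):
    """Interval estimate of the % of the control population scoring below
    the case (cf. Crawford & Garthwaite, 2002, via non-central t)."""
    tobs = math.sqrt(n) * (case - mean) / sd   # sqrt(n) * sample z of the case
    df, a = n - 1, (1 - conf) / 2
    # nct.cdf(tobs, df, ncp) decreases as ncp grows, so each root is unique
    f = lambda ncp, q: stats.nct.cdf(tobs, df, ncp) - q
    lo = brentq(f, -50, 50, args=(1 - a,))     # lower non-centrality bound
    hi = brentq(f, -50, 50, args=(a,))         # upper non-centrality bound
    to_pct = lambda ncp: 100 * stats.norm.cdf(ncp / math.sqrt(n))
    return to_pct(lo), to_pct(hi)
```

For a case scoring 70 against controls with mean 100, SD 10 and n = 5, the point estimate of abnormality is about 2.6%, and the interval around it is very wide, reflecting the uncertainty attached to statistics from so small a control sample.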

One issue that, until relatively recently, has not been studied formally is the power to detect a deficit in single-case studies (that is, although there is now a body of work examining Type I error rates, little attention has been given to Type II errors). Crawford and Garthwaite (2006b) (reprint) have examined the power of Crawford & Howell's method and an alternative method proposed by Mycroft et al. (2002). Power to detect a (large) deficit was typically moderate for the former method but was very low for the latter method. The paper also examines the effects of measurement error on power to detect deficits, including scenarios in which a patient's scores are more unreliable than those of controls. Finally (Study 5) it is demonstrated that low power to detect a deficit (i.e., a high Type II error rate), such as typifies Mycroft et al.'s method, can have the paradoxical effect of inducing very high Type I error rates when attempting to detect a classical dissociation.

Testing for differences among a patient's scores:

Methods have also been developed for comparing the difference between a patient's performance on two (Crawford, Howell & Garthwaite, 1998) (Reprint as pdf) or more tasks (Crawford & Garthwaite, 2002) (Reprint as pdf) with the distribution of differences in controls.  These methods address the further question of whether there is evidence of a dissociation or a differential deficit in the patient.  The methods are all modified t-tests (i.e. unlike the use of z they treat the control sample as a sample rather than as a population).  In addition to providing a significance test they also simultaneously provide a point estimate of the rarity / abnormality of a patient's score or score difference. 

A superior test on the difference between a patient’s performance on two tasks has been developed (Crawford & Garthwaite, 2005; Garthwaite & Crawford, 2004).  This test, the Revised Standardized Difference Test (RSDT), controls the Type I error rate regardless of (a) the control sample N and (b) the magnitude of the correlation between Tasks X and Y. It should therefore be used in preference to Crawford, Howell & Garthwaite’s (1998) method. This test has been implemented in a computer program (RSDT.exe) and is also incorporated into a program that tests for dissociations (dissocs.exe). Both of these programs have now been upgraded to provide point and interval estimates of effect sizes; see the upgraded versions (RSDT_ES.exe and dissocs_ES.exe) and click here for a preprint of the paper on effect sizes in the case-controls design.

It should be noted that, in testing whether the difference between a patient's performance on two tasks differs significantly from the distribution of differences in controls, it is normally necessary to standardize the scores. That is, the two tasks involved typically have different means and standard deviations. For example, a researcher may want to compare a patient's performance on a Theory of Mind task (with mean of 25 and SD of 4.5) to performance on a measure of executive function (mean = 52 and SD = 12).
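In Python, the standardization for this worked example is straightforward (the case's scores of 16 and 40 are hypothetical, chosen only to illustrate the computation):

```python
# Control statistics for the two tasks in the worked example.
tom_mean, tom_sd = 25, 4.5     # Theory of Mind task
exe_mean, exe_sd = 52, 12      # executive function measure

case_tom, case_exe = 16, 40    # hypothetical case scores

# Convert each score to a z score based on its own control sample,
# so the two tasks are expressed in comparable units.
z_tom = (case_tom - tom_mean) / tom_sd   # -2.0
z_exe = (case_exe - exe_mean) / exe_sd   # -1.0
diff = z_tom - z_exe                     # -1.0: the standardized difference
```

It is this standardized difference, not the raw difference, that the tests described here refer to the distribution of differences in controls.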

Finding a suitable method for this type of analysis has proved to be much more difficult than it appears. Although we believe that further advances can be made, the results for the Revised Standardized Difference Test (RSDT) are very positive. The graphic below shows the results from a Monte Carlo simulation in which we examined control of the Type I error rate for three methods: the widely used test of Payne and Jones (1957), Crawford, Howell and Garthwaite's (1998) method, and the RSDT (Crawford and Garthwaite, 2005). It can be seen that Type I errors (which should be at the specified rate of 5%) are very high for the Payne and Jones test, particularly for control sample sizes that are typical in single-case studies (in this context a Type I error occurs if a member of the control population is classified as differing from controls). Furthermore, although Crawford et al.'s (1998) test is a marked improvement over this earlier test, the Type I error rate is still inflated. In contrast, the RSDT achieves control of the Type I error rate.

(C) Extensions to the Basic Methods

Comparing a patient to controls when data are in the form of slopes of regression lines or correlation coefficients

The methods have been further extended to allow inferences to be drawn when the patient's performance is quantified, not by a conventional score, but as the slope of a regression line (Crawford & Garthwaite, 2004; Reprint as a pdf) (e.g. in distance or time estimation tasks where actual distance or elapsed time is regressed on estimated distance or time) or as a correlation coefficient (Crawford, Garthwaite, Howell & Venneri, 2003; Reprint as pdf) (e.g. in assessing temporal order memory when the rank order correlation between actual and reported order of presentation is used to quantify performance).  Note that in both cases performance is quantified by an intra-individual measure of association but, crucially and unlike the intra-individual methods discussed above, a patient’s performance is still compared against the performance of controls when applying the inferential statistics. These methods have been implemented in computer programs: singlslope.exe deals with data in the form of slopes and iima.exe deals with data in the form of correlation coefficients.
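For concreteness, the intra-individual summary statistics themselves are simple to compute; a stdlib-Python sketch is given below. The inferential step (comparing the case's slope or correlation against those of the control sample) is the part supplied by the cited methods and programs and is not reproduced here; the function names and data are illustrative.

```python
def slope(x, y):
    """Least-squares slope of y on x: the intra-individual summary used,
    e.g., when estimated time is regressed on actual elapsed time."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def spearman(actual_order, reported_order):
    """Spearman rank correlation between actual and reported presentation
    order (assuming no ties), as a temporal-order memory score."""
    n = len(actual_order)
    d2 = sum((a - b) ** 2 for a, b in zip(actual_order, reported_order))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

For example, a participant who reports the order of five items as 1, 2, 3, 5, 4 (actual order 1 to 5) obtains a Spearman rho of 0.9; that single number is then the "score" referred to the distribution of control rhos.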

Comparing patients' obtained scores with their scores predicted by a regression equation

We (Crawford and Garthwaite, 2006a) (reprint as pdf) have also developed methods for drawing inferences concerning the discrepancy between an individual's obtained score and the score predicted by a regression equation. Regression equations are widely used in neuropsychology to draw inferences concerning the cognitive status of individual patients. For example, an equation predicting retest scores from scores at original testing can be used to test whether there has been change in a patient's level of functioning. Equations can also be used as an alternative to conventional normative data by providing "continuous norms", as when a patient's score on a neuropsychological test is compared to the score predicted by their age (see paper for further examples).

These methods test whether there is a significant discrepancy between an individual's obtained and predicted score (one- and two-tailed p values are provided). They also provide a point estimate of the abnormality of the discrepancy (i.e., a point estimate of the percentage of the population exhibiting a larger discrepancy) and accompanying confidence limits on this quantity. The methods have been implemented in computer programs (regdiscl.exe and regdisclv.exe).

A commonly used alternative to these methods of analyzing the discrepancy between obtained and predicted scores involves dividing the discrepancy by the equation's standard error of estimate and treating the result as a standard normal deviate (the p value for this z is then obtained from a table of areas under the normal curve). Monte Carlo simulations (Crawford & Garthwaite, 2006a) show that, unlike the method implemented in these programs, the latter method does not control Type I errors and overestimates the abnormality of an individual's discrepancy. In addition, because it does not acknowledge the uncertainty associated with sample regression statistics, it cannot provide confidence limits on the abnormality of the discrepancy.
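The contrast between the two approaches can be sketched as follows. The standard error for a new observation is the standard regression result on which the appropriate approach rests; the function name, argument names and demo values are illustrative, and the full machinery of abnormality estimates and confidence limits in Crawford & Garthwaite (2006a) is not reproduced.

```python
import math

def discrepancy_test(y_obt, x0, xbar, sx, b, a, see, n):
    """Compare an obtained score with the score predicted by a regression
    equation built on n cases (predictor mean xbar, SD sx)."""
    y_pred = a + b * x0
    # Naive approach: treat discrepancy / SEE as a standard normal deviate.
    z = (y_obt - y_pred) / see
    # Standard error for a NEW observation: grows as x0 departs from xbar
    # and acknowledges the sampling error in the regression statistics.
    se_new = see * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / ((n - 1) * sx ** 2))
    t = (y_obt - y_pred) / se_new   # refer to t on n - 2 df
    return z, t
```

For example, with intercept 10, slope 2, SEE = 3 and n = 20, an obtained score six points below prediction at the predictor mean gives z = -2.0 but t of about -1.95 on 18 df; the gap between the two widens further as the predictor value moves away from the sample mean, which is why the naive z overstates abnormality.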

We have extended this work (Crawford & Garthwaite, 2007) to allow researchers or clinicians to build regression equations from summary data (the correlation between a predictor and a criterion variable, together with these variables' means and SDs). There is a large reservoir of published data that could be used to build regression equations; these equations could then be used to draw inferences concerning a single case. The methods have been implemented in computer programs (regbuild.exe and regbuild_t.exe).
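Building an equation from summary data rests on standard identities; a sketch follows, with the small-sample df correction applied to the standard error of estimate (the function name is illustrative, and the published programs provide inferential results beyond this reconstruction step).

```python
import math

def equation_from_summary(r, mx, sx, my, sy, n):
    """Reconstruct a simple regression equation from published summary
    data: returns (intercept, slope, standard error of estimate)."""
    b = r * sy / sx                  # slope of the criterion on the predictor
    a = my - b * mx                  # intercept
    # SEE with the (n - 1)/(n - 2) small-sample correction
    see = sy * math.sqrt(1 - r ** 2) * math.sqrt((n - 1) / (n - 2))
    return a, b, see
```

For example, a published correlation of .5 between a predictor (mean 50, SD 10) and a criterion (mean 100, SD 15) in a sample of 100 yields a slope of 0.75, an intercept of 62.5, and an SEE of about 13.06.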

Comparing two cases

We have recently developed methods that allow single-case researchers to compare the scores of two single cases (Crawford, Garthwaite & Wood, 2010). Unlike existing methods of comparing two cases, these new methods refer the scores of the two cases to a control sample. (See the accompanying paper for a discussion of the issues arising when control data are not used.) All of the methods provide a significance test (one- and two-tailed), point and interval estimates of the effect size for the difference, and point and interval estimates of the percentage of pairs of controls that will exhibit a larger difference than that observed for the cases (i.e., these latter statistics quantify the abnormality of the difference). The methods are predominantly classical (although a Bayesian approach is also developed; this provides results that converge with the classical approach). There is the option of allowing for the effects of covariates when comparing the two cases. The computer programs accompanying the paper can be used with summary data or raw data from the controls.

Comparing a case to controls allowing for the effects of covariates

We have recently developed Bayesian methods that allow a researcher to compare a case's score to controls allowing for the effects of covariates (Crawford, Garthwaite & Ryan, in press). The methods include comparison of a case's standardized difference between two tasks to the differences observed in controls (i.e., one can test for a dissociation in the case, allowing for the effects of covariates).

The covariate methods provide the same full range of results provided by our earlier methods. That is, they provide: (a) a significance test (i.e. a test of whether we can reject the null hypothesis that the case's score, or score difference, is an observation from the scores, or score differences, in the control population); (b) point and interval estimates of the abnormality of the case's score or score difference; and (c) point and interval estimates of the effect size for the difference between case and controls.

These methods have a very wide range of potential applications; e.g., they can provide a means of increasing the statistical power to detect deficits or dissociations, or can be used to test whether differences between a case and controls survive partialling out the effects of potential confounding variables.

(D) Criteria for Classical and Strong Dissociations

Crawford, Garthwaite & Gray (2003) reviewed existing definitions of dissociations used in single-case studies and argued that they are vague (i.e. the statistical methods used to determine whether a patient meets the definitions are rarely specified) and are insufficiently rigorous.  In response they proposed formal criteria for deficits, strong and classical dissociations and double dissociations (Reprint as a pdf).  For example, the typical (conventional) definition of a classical dissociation is that a patient is impaired on Task X and “within normal limits” or “unimpaired” on Task Y.  There are two related problems with this definition, (1) the second half of the definition boils down to an attempt to prove the null hypothesis, and (2) the difference between a patient’s performance on Tasks X and Y could be very trivial (i.e. the score on Task X could be just below a particular cut-off and Task Y just above it). In these circumstances one would not want to claim that a dissociation had been established.

Crawford, Garthwaite & Gray’s (2003) criteria provided operational definitions of a deficit and “within normal limits”, and proposed that a classical dissociation is established when the patient has a deficit on Task X, is within normal limits on Task Y, and there is a significant difference between performance on Tasks X and Y.  The latter criterion deals with the two problems outlined above.  These criteria draw on the statistical methods described in Section B (above).  Monte Carlo simulations indicated that very few individuals drawn from the healthy population would be misclassified as exhibiting a strong or classical dissociation when these criteria are applied.  Crawford & Garthwaite (2005) revised these criteria by replacing Crawford, Howell & Garthwaite’s (1998) test on the difference between tasks with the Revised Standardized Difference Test (reprint as pdf).  An accompanying computer program (dissocs.exe) automates testing for dissociations; this program has been upgraded (dissocs_ES.exe) to include point and interval estimates of effect sizes (see Crawford, Garthwaite & Porter, 2010).
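Expressed in code, the three criteria combine the basic tests from Section B. The sketch below is illustrative only: for simplicity the difference criterion uses Crawford, Howell & Garthwaite's (1998) test as a stand-in for the preferred RSDT (whose formula is not reproduced here), SciPy supplies the t CDF, and the function names are assumptions.

```python
import math
from scipy import stats

def ch_p(case, mean, sd, n):
    """One-tailed p from Crawford & Howell's (1998) modified t-test."""
    t = (case - mean) / (sd * math.sqrt((n + 1) / n))
    return stats.t.cdf(t, n - 1)

def classical_dissociation(x, y, mx, sx, my, sy, r, n, alpha=0.05):
    """Apply the three criteria for a classical dissociation, with the
    1998 difference test standing in for the RSDT."""
    deficit_on_x = ch_p(x, mx, sx, n) < alpha            # criterion 1: deficit on X
    normal_on_y = ch_p(y, my, sy, n) >= alpha            # criterion 2: Y within limits
    zx, zy = (x - mx) / sx, (y - my) / sy
    t = (zx - zy) / math.sqrt((2 - 2 * r) * (n + 1) / n)
    diff_significant = 2 * stats.t.cdf(-abs(t), n - 1) < alpha  # criterion 3
    return deficit_on_x and normal_on_y and diff_significant
```

For example, with n = 20 controls, r = .5 between tasks and control means of 100 (SD 10) on both, a case scoring 70 on X and 98 on Y meets all three criteria, whereas a case scoring 95 on both tasks does not.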

Crawford & Garthwaite (2005b) (reprint as pdf) have examined the Type I error rates for the conventional criteria for a classical dissociation and find that the rates are high (up to 18.6%) when a Type I error rate is defined as misclassifying a control case as exhibiting a classical dissociation. We have also conducted further simulation studies in which we "lesion" cases; this allows us to study a different form of Type I error, namely misclassifying a patient with equivalent deficits on the tasks of interest as exhibiting a dissociation (as equivalent deficits were imposed, such cases do not exhibit a true dissociation). The error rates for the conventional criteria were alarmingly high in these simulations (range 19.3% to 49.6%). In contrast the rates were markedly lower for our criteria (range 2.7% to 7.1%). Some of the results are displayed in the figure below (in this example the population correlation between Tasks X and Y was 0.5). These results demonstrate the importance of testing for a significant difference between a patient’s performance on Tasks X and Y when attempting to identify classical dissociations. Further simulations indicated that both sets of criteria are robust in the face of departures from normality.


Simulation results: Percentage of patients with strictly equivalent impairments on two tasks misclassified as exhibiting a dissociation using conventional criteria or Crawford & Garthwaite’s (2005a) criteria for a classical dissociation (drawn with data from Crawford & Garthwaite, 2005b)

Crawford & Garthwaite (2006c) (reprint as pdf) quantified the Type I error rates and power for competing methods of detecting strong dissociations and quantified the overall error rate and power when attempting to detect either form of dissociation (i.e. strong or classical). Using our criteria, power to detect either form of dissociation is moderate-to-high in most scenarios and Type I error rates are reasonable.

This study also found that, regardless of the criteria used, many patients classified as exhibiting classical dissociations will in reality be cases of strong dissociation (i.e., there will be a failure to detect the deficit on one of the tasks). In contrast, cases of strong dissociation will rarely be misclassified as exhibiting a classical dissociation. In light of these findings we suggest that the term "classical dissociation" should be changed to "a dissociation, putatively classical"; this terminology captures the fact that, although one can be confident (if Crawford & Garthwaite's criteria are employed) that a patient has suffered a dissociation of some form, one cannot be confident that it is classical in form.


The need to standardize scores when testing for a difference / dissociation: The cow-canary problem

Our classical and Bayesian methods for testing for a difference between a case's scores on two tasks standardize the case's scores. In contrast, if the analysis is based on the difference between raw scores there is the danger of artefacts arising from what Capitani et al. (1999) have termed the cow-canary problem. If the tasks differ in their standard deviations, the task with the larger standard deviation will exert more weight on the raw difference score. Indeed, if the difference in SDs is very large, the difference score will almost entirely be a reflection of the task with the larger SD. Crawford, Garthwaite & Howell (2009) provided dramatic illustrations of these dangers in which a case's scores were analyzed using a mixed ANOVA (case vs controls as the between-subjects factor, task as the within-subjects factor): the p value for the interaction was used to test for a dissociation. For example, a case with identical z scores on two tasks was recorded as exhibiting a dissociation between the tasks.
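A minimal numeric illustration of the artefact, using the two hypothetical tasks from the worked example in Section B (SDs of 4.5 and 12) and a case exactly two control SDs below the mean on both:

```python
# Hypothetical tasks: Theory of Mind (mean 25, SD 4.5) and executive
# function (mean 52, SD 12); the case is 2 SDs below controls on BOTH
# tasks, i.e. there is no dissociation in standardized terms.
mean_x, sd_x = 25, 4.5
mean_y, sd_y = 52, 12

case_x = mean_x - 2 * sd_x                  # 16.0
case_y = mean_y - 2 * sd_y                  # 28.0

# Raw-score discrepancy: dominated by the task with the larger SD.
raw_diff = (case_x - mean_x) - (case_y - mean_y)              # 15.0
# Standardized discrepancy: correctly zero.
z_diff = (case_x - mean_x) / sd_x - (case_y - mean_y) / sd_y  # 0.0
```

The 15-point raw discrepancy is driven entirely by the unequal SDs; it is this raw-score artefact that an unstandardized ANOVA interaction term picks up, whereas the standardized difference correctly registers no dissociation.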

(E) Computer Programs for the Single-Case Researcher

Computer programs that accompany the papers on single-case methods have been written and made available; these implement all the statistical methods discussed above.  By using these programs, single-case researchers can analyse their data in seconds (and also reduce the possibility of clerical error).

(F) References for the methods

Crawford, J. R., Garthwaite, P. H., & Ryan, K. (in press). Comparing a single case to a control sample: Testing for neuropsychological deficits and dissociations in the presence of covariates. Cortex.

Crawford, J. R., Garthwaite, P. H., & Wood, L. T. (2010). The case-controls design in neuropsychology: Inferential methods for comparing two single cases. Cognitive Neuropsychology, 27, 377-400.

Crawford, J. R., Garthwaite, P. H., & Porter, S. (2010). Point and interval estimates of effect sizes for the case-controls design in neuropsychology: Rationale, methods, implementations, and proposed reporting standards. Cognitive Neuropsychology, 27, 245-260.

Crawford, J. R., Garthwaite, P. H., & Howell, D. C. (2009). On comparing a single case with a control sample: An alternative perspective. Neuropsychologia, 47, 2690-2695.

Crawford, J. R., & Garthwaite, P. H. (2007). Using regression equations built from summary data in the neuropsychological assessment of the individual case. Neuropsychology, 21, 611-620.

Crawford, J. R., & Garthwaite, P. H. (2007). Comparison of a single case to a control or normative sample in neuropsychology: Development of a Bayesian approach. Cognitive Neuropsychology, 24, 343-372.

Crawford, J. R., & Garthwaite, P. H. (2006c). Detecting dissociations in single case studies: Type I errors, statistical power and the classical versus strong distinction. Neuropsychologia, 44, 2249-2258.

Crawford, J. R., & Garthwaite, P. H. (2006b). Methods of testing for a deficit in single case studies: Evaluation of statistical power by Monte Carlo simulation. Cognitive Neuropsychology, 23, 877-904.

Crawford, J. R., & Garthwaite, P. H. (2006a). Comparing patients’ predicted test scores from a regression equation with their obtained scores: a significance test and point estimate of abnormality with accompanying confidence limits. Neuropsychology, 20, 259-271.

Crawford, J. R., Garthwaite, P. H., Azzalini, A., Howell, D. C., & Laws, K. R. (2006). Testing for a deficit in single case studies: Effects of departures from normality. Neuropsychologia, 44, 666-677.

Crawford, J. R., & Garthwaite, P. H. (2005b). Evaluation of criteria for classical dissociations in single-case studies by Monte Carlo simulation. Neuropsychology, 19, 664-678.

Crawford, J. R., & Garthwaite, P. H. (2005a). Testing for suspected impairments and dissociations in single-case studies in neuropsychology: Evaluation of alternatives using Monte Carlo simulations and revised tests for dissociations. Neuropsychology, 19, 318-331.

Garthwaite, P. H., & Crawford, J. R.  (2004).  The distribution of the difference between two t-variates.  Biometrika, 91, 987-994.

Crawford, J. R., Garthwaite, P. H., Howell, D. C., & Gray, C. D. (2004). Inferential methods for comparing a single case with a control sample: Modified t-tests versus Mycroft et al.'s (2002) modified ANOVA. Cognitive Neuropsychology, 21, 750-755.

Laws, K. R., Gale, T. M., Leeson, V. C., & Crawford, J. R.  (2005).  When is category specific in Alzheimer’s disease? Cortex, 41, 452-463.

Crawford, J. R., & Garthwaite, P. H. (2004). Statistical methods for single-case research: Comparing the slope of a patient’s regression line with those of a control sample. Cortex, 40, 533-548.

Crawford, J. R., Garthwaite, P. H., Howell, D. C., & Venneri, A.  (2003). Intra-individual measures of association in neuropsychology: Inferential methods for comparing a single case with a control or normative sample. Journal of the International Neuropsychological Society, 9, 989-1000.

Crawford, J. R. (2004). Psychometric foundations of neuropsychological assessment.  In L. H. Goldstein & J. McNeil (Eds.), Clinical Neuropsychology: A Practical Guide to Assessment and Management for Clinicians (pp. 121-140).  Chichester: Wiley.


Crawford, J. R., Gray, C. D., & Garthwaite, P. H. (2003).  Wanted: Fully operational definitions of dissociations in single-case studies. Cortex, 39, 357-370.

Crawford, J. R., & Garthwaite, P. H. (2002). Investigation of the single case in neuropsychology: Confidence limits on the abnormality of test scores and test score differences. Neuropsychologia, 40, 1196-1208.

Crawford, J. R., & Howell, D. C. (1998). Regression equations in clinical neuropsychology: An evaluation of statistical methods for comparing predicted and obtained scores. Journal of Clinical and Experimental Neuropsychology, 20, 755-762.

Crawford, J. R., & Howell, D. C. (1998). Comparing an individual’s test score against norms derived from small samples. The Clinical Neuropsychologist, 12, 482-486.

Crawford, J. R., Howell, D. C., & Garthwaite, P. H. (1998). Payne and Jones revisited: Estimating the abnormality of test score differences using a modified paired samples t test. Journal of Clinical and Experimental Neuropsychology, 20, 898-905.

(G) Some examples of the use of the above methods

Bastin, C., Van der Linden, M., Charnallet, A., Denby, C., Montaldi, D., Roberts, N., & Mayes, A. R. (2004). Dissociation between recall and recognition memory performance in an amnesic patient with hippocampal damage following carbon monoxide poisoning. Neurocase, 10, 330-344.

Benuzzi, F., Meletti, S., Zamboni, G., Calandra-Buonaura, G., Serafini, M., Lui, F., Baraldi, P., Rubboli, G., Tassinari, C. A., & Nichelli, P. (2004). Impaired fear processing in right mesial temporal sclerosis: a fMRI study. Brain Research Bulletin, 63, 269-281.

Bird, C. M., Castelli, F., Malik, O., Frith, U., & Husain, M. (2004). The impact of extensive medial frontal lobe damage on 'Theory of Mind' and cognition. Brain, 127, 914-928.

Bisiacchi, P., Cendron, M., Gugliotta, M., Lonciari, I., Tressoldi, P. E., & Vio, C. (2005). Batteria di valutazione neuropsicologica per l'età evolutiva [Neuropsychological assessment battery for developmental age]. Trento: Erickson.

Bosbach, S., Cole, J., Prinz, W., & Knoblich, G. (2005). Inferring another's expectation from action: the role of peripheral sensation. Nature Neuroscience, 8, 1295-1297.

Brock, J., McCormack, T., & Boucher, J. (2005). Probed serial recall in Williams syndrome: Lexical influences on phonological short-term memory. Journal of Speech Language and Hearing Research, 48, 360-371.

Carey, D. P., Dijkerman, H. C., Murphy, K. J., Goodale, M. A., & Milner, A. D. (in press). Pointing to places and spaces in a patient with visual form agnosia. Neuropsychologia.

Cipolotti, L., Bird, C., Good, T., MacManus, D., Rudge, P., & Shallice, T. (2006). Recollection and familiarity in dense hippocampal amnesia: a case study. Neuropsychologia, 44, 489-506.

Cohen Kadosh, R., & Henik, A. (2006). When a line is a number: Color yields magnitude information in a digit-color synesthete. Neuroscience, 137, 3-5.

de Oliveira-Souza, R., Moll, J., Moll, F. T., & de Oliveira, D. L. G. (2001). Executive amnesia in a patient with pre-frontal damage due to a gunshot wound. Neurocase, 7, 383-389.

de Schotten, M. T., Urbanski, M., Duffau, H., Voue, E., Levy, R., Dubois, B., & Bartolomeo, P. (2005). Direct evidence for a parietal-frontal pathway subserving spatial awareness in humans. Science, 309, 2226-2228.

d'Honincthun, P., & Pillon, A. (2005). Why verbs could be more demanding of executive resources than nouns: Insight from a case study of a fv-FTD patient. Brain and Language, 95, 36-37.

Di Pietro, M., Laganaro, M., Leemann, B., & Schnider, A. (2004). Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion. Neuropsychologia, 42, 868-877.

Kilner, J. M., Fisher, R. J., & Lemon, R. N. (2004). Coupling of oscillatory activity between muscles is strikingly reduced in a deafferented subject compared with normal controls. Journal of Neurophysiology, 92, 790-796.

Kondel, T. K., Mortimer, A. M., Leeson, V. C., Laws, K. R., & Hirsch, S. R. (2003). Intellectual differences between schizophrenic patients and normal controls across the adult lifespan. Journal of Clinical and Experimental Neuropsychology, 25, 1045-1056.

Lafargue, G., Paillard, J., Lamarre, Y., & Sirigu, A. (2003). Production and perception of grip force without proprioception: is there a sense of effort in deafferented subjects? European Journal of Neuroscience, 17, 2741-2749.

Laws, K. R. (2005a). Categories, controls and ceilings. Cortex, 41, 869-872.

Laws, K. R. (2005b). Illusions of normality: A methodological critique of category-specific naming. Cortex, 41, 842-866.

Laws, K. R., Gale, T. M., Leeson, V. C., & Crawford, J. R. (2005). When is category specific in Alzheimer's disease? Cortex, 41, 452-463.

Laws, K. R., Leeson, V. C., & Gale, T. M. (2002). A domain-specific deficit for foodstuffs in patients with Alzheimer's disease. Journal of the International Neuropsychological Society, 8, 956-957.

Laws, K. R., Leeson, V. C., & McKenna, P. J. (in press). Domain specific deficits in patients with schizophrenia. Cognitive Neuropsychiatry.

Laws, K. R., & Sartori, G. (2005). Category deficits and paradoxical dissociations in Alzheimer's disease and herpes simplex encephalitis. Journal of Cognitive Neuroscience, 17, 1453-1459.

Majerus, S., Barisnikov, K., Vuillemin, I., Poncelet, M., & van der Linden, M. (2003). An investigation of verbal short-term memory and phonological processing in four children with Williams syndrome. Neurocase, 9, 390-401.

Milders, M., Crawford, J. R., Lamb, A., & Simpson, S. A. (2003). Differential deficits in expression recognition in gene- carriers and patients with Huntington's disease. Neuropsychologia, 41, 1484-1492.

Nickels, L., & Cole-Virtue, J. (2004). Reading tasks from PALPA: How do controls perform on visual lexical decision, homophony, rhyme, and synonym judgements? Aphasiology, 18, 103-126.

Nyffeler, T., Pflugshaupt, T., Hofer, H., Baas, U., Gutbrod, K., von Wartburg, R., Hess, C. W., & Müri, R. M. (2005). Oculomotor behaviour in simultanagnosia: A longitudinal case study. Neuropsychologia, 43, 1591-1597.

Papps, B. P., Calder, A. J., Young, A. W., & O'Carroll, R. E. (2003). Dissociation of affective modulation of recollective and perceptual experience following amygdala damage. Journal of Neurology Neurosurgery and Psychiatry, 74, 253-254.

Parslow, D. M., Morris, R. G., Fleminger, S., Rahman, Q., Abrahams, S., & Recce, M. (2005). Allocentric spatial memory in humans with hippocampal lesions. Acta Psychologica, 118, 123-147.

Robinson, G., Shallice, T., & Cipolotti, L. (2005). A failure of high level verbal response selection in progressive dynamic aphasia. Cognitive Neuropsychology, 22, 661-694.

Rosazza, C., Zorzi, M., & Cappa, S. F. (2005). A re-analysis of a case of category-specific semantic impairment. Cortex, 41, 865-872.

Rosenbaum, R. S., Gao, F., Richards, B., Black, S. E., & Moscovitch, M. (2005). "Where to?" remote memory for spatial relations and landmark identity in former taxi drivers with Alzheimer's disease and encephalitis. Journal of Cognitive Neuroscience, 17, 446-462.

Rosenbaum, R. S., McKinnon, M. C., Levine, B., & Moscovitch, M. (2004). Visual imagery deficits, impaired strategic retrieval, or memory loss: disentangling the nature of an amnesic person's autobiographical memory deficit. Neuropsychologia, 42, 1619-1635.

Ruigendijk, E., Kouwenberg, M., & Friedmann, N. (2004). Question production in Dutch agrammatism. Brain and Language, 91, 116-117.

Rusconi, E., Priftis, K., Rusconi, M. L., & Umilta, C. (in press). Arithmetic priming from neglected numbers. Cognitive Neuropsychology.

Schiltz, C., Sorger, B., Caldara, R., Ahmed, F., Mayer, E., Goebel, R., & Rossion, B. (in press). Impaired face discrimination in acquired prosopagnosia is associated with abnormal response to individual faces in the right middle fusiform gyrus. Cerebral Cortex.

Schindler, I., Rice, N. J., McIntosh, R. D., Rossetti, Y., Vighetto, A., & Milner, A. D. (2004). Automatic avoidance of obstacles is a dorsal stream function: evidence from optic ataxia. Nature Neuroscience, 7, 779-784.

Simons, J. S., Graham, K. S., & Hodges, J. R. (2002). Perceptual and semantic contributions to episodic memory: evidence from semantic dementia and Alzheimer's disease. Journal of Memory and Language, 47, 197-213.

Stip, E., Remington, G. J., Dursun, S., Reiss, J. P., Rotstein, E., MacEwan, G. W., Chokka, P. R., Jones, B., & Dickson, R. A. (2003). A Canadian multicenter trial assessing memory and executive functions in patients with schizophrenia spectrum disorders treated with olanzapine. Journal of Clinical Psychopharmacology, 23, 400-404.

Temple, C. M., & Sanfilippo, P. M. (2003). Executive skills in Klinefelter's syndrome. Neuropsychologia, 41, 1547-1559.

Tonini, G., Shanks, M. F., & Venneri, A. (2003). Short-term longitudinal evaluation of cerebral blood flow in mild Alzheimer's disease. Neurological Sciences, 24, 24-30.

Turnbull, O. H., Della Sala, S., & Beschin, N. (2002). Agnosia for object orientation: Naming and mental rotation evidence. Neurocase, 8, 296-305.

Westmacott, R., Black, S. E., Freedman, M., & Moscovitch, M. (2004). The contribution of autobiographical significance to semantic memory: evidence from Alzheimer's disease, semantic dementia, and amnesia. Neuropsychologia, 42, 25-48.

Ypsilanti, A., Grouios, G., Alevriadou, A., & Tsapkini, K. (2005). Expressive and receptive vocabulary in children with Williams and Down syndromes. Journal of Intellectual Disability Research, 49, 353-364.
