A series of articles on the importation of Randomised Controlled Trials into Education. These are being transferred from their original postings on Typepad, which has closed. References to original papers are at the end of individual articles below, pending re-organisation.
Randomised controlled trials and research methodology
The study by Torgerson and Brooks discussed below attempts to tackle one of the problems of using drugs trial research methods in education. Every patient on a given arm of a trial receives exactly the same drug. In normal circumstances, every pupil will not receive exactly the same teaching. By setting out the content in advance, and delivering it only through technology, the researchers were able to ensure consistency of input. Unfortunately, this did not ensure any effective learning, as anyone who has ever taught spelling could have predicted: different children need different levels of explanation and support. This leads me to conclude not only that the methodology of drugs trials should not be used as the sole criterion for evaluating educational research, but that the application of the method in education is, in fact, impossible.
Torgerson and Brooks’ most recent paper, available behind the most outrageous paywall I’ve ever seen – 30 days’ access for £196 – shows the weakness of the methodology beyond a scintilla of doubt. The studies they analyse are so loosely specified, and so varied in their techniques, that they might as well be comparing Association and Rugby football. Some of the weaknesses are noted in these quotations:
Galuschka et al. (2014): Some interventions (e.g. orally dividing words into syllables with supporting hand signals) would not fit standard definitions of phonics… no details of control group instruction… samples drawn from Italy, Spain, Finland, Brazil and unspecified “English-speaking countries.”
What is a study of this type going to tell us about the role of phonics, let alone “systematic synthetic phonics” in learning to read in English? I’ll answer the question – absolutely nothing.
Hann et al. (2010): Studies which taught phonemic awareness, phonics or both. No specific varieties of phonics are mentioned, and of the 11 teaching activities listed (120), only ‘decoding’ would meet standard definitions of phonics; all the rest are whole-word approaches, hence not phonics. No detail of control group instruction.
Does this not make the study completely irrelevant to the issue of phonics? And again, no details of the controls.
I could continue, as not one of these studies provides any evidence of the effectiveness or ineffectiveness of any of the items under discussion. We might as well say that they don’t tell us anything about underwater archaeology. They do not provide evidence of the effectiveness of systematic synthetic phonics (a term I use here under duress, and for the sake of consistency only) for the simple reason that they were not looking for it. They are so poorly designed that they don’t provide evidence of anything at all, particularly as they do not have the six-year follow-up that was the clinching argument in the Clackmannanshire study. I’m astonished that this study can be taken seriously by anyone at all, including its authors.
July 21, 2014
The Empress Has No Clothes: Torgerson/Brooks’ serious misjudgement of educational research
Professor Rhona Johnston, co-author of the Clackmannanshire research in reading, has replied to Torgerson and Brooks’ survey of the research evidence on phonics, here. It is a telling critique, showing that Torgerson and Brooks were so obsessed with the idea of randomisation in educational research as to distort their whole view of the evidence. Torgerson and Brooks found only two studies directly comparing analytic and synthetic phonics, hardly enough for a “meta-analysis”. One was a smaller study that formed part of the Clackmannanshire research, and the other this conference paper from 1971.
If it were not for Torgerson and Brooks’ own botched study of spelling in a comprehensive school, it would be difficult to believe that they would give this paper any credence at all. Here is what it did, cut and pasted from the original:
Materials
Teaching materials were seven-by-nine inch tagboard cards on which 28 words and syllables were printed, two to a card. The selection of the pairs of words for the cards differed according to the treatments. The words and syllables used in instruction and as measures of transfer are listed in Tables A and B.
Procedure
During a typical 15-minute training period, from two to four of the 28 words and syllables were presented. Sometimes words or syllables were reviewed. All 28 words and syllables had been presented to all of the subjects by the end of the ten week training period.
Who on earth would set out to teach children twenty-eight words in ten weeks? Who would confine reading teaching to fifteen minutes a day, or to two to four words a session? Who would confine teaching materials to one set of words on cards, with no books, stories or anything else? Only someone with no educational experience, such as Professor Torgerson, could entertain such pathetic nonsense, let alone dissect it as a potential instrument of policy.
Professor Johnston is too polite to her opponents. As far as I’m concerned, the Empress of Randomisation has no clothes.
April 25, 2014
The Dyslexia Debate, Julian Elliott and Elena Grigorenko. Review
In 2005, Professor Julian (Joe) Elliott, of Durham University, took part in a television programme that claimed there was no way of distinguishing between “dyslexia” and other forms of reading difficulty. This was followed by a conference in London, to which I contributed a case study, and this book is the long-term follow-up.
Professor Elliott, and his Yale colleague Professor Elena Grigorenko, have produced the widest-ranging analysis of research into the subject I’ve ever seen, with seventy pages of references, ranging from the battles over phonics to the latest brain-scan evidence. While they give credit where it’s due, their criticisms of the limitations of the research evidence they scrutinise are consistent and convincing. Even brain research, which provides the clearest evidence yet of a genuinely biological basis for reading difficulty, has its limitations clearly exposed – the illustrations of brain dysfunction derived from scans are composites, and do not yet provide any clear evidence that can be used to help individuals. Their conclusion is that it is still impossible “to identify a dyslexic subgroup in any consistent or coherent fashion that would be acceptable to the scientific or professional community” (p.170). The term “dyslexia” has therefore “outgrown its diagnostic and conceptual usefulness”, and should be “discontinued.”
And yet it is “untenable”, say the authors, to deny that some reading difficulties are biological in origin. They propose instead that such difficulties be termed “reading disability”. The problem here is that most people assessed as dyslexic are not disabled – a young man who had been so assessed approached me in Oxford while I was carrying the book, and we had an interesting discussion about its contents and his experience. He was now working as a retail manager, and found it ridiculous to think that he was in some way disabled. What he and others in his position need are some adjustments to the ways in which they are taught. These adjustments are known to some (not all) teachers, and are based on a strand of case-study research beginning with the work of Grace Fernald – whose 1943 classic, Remedial Techniques in Basic School Subjects, is not listed in the references. Only one article of Fernald’s is listed, from 1921, and evidence of her techniques is seen as “anecdotal”. The label is too easy – the whole of surgery is based on clinical observation and case study, and it is as valid a research technique as any other.
The dismissal of the evidence on the use of tints rests on similar grounds. If we take from the physical sciences and the development of new drugs the idea of a double-blind, randomised controlled trial as “gold standard” research, it is self-evident that the method can’t be applied to a tint, as everyone can see it. We can’t have a “partially sighted” trial.
Here are three cases involving tints:
1. A very bright girl who read excruciatingly slowly, but with perfect accuracy and understanding. She explained that she needed to read everything twice; with an overlay she immediately read just as well, but faster than she could articulate the words.
2. A school secretary whose desk was opposite a glass panel behind which was an unshaded fluorescent light, and who was losing days of work at a stretch through incapacitating migraine. Covering the panel with cardboard and adding a blue tint to her computer screen removed the problem.
3. A boy whose behaviour deteriorated so rapidly when he started school that he became a danger to himself and his family. After five years of misery, a teacher in a private school tried a tint and the problems disappeared.
Three anecdotes. There are many similar cases on migraine, but they remain anecdotes. Or cases, if we consider that they are essentially true, and the outcome of professional observation. None of these people were dyslexic, however the word might be defined – they were sensitive to certain elements in light, a different issue, but one that can cause devastating problems with learning to read and write.
Which brings us to Dr Terry Moore and Chris Carling, and their advocacy of John Locke’s approach to language study in Language Understanding: Towards a Post-Chomskyan Linguistics. Locke’s pragmatism, using the best available evidence, will not produce a perfect solution – or the illusion of one – but will, as he says, probably get our ship into port. We can’t match people who are assessed as dyslexic with controls with any reliability, so case studies and other evidence are the best we have, and some of their shortcomings can be tackled by using standardised tests and long-term follow-up, as in the Clackmannanshire reading research.
In the end, the problems of investigating the range of issues known as dyslexia lie as much in the limitations of scientific method as in the elusiveness of the phenomenon itself. It is there – though probably in no more than 1.5% of people and not the 5%+ suggested by psychological tests – and teachers need to tackle it. The way forward is to consider each element of evidence, including test scores on issues such as memory and processing speed, on its merits, and to tackle the specific issues each indicates, without recourse to the global term “dyslexia”. I never use the term myself unless someone else does first, which may be a start.
November 08, 2013
Randomisation and small trials.
Professor Torgerson’s discussions of randomised trials suggest that a large sample is needed to produce a statistically and educationally significant result – she proposes a rather arbitrary criterion of half a standard deviation’s improvement as significant. Professor Debra Myhill’s trial on English grammar cost £750k, which, in our case, we have not got.
But – if we collect evidence from classes taking part in research, and have a realistic comparison group, we can then analyse it in various ways, including random selection of, say, two pupils per group from the middle, upper and lower bands. Differences in a small sample might not reach statistical significance, but they could form part of a wider analysis, for example by confirming the results obtained by a whole class. It seems strange, incidentally, to argue that a sample might provide better evidence than a whole group (the term “cohort” belongs to Roman military history rather than to twenty-first-century education).
Silver, rather than gold, standard research, but within our means.
Further notes on the design of small scale research are here.
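As a minimal sketch of the within-band selection described above – in Python, with invented class lists, band labels and group sizes, none of it drawn from an actual study – the draw might look like this:

```python
# Illustrative only: draw two pupils at random from each attainment band of a
# class, so that a small, balanced subset can be examined in more detail.
import random

def sample_by_band(pupils, per_band=2, seed=1):
    """pupils: list of (name, band) pairs; returns {band: [sampled names]}."""
    rng = random.Random(seed)            # fixed seed so the draw can be repeated
    bands = {}
    for name, band in pupils:
        bands.setdefault(band, []).append(name)
    return {band: rng.sample(names, min(per_band, len(names)))
            for band, names in bands.items()}

# An invented class of 30: ten pupils in each of three attainment bands.
class_list = [(f"pupil_{i:02d}", band)
              for i, band in enumerate(["lower"] * 10 + ["middle"] * 10 + ["upper"] * 10)]

print(sample_by_band(class_list))   # e.g. {'lower': [...], 'middle': [...], 'upper': [...]}
```

The same draw can be repeated for the comparison group, and the selected pupils’ work examined alongside the whole-class results.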
September 04, 2013
The place of randomised controlled studies in educational research: Dr Ben Goldacre
I discussed the attempt to move RCTs from health research into education in a technical paper three years ago, which focused chiefly on reading research. Since then, we have had one excellent study by Professor Debra Myhill on the value of teaching grammar to young secondary pupils, and a further attempt to introduce the health service model to education from Dr Ben Goldacre, an epidemiologist and columnist for the Guardian (Bad Science). Dr Goldacre’s paper, with replies from Professors Mary James and Geoff Whitty, is here.
Dr Goldacre says there is
a huge prize waiting to be claimed by teachers. By collecting better evidence about what works best, and establishing a culture where this evidence is used as a matter of routine, we can improve outcomes for children, and increase professional independence.
He then says
it’s only by conducting randomised trials – fair tests – that we’ve been able to find out what works best.
Well, sometimes. There is also a long and important history in medicine of clinical investigation based on a doctor deciding what to do in individual cases. The papers published in The Lancet by Joseph Lister in 1867 represent one of the greatest advances in medical history, indeed in human history, and were not the result of controlled trials. Clinical experiment remains a key source of evidence in surgery; to suggest otherwise is to present part of medical research as the whole.
Dr Goldacre continues:
Where they are feasible, randomised trials are generally the most reliable tool we have for finding out which of two interventions work best.
This is an important limitation. The trial tests one intervention against another. The other may be doing nothing at all, or providing a different type of teaching. What if we have more than one choice? What if a slight alteration in approach might produce a different answer in the taught control group? What if we can’t find a suitable comparison group? And what if our trial is not “blind” as it is with a drugs trial? Teachers need to know what they’re doing or they can’t do it. The idea of a partially sighted trial is not known to science.
But Dr Goldacre does not discuss these limitations, jumping instead to an example from microfinance. He also says, in a passage intended to debunk myths about trials, that
there are some situations where trials aren’t appropriate – and where we need to be cautious in interpreting the results
So, where are RCTs appropriate? Dr Goldacre tells us:
Randomisation, in a trial, adds one simple extra chink to this existing variation: we need a group of schools, teachers, pupils, or parents, who are able to honestly say: “we don’t know which of these two strategies is best, so we don’t mind which we use. We want to find out which is best, and we know it won’t harm us.”
In the case of a drug, this is the right approach. Expensive and promising drugs have been refused licences because they have failed their trials, and others have been shown to be unexpectedly effective. But teaching a child is not the same as administering a drug. There is more than one choice most of the time, and infinite variation in the ways things can be done.
Professor Debra Myhill’s important study of contextualised grammar versus no grammar was favourable to contextualised grammar, but did not show us what should be taught, at what age, why higher-attaining pupils benefited more than others, or even whether contextualised grammar was better than decontextualised grammar. This is not Professor Myhill’s fault, but a limitation of the research technique – it lets us decide between two clear alternatives, but only that.
Dr Goldacre closes with this:
Now we recognise that being a good doctor, or teacher, or manager, isn’t about robotically following the numerical output of randomised trials; nor is it about ignoring the evidence, and following your hunches and personal experiences instead. We do best, by using the right combination of skills to get the best job done.
And once again, I agree. But we are not, in education, dealing with a straight choice between statistical evidence and personal experience. Experience in education is moderated by other people’s experience, notably that of HMI, just as the personal experience of a surgeon is informed by clinical practice. It is also informed by smaller-scale research, which need not be randomised or expensive (Professor Myhill’s trial cost £750k), but which can provide indications that can be interpreted using professional judgement and experience. A culture of research in education is vital. RCTs can and should be part of it. They are not the only source of knowledge about what “works best”.
July 08, 2013
Designing small-scale research in languages – first draft.
This file is the first draft of a template for constructing small-scale research projects in modern languages, based on the principle of best evidence rather than absolute truth. There has been much more experimental work in reading than in languages, and hence many more mistakes. It would be straightforward to provide a reference for each of the errors I’ve listed, but the point here is not to criticise the studies themselves – see my notes on randomised controlled trials as a starting point for such criticism – but to avoid repeating the errors. Correspondence is, as ever, most welcome.
Download Some Principles of Small-scale Research Design.
June 30, 2013
Randomised Controlled Trials – flaws, limitations and alternatives.
The limitations of randomised controlled trials are pretty obvious, but their advocates have managed to convince nearly everyone that there is no alternative. There is. In 1867, The Lancet published a series of papers by Professor Joseph Lister, setting out his work in treating cases of compound fracture with carbolic acid to eliminate infection and so avoid amputation. This is one of the greatest breakthroughs of modern medicine, and anaesthesia and antibiotics originated in the same approach of experiment and observation. This approach to research is in the spirit of John Locke, and has been advocated over three decades by Dr Terry Moore, of Clare College, Cambridge. The application of models derived from the social sciences, which have tried to apply the methods of the physical sciences in fields for which they were not designed, has tied us in a knot that makes it impossible to investigate anything without prohibitive cost. To take just one snag, medical trials are double-blind by design, and we can’t have a blind trial in education, as the teachers need to know what they are doing. The RCT model also sets tight limits on what can be investigated at any one time – it is virtually impossible, for example, to test one form of grammar teaching against another, and against none at all. Lord Lister and Dr Moore provide the means to cut through the knot.
Download Some thoughts on the state of research in languages
August 04, 2010
Randomised Controlled Trials – Technical Paper
This weblog aims to communicate with everyone interested in literacy, including researchers. The following paper is technical, but is needed to correct the error of trying to impose on educational studies a single structure that may not always tell us what we need to know. As always, correspondence is appreciated. This is a slightly revised version of the one originally posted. See also the notes on randomisation in small trials, and on Dr Ben Goldacre’s paper on randomised trials.
Since 2001, Professors David and Carole Torgerson have argued for randomised controlled trials in educational research, and have lamented the lack of them. Their work consistently asserts the benefits of this approach, for example –
The randomised controlled trial is the best method of assessing causality[i]
The best method of ascertaining whether an intervention is effective or not is through the use of a randomised trial[ii]
The only research method that can adequately control for external confounding factors is the randomised controlled trial [iii]
This gold standard methodology should be more widely used as it is an appropriate and robust research technique.[iv]
At times their arguments amount to a reasonable suggestion that two fields of enquiry might learn from each other – for example:
Many aspects of health care research have sufficient similarities to educational research that some of the lessons learned by health care researchers over the past two decades are readily applicable to the re-emerging interest in randomised controlled trials methodology by educational researchers[v]
Elsewhere, their confidence in their “gold standard methodology” comes close to totalitarianism. In Carole Torgerson’s review of the evidence on phonics, following the publication of the Clackmannanshire study in 2005[vi], she and her co-authors systematically disqualified every study that did not use this method, irrespective of any other strengths it might have had, including the key issue of long-term follow-up to check whether improvements last. Several of the randomised controlled trials admitted by the authors had very serious flaws, including imprecise methodology, lack of follow-up of initial results, and sample sizes too small to detect potentially significant improvements.[vii]
The last issue is particularly important in view of Professors Torgerson’s convincing arguments on sample size in research. They say that most studies show a benefit at or below half a standard deviation (roughly 5% more progress than a control group), and that studies should be designed to detect this with 80% certainty. This requires a minimum sample of 64, with the same number in a control group.[viii] Of the 20 studies included in the DfES survey, only three (including an initial study by Johnston and Watson) met this criterion for sample size. The rest had samples of under 50, and mostly well under 50, with some as low as 12. These studies used a wide range of methods, some to teach initial reading, some to teach pupils described as “learning disabled”; there is no consistency in their methodology, and in some cases it is barely described. Basing conclusions on grouping flawed studies together, an approach that has had a long and sad history in phonics research, might – and did – produce a result that looks favourable to phonics, but it is not reliable science.
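The arithmetic behind that figure can be checked with the standard normal-approximation formula for comparing two group means. The sketch below is my own illustration, not the Torgersons’ calculation, and assumes a two-sided 5% significance level.

```python
# Rough sample-size check: how many pupils per group are needed to detect a
# benefit of half a standard deviation with 80% power at the 5% level?
from scipy.stats import norm

effect_size = 0.5                      # half a standard deviation
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)      # ≈ 1.96
z_power = norm.ppf(power)              # ≈ 0.84
n_per_group = 2 * ((z_alpha + z_power) / effect_size) ** 2

print(round(n_per_group))              # about 63; the exact t-test version gives 64
```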
Professors Torgerson recognise potential flaws in studies using randomised controlled trials, including bias occurring in a sample by chance; technical bias, chiefly from a sample that is too small; subversion bias (from someone who does not believe in whatever they have been asked to do); attrition bias (where people lost from a sample may leave the final sample skewed); attribution bias (in allocating people to one or the other group, in practice very similar to chance bias); reporting bias; dilution bias (that is, seepage of a technique from the group being given it to other pupils); and exclusion bias.[ix]
These problems and more are illustrated in a randomised controlled trial led by Professor Greg Brooks, one of the co-authors of the DfES review, with the participation of Carole Torgerson.[x] The study involved the complete Year 7 of a comprehensive school – 155 pupils – and its goal was to bring about an overall improvement in spelling using an unidentified computerised system that based spelling on pupils’ own pronunciation. Pupils were allocated to the additional-spelling and control groups at random, and rigorously pre- and post-tested using NFER-Nelson reading and spelling tests. The results showed no improvement in spelling as a result of the exercise, and even a dip in reading scores in the group given the extra spelling, though this was corrected later. So the answer to the question in the study’s title – Is an intervention using computer software effective in literacy learning? – is, apparently, no.
But is it? First, the study used one computerised program only, and gives no clear reason for its selection over dozens of others. Any skilled teacher investigating the use of ICT to improve spelling would select software carefully, considering the needs of the pupils and the nature of their spelling difficulties. No such process took place – there is no evidence of children or teachers being consulted about the choice of software, or of any analysis of strengths and weaknesses in their spelling beyond the mean test scores. Next, it is not clear that the software was used properly. Children had to use it, on laptops, in groups of six, for one hour a day. However, one group had to have five two-hour sessions, and another had one two-hour session and eight one-hour sessions. Here again, teaching experience should have been taken into account and wasn’t. Would any sensible teacher put a child in front of a computer for two hours a day to improve their spelling? The effects on motivation might be predicted. Did the anonymous program’s manual provide for two hours a day? We should at least have been told if it did. If it did not – and I have never heard of any that did – the error is serious enough to invalidate this group’s scores. Finally, the researchers retested children’s spelling and reading after just two weeks, with a further test after the control group had been given access to the machines. In other words, the pupils were tested three times in half a term, with no follow-up to see whether any changes were permanent – or indeed whether there were any longer-term benefits that were not immediately apparent.
Professors Torgerson’s claims for randomisation’s ability to eliminate sampling bias range from the categoric “randomisation only guarantees comparable groups at pre-test” to (in the same paper!) the qualified “although randomised controlled trials should, in theory, eliminate selection bias, there are instances where bias can and does occur”.[xi] In this study, the second was true. The authors say that the randomised controlled trial is the only research method that can adequately control for all the unmeasured variables that may affect student outcomes, and that randomisation ensures that all potential confounding factors are distributed without bias across the randomised groups, and controls for temporal and regression-to-the-mean effects. Nevertheless, the group given the extra spelling had significantly more girls than boys, and a higher starting point that had to be dealt with by additional statistical analysis. It is not clear just how large a sample would be needed to meet the claim that randomisation can control for all variables, but it is clearly larger than the sample needed for statistical validity. How “regression to the mean” is meant to operate over a six-week period is not explained, any more than the phenomenon itself can be shown to apply to reading and spelling, where there is evidence of increasing divergence as children move into secondary school.[xii] Finally, the agglomeration of the scores for all pupils in each group into a single figure does not allow us to say whether any group of pupils – higher, average or lower-attaining, or those with special educational needs – gained any more or less benefit from the extra spelling than any other. The effect of randomisation here is not to improve the quality of evidence generated by the study, but to tell us less, and to make the evidence less precise. Overall, there is not a scrap of evidence that randomisation added to the value of this study or compensated for its elementary errors. Its only identifiable effect has been to make a bad study worse.
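The point about chance imbalance is easy to demonstrate. The simulation below assumes a year group of 155 pupils, 78 of them girls, split at random into arms of 78 and 77; the figures are invented for illustration and are not taken from the Brooks et al. data.

```python
# How often does simple randomisation of a small year group leave one arm with
# a noticeably uneven gender balance? (Threshold: five or more girls away from
# an even split of 39.)
import random

rng = random.Random(0)
pupils = ["girl"] * 78 + ["boy"] * 77
runs, uneven = 10_000, 0

for _ in range(runs):
    rng.shuffle(pupils)
    girls_in_arm_a = pupils[:78].count("girl")   # first 78 pupils form one arm
    if abs(girls_in_arm_a - 39) >= 5:
        uneven += 1

print(f"{100 * uneven / runs:.1f}% of randomisations were at least this uneven")
```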
This study shows that sampling is only one element of research design, that randomisation is one of a range of options, and that it is not necessarily the best. There are examples of successful educational randomised controlled trials – one showed the effectiveness of extra funding for schools in poor areas of New York, and another the negative response of pupils to an anti-smoking campaign.[xiii] Full consideration of what made these studies successful is beyond the scope of this paper, though it seems that they were concerned with a single issue that was relatively easy to measure, and used large samples. Professors Torgerson estimate that a study designed to show the likelihood of raising by one the number in a class achieving five or more higher-grade GCSEs might need a sample of 8,000–10,000.[xiv] The key point is whether a sample will give us the information we are looking for, and the success of some randomised trials does not constitute grounds for rejecting well-constructed studies that use different sampling techniques better suited to their purposes.
The final, crucial weakness in Professors Torgerson’s argument for randomisation as an essential factor in educational research is their application of statistical techniques without sufficient consideration of the context of education. They cite with approval a 1997 longitudinal study showing that socio-economic status is a consistent predictor of educational success.[xv] This is a generally accepted view, but the issue is complex and has consequences for researchers. To make an impact on the problem of low achievement by poorer socio-economic groups, we need to pinpoint why they are achieving less, and we cannot do this simply by agglomerating their results with those of other groups and randomising. This is obvious, but Brooks et al. did not do it in the study cited above. Could this be another uncontrolled variable? We cannot tell, and yet randomisation is supposed to distribute such variables across the sample. At one point, Professors Torgerson suggest “stratified randomisation” as a possible solution to the problem,[xvi] though they appear to retreat from this in their 2008 book. The key point is that educational performance is not the result of chance, but the result of a complex interaction between children’s experiences, their personal characteristics, including the way their brain is structured (e.g. sensitivity to light, dyslexia[xvii]), and their intellect. To investigate these issues we need precision, not randomisation.
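For readers unfamiliar with the term, a minimal sketch of what “stratified randomisation” involves is below. The strata here are invented attainment bands; in practice the stratifying factor would be whatever characteristic – socio-economic group, prior attainment, gender – most needs to be kept in balance between the arms.

```python
# Illustrative only: allocate pupils to intervention or control at random, but
# separately within each stratum, so the stratifying factor cannot end up
# lopsided by chance.
import random

def stratified_randomise(pupils, seed=0):
    """pupils: list of (name, stratum) pairs; returns {name: arm}."""
    rng = random.Random(seed)
    strata = {}
    for name, stratum in pupils:
        strata.setdefault(stratum, []).append(name)

    allocation = {}
    for names in strata.values():
        rng.shuffle(names)
        half = len(names) // 2
        for name in names[:half]:
            allocation[name] = "intervention"
        for name in names[half:]:
            allocation[name] = "control"
    return allocation

year_group = [(f"pupil_{i:02d}", band)
              for i, band in enumerate(["lower"] * 20 + ["middle"] * 20 + ["upper"] * 20)]
arms = stratified_randomise(year_group)
print(sum(1 for arm in arms.values() if arm == "intervention"), "pupils in the intervention arm")
```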
The issue’s political dimensions are summarised in this quotation from the former Secretary of State, David Blunkett –
We welcome studies which combine large-scale, quantitative information, on effect sizes that will enable us to generalise.
Generalisation leads to policy – in this case, the commitment of the Labour and Conservative parties to the use of phonics as the main vehicle for teaching reading and spelling in infant schools, reinforced by the statistically significant, long-term gains in the Clackmannanshire study quoted above. The imposition of a new criterion for validating research was very useful to opponents of this approach, as it appeared to knock out the research that showed the strongest evidence in favour of phonics. As far as I can find, none of these opponents had organised randomised controlled trials themselves, but this did not limit their enthusiasm. Professors Torgerson and Brooks are not responsible for triggering this response, but no discussion of randomised controlled trials in the present context can ignore it. Perhaps the most important point of all, however, lies in the application of generalisations derived from statistical science to educational research. Statistical generalisations are derived from observed tendencies, and are not laws of nature. They need to be tested against established knowledge in whichever field they are applied to.
John Bald, independent consultant.
Author, Using Phonics to Teach Reading and Spelling (Sage, 2007).
[i] Torgerson, D.J. and Torgerson, C.J. (2008) Designing Randomised Trials in Health, Education and the Social Sciences: An Introduction. Palgrave Macmillan, p. viii.
[ii] Torgerson, D.J. and Torgerson, C.J. (2003) Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies 51: 36-45.
[iii] Torgerson, C.J. and Torgerson, D.J. (2003) The design and conduct of randomised controlled trials in education: Lessons from health care. Oxford Review of Education 29: 67-80.
[iv] Torgerson, C.J. and Torgerson, D.J. (2001) The need for randomised controlled trials in educational research. British Journal of Educational Studies 49(3): 316-328.
[v] Torgerson, C.J. and Torgerson, D.J. (2003) The design and conduct of randomised controlled trials in education: Lessons from health care. Oxford Review of Education 29: 67-80 (p. 68).
[vi] Johnston, R. and Watson, J. (2005) The Effects of Synthetic Phonics Teaching on Reading and Spelling Attainment: A Seven Year Longitudinal Study. Scottish Executive Central Research Unit.
[vii] Torgerson, C., Brooks, G. and Hall, J. (2006) A Systematic Review of the Research Literature on the Use of Phonics in the Teaching of Reading and Spelling. DfES Research Report RR711.
[viii] Torgerson and Torgerson (2008), p. 130.
[ix] Torgerson, D.J. and Torgerson, C.J. (2003) Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies 51: 36-45.
[x] Brooks, G. et al. (2006) Is an intervention using computer software effective in literacy learning? A randomised controlled trial. Educational Studies 32: 133-143.
[xi] Torgerson, D.J. and Torgerson, C.J. (2003) Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies 51: 36-45.
[xii] E.g. Chall, J. et al. (1990) The Reading Crisis: Why Poor Children Fall Behind. Harvard UP.
[xiii] Crain and York (1976), in Torgerson and Torgerson (2001).
[xiv] Torgerson, C.J. and Torgerson, D.J. (2003) The design and conduct of randomised controlled trials in education: Lessons from health care. Oxford Review of Education 29: 67-80.
[xv] Robinson, P. (1997) Literacy, Numeracy and Economic Performance (LSE), in Torgerson, C.J. and Torgerson, D.J. (2003) The design and conduct of randomised controlled trials in education: Lessons from health care. Oxford Review of Education 29: 67-80.
[xvi] Torgerson, D.J. and Torgerson, C.J. (2003) Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies 51: 36-45.
[xvii] See, for example, Wilkins, A. (2003) Reading Through Colour; Blakemore, S. and Frith, U. (2005) The Learning Brain; and Eliot, L. (2010) Pink Brain, Blue Brain.