Before you start - Research or evaluation?

There have been many attempts to clarify the main differences between research and evaluation, although the criteria involved remain far from clear-cut. As one commentator has observed:

'…Research and evaluation are not mutually exclusive binary oppositions, nor, in reality, are there differences between them. Their boundaries are permeable, similarities are often greater than differences and there is often overlap; indeed, evaluative research and applied research often bring the two together.'
Levin-Rozalis, 2003, cited in Cohen, Manion and Morrison, 2018, p. 81.

Both research and evaluation can be “beset with issues of politics”, and it is the reality of these politics (particularly in relation to funding and time pressures), as much as the social context in which they operate, which can “blur distinctions” between the two (Cohen et al., 2018, p. 83). Nevertheless, some important general differences have been identified, and these provide a useful point of reference when the methodological overlap between research and evaluation makes the distinctions more difficult to detect.

Watch our animated video below, which presents a conversation comparing evaluation and educational research. The resource highlights important distinctions when designing and conducting educational research and when preparing an EERP ethics application.   

Note: As this resource was produced as a podcast episode, please feel free to listen to it as audio only. For inclusivity and accessibility purposes, we have also included visual aids, closed captions and an automatically generated transcript; these features may be turned off if you prefer. Please select the format that best suits your needs or preferences. Drawing on content from EDU teaching materials (i.e. MEd course materials and the Teaching Toolkit), the video design was co-developed with students, and NotebookLM was used to create the podcast recording.

Educational evaluation vs educational research

The similarities and differences between evaluation and educational research (6’ 14”)

Please note: It can be difficult to generalise about whether a piece of work requires ethical approval, as this depends on the context, the purpose, and how the findings will be shared or published externally. If you are unsure, it is always best to seek advice from the EERP team.

The table below provides a summary of the main distinguishing features compiled by Cohen et al. (2018). Although the authors are quick to acknowledge that these features are not all “as rigidly separate” as might be suggested, the distinctions help to provide some conceptual clarity for practical purposes:

| Distinguishing feature | Research | Evaluation |
| --- | --- | --- |
| Agenda | Generally involves greater control (though often constrained by funding providers); researchers create and construct the field. | Works within a given brief / a set of “givens” – e.g. programme, field, participants, terms of reference and agenda, variables. |
| Audiences | Disseminated widely and publicly. | Often commissioned and becomes the property of the sponsors; not for the public domain. |
| Data sources and types | More focused body of evidence. | Has a wide field of coverage (e.g. costs, benefits, feasibility, justifiability, needs, value for money) – so tends to employ a wider and more eclectic range of evidence from an array of disciplines and sources than research. |
| Decision making | Used for macro decision making. | Used for micro decision making. |
| Focus | Concerned with how something works. | Concerned with how well something works. |
| Origins | From scholars working in a field. | Issued from/by stakeholders. |
| Outcome focus | May not prescribe or know its intended outcomes in advance. | Concerned with the achievement of intended outcomes. |
| Ownership of data | Intellectual property held by the researcher. | Cedes ownership to the sponsor upon completion. |
| Participants | Less (or no) focus on stakeholders. | Focuses almost exclusively on stakeholders. |
| Politics of the situation | Provides information for others to use. | May be unable to stand outside the politics of the purposes and uses of (or participants in) an evaluation. |
| Purposes | Contributes to knowledge in the field, regardless of its practical application; provides empirical information – i.e. “what is”. Conducted to gain, expand and extend knowledge; to generate theory, “discover” and predict what will happen. | Designed to use the information/facts to judge the worth, merit, value, efficacy, impact and effectiveness of something – i.e. “what is valuable”. Conducted to assess performance and to provide feedback; to inform policy making and “uncover”; the concern is with what has happened or is happening. |
| Relevance | Can have wide boundaries (e.g. to generalise to a wider community); can be prompted by interest rather than relevance. | Relevance to the programme or what is being evaluated is a prime feature; has to take particular account of timeliness and particularity. |
| Reporting | May include stakeholders/commissioners of research – but may also report more widely (e.g. in publications). | Reports to stakeholders and commissioners of research. |
| Scope | Often (though not always) seeks to generalise (external validity) and may not include evaluation. | Concerned with the particular – e.g. a focus only on specific programmes; seeks to ensure internal validity and often has a more limited scope. |
| Stance | Active and proactive. | Reactive. |
| Standards for judging quality | Judgements are made by peers; standards include validity, reliability, accuracy, causality, generalisability and rigour. | Judgements are made by stakeholders; standards also include utility, feasibility, involvement of stakeholders, side effects, efficacy and fitness for purpose. |
| Status | An end in itself. | A means to an end. |
| Time frames | Often ongoing and less time bound, although this is not the case with funded research. | Begins at the start of a project and finishes at its end. |
| Use of results | Designed to demonstrate or prove. Provides the basis for drawing conclusions, and information on which others might or might not act – i.e. it does not prescribe. Based in social science theory – i.e. is “theory dependent”. | Designed to improve. Provides the basis for decision making; might be used to increase or withhold resources or to change practice. |
| Use of theory | Creates the research findings. | Not necessary to base in theory; is “field dependent” – i.e. derived from the participants, the project and stakeholders. May (or may not) use research findings. |

To reiterate: for every feature given above it may be possible to identify an “exception to the rule”, but together these features provide a set of guiding principles that can help to ensure that the issue for investigation is addressed in the appropriate way from the outset. As Savin-Baden and Howell Major (2013) have observed:

 '…Evaluation can be used as both a form of research as well as an evaluation procedure, and it is important to decide on which of these it is before the researcher proceeds.'

References and further reading

Cohen, L., Manion, L. & Morrison, K. (2018), Chapter 5 – “Evaluation and research”. In Cohen, L., Manion, L. & Morrison, K. (eds), Research Methods in Education (Abingdon, Routledge, 8th edn), pp. 79-86.

Savin-Baden, M. and Howell Major, C. (2013), Chapter 18 – “Evaluation”. In Savin-Baden, M. and Howell Major, C. (eds), Qualitative Research: The Essential Guide to Theory and Practice (Abingdon, Routledge), pp. 273-287.

Wall, D. (2010), “Evaluation: Improving practice, influencing policy”. In Swanwick, T. (ed.), Understanding Medical Education: Evidence, Theory and Practice (Association for the Study of Medical Education).

Educational research and evaluation overview [pdf] - a handout produced by the Educational Development Unit for those thinking about educational research.