Taking a complexity perspective.
The first paper in this series 17 outlines aspects of complexity associated with complex interventions and health systems that can potentially be explored by different types of evidence, including syntheses of quantitative and qualitative evidence. Petticrew et al 17 distinguish between a complex interventions perspective and a complex systems perspective. A complex interventions perspective defines interventions as having “implicit conceptual boundaries, representing a flexible, but common set of practices, often linked by an explicit or implicit theory about how they work”. A complex systems perspective differs in that “complexity arises from the relationships and interactions between a system’s agents (eg, people, or groups that interact with each other and their environment), and its context. A system perspective conceives the intervention as being part of the system, and emphasises changes and interconnections within the system itself”. Aspects of complexity associated with implementation of complex interventions in health systems that could potentially be addressed with a synthesis of quantitative and qualitative evidence are summarised in table 2. Another paper in the series outlines criteria used in a new evidence to decision framework for making decisions about complex interventions implemented in complex systems, against which the need for quantitative and qualitative evidence can be mapped. 16 A further paper, 18 which explores how context is dealt with in guidelines and reviews taking a complexity perspective, also recommends using both quantitative and qualitative evidence to better understand context as a source of complexity. Mixed-method syntheses of quantitative and qualitative evidence can also help with understanding whether there has been theory failure and/or implementation failure.
The Cochrane Qualitative and Implementation Methods Group provide additional guidance on exploring implementation and theory failure that can be adapted to address aspects of complexity of complex interventions when implemented in health systems. 19
Health-system complexity-related questions that a synthesis of quantitative and qualitative evidence could address (derived from Petticrew et al 17 )
Aspect of complexity of interest | Examples of potential research question(s) that a synthesis of qualitative and quantitative evidence could address | Types of studies or data that could contribute to a review of qualitative and quantitative evidence |
What ‘is’ the system? How can it be described? | What are the main influences on the health problem? How are they created and maintained? How do these influences interconnect? Where might one intervene in the system? | Quantitative: previous systematic reviews of the causes of the problem; epidemiological studies (eg, cohort studies examining risk factors of obesity); network analysis studies showing the nature of social and other systems Qualitative: theoretical papers; policy documents |
Interactions of interventions with context and adaptation | How does the intervention interact with its context, and how is it adapted? | Qualitative: qualitative studies; case studies Quantitative: trials or other effectiveness studies from different contexts; multicentre trials, with stratified reporting of findings; other quantitative studies that provide evidence of moderating effects of context |
System adaptivity (how does the system change?) | (How) does the system change when the intervention is introduced? Which aspects of the system are affected? Does this potentiate or dampen its effects? | Quantitative: longitudinal data; possibly historical data; effectiveness studies providing evidence of differential effects across different contexts; system modelling (eg, agent-based modelling) Qualitative: qualitative studies; case studies |
Emergent properties | What are the effects (anticipated and unanticipated) which follow from this system change? | Quantitative: prospective quantitative evaluations; retrospective studies (eg, case–control studies, surveys) may also help identify less common effects; dose–response evaluations of impacts at aggregate level in individual studies or across studies included within systematic reviews (see suggested examples) Qualitative: qualitative studies |
Positive (reinforcing) and negative (balancing) feedback loops | What explains change in the effectiveness of the intervention over time? Are the effects of an intervention dampened or suppressed by other aspects of the system (eg, contextual influences)? | Quantitative: studies of moderators of effectiveness; long-term longitudinal studies Qualitative: studies of factors that enable or inhibit implementation of interventions |
Multiple (health and non-health) outcomes | What changes in processes and outcomes follow the introduction of this system change? At what levels in the system are they experienced? | Quantitative: studies tracking change in the system over time Qualitative: studies exploring effects of the change in individuals, families, communities (including equity considerations and factors that affect engagement and participation in change) |
It may not be apparent which aspects of complexity or which elements of the complex intervention or health system can be explored in a guideline process, or whether combining qualitative and quantitative evidence in a mixed-method synthesis will be useful, until the available evidence is scoped and mapped. 17 20 A more extensive lead-in phase is typically required to scope the available evidence, engage with stakeholders and refine the review parameters and questions, which can then be mapped against potential review designs and methods of synthesis. 20 At the scoping stage, it is also common to decide on a theoretical perspective 21 or undertake further work to refine a theoretical perspective. 22 This is also the stage at which to begin articulating the programme theory of the complex intervention, which may be further developed to refine an understanding of complexity and show how the intervention is implemented in, and impacts on, the wider health system. 17 23 24 In practice, this process can be lengthy, iterative and fluid, with multiple revisions to the review scope, often developing and adapting a logic model 17 as the available evidence becomes known and the potential to incorporate different types of review designs and syntheses of quantitative and qualitative evidence becomes better understood. 25 Further questions, propositions or hypotheses may emerge as the reviews progress; the protocols therefore generally need to be developed iteratively over time rather than a priori.
Following a scoping exercise and definition of key questions, the next step in the guideline development process is to identify existing, or commission new, systematic reviews to locate and summarise the best available evidence in relation to each question. For example, case study 2, ‘Optimising health worker roles for maternal and newborn health through task shifting’, included quantitative reviews that did and did not take an additional complexity perspective, and qualitative evidence syntheses that were able to explain how specific elements of complexity impacted on intervention outcomes within the wider health system. Further understanding of health system complexity was facilitated through the conduct of additional country-level case studies that contributed to an overall understanding of what worked and what happened when lay health worker interventions were implemented. See table 1, online supplementary file 2.
There are a few existing examples, which we draw on in this paper, but integrating quantitative and qualitative evidence in a mixed-method synthesis is relatively uncommon in a guideline process. Box 2 includes a set of key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in a mixed-methods design might ask. Subsequent sections provide more information and signposting to further reading to help address these key questions.
Compound questions requiring both quantitative and qualitative evidence?
Questions requiring mixed-methods studies?
Separate quantitative and qualitative questions?
Separate quantitative and qualitative research studies?
Related quantitative and qualitative research studies?
Mixed-methods studies?
Quantitative unpublished data and/or qualitative unpublished data, eg, narrative survey data?
Throughout the review?
Following separate reviews?
At the question point?
At the synthesis point?
At the evidence to recommendations stage?
Or a combination?
Narrative synthesis or summary?
Quantitising approach, eg, frequency analysis?
Qualitising approach, eg, thematic synthesis?
Tabulation?
Logic model?
Conceptual model/framework?
Graphical approach?
Petticrew et al 17 define the different aspects of complexity and examples of complexity-related questions that can potentially be explored in guidelines and systematic reviews taking a complexity perspective. Relevant aspects of complexity outlined by Petticrew et al 17 are summarised in table 2 below, together with the corresponding questions that could be addressed in a synthesis combining qualitative and quantitative evidence. Importantly, the aspects of complexity and their associated concepts of interest have, however, yet to be fully translated into primary health research or systematic reviews. There are few known examples where selected complexity concepts have been used to analyse or reanalyse a primary intervention study. Most notable is Chandler et al, 26 who specifically set out to identify and translate a set of relevant complexity theory concepts for application in health systems research. Chandler et al then reanalysed a trial process evaluation using selected complexity theory concepts to better understand the complex causal pathway in the health system that explains some aspects of complexity in table 2.
Rehfuess et al 16 also recommend upfront consideration of the WHO-INTEGRATE evidence to decision criteria when planning a guideline and formulating questions. The criteria reflect WHO norms and values and take account of a complexity perspective. The framework can be used by guideline development groups as a menu to decide which criteria to prioritise, and which study types and synthesis methods can be used to collect evidence for each criterion. Many of the criteria and their related questions can be addressed using a synthesis of quantitative and qualitative evidence: the balance of benefits and harms, human rights and sociocultural acceptability, health equity, societal implications and feasibility (see table 3). Similar aspects in the DECIDE framework 15 could also be addressed using a synthesis of qualitative and quantitative evidence.
WHO-INTEGRATE evidence to decision framework criteria, example questions and types of studies that could potentially address these questions (derived from Rehfuess et al 16 )
Domains of the WHO-INTEGRATE EtD framework | Examples of potential research question(s) that a synthesis of qualitative and/or quantitative evidence could address | Types of studies that could contribute to a review of qualitative and quantitative evidence |
Balance of benefits and harms | To what extent do patients/beneficiaries value different health outcomes? | Qualitative: studies of views and experiences Quantitative: questionnaire surveys |
Human rights and sociocultural acceptability | Is the intervention acceptable to patients/beneficiaries as well as to those implementing it? To what extent do patients/beneficiaries value different non-health outcomes? How does the intervention affect an individual’s, population group’s or organisation’s autonomy, that is, their ability to make a competent, informed and voluntary decision? | Qualitative: discourse analysis, qualitative studies (ideally longitudinal to examine changes over time) Quantitative: pro et contra analysis, discrete choice experiments, longitudinal quantitative studies (to examine changes over time), cross-sectional studies Mixed-method studies; case studies |
Health equity, equality and non-discrimination | How affordable is the intervention for individuals, households or communities? How accessible—in terms of physical as well as informational access—is the intervention across different population groups? | Qualitative: studies of views and experiences Quantitative: cross-sectional or longitudinal observational studies, discrete choice experiments, health expenditure studies; health system barrier studies, ethical analysis, GIS-based studies |
Societal implications | What is the social impact of the intervention: are there features of the intervention that increase or reduce stigma and that lead to social consequences? Does the intervention enhance or limit social goals, such as education, social cohesion and the attainment of various human rights beyond health? Does it change social norms at individual or population level? What is the environmental impact of the intervention? Does it contribute to or limit the achievement of goals to protect the environment and efforts to mitigate or adapt to climate change? | Qualitative: studies of views and experiences Quantitative: RCTs, quasi-experimental studies, comparative observational studies, longitudinal implementation studies, case studies, power analyses, environmental impact assessments, modelling studies |
Feasibility and health system considerations | Are there any legal or regulatory barriers that impact on implementation of the intervention? How might political factors, such as past decisions and strategic considerations, positively or negatively impact the implementation of the intervention? How does the intervention fit within the existing health system? Is it likely to fit well or poorly, and is it likely to impact on the system in positive or negative ways? How does the intervention interact with the need for and usage of the existing health workforce, at national and subnational levels? How does the intervention interact with the need for and usage of the existing health infrastructure as well as other relevant infrastructure, at national and subnational levels? | Non-research: policy and regulatory frameworks Qualitative: studies of views and experiences Mixed-method: health systems research, situation analysis, case studies Quantitative: cross-sectional studies |
GIS, Geographical Information System; RCT, randomised controlled trial.
Questions can serve as an ‘anchor’ by articulating the specific aspects of complexity to be explored (eg, Is successful implementation of the intervention context dependent?). 27 Anchor questions such as “How does intervention x impact on socioeconomic inequalities in health behaviour/outcome x?” are the kind of health system question that requires a synthesis of both quantitative and qualitative evidence and hence a mixed-method synthesis. Quantitative evidence can quantify the difference in effect, but does not answer the question of how. The ‘how’ question can be partly answered with quantitative and partly with qualitative evidence. For example, quantitative evidence may reveal where socioeconomic status and inequality emerge in the health system (an emergent property) by exploring questions such as “Does patterning emerge during uptake because fewer people from certain groups come into contact with an intervention in the first place?” or “Are people from certain backgrounds more likely to drop out, or to maintain effects beyond an intervention differently?” Qualitative evidence may help understand the reasons behind all of these mechanisms. Alternatively, questions can act as ‘compasses’, where a question sets out a starting point from which to explore further and to potentially ask further questions or develop propositions or hypotheses to explore through a complexity perspective (eg, What factors enhance or hinder implementation?). 27 Other papers in this series provide further guidance on developing questions for qualitative evidence syntheses and on question formulation. 14 28
For anchor and compass questions, additional application of a theory (eg, complexity theory) can help focus evidence synthesis and presentation to explore and explain complexity issues. 17 21 Development of a review specific logic model(s) can help to further refine an initial understanding of any complexity-related issues of interest associated with a specific intervention, and if appropriate the health system or section of the health system within which to contextualise the review question and analyse data. 17 23–25 Specific tools are available to help clarify context and complex interventions. 17 18
Even if a complexity perspective and certain criteria within evidence to decision frameworks are deemed relevant and desirable by guideline developers, it is only possible to pursue a complexity perspective if the evidence is available. Careful scoping using knowledge maps or scoping reviews will help inform development of questions that are answerable with the available evidence. 20 If evidence of effect is not available, then a different approach will be required to develop questions leading to a more general narrative understanding of what happened when complex interventions were implemented in a health system (such as in case study 3—risk communication guideline). This does not mean that the original questions, for which no evidence was found when scoping the literature, were unimportant. An important function of creating a knowledge map is also to identify gaps to inform a future research agenda.
Table 2 and online supplementary files 1–3 outline examples of questions in the three case studies, which were all ‘compass’ questions for the qualitative evidence syntheses.
The shift towards integration of qualitative and quantitative evidence in primary research has, in recent years, begun to be mirrored within research synthesis. 29–31 The natural extension to undertaking quantitative or qualitative reviews has been the development of methods for integrating qualitative and quantitative evidence within reviews, and within the guideline process using evidence to decision-frameworks. Advocating the integration of quantitative and qualitative evidence assumes a complementarity between research methodologies, and a need for both types of evidence to inform policy and practice. Below, we briefly outline the current designs for integrating qualitative and quantitative evidence within a mixed-method review or synthesis.
One of the early approaches to integrating qualitative and quantitative evidence detailed by Sandelowski et al 32 advocated three basic review designs: segregated, integrated and contingent designs, which have been further developed by Heyvaert et al 33 ( box 3 ).
Segregated design.
A conventional separation is maintained between quantitative and qualitative approaches, based on the assumption that they are different entities, can be distinguished from each other and should be treated separately, and that their findings warrant separate analyses and syntheses. Ultimately, the separate synthesis results can themselves be synthesised.
Integrated design.
The methodological differences between qualitative and quantitative studies are minimised, as both are viewed as producing findings that can readily be synthesised into one another because they address the same research purposes and questions. Transformation involves turning either qualitative data into a quantitative form (quantitising) or quantitative findings into a qualitative form (qualitising) to facilitate their integration.
Contingent design.
Takes a cyclical approach to synthesis, with the findings from one synthesis informing the focus of the next, until all the research objectives have been addressed. Studies are not necessarily grouped and categorised as qualitative or quantitative.
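As a purely hypothetical illustration of the quantitising transformation described above, the sketch below counts how often each qualitative theme is reported across a set of included studies, turning coded themes into frequencies that can sit alongside quantitative results (the study labels and themes are invented for illustration):

```python
from collections import Counter

def quantitise_themes(coded_studies):
    """Count how many included studies report each qualitative theme.

    coded_studies: dict mapping a study label to the themes coded in
    that study. Returns a Counter of theme -> number of studies.
    """
    counts = Counter()
    for themes in coded_studies.values():
        counts.update(set(themes))  # count each theme once per study
    return counts

# Hypothetical coding of three qualitative studies
coded = {
    "Study A": {"access barriers", "trust in providers"},
    "Study B": {"access barriers"},
    "Study C": {"trust in providers", "access barriers", "cost"},
}

frequencies = quantitise_themes(coded)
print(frequencies["access barriers"])  # reported in all three studies
```

The resulting frequencies are one simple way of expressing qualitative findings in a form that can be tabulated next to effect estimates; real quantitising in a review would follow an agreed coding frame rather than raw theme labels.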
A recent review of more than 400 systematic reviews 34 combining quantitative and qualitative evidence identified two main synthesis designs—convergent and sequential. In a convergent design, qualitative and quantitative evidence is collated and analysed in a parallel or complementary manner, whereas in a sequential synthesis, the collation and analysis of quantitative and qualitative evidence takes place in a sequence with one synthesis informing the other ( box 4 ). 6 These designs can be seen to build on the work of Sandelowski et al , 32 35 particularly in relation to the transformation of data from qualitative to quantitative (and vice versa) and the sequential synthesis design, with a cyclical approach to reviewing that evokes Sandelowski’s contingent design.
Convergent synthesis design.
Qualitative and quantitative research is collected and analysed at the same time in a parallel or complementary manner. Integration can occur at three points:
a. Data-based convergent synthesis design
All included studies are analysed using the same methods and the results are presented together. As only one synthesis method is used, data transformation occurs (data are qualitised or quantitised). Usually addresses one review question.
b. Results-based convergent synthesis design
Qualitative and quantitative data are analysed and presented separately but integrated using a further synthesis method, eg, narratively or by using tables, matrices or reanalysis of evidence. The results of both syntheses are combined in a third synthesis. Usually addresses an overall review question with subquestions.
c. Parallel-results convergent synthesis design
Qualitative and quantitative data are analysed and presented separately with integration occurring in the interpretation of results in the discussion section. Usually addresses two or more complementary review questions.
Sequential synthesis design.
In a two-phase approach, data collection and analysis of one type of evidence (eg, qualitative) occur after, and are informed by, the collection and analysis of the other type (eg, quantitative). Usually addresses an overall question with subquestions, with both syntheses complementing each other.
The three case studies ( table 1 , online supplementary files 1–3 ) illustrate the diverse combination of review designs and synthesis methods that were considered the most appropriate for specific guidelines.
In this section, we draw on examples where specific review designs and methods have been or can be used to explore selected aspects of complexity in guidelines or systematic reviews. We also identify other review methods that could potentially be used to explore aspects of complexity. Of particular note, we could not find any specific examples of systematic methods to synthesise highly diverse research designs as advocated by Petticrew et al 17 and summarised in tables 2 and 3 . For example, we could not find examples of methods to synthesise qualitative studies, case studies, quantitative longitudinal data, possibly historical data, effectiveness studies providing evidence of differential effects across different contexts, and system modelling studies (eg, agent-based modelling) to explore system adaptivity.
There are different ways that quantitative and qualitative evidence can be integrated into a review and then into a guideline development process. In practice, some methods enable integration of different types of evidence in a single synthesis, while in other methods, the single systematic review may include a series of stand-alone reviews or syntheses that are then combined in a cross-study synthesis. Table 1 provides an overview of the characteristics of different review designs and methods and guidance on their applicability for a guideline process. Designs and methods that have already been used in WHO guideline development are described in part A of the table. Part B outlines a design and method that can be used in a guideline process, and part C covers those that have the potential to integrate quantitative, qualitative and mixed-method evidence in a single review design (such as meta-narrative reviews and Bayesian syntheses), but their application in a guideline context has yet to be demonstrated.
Depending on the review design (see boxes 3 and 4 ), integration can potentially take place at a review team and design level, and more commonly at several key points of the review or guideline process. The following sections outline potential points of integration and associated practical considerations when integrating quantitative and qualitative evidence in guideline development.
In a guideline process, it is common for syntheses of quantitative and qualitative evidence to be done separately by different teams and then to integrate the evidence. A practical consideration relates to the organisation, composition and expertise of the review teams and ways of working. If the quantitative and qualitative reviews are being conducted separately and then brought together by the same team members, who are equally comfortable operating within both paradigms, then a consistent approach across both paradigms becomes possible. If, however, a team is being split between the quantitative and qualitative reviews, then the strengths of specialisation can be harnessed, for example, in quality assessment or synthesis. Optimally, at least one, if not more, of the team members should be involved in both quantitative and qualitative reviews to offer the possibility of making connections throughout the review and not simply at pre-agreed junctures. This mirrors O’Cathain’s conclusion that mixed-methods primary research tends to work only when there is a principal investigator who values and is able to oversee integration. 9 10 While the above decisions have been articulated in the context of two types of evidence, variously quantitative and qualitative, they equally apply when considering how to handle studies reporting a mixed-method study design, where data are usually disaggregated into quantitative and qualitative for the purposes of synthesis (see case study 3—risk communication in humanitarian disasters).
Clearly specified key question(s), derived from a scoping or consultation exercise, will make it clear if quantitative and qualitative evidence is required in a guideline development process and which aspects will be addressed by which types of evidence. For the remaining stages of the process, as documented below, a review team faces challenges as to whether to handle each type of evidence separately, regardless of whether sequentially or in parallel, with a view to joining the two products on completion or to attempt integration throughout the review process. In each case, the underlying choice is of efficiencies and potential comparability vs sensitivity to the underlying paradigm.
Once key questions are clearly defined, the guideline development group typically needs to consider whether to conduct a single sensitive search to address all potential subtopics (lumping) or whether to conduct specific searches for each subtopic (splitting). 36 A related consideration is whether to search separately for qualitative, quantitative and mixed-method evidence ‘streams’ or whether to conduct a single search and then identify specific study types at the subsequent sifting stage. These two considerations often mean a trade-off between a single search process involving very large numbers of records or a more protracted search process retrieving smaller numbers of records. Both approaches have advantages, and the choice may depend on the respective availability of resources for searching and sifting.
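Where separate evidence streams are searched, the retrieved record sets eventually have to be combined and deduplicated before sifting. The sketch below is a minimal, hypothetical illustration of that merging step; the record structure and identifiers are invented for illustration:

```python
def merge_search_streams(*streams):
    """Merge records from separate search streams, dropping duplicates.

    Each stream is a list of (record_id, title) tuples; a record
    retrieved by more than one stream is kept only once.
    """
    seen = set()
    merged = []
    for stream in streams:
        for record_id, title in stream:
            if record_id not in seen:
                seen.add(record_id)
                merged.append((record_id, title))
    return merged

# Hypothetical hits from separate qualitative and quantitative searches
qualitative_hits = [(101, "Views on task shifting"), (102, "Lay worker experiences")]
quantitative_hits = [(102, "Lay worker experiences"), (203, "Task-shifting trial")]

merged = merge_search_streams(qualitative_hits, quantitative_hits)
print(len(merged))  # 3 unique records across both streams
```

In practice, reference management software performs this step, but the overlap between streams illustrates why a single broad search and split searches retrieve different total volumes of records to sift.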
Closely related to decisions around searching are considerations relating to screening and selecting studies for inclusion in a systematic review. An important consideration here is whether the review team will screen records for all review types, regardless of their subsequent involvement (‘altruistic sifting’), or specialise in screening for the study type with which they are most familiar. The risk of missing relevant reports might be minimised by whole team screening for empirical reports in the first instance and then coding them for a specific quantitative, qualitative or mixed-methods report at a subsequent stage.
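The two-stage approach just described, screening all empirical reports first and then coding each as quantitative, qualitative or mixed-methods, can be sketched as follows. The keyword rules here are invented placeholders for whatever coding criteria a review team agrees; real screening relies on human judgement, not string matching:

```python
def code_study_type(abstract):
    """Crude illustrative coding of a report by study-type keywords."""
    text = abstract.lower()
    has_qual = any(k in text for k in ("interview", "focus group", "thematic"))
    has_quant = any(k in text for k in ("randomised", "trial", "survey", "cohort"))
    if has_qual and has_quant:
        return "mixed-methods"
    if has_qual:
        return "qualitative"
    if has_quant:
        return "quantitative"
    return "unclear"

print(code_study_type("A randomised trial with nested interviews"))  # mixed-methods
print(code_study_type("Thematic analysis of focus groups"))          # qualitative
```

A coding step of this kind, however it is operationalised, lets the whole team sift for empirical reports in the first instance while deferring specialisation to the later, type-specific stage.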
Within a guideline process, review teams may be more limited in their choice of instruments to assess methodological limitations of primary studies, as there are mandatory requirements to use the Cochrane risk of bias tool 37 to feed into Grading of Recommendations Assessment, Development and Evaluation (GRADE) 38 or to select from a small pool of qualitative appraisal instruments in order to apply the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative Research) approach 39 to assess the overall certainty or confidence in findings. The Cochrane Qualitative and Implementation Methods Group has recently issued guidance on the selection of appraisal instruments and core assessment criteria. 40 The Mixed-Methods Appraisal Tool, which is currently undergoing further development, offers a single quality assessment instrument for quantitative, qualitative and mixed-methods studies. 41 Other options include using corresponding instruments from within the same ‘stable’, for example, using different Critical Appraisal Skills Programme instruments. 42 While using instruments developed by the same team or organisation may achieve a degree of epistemological consonance, benefits may come more from consistency of approach and reporting rather than from a shared view of quality. Alternatively, a more paradigm-sensitive approach would involve selecting the best instrument for each respective review while deferring challenges from later heterogeneity of reporting.
The way in which data and evidence are extracted from primary research studies for review will be influenced by the type of integrated synthesis being undertaken and the review purpose. Initially, decisions need to be made regarding the nature and type of data and evidence that are to be extracted from the included studies. Method-specific reporting guidelines 43 44 provide a good template as to what quantitative and qualitative data it is potentially possible to extract from different types of method-specific study reports, although in practice reporting quality varies. Online supplementary file 5 provides a hypothetical example of the different types of studies from which quantitative and qualitative evidence could potentially be extracted for synthesis.
The decisions around what data or evidence to extract will be guided by how ‘integrated’ the mixed-method review will be. For those reviews where the quantitative and qualitative findings of studies are synthesised separately and integrated at the point of findings (eg, segregated or contingent approaches or sequential synthesis design), separate data extraction approaches will likely be used.
Where integration occurs during the process of the review (eg, integrated approach or convergent synthesis design), an integrated approach to data extraction may be considered, depending on the purpose of the review. This may involve the use of a data extraction framework, the choice of which needs to be congruent with the approach to synthesis chosen for the review. 40 45 The integrative or theoretical framework may be decided on a priori if a pre-developed theoretical or conceptual framework is available in the literature. 27 The development of a framework may alternatively arise from the reading of the included studies, in relation to the purpose of the review, early in the process. The Cochrane Qualitative and Implementation Methods Group provide further guidance on extraction of qualitative data, including use of software. 40
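An a priori extraction framework of the kind described above can be pictured as a simple mapping from agreed framework categories to the findings extracted from each study. The sketch below is hypothetical (the categories and findings are invented); it also shows how findings that fit no category can be set aside to prompt revision of the framework, consistent with frameworks being developed iteratively:

```python
def extract_to_framework(framework, study_findings):
    """Place extracted findings under a priori framework categories.

    framework: list of category names agreed in advance.
    study_findings: list of (category, finding) tuples extracted from studies.
    Findings that fit no category are collected under 'uncategorised'
    so they can prompt revision of the framework.
    """
    table = {category: [] for category in framework}
    table["uncategorised"] = []
    for category, finding in study_findings:
        table.get(category, table["uncategorised"]).append(finding)
    return table

# Hypothetical framework and extracted findings
framework = ["acceptability", "feasibility", "equity"]
findings = [
    ("acceptability", "Women valued continuity of carer"),
    ("equity", "Travel costs deterred rural attendance"),
    ("workforce", "Staff shortages limited coverage"),  # not in the framework
]

table = extract_to_framework(framework, findings)
print(len(table["uncategorised"]))  # 1 finding prompts framework revision
```

Whether such a framework is fixed a priori or allowed to grow from the included studies is exactly the design decision discussed in the paragraph above.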
Relatively few synthesis methods start off being integrated from the beginning, and these methods have generally been subject to less testing and evaluation particularly in a guideline context (see table 1 ). A review design that started off being integrated from the beginning may be suitable for some guideline contexts (such as in case study 3—risk communication in humanitarian disasters—where there was little evidence of effect), but in general if there are sufficient trials then a separate systematic review and meta-analysis will be required for a guideline. Other papers in this series offer guidance on methods for synthesising quantitative 46 and qualitative evidence 14 in reviews that take a complexity perspective. Further guidance on integrating quantitative and qualitative evidence in a systematic review is provided by the Cochrane Qualitative and Implementation Methods Group. 19 27 29 40 47
It is highly likely, unless there are well-designed process evaluations, that the primary studies will not themselves address the complexity-related questions required for a guideline process. In that case, review authors will need to configure the available evidence and transform it through the synthesis process to produce explanations, propositions and hypotheses (ie, findings) that were not obvious at the primary study level. It is important that guideline commissioners, developers and review authors are aware that specific methods are intended to produce a type of finding with a specific purpose (such as developing new theory in the case of meta-ethnography). 48 Case study 1 (antenatal care guideline) provides an example of how a meta-ethnography was used to develop a new theory as an end product, 48 49 as well as a framework synthesis that produced descriptive and explanatory findings which were more easily incorporated into the guideline process. 27 The definitions ( box 5 ) may be helpful when distinguishing the different types of findings.
Descriptive findings —qualitative evidence-driven translated descriptive themes that do not move beyond the primary studies.
Explanatory findings —may either be at a descriptive or theoretical level. At the descriptive level, qualitative evidence is used to explain phenomena observed in quantitative results, such as why implementation failed in specific circumstances. At the theoretical level, the transformed and interpreted findings that go beyond the primary studies can be used to explain the descriptive findings. The latter description is generally the accepted definition in the wider qualitative community.
Hypothetical or theoretical findings —qualitative evidence-driven transformed themes (or lines of argument) that go beyond the primary studies. Although similar, Thomas and Harden 56 distinguish between the purposes of two types of theoretical findings: analytical themes and third-order interpretations, the product of meta-ethnographies. 48
Analytical themes are a product of interrogating descriptive themes by placing the synthesis within an external theoretical framework (such as the review question and subquestions) and are considered more appropriate when a specific review question is being addressed (eg, in a guideline or to inform policy). 56
Third-order interpretations come from translating studies into one another while preserving the original context and are more appropriate when a body of literature is being explored in and of itself with broader or emergent review questions. 48
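The typology in box 5 can be summarised as a small lookup structure. This is a sketch only: the type labels follow the definitions above, but the helper function is invented for illustration, and for simplicity it treats only analytical themes and third-order interpretations as transformed findings (explanatory findings can sit at either level, as the box notes).

```python
# Sketch of the box 5 typology; the helper and the simplification about
# explanatory findings are assumptions made for illustration.
FINDING_TYPES = {
    "descriptive": "Translated descriptive themes that stay at the level of the primary studies",
    "explanatory": "Explain observed phenomena, at a descriptive or theoretical level",
    "analytical_theme": "Descriptive themes interrogated against an external framework, eg, the review question",
    "third_order": "Meta-ethnographic interpretation from translating studies into one another",
}

# Which types transform the evidence beyond what the primary studies reported?
# (Simplification: explanatory findings at the theoretical level would also qualify.)
GOES_BEYOND_PRIMARY = {"analytical_theme", "third_order"}

def classify(finding_type: str) -> dict:
    """Return the definition and transformation status for a finding type."""
    if finding_type not in FINDING_TYPES:
        raise ValueError(f"Unknown finding type: {finding_type}")
    return {
        "definition": FINDING_TYPES[finding_type],
        "transformed": finding_type in GOES_BEYOND_PRIMARY,
    }

# Analytical themes suit specific review questions (eg, in a guideline);
# third-order interpretations suit broader, emergent questions.
print(classify("descriptive")["transformed"])  # False
print(classify("third_order")["transformed"])  # True
```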
A critical element of guideline development is the formulation of recommendations by the Guideline Development Group, and EtD frameworks help to facilitate this process. 16 The EtD framework can also be used as a mechanism to integrate and display quantitative and qualitative evidence and findings mapped against the EtD framework domains with hyperlinks to more detailed evidence summaries from contributing reviews (see table 1 ). It is commonly the EtD framework that enables the findings of the separate quantitative and qualitative reviews to be brought together in a guideline process. Specific challenges when populating the DECIDE evidence to decision framework 15 were noted in case study 3 (risk communication in humanitarian disasters) as there was an absence of intervention effect data and the interventions to communicate public health risks were context specific and varied. These problems would not, however, have been addressed by substitution of the DECIDE framework with the new INTEGRATE 16 evidence to decision framework. A different type of EtD framework needs to be developed for reviews that do not include sufficient evidence of intervention effect.
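The use of an EtD framework as an integration and display mechanism can be sketched as a simple table mapping findings to domains, each with a hyperlink back to the contributing review's evidence summary. The domain names below are a simplified, hypothetical stand-in, not the DECIDE or WHO-INTEGRATE domain list, and the example findings and links are invented.

```python
# Hypothetical sketch: domain names simplified, findings and links invented.
ETD_DOMAINS = ["benefits_and_harms", "values", "acceptability", "feasibility", "equity"]

etd_table = {domain: [] for domain in ETD_DOMAINS}

def map_finding(domain, finding, source_review, summary_link):
    """Display a finding against an EtD domain, with a hyperlink back to the
    detailed evidence summary in the contributing review."""
    if domain not in etd_table:
        raise KeyError(f"Unknown EtD domain: {domain}")
    etd_table[domain].append(
        {"finding": finding, "review": source_review, "summary": summary_link}
    )

# Findings from the separate quantitative and qualitative reviews are
# brought together domain by domain.
map_finding("benefits_and_harms",
            "Intervention reduced missed appointments (moderate certainty)",
            "effectiveness review", "summaries/effects.html#a1")
map_finding("acceptability",
            "Recipients found reminders intrusive in some settings (CERQual: low confidence)",
            "qualitative review", "summaries/qual.html#f3")

print(sum(len(v) for v in etd_table.values()))  # prints 2
```

The point of the structure is simply that the panel sees both evidence types side by side within each decision domain, while the hyperlinks preserve the audit trail back to each method-specific review.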
Mixed-method review and synthesis methods are generally the least developed of all systematic review methods, and it is acknowledged that methods for combining quantitative and qualitative evidence are generally poorly articulated. 29 50 There are, however, some fairly well-established methods for using qualitative evidence to explore aspects of complexity (such as contextual, implementation and outcome complexity), which can be combined with evidence of effect (see sections A and B of table 1 ). 14 There are good examples of systematic reviews that use these methods to combine quantitative and qualitative evidence, and of guideline recommendations informed by evidence from both quantitative and qualitative reviews (eg, case studies 1–3). With the exception of case study 3 (risk communication), the quantitative and qualitative reviews for these guidelines were conducted separately, and the findings subsequently brought together in an EtD framework to inform recommendations.
Other mixed-method review designs have the potential to contribute to understanding of complex interventions and to explore aspects of wider health systems complexity, but have not been sufficiently developed and tested for this specific purpose, or used in a guideline process (section C of table 1 ). Some methods, such as meta-narrative reviews, also explore different questions from those usually asked in a guideline process. Methods for processing (eg, quality appraisal) and synthesising the highly diverse evidence suggested in tables 2 and 3, which is required to explore specific aspects of health systems complexity (such as system adaptivity) and to populate some sections of the INTEGRATE EtD framework, remain underdeveloped or in need of development.
In addition to the methodological development mentioned above, there is no GRADE approach 38 for assessing confidence in findings developed from combined quantitative and qualitative evidence. Another paper in this series outlines how to deal with complexity when grading different types of quantitative evidence, 51 and the GRADE CERQual approach for qualitative findings is described elsewhere, 39 but both approaches apply to method-specific, not mixed-method, findings. An unofficial adaptation of GRADE was used in the risk communication guideline that reported mixed-method findings. There is also no reporting guideline for mixed-method reviews, 47 so for now reports will need to conform to the relevant reporting requirements of the respective method-specific guideline. There is a need to further adapt and test DECIDE, 15 WHO-INTEGRATE 16 and other types of evidence to decision frameworks to accommodate evidence from mixed-method syntheses that do not set out to determine the statistical effects of interventions, and circumstances in which there are no trials.
When conducting quantitative and qualitative reviews that will subsequently be combined, there are specific considerations for managing and integrating the different types of evidence throughout the review process. We have summarised different options for combining qualitative and quantitative evidence in mixed-method syntheses that guideline developers and systematic reviewers can choose from, as well as outlining the opportunities to integrate evidence at different stages of the review and guideline development process.
Review commissioners, authors and guideline developers generally have less experience of combining qualitative and quantitative evidence in mixed-method reviews. In particular, there is a relatively small pool of reviewers skilled at undertaking fully integrated mixed-method reviews. Commissioning additional qualitative and mixed-method reviews adds cost, and large, complex mixed-method reviews generally take longer to complete. Careful consideration therefore needs to be given to which guidelines would benefit most from additional qualitative and mixed-method syntheses. More training is required to develop capacity, and processes are needed for preparing the guideline panel to consider and use mixed-method evidence in its decision-making.
This paper has shown how qualitative and quantitative evidence, combined in mixed-method reviews, can help in understanding aspects of complex interventions and the systems within which they are implemented. There are further opportunities to use these methods, and to develop them further, to look more widely at additional aspects of complexity. There is a range of review designs and synthesis methods to choose from, depending on the question being asked or the questions that may emerge during the conduct of the synthesis. Additional methods need to be developed (or existing methods further adapted) to synthesise the full range of diverse evidence that is desirable for exploring complexity-related questions when complex interventions are implemented in health systems. We encourage review commissioners, authors and guideline developers to consider using mixed-method reviews and syntheses in guidelines and to report on their usefulness in the guideline development process.
Handling editor: Soumyadeep Bhaumik
Contributors: JN, AB, GM, KF, ÖT and ES drafted the manuscript. All authors contributed to paper development and writing and agreed the final manuscript. Anayda Portela and Susan Norris from WHO managed the series. Helen Smith was series Editor. We thank all those who provided feedback on various iterations.
Funding: Funding provided by the World Health Organization Department of Maternal, Newborn, Child and Adolescent Health through grants received from the United States Agency for International Development and the Norwegian Agency for Development Cooperation.
Disclaimer: ÖT is a staff member of WHO. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of WHO.
Competing interests: No financial interests declared. JN, AB and ÖT have an intellectual interest in GRADE CERQual; and JN has an intellectual interest in the iCAT_SR tool.
Patient consent: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing statement: No additional data are available.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Run a free plagiarism check in 10 minutes, automatically generate references for free.
Published on 4 April 2022 by Raimo Streefkerk . Revised on 8 May 2023.
When collecting and analysing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.
Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions. Qualitative research Qualitative research is expressed in words . It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.
The differences between quantitative and qualitative research, data collection methods, when to use qualitative vs quantitative research, how to analyse qualitative and quantitative data, frequently asked questions about qualitative and quantitative research.
Quantitative and qualitative research use different research methods to collect and analyse data, and they allow you to answer different kinds of research questions.
Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).
Many data collection methods can be either qualitative or quantitative. For example, in surveys, observations or case studies , your data can be represented as numbers (e.g. using rating scales or counting frequencies) or as words (e.g. with open-ended questions or descriptions of what you observe).
However, some methods are more commonly used in one type or the other.
A rule of thumb for deciding whether to use qualitative or quantitative data is:
For most research topics you can choose a qualitative, quantitative or mixed methods approach . Which type you choose depends on, among other things, whether you’re taking an inductive vs deductive research approach ; your research question(s) ; whether you’re doing experimental , correlational , or descriptive research ; and practical considerations such as time, money, availability of data, and access to respondents.
You survey 300 students at your university and ask them questions such as: ‘on a scale from 1-5, how satisfied are your with your professors?’
You can perform statistical analysis on the data and draw conclusions such as: ‘on average students rated their professors 4.4’.
You conduct in-depth interviews with 15 students and ask them open-ended questions such as: ‘How satisfied are you with your studies?’, ‘What is the most positive aspect of your study program?’ and ‘What can be done to improve the study program?’
Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.
You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.
It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.
Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analysed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.
Quantitative data is based on numbers. Simple maths or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.
Applications such as Excel, SPSS, or R can be used to calculate things like:
Qualitative data is more difficult to analyse than quantitative data. It consists of text, images or videos instead of numbers.
Some common approaches to analysing qualitative data include:
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
The research methods you use depend on the type of data you need to answer your research question .
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.
There are various approaches to qualitative data analysis , but they all share five steps in common:
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.
Streefkerk, R. (2023, May 08). Qualitative vs Quantitative Research | Examples & Methods. Scribbr. Retrieved 24 June 2024, from https://www.scribbr.co.uk/research-methods/quantitative-qualitative-research/
|
| ||
|
|
|
|
Reliability | Same findings upon replication? Test-retest & interrater reliability | Dependability; Trustworthiness; Consistency | Similar context yields similar findings? Inquiry audit |
Internal validity | Measured what intention was? Experimental control; statistical triangulation | Credibility | Compatibility between respondents’ and reported perceptions? Prolonged engagement; member checks; quality record; narrative triangulation |
External validity | Generalisability to population? Random sampling | Transferability | Applicable to other cases and contexts? Purposive sampling; detailed descriptions of process |
‘Objectivity’ | Reflecting own views? Control over subjective factors | Confirmability | Findings not function of biases of researcher? Audit trail; trust & rapport with subject; intersubjectivity |
Replicability | Can next researcher replicate the study? Peer reviewed publication | Replicability | Clear description of procedures? Appropriate peer-reviewed publication |
Source : Golafshani, 2003
About Systematic Reviews
A systematic review is designed to be transparent and replicable. Therefore, systematic reviews are considered reliable tools in scientific research and clinical practice. They synthesize the results using multiple primary studies by using strategies that minimize bias and random errors. Depending on the research question and the objectives of the research, the reviews can either be qualitative or quantitative. Qualitative reviews deal with understanding concepts, thoughts, or experiences. Quantitative reviews are employed when researchers want to test or confirm a hypothesis or theory. Let’s look at some of the differences between these two types of reviews.
To learn more about how long it takes to do a systematic review , you can check out the link to our full article on the topic.
The differences lie in the scope of the research, the methodology followed, and the type of questions they attempt to answer. Some of these differences include:
As mentioned earlier qualitative reviews attempt to answer open-ended research questions to understand or formulate hypotheses. This type of research is used to gather in-depth insights into new topics. Quantitative reviews, on the other hand, test or confirm existing hypotheses. This type of research is used to establish generalizable facts about a topic.
The data collected for both types of research differ significantly. For qualitative research, data is collected as words using observations, interviews, and interactions with study subjects or from literature reviews. Quantitative studies collect data as numbers, usually from a larger sample size.
To collect data as words for a qualitative study, researchers can employ tools such as interviews, recorded observations, focused groups, videos, or by collecting literature reviews on the same subject. For quantitative studies, data from primary sources is collected as numbers using rating scales and counting frequencies. The data for these studies can also be collected as measurements of variables from a well-designed experiment carried out under pre-defined, monitored conditions.
Data by itself cannot prove or demonstrate anything unless it is analyzed. Qualitative data is more challenging to analyze than quantitative data. A few different approaches to analyzing qualitative data include content analysis, thematic analysis, and discourse analysis. The goal of all of these approaches is to carefully analyze textual data to identify patterns, themes, and the meaning of words or phrases.
Quantitative data, since it is in the form of numbers, is analyzed using simple math or statistical methods. There are several software programs that can be used for mathematical and statistical analysis of numerical data.
Learn more about distillersr.
(Article continues below)
3 reasons to connect.
Systematic reviews.
From Munn et al (2018): “Systematic reviews can be broadly defined as a type of research synthesis that are conducted by review groups with specialized skills, who set out to identify and retrieve international evidence that is relevant to a particular question or questions and to appraise and synthesize the results of this search to inform practice, policy and in some cases, further research. .. Systematic reviews follow a structured and pre-defined process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users. .. A systematic review may be undertaken to confirm or refute whether or not current practice is based on relevant evidence, to establish the quality of that evidence, and to address any uncertainty or variation in practice that may be occurring. .. Conducting a systematic review may also identify gaps, deficiencies, and trends in the current evidence and can help underpin and inform future research in the area. .. Indications for systematic reviews are:
From Munn et al (2018): “Scoping reviews are an ideal tool to determine the scope or coverage of a body of literature on a given topic and give clear indication of the volume of literature and studies available as well as an overview (broad or detailed) of its focus. Scoping reviews are useful for examining emerging evidence when it is still unclear what other, more specific questions can be posed and valuably addressed by a more precise systematic review. They can report on the types of evidence that address and inform practice in the field and the way the research has been conducted. The general purpose for conducting scoping reviews is to identify and map the available evidence . Purposes for conducting a scoping review:
Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology , 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x
A quantitative review will include studies that have numerical data. A qualitative review derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants. It will include focus groups, interviews, observations and diaries. See the qualitative research section for more information.
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses.
The PRISMA 2020 statement was published in 2021 and comprises a 27-item checklist addressing the introduction, methods, results and discussion sections of a systematic review report. It is intended to be accompanied by the PRISMA 2020 Explanation and Elaboration document .
The PRISMA extension for scoping reviews (PRISMA-ScR) was published in 2018. The checklist contains 20 essential reporting items and 2 optional items to include when completing a scoping review.
A systematic review involves the following steps:
This table outlines the differences between a systematic review and a literature review:
Focused on a single question | Not necessarily focused on a single question, but may describe an overview | |
Includes a peer review protocol or plan | No protocol is included | |
Provides summaries of the available literature on a topic | Provides summaries of the available literature on a topic | |
Clear objectives are identified | Objectives may or may not be identified | |
Criteria is stated before review is conducted | Criteria is not specified | |
Comprehensive search conducted in a systematic way | Strategy not explicitly stated | |
Process usually clear and explicit | Not described in a literature review | |
Comprehensive evaluation of study quality | Evaluation of study quality may or may not be included | |
Clear summaries based on high quality evidence | Summary based on studies where the quality of the articles may not be specified. May also be influenced by the reviewer’s theories, needs and beliefs | |
Written by an expert or group of experts with a detailed and well grounded knowledge of the issues | Written by an expert or group of experts with a well grounded knowledge of the issues |
Adapted from: University of Newcastle Australia Library
This table outlines the differences between a systematic review and a scoping review:
Systematic Review | Scoping Review | |
---|---|---|
Attempts to identify, appraise and synthesize all empirical evidence that meets pre-specified eligibility criteria to answer a given research question | A rapid gathering of literature in a given area, aiming to provide an overview of the type, extent and quantity of research available | |
To address a clearly focused review question by finding the best available, relevant studies and synthesizing the results | To capture the breadth of literature; identify gaps in a research area; occasionally used as a precursor to a systematic review | |
Focused research question with narrow parameters | The research question is often broad | |
Inclusion/exclusion usually defined at outset | Inclusion/exclusion can be developed | |
Rigorous critical appraisal and evaluation of study quality | Appraisal can be variable; typically not done, or may be done in a narrative form | |
Clear summaries of studies based on high quality evidence. May include a meta-analysis | The summary is usually descriptive | |
Evidence based | Evidence based |
Adapted from: University of South Australia
References:
Pollock, D., Davies, E. L., Peters, M. D. J., et al. (2021). Undertaking a scoping review: A practical guide for nursing and midwifery students, clinicians, researchers, and academics. J Adv Nurs, 77, 2102-2113. https://doi.org/10.1111/jan.14743
“Rapid reviews have emerged as a streamlined approach to synthesizing evidence-typically for informing emergent decisions faced by decision makers in health care setting”.
Often a focused clinical question (focused PICOS) | Narrow question (may use PICOS) | |
Comprehensive sources searched and explicit strategies | Sources may be limited but sources and strategies made explicit | |
Criterion-based | Criterion-based; uniformly applied | |
Rigorous; critical appraisal | Rigorous, critical appraisal (SRs only) | |
Qualitative summary with/without meta-analysis | Descriptive summary/categorisation of data | |
Evidence-based | Limited/cautious interpretation of findings |
Source: Khangura, S., Konyu, K., Cushman, R., Grimshaw, J. & Moher, D. (2012). Evidence summaries: the evolution of a rapid review approach. Systematic Review, 1-10. https://doi.org/10.1186/2046-4053-1-10
Examples of different types of reviews:
Literature review: A Literature review of mentorship programs in academic nursing https://doi.org/10.1016/j.profnurs.2017.02.007
Narrative review: A silent burden—prolapse, incontinence, and infertility in Australian Aboriginal and Torres Strait Islander women: A systematic search and narrative review https://doi.org/10.1002/ijgo.13920
Rapid review: Blended foods for tube-fed children: a safe and realistic option? A rapid review of the evidence https://doi.org/10.1136/archdischild-2016-311030
Scoping review: How do patients experience caring? Scoping review https://doi.org/10.1016/j.pec.2017.03.029
Systematic review: Barriers and facilitators to health screening in men: A systematic review https://doi.org/10.1016/j.socscimed.2016.07.023
A typology of reviews: an analysis of 14 review types and associated methodologies (2009) https://doi.org/10.1111/j.1471-1842.2009.00848.x
IMAGES
VIDEO
COMMENTS
Quantitative research. Quantitative research is 'explaining phenom enon by collection numerical data that are. analyzed using mathematically based methods (in particular statistics)' (Aliaga and ...
Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes.2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed ...
Qualitative Research and the Review of Related Literature Unlike quantitative researchers, who spend a great deal of time examining the research on their topic at the outset of the study, some qualitative researchers will not delve deeply into their literature until their topic has emerged over time. There is disagreement among qualitative
between qualitative and quantitative research designs is about the question of scale or depth versus. breath (Sayer, 1992). There are limited preliminary changes between both re search designs ...
Quantitative Research (an operational definition) Quantitative research: an operational description. Purpose: explain, predict or control phenomena through focused collection and analysis of numberical data. Approach: deductive; tries to be value-free/has objectives/ is outcome-oriented. Hypotheses: Specific, testable, and stated prior to study.
0 Qualitative main traditions1. 0 Quantitative Qualitative research. 0 sampling different = = research: Quantitative inferential interpretive. 0 one another i.e., in mixed Although procedures, different, of of be applications, methods2 complementary data analysis, of. Introduction.
Each approach offers distinct frameworks, tools, and philosophies that researchers employ to. investigate and comprehend diverse aspects of the wo rld [1 ]. Qualitative research involves a nuanced ...
The Difference Between Qualitative And Quantitative Jennifer Cleland,Steven J. Durning Qualitative Research for Quantitative Researchers Helen Kara,2022-01-12 Approaching qualitative research for the first time and unsure how to get started? This book captures what you need to know to jump into effective qualitative or mixed methods research.
Pluye and Hong 52 define mixed-methods research as "a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results". A mixed-method synthesis ...