
  • Review Article
  • Open access
  • Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

  • Enwei Xu   ORCID: orcid.org/0000-0001-6424-8169 1 ,
  • Wei Wang 1 &
  • Qingxia Wang 1  

Humanities and Social Sciences Communications, volume 10, Article number: 16 (2023)


Subjects: Science, technology and society

Collaborative problem-solving has been widely embraced in classroom instruction on critical thinking, which is regarded both as the core of competency-based curriculum reform and as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students' critical thinking remains uncertain. This study presents the major findings of a meta-analysis of 36 empirical studies published in international educational journals during the 21st century, conducted to identify the effectiveness of collaborative problem-solving in promoting students' critical thinking and to determine, based on evidence, whether and to what extent collaborative problem-solving increases or decreases critical thinking. The findings show that (1) collaborative problem-solving is an effective teaching approach for fostering students' critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) with respect to the dimensions of critical thinking, collaborative problem-solving significantly enhances students' attitudinal tendency (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas its effect on students' cognitive skills is smaller, reaching only an upper-middle level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all affect critical thinking and can be viewed as important moderating factors in its development. On the basis of these results, recommendations are made for further research and instruction to better support students' critical thinking in the context of collaborative problem-solving.


Introduction

Although critical thinking has a long history in research, the concept, regarded as an essential competence for learners in the 21st century, has recently attracted renewed attention from researchers and teaching practitioners (National Research Council, 2012). Critical thinking should be the core of competency-based curriculum reform (Peng and Deng, 2017) because students who can think critically not only understand the meaning of knowledge but can also solve practical problems in real life even after the knowledge itself is forgotten (Kek and Huijser, 2011). There is no universal definition of critical thinking (Ennis, 1989; Castle, 2009; Niu et al., 2013). In general, it is understood as a self-aware and self-regulated thought process (Facione, 1990; Niu et al., 2013): the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information, together with the attitudinal tendency to apply these abilities (Halpern, 2001). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by many researchers (e.g., Kuncel, 2011; Leng and Lu, 2020), leading to educators' efforts to foster it among students. In teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989). The first is an independent curriculum, in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; the second is an integrated curriculum, in which critical thinking is integrated into the teaching of other disciplines as an explicit teaching goal; and the third is a mixed curriculum, in which critical thinking is taught in parallel with the teaching of other disciplines.
Furthermore, numerous measuring tools have been developed by researchers and educators to measure critical thinking in the context of teaching practice. These include standardized measurement tools, such as WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered effective and reliable by international scholars (Facione and Facione, 1992 ). In short, descriptions of critical thinking, including its two dimensions of attitudinal tendency and cognitive skills, different types of teaching courses, and standardized measurement tools provide a complex normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular critical-thinking instructional approaches is problem-based learning (Liu et al., 2020). Duch et al. (2001) noted that problem-based learning in group collaboration is progressive active learning, which can improve students' critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning: it places learners at the center of the learning process and uses ill-structured problems in real-world situations as its starting point (Liang et al., 2017). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social methods such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners' domain knowledge and critical thinking (Cindy, 2004; Liang et al., 2017).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted systematic reviews and meta-analyses of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking. How critical thinking instruction is best implemented within collaborative problem-solving therefore remains an open question, and many teachers consequently lack guidance for teaching critical thinking effectively (Leng and Lu, 2020; Niu et al., 2013). For example, Huber (2016) reported meta-analysis findings from 71 publications on gains in critical thinking over various time frames in college, with the aim of determining whether critical thinking is truly teachable. The authors found that learners significantly improve their critical thinking while in college and that gains differ with factors such as teaching strategy, intervention duration, subject area, and teaching type. That study, however, did not determine the usefulness of collaborative problem-solving in fostering students' critical thinking, nor did it reveal whether significant variations existed among these elements. A meta-analysis of 31 pieces of educational literature was conducted by Liu et al. (2020) to assess the impact of problem-solving on college students' critical thinking. These authors found that problem-solving could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in a follow-up study to improve students' critical thinking.
Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. (2008) carried out an experiment on integrated curriculum teaching for college students based on a web bulletin board, with the goal of fostering participants' critical thinking in the context of collaborative problem-solving. Their research revealed that, through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students' critical thinking in real-life problem situations. In contrast, research by Naber and Wyatt (2014) and Sendag and Odabasi (2009) on undergraduate and high school students, respectively, found that collaborative problem-solving had a positive impact on learners' interaction and could improve learning interest and motivation, but could not significantly improve students' critical thinking compared to traditional classroom teaching.

The above studies show that the evidence on the effectiveness of collaborative problem-solving in promoting students' critical thinking is inconsistent. Therefore, it is essential to conduct a thorough and trustworthy review to determine whether and to what degree collaborative problem-solving increases or decreases critical thinking. Meta-analysis is a quantitative approach for examining data from multiple separate studies focused on the same research topic. It characterizes the overall effect by averaging the effect sizes of numerous quantitative studies, in an effort to reduce the uncertainty of individual studies and produce more conclusive findings (Lipsey and Wilson, 2001).
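To make the "averaging of effect sizes" concrete, the following is a minimal sketch of inverse-variance (fixed-effect) pooling, in which each study's effect size is weighted by the inverse of its variance so that more precise studies contribute more; the three studies and their numbers are purely hypothetical and are not taken from this paper.

```python
import math

def pool_fixed(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    Each study is weighted by 1/variance, so more precise studies
    pull the combined estimate harder.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # SE of the pooled estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, se, ci

# Three hypothetical studies (illustrative numbers only)
pooled, se, ci = pool_fixed([0.9, 0.7, 0.8], [0.04, 0.02, 0.05])
```

In practice, software such as RevMan performs this weighting internally; the sketch only shows why a large, precise study moves the pooled estimate more than a small one.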

To contribute to both research and practice, this paper carried out a meta-analysis examining the effectiveness of collaborative problem-solving in promoting students' critical thinking. The following research questions were addressed:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

If the effects reported in the included studies are heterogeneous, how do various moderating variables account for the disparities between study conclusions?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility checking, merging, duplicate removal, and analysis of included studies) of the meta-analysis approach proposed by Cooper (2010) for examining quantitative data from separate studies focused on the same research topic. The relevant empirical research published in international educational journals in the 21st century was subjected to this meta-analysis using RevMan 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen's kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.
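Cohen's kappa, used here to check inter-coder consistency, corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical include/exclude codes (not the actual screening data) illustrates the computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label proportions
    expected = sum(counts_a[lab] * counts_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders labelling ten hypothetical studies as include ("in") or exclude ("out")
a = ["in", "in", "out", "in", "out", "in", "in", "out", "in", "in"]
b = ["in", "in", "out", "in", "in", "in", "in", "out", "in", "in"]
kappa = cohens_kappa(a, b)
```

Raw agreement here is 0.90, but kappa is lower (~0.74) because two raters who both code "include" most of the time would agree fairly often by chance alone.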

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1 , which shows the number of articles included and eliminated during the selection process based on the statement and study eligibility criteria.

Figure 1: Flowchart showing the number of records identified, included, and excluded during the selection process.

First, the databases used to systematically search for relevant articles were the journal papers of the Web of Science Core Collection and the Chinese Core source journals, as well as the Chinese Social Science Citation Index (CSSCI) source journal papers included in CNKI. These databases were selected because they are credible sources of scholarly, peer-reviewed literature relevant to our topic, with advanced search tools. The search string with Boolean operators used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was January 1, 2000, to December 30, 2021. A total of 412 papers were obtained. The search string with Boolean operators used in CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching). A total of 56 studies were found over the search period of January 2000 to December 2021. All duplicates and retractions were eliminated before the references were exported into EndNote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed each included article’s whole text in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search was performed using the references and citations of the included articles to ensure complete coverage of the articles. Ultimately, 36 articles were kept.

Two researchers worked together to carry out this entire process, and a consensus rate of almost 94.7% was reached after discussion and negotiation to clarify any emerging differences.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text could be obtained. Articles that did not meet the publication language and articles not published between 2000 and 2021 were excluded.

The research design of the included studies must be empirical and quantitative studies that can assess the effect of collaborative problem-solving on the development of critical thinking. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies must be a randomized controlled experiment, a quasi-experiment, or a natural experiment; these designs have a higher degree of internal validity and can plausibly provide evidence that collaborative problem-solving and critical thinking are causally related. Articles using non-experimental methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies must report the statistics needed to gauge the effect on critical thinking (e.g., sample size, mean value, or standard deviation). Articles that lacked specific measurement indicators for critical thinking, making the effect size incalculable, were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The designed data-coding template consisted of three pieces of information. Basic information about the papers was included in the descriptive information: the publishing year, author, serial number, and title of the paper.

The variable information for the experimental design had three variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variable (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). Depending on the topic of this study, the intervention strategy, as the independent variable, was coded into collaborative and non-collaborative problem-solving. The dependent variable, critical thinking, was coded as a cognitive skill and an attitudinal tendency. And seven moderating variables were created by grouping and combining the experimental design variables discovered within the 36 studies (see Table 1 ), where learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed courses, integrated courses, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported learning scaffold, technique-supported learning scaffold, and resource-supported learning scaffold; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) and self-adapting measurement tools (e.g., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.

The data information contained three metrics for measuring critical thinking: sample size, mean value, and standard deviation. It is important to note that studies with different experimental designs frequently use different formulas to determine the effect size. This paper used the standardized mean difference (SMD) formula proposed by Morris (2008, p. 369; see Supplementary Table S3).
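As a rough illustration of a pretest-posttest-control SMD in the spirit of Morris (2008), the sketch below divides the difference between the two groups' pre-post gains by the pooled pretest standard deviation and applies a small-sample bias correction. The group statistics are hypothetical, and readers should consult Morris (2008) and Supplementary Table S3 for the exact formula used in this paper.

```python
import math

def smd_ppc(n_t, m_t_pre, sd_t_pre, m_t_post,
            n_c, m_c_pre, sd_c_pre, m_c_post):
    """Pretest-posttest-control SMD: difference of the two groups'
    pre-post gains, scaled by the pooled pretest SD, with a
    small-sample bias correction."""
    df = n_t + n_c - 2
    # Pooled pretest standard deviation across treatment and control
    sd_pool = math.sqrt(((n_t - 1) * sd_t_pre ** 2 +
                         (n_c - 1) * sd_c_pre ** 2) / df)
    correction = 1 - 3 / (4 * df - 1)  # small-sample bias correction
    gain_diff = (m_t_post - m_t_pre) - (m_c_post - m_c_pre)
    return correction * gain_diff / sd_pool

# Hypothetical group statistics (not taken from any included study)
es = smd_ppc(n_t=30, m_t_pre=50.0, sd_t_pre=10.0, m_t_post=58.0,
             n_c=30, m_c_pre=50.0, sd_c_pre=10.0, m_c_post=52.0)
```

Scaling by the pretest SD (rather than the posttest SD) keeps the denominator uncontaminated by the intervention itself, which is one reason this design-specific formula is preferred over a plain posttest Cohen's d.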

Procedure for extracting and coding data

According to the data coding template (see Table 1), the information from the 36 papers was retrieved by two researchers, who entered it into Excel (see Supplementary Table S1). In the data extraction procedure, the results of each study were extracted separately if an article contained multiple studies on critical thinking or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. (2010) used four time points, which were treated as separate studies, to examine critical thinking outcomes, and Chen (2013) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the two researchers' consistency test coefficient was roughly 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., publishing year, author, serial number, and title), variable information (e.g., independent, dependent, and moderating variables), and data information (e.g., mean values, standard deviations, and sample sizes). Following that, publication bias and heterogeneity testing was done on the sample data using RevMan 5.4, and the test results were used to conduct the meta-analysis.

Publication bias test

Publication bias arises when the sample of studies included in a meta-analysis does not accurately reflect the general state of research on the subject, and it can impair the reliability and accuracy of the meta-analysis. The sample data therefore need to be checked for publication bias (Stewart et al., 2006). A popular check is the funnel plot: publication bias is unlikely when the data points are dispersed symmetrically on either side of the average effect size and concentrated toward the top of the plot, where standard errors are small. The funnel plot for this analysis (see Fig. 2) shows the data dispersed symmetrically within the upper portion of the funnel, indicating that publication bias is unlikely here.

Figure 2: Funnel plot testing publication bias across the 79 effect quantities from the 36 included studies.
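Alongside visual funnel-plot inspection, a numeric check sometimes used in meta-analysis is Egger's regression intercept, which regresses each standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero hints at small-study or publication bias. The sketch below illustrates the idea with made-up, perfectly symmetric data; it is not part of this paper's procedure.

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: effect/SE regressed on 1/SE.

    With no small-study bias, the regression line passes near the
    origin; a large intercept suggests funnel-plot asymmetry.
    """
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
             sum((xi - mx) ** 2 for xi in x))   # ordinary least squares
    return my - slope * mx

# Symmetric hypothetical data: identical effects at varying precision
intercept = egger_intercept([0.8, 0.8, 0.8], [0.1, 0.2, 0.3])
```

For these perfectly symmetric data the intercept is essentially zero; a formal application would also test the intercept's significance, which requires its standard error.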

Heterogeneity test

To select the appropriate effect model for the meta-analysis, one can use the results of a heterogeneity test on the effect sizes. In a meta-analysis, it is common practice to gauge the degree of heterogeneity with the I² statistic: I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for a random-effects model; otherwise, a fixed-effect model should be applied (Lipsey and Wilson, 2001). The heterogeneity test in this paper (see Table 2) yielded I² = 86%, displaying significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size was therefore calculated using the random-effects model.
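The model-choice rule can be sketched as follows: Cochran's Q measures how much the effect sizes disagree beyond sampling error, and I² expresses the share of that disagreement attributable to true heterogeneity. The numbers below are illustrative, not the paper's data.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I-squared statistic.

    Q sums the inverse-variance-weighted squared deviations from the
    fixed-effect pooled mean; I² = max(0, (Q - df) / Q) * 100.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Three hypothetical studies whose effects disagree strongly
q, i2 = heterogeneity([0.2, 0.8, 1.4], [0.04, 0.04, 0.04])
use_random_model = i2 >= 50  # the decision rule described above
```

Here I² is far above 50%, so the rule described in the text would select the random-effects model, just as it did for this paper's I² of 86%.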

The analysis of the overall effect size

This meta-analysis used a random-effects model, which accommodates the observed heterogeneity, to examine the 79 effect quantities from the 36 studies. In accordance with Cohen's criterion (Cohen, 1992), the analysis results shown in the forest plot of the overall effect (see Fig. 3) make clear that the cumulative effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that it can encourage learners to practice critical thinking.

Figure 3: Forest plot of the overall effect size across the 36 studies.
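A random-effects pool in the DerSimonian-Laird style first estimates the between-study variance τ² from Cochran's Q and then re-weights each study by 1/(vᵢ + τ²), widening the confidence interval relative to a fixed-effect pool. The sketch below uses illustrative numbers; RevMan's exact computations may differ in detail.

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with z and 95% CI."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # Method-of-moments estimate of the between-study variance tau²
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by 1 / (within-study var + tau²)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    z = pooled / se
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, z, ci

# Three heterogeneous hypothetical studies (illustrative numbers only)
pooled, z, ci = random_effects([0.2, 0.8, 1.4], [0.04, 0.04, 0.04])
```

Because τ² inflates every study's variance, the random-effects confidence interval is wider than the fixed-effect one, reflecting the extra uncertainty that heterogeneity introduces.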

In addition, this study examined the two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to its growth. The findings (see Table 3) indicate that collaborative problem-solving improves both cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (χ² = 7.95, P < 0.01). Although collaborative problem-solving improves both dimensions, the improvement in students' attitudinal tendency is much more pronounced, with a large and significant overall effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas the gain in learners' cognitive skills is more modest, at an upper-middle level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

A two-tailed test of the 79 effect quantities in the full forest plot revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may have been influenced by moderating factors beyond sampling error. Therefore, subgroup analysis was used to explore possible moderating factors that might produce this heterogeneity, namely the learning stage, learning scaffold, teaching type, group size, intervention duration, measuring tool, and subject area represented in the 36 experimental designs, in order to identify the key factors that influence critical thinking. The findings (see Table 4) indicate that the various moderating factors all have beneficial effects on critical thinking. The subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), learning scaffold (χ² = 9.03, P < 0.01), and teaching type (χ² = 7.20, P < 0.05) are all significant moderators that can be applied to support the cultivation of critical thinking. However, since the learning stage and the measuring tool did not differ significantly between groups (χ² = 3.15, P = 0.21 > 0.05, and χ² = 0.08, P = 0.78 > 0.05), we are unable to explain why these two factors matter for cultivating critical thinking in the context of collaborative problem-solving. The specific outcomes are as follows:
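The subgroup comparisons reported below rest on a between-groups heterogeneity statistic: pool each moderator level separately, then measure how far the subgroup means sit from the grand mean, weighted by subgroup precision. The following is a hedged sketch with a hypothetical two-level moderator; under the null hypothesis, Q_b is compared against a χ² distribution with k − 1 degrees of freedom (e.g., 3.84 for one df at P = 0.05).

```python
def q_between(subgroups):
    """Between-subgroup heterogeneity Q_b for a moderator analysis.

    `subgroups` maps a moderator level to (effects, variances) lists.
    Each level is pooled by fixed effect; Q_b sums the precision-weighted
    squared deviations of subgroup means from the grand mean.
    """
    stats = []
    for effects, variances in subgroups.values():
        w = [1.0 / v for v in variances]
        sw = sum(w)
        mean = sum(wi * e for wi, e in zip(w, effects)) / sw
        stats.append((mean, sw))
    grand = sum(m * sw for m, sw in stats) / sum(sw for _, sw in stats)
    return sum(sw * (m - grand) ** 2 for m, sw in stats)

# Hypothetical moderator with two levels, e.g., two teaching types
qb = q_between({
    "level_A": ([1.0, 1.2], [0.04, 0.04]),
    "level_B": ([0.4, 0.6], [0.04, 0.04]),
})
significant_at_05 = qb > 3.84  # chi² critical value for 1 df
```

In this made-up example the two levels differ sharply, so Q_b exceeds the critical value, which is the same logic behind the significant χ² values reported for teaching type, duration, group size, scaffold, and subject area.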

Various learning stages influenced critical thinking positively, without significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05). High school had the largest effect size (ES = 1.36, P < 0.01), followed by higher education (ES = 0.78, P < 0.01) and middle school (ES = 0.73, P < 0.01). These results show that, despite the learning stage's beneficial influence on cultivating learners' critical thinking, we are unable to explain why it is essential for cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (χ² = 7.20, P < 0.05). The effect sizes were ranked as follows: mixed courses (ES = 1.34, P < 0.01), integrated courses (ES = 0.81, P < 0.01), and independent courses (ES = 0.27, P < 0.01). These results indicate that mixed courses are the most effective teaching type for cultivating critical thinking through collaborative problem-solving.

Various intervention durations significantly improved critical thinking, with significant intergroup differences (χ² = 12.18, P < 0.01). The effect sizes tended to increase with longer intervention durations, and the improvement in critical thinking reached a significant level (ES = 0.85, P < 0.01) after more than 12 weeks of training. These findings indicate that the effect on critical thinking is positively related to intervention duration: the longer the intervention, the greater the effect.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (χ² = 9.03, P < 0.01). The teacher-supported learning scaffold displayed a high and significant impact (ES = 0.92, P < 0.01), while the resource-supported (ES = 0.69, P < 0.01) and technique-supported (ES = 0.63, P < 0.01) learning scaffolds attained medium-to-high levels of impact. These results show that the teacher-supported learning scaffold has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (χ² = 8.77, P < 0.05). Critical thinking showed a general declining trend with increasing group size: the overall effect size for groups of 2–3 people was the largest (ES = 0.99, P < 0.01), and when the group size exceeded 7 people, the improvement in critical thinking fell to the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively related to group size: as group size grows, the overall impact declines.

Various measuring tools recorded positive influences on critical thinking, without significant intergroup differences (χ² = 0.08, P = 0.78 > 0.05). The self-adapting measurement tools obtained an upper-medium level of effect (ES = 0.78), whereas the overall effect size of the standardized measurement tools was the largest, reaching a significant level (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we are unable to explain why it is crucial in fostering the growth of critical thinking through collaborative problem-solving.

Different subject areas varied in their impact on critical thinking, and the intergroup differences were statistically significant (χ² = 13.36, P < 0.05). Mathematics had the greatest overall impact, achieving a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also achieved significant effects. Programming technology was the least effective (ES = 0.39, P < 0.01), with only a medium-low effect compared to education (ES = 0.72, P < 0.01) and other fields such as language, art, and the social sciences (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking through collaborative problem-solving.

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy in critical thinking teaching has a considerable impact on cultivating learners' critical thinking as a whole and has a favorable promotional effect on both dimensions of critical thinking. Several studies have argued that collaborative problem-solving, the most frequently used critical-thinking teaching strategy in curriculum instruction, can considerably enhance students' critical thinking (e.g., Liang et al., 2017; Liu et al., 2020; Cindy, 2004). This meta-analysis provides convergent evidence for those views. Thus, the findings not only address the first research question, regarding the overall effect of collaborative problem-solving on cultivating critical thinking and its impact on the two dimensions (i.e., attitudinal tendency and cognitive skills), but also strengthen our confidence in cultivating critical thinking through collaborative problem-solving interventions in classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, while the corresponding improvements in cognitive skills are only marginally better. Some studies suggest that cognitive skills differ from attitudinal tendency in classroom instruction: the cultivation and development of the former, as a key ability, is a process of gradual accumulation, while the latter, as an attitude, is affected by the teaching context (e.g., a novel and exciting teaching approach, or challenging and rewarding tasks) (Halpern, 2001; Wei and Hong, 2022). Collaborative problem-solving as a teaching approach is exciting and interesting, as well as rewarding and challenging, because it takes learners as the focus and examines ill-structured problems in real situations; it can thus inspire students to realize their full problem-solving potential, which significantly improves their attitudinal tendency toward solving problems (Liu et al., 2020). Just as collaborative problem-solving influences attitudinal tendency, attitudinal tendency influences cognitive skills when attempting to solve a problem (Liu et al., 2020; Zhang et al., 2022), and stronger attitudinal tendencies are associated with improved learning achievement and cognitive ability in students (Sison, 2008; Zhang et al., 2022). Both specific dimensions of critical thinking, as well as critical thinking as a whole, are thus affected by collaborative problem-solving, and this study illuminates the nuanced links between cognitive skills and attitudinal tendencies. To fully develop students' capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

To further explore the key factors that influence critical thinking, subgroup analysis was used to examine possible moderating effects that might produce considerable heterogeneity. The findings show that the moderating factors included in the 36 experimental designs (teaching type, learning stage, group size, learning scaffold, intervention duration, measuring tool, and subject area) could all support the cultivation of critical thinking through collaborative problem-solving. Among them, the effect size differences for learning stage and measuring tool were not significant, so these two factors cannot be shown to be crucial in supporting the cultivation of critical thinking through collaborative problem-solving.

In terms of the learning stage, all learning stages influenced critical thinking positively, but without significant intergroup differences, so we are unable to explain why this factor matters for fostering the growth of critical thinking.

Although higher education accounts for 70.89% of all the empirical studies, high school may be the most appropriate learning stage for fostering students’ critical thinking through collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development and needs to be examined in follow-up research.

With regard to teaching type, mixed course teaching may be the best method for cultivating students’ critical thinking. Relevant studies have shown that, in actual teaching, if students are trained in thinking methods alone, the methods they learn are isolated and divorced from subject knowledge, which hinders their transfer; conversely, if students’ thinking is trained only within subject teaching, without systematic method training, it is difficult to apply to real-world circumstances (Ruggiero, 2012; Hu and Liu, 2015). Teaching critical thinking as a mixed course, in parallel with other subject teaching, can achieve the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit instruction (Bensley and Spero, 2014).

In terms of intervention duration, the overall effect size shows an upward tendency with longer interventions; that is, intervention duration and the impact on critical thinking are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to improve meaningfully within a brief intervention; instead, it develops over a lengthy period through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001; Hu and Liu, 2015). Therefore, future empirical studies ought to take these constraints into account and adopt longer periods of critical thinking instruction.

With regard to group size, groups of 2–3 members have the largest effect size, and the comprehensive effect size generally decreases as group size increases. This outcome is in line with earlier findings; for example, groups of two to four members are most appropriate for collaborative learning (Schellens and Valcke, 2006). However, the meta-analysis results also indicate that once group size exceeds 7 members, smaller groups no longer produce better interaction and performance than larger ones. This may be because learning scaffolds (technique support, resource support, and teacher support) improve the frequency and effectiveness of interaction among group members, and a collaborative group with more members may increase the diversity of views, which helps cultivate critical thinking through collaborative problem-solving.

With regard to the learning scaffold, all three kinds of learning scaffolds can enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with earlier findings: encouraging learners to collaborate, come up with solutions, and develop critical thinking skills through learning scaffolds is a successful strategy (Reiser, 2004; Xu et al., 2022); learning scaffolds can lower task complexity and unpleasant feelings while enticing students to engage in learning activities (Wood et al., 2006); and learning scaffolds help students use learning approaches more successfully in the collaborative problem-solving process, with teacher-supported scaffolds having the greatest influence on critical thinking because they are more targeted, informative, and timely (Xu et al., 2022).

With respect to the measuring tool, although standardized tools (such as the WGCTA, CCTT, and CCTST) have been acknowledged as reliable and valid by experts worldwide, only 54.43% of the studies included in this meta-analysis adopted them for assessment, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are suited to measuring critical thinking with standardized tools. As Simpson and Courtney (2002, p. 91) note, “The measuring tools for measuring thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately assess the changes in learners’ critical thinking.” As a result, to gauge more fully and precisely how learners’ critical thinking has evolved, standardized measuring tools must be properly adapted to collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size for science subjects (e.g., mathematics, science, and medical science) is larger than that for language arts and the social sciences. Recent international education reforms have noted that critical thinking is a basic part of scientific literacy. Students with scientific literacy can justify their judgments with accurate evidence and reasonable standards when they face challenges or ill-structured problems (Kyndt et al., 2013), which makes critical thinking crucial for developing scientific understanding and applying it to practical problems related to science, technology, and society (Yore et al., 2007).

Suggestions for critical thinking teaching

In addition to the points raised in the discussion above, the following suggestions are offered for critical thinking instruction utilizing collaborative problem-solving.

First, teachers should place special emphasis on the two core elements, collaboration and problem-solving, and design real problems based on collaborative situations. This meta-analysis provides evidence that collaborative problem-solving has a strong synergistic effect on promoting students’ critical thinking. Asking questions about real situations and allowing learners to take part in critical discussions on real problems during class are key ways to teach critical thinking, rather than simply reading speculative articles without practice (Mulnix, 2012). Furthermore, students’ critical thinking improves through cognitive conflict with other learners in the problem situation (Yang et al., 2008). Consequently, teachers should design real problems and encourage students to discuss, negotiate, and argue in collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners’ critical thinking through collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011; Leng and Lu, 2020), with the goal of cultivating critical thinking that transfers flexibly to real problem-solving situations. This meta-analysis shows that mixed course teaching has a highly substantial impact on cultivating and promoting learners’ critical thinking. Therefore, teachers should combine real collaborative problem-solving situations with the knowledge content of specific disciplines in conventional teaching, teach critical thinking methods and strategies through ill-structured problems to help students master critical thinking, and provide practical activities in which students interact with each other to develop knowledge construction and critical thinking.

Third, teachers, particularly preservice teachers, should receive more training in critical thinking, and they should also be conscious of how teacher-supported learning scaffolds can promote it. The teacher-supported learning scaffold had the greatest impact on learners’ critical thinking, being more directive, targeted, and timely (Wood et al., 2006). Critical thinking can only be taught effectively when teachers recognize its significance for students’ growth and use appropriate approaches in designing instructional activities (Forawi, 2016). Therefore, to enable teachers to create learning scaffolds that cultivate critical thinking through collaborative problem-solving, it is essential to focus on teacher-supported scaffolds and to enhance critical thinking instruction for teachers, especially preservice teachers.

Implications and limitations

There are certain limitations in this meta-analysis that future research can address. First, the search languages were restricted to English and Chinese, so pertinent studies written in other languages may have been overlooked, limiting the number of articles reviewed. Second, some data in the included studies are missing, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking among learners of various ages and genders. Third, as is typical for review articles, more studies were released while this meta-analysis was being conducted, so it is subject to a cutoff date. As relevant research develops, future studies focusing on these issues are highly relevant and needed.

Conclusions

This study addressed the question of the effectiveness of collaborative problem-solving in promoting students’ critical thinking, and the magnitude of that effect, a topic that had received scant attention in earlier research. The following conclusions can be made:

Regarding the results obtained, collaborative problem-solving is an effective teaching approach to foster learners’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving can significantly and effectively improve students’ attitudinal tendency, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); nevertheless, it falls short in terms of improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).
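
The pooled effect sizes, z statistics, and confidence intervals above follow from standard inverse-variance pooling. Below is a minimal sketch of a fixed-effect version of that computation; the study-level effect sizes and variances are illustrative stand-ins, not the actual data from the 36 included studies.

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    Returns the pooled effect size, its z statistic, and a 95%
    confidence interval. A random-effects model, as typically used in
    meta-analyses like this one, would additionally estimate the
    between-study variance (tau^2) and add it to each study's variance.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = pooled / se
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, z, ci

# Illustrative Hedges' g values and sampling variances for five studies
es, z, (lo, hi) = pool_fixed_effect(
    [0.95, 0.60, 1.10, 0.72, 0.85],
    [0.04, 0.03, 0.06, 0.02, 0.05],
)
print(f"ES = {es:.2f}, z = {z:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these illustrative inputs the sketch prints ES = 0.79, z = 9.56, 95% CI [0.63, 0.96], mirroring the format of the results reported above.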

As demonstrated by both the results and the discussion, all seven moderating factors examined across the 36 studies have beneficial effects, to varying degrees, on students’ critical thinking. In this context, the teaching type (chi 2 = 7.20, P < 0.05), intervention duration (chi 2 = 12.18, P < 0.01), subject area (chi 2 = 13.36, P < 0.05), group size (chi 2 = 8.77, P < 0.05), and learning scaffold (chi 2 = 9.03, P < 0.01) all have a positive impact on critical thinking and can be viewed as important moderating factors that affect how critical thinking develops. Since the learning stage (chi 2 = 3.15, P = 0.21 > 0.05) and measuring tool (chi 2 = 0.08, P = 0.78 > 0.05) did not demonstrate significant intergroup differences, we cannot conclude that these two factors are crucial for supporting the cultivation of critical thinking in the context of collaborative problem-solving.
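
The chi-squared values above are between-group heterogeneity statistics: Cochran's Q computed over all studies, minus the Q within each level of the moderator, referred to a chi-square distribution with (levels - 1) degrees of freedom. A minimal sketch, using invented effect sizes for two group-size levels rather than the meta-analysis's real data:

```python
def cochran_q(effects, variances):
    """Cochran's Q for one set of studies, with inverse-variance weights."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))

def q_between(subgroups):
    """Between-group heterogeneity: total Q minus the within-group Qs.

    `subgroups` maps each moderator level to (effects, variances); the
    statistic is compared against a chi-square distribution with
    len(subgroups) - 1 degrees of freedom.
    """
    all_e = [e for es, _ in subgroups.values() for e in es]
    all_v = [v for _, vs in subgroups.values() for v in vs]
    q_within = sum(cochran_q(es, vs) for es, vs in subgroups.values())
    return cochran_q(all_e, all_v) - q_within

# Invented data for two levels of the group-size moderator
subgroups = {
    "2-3 members": ([1.0, 0.9, 1.1], [0.04, 0.05, 0.04]),
    ">7 members": ([0.4, 0.5, 0.3], [0.05, 0.04, 0.06]),
}
qb = q_between(subgroups)
# df = 1 here, so the 5% chi-square critical value is 3.84
print(f"Q_between = {qb:.2f}, significant at 0.05: {qb > 3.84}")
```

With this invented split the statistic comes out around 11.55, comfortably above the 3.84 threshold; this is the same logic by which the moderators above are judged significant or not.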

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001

Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72

Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Cindy HS (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 51(1):31–39

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059

Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2010) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08

Acknowledgements

This research was supported by the graduate scientific research and innovation project of Xinjiang Uygur Autonomous Region named “Research on in-depth learning of high school information technology courses for the cultivation of computing thinking” (No. XJ2022G190) and the independent innovation fund project for doctoral students of the College of Educational Science of Xinjiang Normal University named “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and Affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang

Corresponding authors

Correspondence to Enwei Xu or Wei Wang .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10, 16 (2023). https://doi.org/10.1057/s41599-023-01508-1

Received: 07 August 2022

Accepted: 04 January 2023

Published: 11 January 2023

DOI: https://doi.org/10.1057/s41599-023-01508-1

ORIGINAL RESEARCH article

Visual analysis of commognitive conflict in collaborative problem solving in classrooms

Jijian Lu,

  • 1 Jinghengyi School of Education, Hangzhou Normal University, Hangzhou, China
  • 2 Chinese Education Modernization Research Institute, Hangzhou Normal University, Hangzhou, China

In today’s knowledge-intensive and digital society, collaborative problem-solving (CPS) is considered a critical skill for students to develop. Moreover, international education research has embraced a new communication-focused paradigm of inquiry, and commognitive theory helps deepen the understanding of CPS. This paper aims to enhance CPS skills by identifying, diagnosing, and visualizing commognitive conflicts during the CPS process, thereby fostering a learning-oriented, innovative approach and even providing a script for technology-assisted feedback practices. Specifically, we used open-ended mathematical tasks and multi-camera video recordings to analyze commognitive conflicts in CPS among 32 pairs, comprising 64 Year 7 students. After selecting high-quality, medium-quality, and low-quality student pairs based on SOLO theory, we further investigated discourse diagnosis and visual analysis of the knowledge dimensions of commognitive conflict. We found that students need to be encouraged to attend to and resolve commognitive conflicts, and that timely feedback is essential. Visual studies of commognitive conflict can empower AI-assisted teaching, and the intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback.

Introduction

In today’s knowledge-intensive and digital society, collaborative problem solving (CPS) has attracted increasing attention and is considered a critical skill for students to develop. It is defined as the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution, pooling their knowledge, skills, and efforts to reach that solution (OECD, 2015). The PISA 2015 results indeed showed that Chinese learners performed relatively poorly in CPS compared with students from some other countries: students in Beijing, Shanghai, Guangdong, and Jiangsu performed significantly worse in CPS than in mathematics, science, and reading, ranking only tied for 25th among the 51 participating countries and regions. After evaluating the CPS tasks in the worldwide assessment items, researchers discovered that such tasks are less approachable for Chinese students (Zhou and Lu, 2017), as the Chinese education system often focuses on standardized testing, which may limit exposure to real-world problem solving and hinder the development of CPS skills. It is worth noting, however, that efforts have been made to promote students’ CPS skills in China; for example, Chinese researchers established a CPS measurement framework to evaluate students’ development as soon as CPS was introduced in PISA 2015 (Wang, 2016).

The international assessment of CPS skills was initiated by the Assessment and Teaching of 21st Century Skills (ATC21S) project in 2008 (Yuan and Liu, 2016). Subsequently, the Programme for International Student Assessment (PISA), administered by the OECD, introduced CPS as an assessment component; PISA 2015 marked the first large-scale assessment of CPS skills conducted across individual countries, and following it, Australia started a significant national-level evaluation (Li, 2017). The emphasis on CPS abilities in international assessments reflects, on the one hand, the significance of CPS skills and, on the other, the social-interaction view of individual mental development advanced in recent years by sociocultural theorists, which provides theoretical and case support for this aspect of assessment.

In the study of CPS between pairs or groups, some scholars promote the development of new communication-oriented research paradigms based on the perspective of individual mental development and social interaction ( Xu, 2018 ). Through communication, individuals in pairs or groups can share information, exchange ideas, negotiate solutions, and coordinate their efforts toward solving a problem. To better understand how social connections contribute to the development of individual minds, communication-oriented research examines in depth the micro-behaviors within social interactions and communicative conversations. Professor Anna Sfard is a representative researcher in this new communication-oriented paradigm. She put forward the idea of commognition, a theoretical presumption about how social interaction and individual cognition relate to one another: interpersonal communication and cognitive processes are essentially two sides of the same phenomenon ( Sfard, 2007 ). Commognitive conflict refers to the cognitive conflicts that arise during CPS interactions among individuals; it occurs when participants in a collaborative setting encounter different perspectives, interpretations, or strategies while working together to solve a problem.

In order to promote CPS skills, analyzing and diagnosing commognitive conflict by observing students working in pairs or groups is an effective approach. Visualizing the analysis of commognitive conflict during CPS allows educators to provide targeted feedback and better teaching interventions, thus promoting cooperative learning behavior. In the meantime, the development of artificial intelligence (AI) has made it possible to apply advanced statistical measures (e.g., RSM theory) in an online intelligent cognitive diagnostic system built on a test bank. However, a theory for analyzing students’ commognitive conflict in real classrooms is still needed as the basis for more complex commognitive conflict diagnosis and visualization.

To better develop students’ CPS skills, our research studied the intelligent diagnosis and visual analysis of commognitive conflict, based on video feedback data obtained from a Sino-Australian collaborative project team. We analyzed the observed commognitive conflicts within the knowledge dimension and classified them into conceptual, procedural, and contextual conflicts, following the cognitive conflict structure proposed by Lee and Yi (2013) . For conceptual knowledge, the sub-components include facts, conceptions, relations, and conceptual structure; these aspects pertain to understanding the fundamental principles, ideas, and relationships within a given subject area. Procedural knowledge encompasses thinking skills ranging from simple to complex, including description, selection, representation, inference, synthesis, and verification. Contextual knowledge focuses on specific contexts such as school, everyday life, and social/cultural/historical contexts; understanding how knowledge is situated within different real-life situations allows for a more comprehensive and meaningful application of knowledge.

By identifying, diagnosing, and visualizing the commognitive conflicts within knowledge dimensions during CPS, we can learn about students’ collaborative learning behaviors. This understanding promotes a learning-oriented innovative approach and even facilitates the creation of technology-assisted feedback practices. Moreover, the script of such technology-assisted feedback practice can be used to gain insights into the communication process, enable speech recognition for efficient feedback, and facilitate discourse diagnosis for improved instruction and learning outcomes.

Based on the background and purpose outlined above, the study focused on the identification, diagnosis, and visualization of commognitive conflicts that arise during collaborative problem solving (CPS) among student pairs. Firstly, we categorized the knowledge dimensions of commognitive conflict as conceptual, procedural, and contextual, so as to observe and analyze the commognitive conflicts in student pairs. Secondly, three typical cases of high, medium, and low quality were selected through SOLO theory from 32 pairs of student peers for further case analysis. Finally, diagnosis and visual analysis of these cases were conducted to assist in cultivating students’ CPS abilities. We mainly study the following questions:

Q1: What is the profile and visual diagnostic for the knowledge dimensions of commognitive conflict among student pairs?

Q2: How can the commognitive conflict be diagnosed in the discourse of student pairs?

Q3: How can commognitive conflict be visualized using a 3D block diagram?

By studying students’ performance in commognitive conflict during CPS, it is possible to provide teachers with a theoretical framework and a visual case reference that enable them to deliver innovative, learning-oriented assessment and feedback practice in the classroom. It also provides guidelines and script materials for future AI-supported speech recognition and commognitive conflict discourse diagnosis.

Literature review

The study of commognitive conflict

In recent years, research on commognitive conflict has tended to extend in a broad sense, viewing commognitive conflict as a state produced by discrepancies between an individual’s cognitive structure and the environment or between various components within that structure ( Lee et al., 2003 ). Commognitive conflicts, that is, cognitive conflicts that arise during communication, emerge from differences in vocabulary usage, rules of evidence, and so on ( Sfard, 2008 ). From a cognitive perspective, the heterogeneity of a team’s knowledge gives rise to diverse cognitive conflicts, which in turn facilitate the activation of more flexible cognitive mechanisms. These mechanisms enable the fusion of divergent cognitive schemata, ultimately leading to the creation of new cognitive constructions ( Perry-Smith and Shalley, 2014 ). As communicative interaction between people with different knowledge structures generates new patterns of knowledge connection, creative friction or creative chaos emerges ( Zhang and Ni, 2006 ), stimulating various types of information exchange and the discovery of new solutions.

Sociocultural theorists’ research on the social interaction view of individual mental development has offered a theoretical and empirical basis for the assessment of commognitive conflict. Vygotsky (1962) originally proposed that learners acquire knowledge most effectively through interaction, dialog, and negotiation in social, authentic learning situations that promote holistic development. This not only improves students’ competency and learning performance but also stimulates the cognitive growth of the group through cooperation and interaction. When students are confronted with socially authentic problem situations, they participate in the CPS process through interaction, dialog, negotiation, and other learning styles. At this time, members’ heterogeneous knowledge structures come into contact with each other, and while the members build complementary knowledge within the team, they also generate varying degrees of commognitive conflict. Moreover, the proportion of commognitive conflicts in CPS has been found to be significantly higher than in traditional cooperative learning ( Liang et al., 2017 ).

Sfard (2007) created the theory of commognition and categorized its levels and components based on various student–teacher and student–student communication dialogs. She developed the commognitive vision of mathematics as a type of discourse, a defined form of communication made distinct by its vocabulary, visual mediators, routines, and the narratives it produces. However, the theory has not yet codified the levels of discourse or developed a more detailed description of the forms of conflict, which remains a challenging and innovative aspect of study. Although commognitive theory has areas that need refinement, its applications are very broad, it can serve as an effective research lens for different fields, and its potential has not yet been fully explored ( Presmeg, 2016 ).

Due to the widespread application of commognitive theory, which has also received considerable attention from academics, the theory has been refined in practice. Regarding knowledge constructs for commognitive conflict, Gyoungho (2007) proposed a structural map of knowledge and beliefs oriented toward the analysis of students’ cognitive conflict, which serves as a reference for classifying the knowledge content of commognitive conflicts.

Therefore, this paper argues that the commognitive dimensions of discourse can be classified into conceptual, procedural, and contextual knowledge. In this way, we are able to observe in the classroom how students collaborate to solve problems and to record the commognitive conflicts that arise during the interaction, dialog, and negotiation among members. If students can effectively manage commognitive conflicts, they will be able to foster cognitive development at both the individual and group levels; moreover, this ability will also enhance their critical thinking skills and creative abilities.

The study of commognitive conflict in CPS

In terms of problem-solving research and practice, scholars have constructed mature models to study and understand the process of problem solving; these models provide frameworks and guidelines for approaching problems effectively. Polya (1973) proposed a problem-solving model consisting of a four-step process, which emphasizes thoroughly understanding the problem, strategic thinking, and critical reflection, and which helps develop effective problem-solving skills. Subsequent scholars have adapted and expanded Polya’s model to cater to various needs and situations ( Yu, 2008 ; Cao et al., 2016 ; Wei, 2019 ). For example, Schoenfeld (1985) divided the paradigm of problem solving into six phases: preparation, exploration, strategy formulation, execution, evaluation, and inquiry. In the inquiry phase, commognitive conflict refers to the cognitive conflicts that students may encounter while engaging in an inquiry-based learning process; when learners encounter these conflicts, they are presented with opportunities for deeper understanding and critical thinking. However, these studies have primarily focused on individual students solving closed problems, and none of the models directly addresses commognitive conflict. When student pairs or groups solve mathematical problems in open environments, the cognitive model of CPS becomes more sophisticated, and major commognitive conflicts occur.

In a study of students’ commognitive conflict processes, Lewis and Mayer (1987) constructed a model of comparison-problem comprehension, arguing that students have a preference for the order of information provided in a problem and prefer problems presented in that same order. When students do not agree on the relational terms in solving comparison problems and on the required arithmetic operations, comprehension errors occur and commognitive conflict arises. This type of conflict, due to students’ preference for the order of problems, will most likely appear as explicit commognitive conflict during CPS.

In terms of discourse analysis of commognitive conflict in CPS, Barron (2000) focuses on group-level characteristics of CPS, providing targeted strategies for examining cooperative group learning and explanations for the variability in the outcomes of collaborative activities. By recording and coding the quality of communication in the study groups, the characteristics of group interaction and the congruence of problem-solving goals are analyzed, and the groups are classified into high-quality and low-quality problem solving. Iiskala et al. (2011) explore how metacognition becomes a socially shared phenomenon in their study of conversational episodes and characteristics during collaborative mathematical problem solving among high-achieving student pairs. These studies provide a powerful reference for the discourse analysis of commognitive conflict in CPS.

In the context of commognitive conflict visualization, Ding (2009) visualized knowledge refinement in CPS work, using a behavioral sequential approach to map student pairs’ and individuals’ knowledge-refinement curves in CPS. This visualization study of commognitive conflict in CPS offers valuable insights and ideas.

Overall, the majority of research on commognitive conflict in CPS has focused on the cognitive processes of individuals in problem solving and on discourse conflict, whereas visualization of the content classification and discourse of commognitive conflict is lacking. Therefore, our research utilized SOLO theory to select high-, medium-, and low-quality student pairs and conducted a comprehensive investigation into identifying, visualizing, and diagnosing commognitive conflicts during CPS. The statistical profile, discourse diagnosis, and visual analysis of commognitive conflict in CPS enable teachers to gain a deeper understanding of how students encounter commognitive conflicts and how student pairs of different quality approach problem solving. As a result, this provides teachers with timely and targeted guidance to support their students effectively, thereby enhancing students’ CPS skills. Moreover, the process of discourse analysis and visualization also offers a learning-oriented and innovative approach, providing a script for technology-assisted feedback practices.

Research participants and cases

Year 7, which is the present focus of the international CPS assessment, was selected rather than Year 8 in consideration of the exploratory nature of the project. The research segment, problem tasks, and research environment are all largely consistent with those of the Australian partner. A total of 32 student pairs (64 seventh-grade students) from the LH middle school, located in an urban area of the TZ district of BJ city and of moderate educational quality, were selected as the sample for the CPS recordings.

At the same time, student outcomes were evaluated using the SOLO (Structure of the Observed Learning Outcome) five-level classification evaluation method ( Biggs and Collis, 1982 ), which took into account the characteristics of the open-ended mathematical and contextual problems used in the project. SOLO theory classifies observable learning outcomes into five levels: prestructural, unistructural, multistructural, relational, and extended abstract. This resulted in the selection of typical cases with high-, medium-, and low-quality outcomes, as shown in Table 1 .


Table 1 . Selection of different student pairs’ CPS cases through SOLO.
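The SOLO-based case selection described above can be sketched as a simple classification step. The mapping from SOLO levels to quality bands below is an illustrative assumption, since the paper does not spell out its exact selection rule, and the pair IDs are hypothetical.

```python
# Hypothetical sketch: grouping student pairs into quality bands by SOLO level.
# The level-to-band mapping is an illustrative assumption, not the paper's exact rule.
SOLO_LEVELS = ["prestructural", "unistructural", "multistructural",
               "relational", "extended abstract"]

# Assumed banding: lower levels -> low, middle -> medium, upper levels -> high.
BAND_BY_LEVEL = {
    "prestructural": "low",
    "unistructural": "low",
    "multistructural": "medium",
    "relational": "high",
    "extended abstract": "high",
}

def select_cases(pair_outcomes):
    """Group pair IDs by quality band given their SOLO-rated outcomes."""
    bands = {"low": [], "medium": [], "high": []}
    for pair_id, level in pair_outcomes.items():
        bands[BAND_BY_LEVEL[level]].append(pair_id)
    return bands

bands = select_cases({"P14": "relational", "P4": "multistructural", "P2": "unistructural"})
```

One typical case would then be drawn from each band for the detailed discourse and visualization analysis.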

Research task

The study utilized open-ended contextualized mathematical problems from the Sino-Australian SEL project, which better provoke commognitive conflict in communication ( Clarke and Helme, 1998 ). The mathematical problem task is “Households and Age,” in which student pairs collaborate to calculate the age of each person and work out the social relationships among five people, as shown in Table 2 . The student competencies examined in this task include the pedagogical problem-solving cycle involved in the International Institute for Frontier Mathematics Education, which promotes individual and group reflection in a dialectical cycle ( Lu, 2017 ). Additionally, the study categorized the knowledge dimensions of commognitive conflict as conceptual, procedural, and contextual, and then examined and visually presented the features of the knowledge dimensions of the student pairs.


Table 2 . Analysis of the “Household and Age” open-ended contextualized math problems.

Research environment

Conforming to the specifications of the data-collection classroom environment for the Sino-Australian Student Collaborative Mathematics Problem Solving Project, the study was conducted in a school filming classroom with which the children were quite familiar. In the videotaped classroom, each group of 4–6 students sat around a set of tables arranged together. The participants performed three tasks: individually, in pairs, and in groups; in this paper, we analyze only the task conducted in pairs. A video camera was set up to capture the entire activity, and wireless microphones (left and right channels) were used to gather sound. Each group of students received pens, task sheets, rough draft paper, and other tools so they could complete the mathematical tasks. The study utilized 12 min of data from the problem-solving session involving pair participation. Figure 1 depicts the setup in detail.


Figure 1 . Classroom setup for data collection for the pairs’ math problem-solving project.

Data analysis

During mathematical problem solving, student performance was videotaped, and the discourse was coded and analyzed. In this paper, we present a visual presentation and qualitative analysis of students’ performance in commognitive conflict during CPS, analyzing the differences in knowledge dimensions, the time and frequency of occurrences, and the diagnosis and visualization of commognitive conflict. The study coded the knowledge dimensions of commognitive conflict, identifying and classifying the types of conflict segments and recording the length of each conflict segment. This yielded statistics on the number, type, and average duration of conflicts for the 32 pairs. Two coders were used to confirm the validity of the coding results, and the consistency coefficient was 0.913; inconsistencies in coding were reviewed by the coders until consensus was reached.
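If the consistency coefficient reported above is computed as simple percent agreement between the two coders (an assumption; the paper does not name the statistic used), the check can be sketched as follows, with made-up segment labels:

```python
# Sketch of a two-coder agreement check; the segment labels are illustrative only.
def percent_agreement(coder_a, coder_b):
    """Proportion of segments the two coders labeled identically."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must label the same segments")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

a = ["procedural", "contextual", "procedural", "conceptual"]
b = ["procedural", "contextual", "contextual", "conceptual"]
agreement = percent_agreement(a, b)  # 3 of 4 segments match -> 0.75
```

A chance-corrected statistic such as Cohen’s kappa would be a stricter alternative; percent agreement is used here only to illustrate the workflow.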

To further analyze the commognitive conflicts of student pairs of different quality, the study used Nvivo12 software and 3D visualization block diagrams to present a more visual representation of commognitive conflict in CPS.

Visual diagnostic of commognitive conflict knowledge dimensions

The study first coded the commognitive conflicts of the 32 student pairs and conducted an overall statistical analysis of the number of conflicts, average conflict duration, proportions of different types of conflicts, and resolution rates. By coding and counting the pairs’ commognitive discourse, it was found that commognitive conflicts were mainly concentrated in procedural and contextual knowledge, accounting for 47.5% and 46.5%, respectively; conceptual knowledge accounted for only 6%. In terms of problem-solving percentage, procedural knowledge accounted for 31%, with a resolution rate of 65.3%; contextual knowledge accounted for 26%, with a resolution rate of 55.9%; and conceptual knowledge accounted for 6%, with a resolution rate of 100%. In terms of average conflict duration, procedural knowledge conflicts took the longest to resolve, averaging 52.77 s per conflict. In contextual knowledge, unresolved conflicts lasted 49.61 s on average, as students became aware of the differences in their respective mathematical contexts and therefore chose to postpone the conflicts. The details are shown in Table 3 .


Table 3 . Visual diagnostic profile of commognitive conflict.
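The kind of summary reported in Table 3 (share of each conflict type, resolution rate, and mean duration per knowledge dimension) can be reproduced from coded conflict segments roughly as follows. The segment records here are made-up illustrations, not the study’s data.

```python
# Hypothetical coded conflict segments: (dimension, duration_seconds, resolved?).
segments = [
    ("procedural", 50, True), ("procedural", 60, False),
    ("contextual", 45, True), ("contextual", 55, False),
    ("conceptual", 20, True),
]

def summarize(segments):
    """Per-dimension share of all conflicts, resolution rate, and mean duration."""
    stats = {}
    total = len(segments)
    for dim in ("conceptual", "procedural", "contextual"):
        subset = [s for s in segments if s[0] == dim]
        if not subset:
            continue
        stats[dim] = {
            "share": len(subset) / total,
            "resolution_rate": sum(s[2] for s in subset) / len(subset),
            "mean_duration": sum(s[1] for s in subset) / len(subset),
        }
    return stats

stats = summarize(segments)
```

Running this over all coded segments for the 32 pairs would yield the aggregate profile shown in Table 3.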

As can be seen in Table 3 , conceptual knowledge conflicts have the lowest percentage and the highest resolution rate, which indicates that students have a good grasp of the basic concepts involved in such problems.

To further analyze the commognitive conflicts among student pairs in CPS, the study selected high-quality, medium-quality, and low-quality case pairs using the SOLO theory. Visual analysis was then conducted on the quantity, occurrence, duration, and resolution status of commognitive conflicts in student pairs, as shown in Table 4 .


Table 4 . Diagnosis and visualization cases of commognitive conflict.

Similarities and differences were discovered in the total number, occurrence, duration, and resolution status of commognitive conflicts among the student pairs. The similarity lies in the total number of commognitive conflicts, with 8–9 conflicts occurring within the 12-min period. The difference is that each pair has its own characteristics in the occurrence, duration, and resolution of commognitive conflicts. The conflicts among high-quality student pairs emerged early and were resolved relatively quickly, with 7 of 8 conflicts resolved; medium-quality pairs resolved 5 of 8; and low-quality pairs resolved only 3 of 9, with the first two conflict periods taking longer. The information presented in the visualization diagram in Table 4 can also be expressed as shown in Table 5 .


Table 5 . Statistical comparison of different student pairs’ commognitive conflicts.

In terms of the categories of commognitive conflicts, the different student pairs have fewer conflicts in the conceptual knowledge dimension. This is because the conceptual content of this mathematical task is not difficult, involving ideas such as ‘the total age of five people, the age of a seventh grader, and the ages of the remaining four people’; when such conflicts arise, students are more likely to resolve them. The commognitive conflicts mainly concentrate on the procedural and contextual dimensions, which is consistent with the conclusions obtained from Table 3 . In high-quality student pairs, all procedural-dimension conflicts were resolved, while unresolved conflicts remained in the medium- and low-quality pairs. Therefore, the study performs further discourse diagnosis and 3D block-diagram visualization analysis on the procedural and contextual knowledge-dimension conflicts and reveals the characteristics and patterns associated with them.

Discourse diagnosis of commognitive conflict

After selecting procedural-dimension commognitive conflict fragments, such as the high-quality P14-4S, medium-quality P4-3S, and low-quality P2-2S, and performing discourse visualization diagnosis, it was discovered that procedural commognitive conflict is primarily manifested in the discourse about mathematical results after calculation errors by the pairs. Table 6 takes the commognitive conflict fragments of the procedural knowledge dimension as an example and performs discourse diagnosis from the beginning to the end of the conflict; this approach is applied throughout the entire study.


Table 6 . Discourse diagnosis of commognitive conflict in the procedural knowledge dimension.

Discourse diagnosis helps the research to identify and analyze the linguistic, semantic, and interactional features of discourse and to reveal the underlying patterns and dynamics of how student pairs resolve procedural commognitive conflict. For example, in the high-quality student pair, the discourse diagnosis reveals that Girl 14B had a commognitive conflict with Girl 14A’s calculation of 34 and further questioned the follow-up operation, which was diagnosed as the cause and detection of the conflict. Girl 14B resolved the commognitive conflict after Girl 14A pointed to the draft document and made her explanations clear; she also suggested using 125 minus 34 and continuing to work backward, which was accepted by Girl 14A. The entire procedure took only 40 s, resolved the conflict, and even moved the solution forward.

Commognitive conflict visual diagnosis using 3D block diagram

The study selected contextual-knowledge-dimension commognitive conflict fragments from pairs of different quality and conducted 3D block-diagram visualization analysis and diagnosis. This 3D block diagram is adapted from Lee’s structure of cognitive conflict. In the research task, contextual knowledge is mainly divided into school mathematics knowledge and life experience. The procedural knowledge dimension is divided into arithmetic, mathematical quantitative certainty, and interval uncertainty. The conceptual knowledge dimension involves basic mathematical concepts and other knowledge, consistent with the previous text. The path characteristics of commognitive conflicts can be seen intuitively in the 3D block diagram, as shown in Table 7 .


Table 7 . 3D block diagram of commognitive conflict in the contextual knowledge dimension.

The 3D diagram allows for an intuitive visualization of the paths taken by student pairs in commognitive conflicts. Taking the high-quality P14-5S fragment as an example, the conflict path is as follows: 1-BA → 2-B → 3-A → 4-BA → 5-A. The specific process is described as follows:

1-BA represents student 14B, who based their judgment on life experience and considered the possibility that the children are 12 years old, while student 14A revised this to 13 years old. 2-B indicates that student 14B, considering the reasonableness of the gap between the parents’ and children’s ages, concluded that the other two children should be siblings. 3-A indicates that student 14A experienced a commognitive conflict in response to student 14B’s 2-B judgment; this conflict arose from student 14A’s narrow focus on mathematical quantification in school mathematics. 4-BA represents student 14B’s explanation of their 2-B judgment; after student 14A confirmed the explanation, they performed calculations to check its validity. 5-A indicates that when student 14A recorded the results, they considered the uncertain nature of the mathematics and added descriptive elements such as “examples” in the column.
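The path notation above (1-BA → 2-B → 3-A → 4-BA → 5-A) can be encoded as an ordered list of (step, participants) records, which is one way such paths could be stored for automated diagnosis. This is a sketch of that encoding, not the study’s actual data format; the arrow is rendered in ASCII here.

```python
# Sketch: encoding a commognitive-conflict path as ordered (step, participants) pairs.
# 'participants' names who drives that step: "A", "B", or "BA" for both students.
path = [(1, "BA"), (2, "B"), (3, "A"), (4, "BA"), (5, "A")]

def render_path(path):
    """Render the path in the paper's arrow notation (ASCII '->' stands in for the arrow)."""
    return " -> ".join(f"{step}-{who}" for step, who in path)

rendered = render_path(path)  # "1-BA -> 2-B -> 3-A -> 4-BA -> 5-A"
```

Storing paths this way would let different pairs’ trajectories be compared or counted programmatically rather than read off the diagram by hand.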

Research has revealed distinct path characteristics for commognitive conflicts among student pairs of different quality levels. In the case of the high-quality student pair (P14-5S), the process path for the emergence, negotiation, and resolution of commognitive conflicts follows “life experience → school mathematics → life experience,” and the pair considers both the certainty and uncertainty aspects of mathematical problems. Similarly, the medium-quality student pair (P4-8S) follows a path of “life experience → school mathematics → life experience.” They take into account the age range in real life, which is then transformed into mathematical certainty, leading them to conclude that the five individuals share a rental agreement; although their thinking is somewhat biased toward the life context, their logic remains reasonable. On the other hand, the low-quality student pair (P2-8S) only experiences the path “life experience → school mathematics.” They fail to fully convert the discussed problems into the realm of school mathematics, and despite considering the interval uncertainty, their analysis lacks accuracy and depth.

In the 3D visualization diagnosis of commognitive conflict, the study mainly draws on Gyoungho Lee’s three-dimensional viewable framework ( Lee and Yi, 2013 ); however, Lee’s related research mainly reveals static cognitive conflict and has not yet applied the theory to dynamic cooperative problem solving. This research uses the 3D analysis framework to present a dynamic path for commognitive conflicts and thereby refines the use of the framework. Table 7 shows that commognitive conflicts of different quality in CPS follow distinct pathways, with the high- and medium-quality student pairs showing a “life experience → school mathematics → life experience” problem-solving process. In this process, students make full use of their existing life experiences and mathematical knowledge, improve their own mathematical knowledge during commognitive conflict with their partners, and internalize and recreate based on their existing mathematical knowledge. This coincides with Freudenthal’s theory of mathematics education: the ideas of mathematical reality, mathematization, and recreation ( Freudenthal, 1973 ). It demonstrates that students may advance their mathematics learning and build on their own mathematical and life experiences by effectively resolving commognitive conflicts in CPS.

International assessments recognize CPS skills as critical for student growth, and commognitive conflict theory provides a theoretical foundation for individual knowledge creation and social engagement in collaborative challenges. Although established commognitive theories give a comprehensive description of conflict levels and elements, visualization studies of content classification and discourse levels are lacking. This research provides a visual representation of the knowledge-dimension classification of commognitive conflict in CPS, as well as a discourse analysis of student pairs of different quality. The Nvivo12 software was utilized to visualize commognitive conflict with sound waves and to present three-dimensional routes in a 3D analytic framework; this innovation presents commognitive conflict in a more concrete and visual manner. Specifically, intelligent diagnosis and visualization of student pairs’ CPS behavior can enable teachers to provide timely feedback, offering new solutions for observing students’ engagement during CPS.

In this research, seventh-grade student pairs, who had already learned addition, subtraction, multiplication, and division, were chosen to participate in the study. Because open-ended contextualized mathematical life situations are added, students must attend to both arithmetic and character relationships, as well as each person’s age and the total age. Table 3 presents the overall statistics of commognitive conflicts for the 32 student pairs. The researchers discovered a low percentage of conceptual knowledge conflicts, while procedural and contextual knowledge conflicts were the primary challenges; instructional interventions in these areas should therefore be strengthened. Accordingly, we conducted further investigations into commognitive conflicts in the procedural and contextual knowledge dimensions through discourse diagnosis and visual analysis.

In the discourse diagnosis of commognitive conflict ( Table 6 ), the study identified the origination, detection, interpretation, modification, and response phases of commognitive conflict. These specific linguistic features help us better understand when and where a commognitive conflict begins and ends, similar to the work done by Zhao et al. (2022) . This analysis allows for a deeper understanding of the processes involved in commognitive conflict, enabling researchers and educators to effectively address and intervene in these conflicts to enhance student learning outcomes.

For visual analysis, Table 4 presents the visualization of commognitive conflicts using sound waves, displaying their occurrence, quantity, duration, and resolution status. Additionally, Table 7 uses 3D block diagrams to illustrate the path of commognitive conflict in the contextual knowledge dimension; this dynamic visualization shows how the conflicts occurred and their possible resolutions. For example, in the case of the low-quality student pair (P2-8S) mentioned in Table 7 , if the teacher uses the visual diagram and identifies that this pair is having difficulty translating life experience into school mathematics, timely intervention can be implemented. Recognizing this specific challenge allows the teacher to address it directly and provide targeted support or guidance to help these students overcome the hurdle.

To summarize, we diagnose and visualize the commognitive conflicts in student pairs’ CPS, which innovates the evaluation of learning-oriented feedback practices. Discourse diagnosis and visual analysis play a crucial role in enhancing the impact of feedback on student learning. As highlighted by Er et al. (2021) , peer feedback within pairs can be particularly effective in providing valuable solutions. Because problem solving is at once a tool, a skill, and a process, the effective identification of commognitive conflicts is needed to improve CPS skills and can even lead to creative solutions. However, owing to limitations in research time and effort, further research is needed to analyze group work, employ additional visualization diagnoses, and explore teachers’ feedback on student cooperation issues, among other aspects. Addressing these shortcomings will be the focus of future research.

Based on the above findings, three conclusions can be drawn: students should be encouraged to focus on and resolve commognitive conflicts and should receive timely feedback; visualization studies of commognitive conflict can empower AI-assisted teaching; and the intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback.

Encourage students to focus on and resolve commognitive conflicts and provide timely feedback

Existing commognitive theories give a detailed classification of commognitive conflict levels and elements, but research on content classification and visualization of discourse levels is lacking. In this study, we classify and visualize the knowledge dimensions of commognitive conflict in order to provide a theoretical and practical foundation for broader application. Future advancements can be made in the precise classification of commognitive conflict, the refinement of discourse analysis, and the development of visualization tools.

The visual analysis of the commognitive conflicts of student pairs of different quality revealed significant differences in the duration, amount, and resolution of conflict, especially in the procedural and contextual knowledge dimensions. Teachers therefore need to encourage students to focus on and resolve commognitive conflicts and to make appropriate instructional interventions, which facilitates the development of students’ CPS from low to high quality. Commognitive conflict in CPS, when coupled with timely and targeted feedback, empowers teachers by fostering student engagement, deepening understanding, and enabling personalized instruction. When students receive immediate responses to their contributions, their engagement and participation are reinforced. Furthermore, timely and personalized feedback provides guidance and clarifies misconceptions, helping students refine their understanding and address their unique challenges. Therefore, the dynamic coding and visualization of commognitive conflict in CPS can more effectively identify the types and manifestations of commognitive conflict in problem solving and thus provide a basis for teachers to provide targeted instructional interventions.

Visual studies of commognitive conflict can empower AI-assisted teaching

During the research, we investigated explicit indicators of CPS, such as commognitive conflict categories, occurrence counts, and average conflict duration, which are conducive to automatic identification and data analysis given the rapid development of artificial intelligence (AI) for speech recognition. As AI technologies have the potential to reduce the workload for teachers and test developers ( Yunjiu et al., 2022 ), their further development in the application of commognitive theory can provide directions and scripts for AI-assisted teaching, particularly for commognitive conflict discourse recognition and diagnosis.

At the same time, as AI technologies such as speech recognition mature, computerized automated evaluation has begun to perform intelligent diagnostic analysis. A more in-depth visualization study of commognitive conflict in CPS can therefore provide ideas and references for future AI-empowered teaching and learning.
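
As a toy illustration of what the simplest automated pass might look like, the sketch below scans a transcript for surface disagreement markers and flags candidate conflict turns for human review. The marker list and transcript format are invented for illustration and are far cruder than commognitive discourse analysis:

```python
# Hypothetical surface markers of disagreement, not the study's coding scheme.
MARKERS = ("no,", "but", "that's wrong", "i disagree", "why")

def flag_candidate_conflicts(turns):
    """turns: list of (speaker, utterance) pairs.
    Returns indices of turns containing a disagreement marker,
    i.e. candidate conflict onsets to be confirmed by a human coder."""
    flagged = []
    for i, (_, utterance) in enumerate(turns):
        text = utterance.lower()
        if any(marker in text for marker in MARKERS):
            flagged.append(i)
    return flagged

# An invented fragment of pair talk on an age word problem
turns = [
    ("S1", "The total age is 44, so each is 22."),
    ("S2", "No, but their ages differ by 4."),
    ("S1", "Why would that change the sum?"),
]
print(flag_candidate_conflicts(turns))  # → [1, 2]
```

A real pipeline would sit downstream of speech recognition and use richer linguistic features, but even this crude filter shows how explicit indicators make conflicts machine-detectable.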

Intelligent diagnosis and visual analysis of CPS provide innovative solutions for teaching feedback

In this paper, intelligent diagnosis and visual analysis were carried out on the commognitive conflicts of student pairs in CPS, providing an innovative solution for the visual presentation of teaching feedback. Visual studies of commognitive conflict can provide teachers with rich data to inform their decision-making. For example, when the visual diagram shows low-quality student pairs, teachers can analyze the visual data to gain insights into the quality of students’ interactions and determine when and how to provide targeted instructional interventions. Further development can provide an analysis framework and case references for teachers, or even for automated computer evaluation of student pairs’ problem-solving level. This solution of intelligent diagnosis and visual analysis is intended to provide a deeper understanding of how students respond to feedback practices around commognitive conflicts in CPS in future teaching, and to shift toward more applicable results, which in turn promotes individual development in social interaction and communication. Therefore, future research should strengthen the diagnosis and visualization of commognitive conflict in CPS, which will provide scripts for future artificial intelligence and offer data support for targeted, timely, and personalized assistance.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Hangzhou Normal University Ethics Committee. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin. Written informed consent was obtained from the individual(s), and minor(s)’ legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.

Author contributions

JL designed this study and drafted the original manuscript. YZ collected the data. JL, YZ, and YL had full access to the data and analysis. YL finalized the data analysis results. All authors contributed to the article and approved the submitted version.

Funding

This research was supported by the NSFC project (62077041) and the Ministry of Education Industry-University Cooperation Project (no. 202101078019).

Acknowledgments

The authors would like to thank the students, parents, classroom teachers, platform staff, and the Teaching Academic Program of Jinghengyi School of Education, Hangzhou Normal University in Hangzhou, China for their invaluable support of this project.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Barron, B. (2000). Achieving coordination in collaborative problem-solving groups. J. Learn. Sci. 9, 403–436. doi: 10.1207/S15327809JLS0904_2


Biggs, J. B., and Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy (structure of the observed learning outcome) . New York: Academic Press.


Cao, Y., Liu, X., and Guo, K. (2016). Research on the critical level of critical pedagogical behaviors in middle school mathematics classrooms. J Maths Educ 12, 73–78. doi: 10.14082/j.cnki.1673-1298.2016.04.011

Clarke, D., and Helme, S. (1998). Context as construction. Mathematics teaching from a constructivist point of view , 129–147.

Ding, N. (2009). Visualizing the sequential process of knowledge elaboration in computer-supported collaborative problem solving. Comput. Educ. 52, 509–519. doi: 10.1016/j.compedu.2008.10.009

Er, E., Dimitriadis, Y., and Gašević, D. (2021). Collaborative peer feedback and learning analytics: theory-oriented design for supporting class-wide interventions. Assess. Eval. High. Educ. 46, 169–190. doi: 10.1080/02602938.2020.1764490

Freudenthal, H. (1973). Mathematics as an educational task . Dordrecht, Holland: D. Reidel Publishing Company.

Gyoungho, L. E. E. (2007). Why do students have difficulties in learning physics?: toward a structural analysis of student difficulty via a framework of knowledge and belief. Origin New Phys 54, 284–295.

Iiskala, T., Vauras, M., Lehtinen, E., and Salonen, P. (2011). Socially shared metacognition of dyads of pupils in collaborative mathematical problem-solving processes. Learn. Instr. 21, 379–393. doi: 10.1016/j.learninstruc.2010.05.002

Lee, G., Kwon, J., Park, S.-S., Kim, J.-W., Kwon, H.-G., and Park, H.-K. (2003). Development of an instrument for measuring cognitive conflict in secondary-level science class. J. Res. Sci. Teach. 40, 585–603. doi: 10.1002/tea.10099

Lee, G., and Yi, J. (2013). Where cognitive conflict arises from?: the structure of creating cognitive conflict. Int. J. Sci. Math. Educ. 11, 601–623. doi: 10.1007/s10763-012-9356-x

Lewis, A. B., and Mayer, R. E. (1987). Students' miscomprehension of relational statements in arithmetic word problems. J. Educ. Psychol. 79, 363–371. doi: 10.1037/0022-0663.79.4.363

Li, Y. (2017). The research on the project of 'Collaborative problem-solving online assessment' in Australia: an analysis based on the 'Conceptual Assessment Framework' of the ECD model. Primary and Secondary Schooling Abroad , 31–38.

Liang, Y., Zhu, K., and Zhao, C. (2017). An empirical research on improving the depth of interaction through collaborative problem-solving learning activities. e-Educ Res 38:87-92+99. doi: 10.13811/j.cnki.eer.2017.10.014

Lu, J. (2017). Progress and trend of the research of international mathematics curriculum and teaching based on PME40. J Maths Educ 26, 77–81.

OECD. (2015). PISA 2015 RELEASED field trial cognitive items. OECD Publishing. Available at: https://cnki.com.cn/Article/CJFDTOTAL-SXYB201705015.htm

Perry-Smith, J. E., and Shalley, C. E. (2014). A social composition view of team creativity: the role of member nationality-heterogeneous ties outside of the team. Organ. Sci. 25, 1434–1452. doi: 10.1287/orsc.2014.0912

Polya, G. (1973). How to solve it . Princeton, NJ: Princeton university press.

Presmeg, N. (2016). Commognition as a lens for research. Educ. Stud. Math. 91, 423–430. doi: 10.1007/s10649-015-9676-1

Schoenfeld, A. H. (1985). Mathematical problem solving . New York, NY: Academic Press.

Sfard, A. (2007). When the rules of discourse change, but nobody tells you: making sense of mathematics learning from a commognitive standpoint. J. Learn. Sci. 16, 565–613. doi: 10.1080/10508400701525253

Sfard, A. (2008). Thinking as communicating: Human development, the growth of discourses, and mathematizing . Cambridge, UK: Cambridge university press.

Vygotsky, L. (1962). Thought and language . Cambridge, MA: MIT Press.

Wang, L. (2016). Assessment and evaluation of key competences for student development: lessons from collaborative problem solving in PISA 2015. Glob Educ 45, 24–30.

Wei, X. (2019). Problem solving and cognitive simulation—the example of mathematical problems . Beijing, China: China Social Sciences Press.

Xu, Z. (2018). 'Communicational Approach' to the study of mathematics education: background, hypothesis and conceptual framework. Stud For Educ 45, 98–110.

Yu, P. (2008). Theory of CPFS frame in mathematics learning . Nanning, China: Guangxi Education Press.

Yuan, J., and Liu, H. (2016). The measurement of collaborative problem solving: Analyzing of the measuring principle of PISA2015 and ATC21S. Stud For Educ 43, 45–56.

Yunjiu, L., Wei, W., and Zheng, Y. (2022). Artificial intelligence-generated and human expert-designed vocabulary tests: a comparative study. SAGE Open 12:21582440221082130. doi: 10.1177/21582440221082130

Zhang, G., and Ni, X. (2006). Knowledge conflict process: a case study. R&D Manage 18, 66–73.

Zhao, J., Song, T., Song, X., and Bai, Y. (2022). Analysis on the linguistic features of conflict discourse in mathematical cooperation problem solving in China [original research]. Front. Psychol. 13:945909. doi: 10.3389/fpsyg.2022.945909


Zhou, J., and Lu, J. (2017). Collaborative problem solving task of PISA2015 and ATC21S and its enlightenment to mathematics education. Maths Teaching in Middle Schools (in Chinese), 64–66.

Keywords: commognitive conflict, collaborative problem solving, cognitive diagnosis, visualization, learning-oriented feedback

Citation: Lu J, Zhang Y and Li Y (2023) Visual analysis of commognitive conflict in collaborative problem solving in classrooms. Front. Psychol . 14:1216652. doi: 10.3389/fpsyg.2023.1216652

Received: 04 May 2023; Accepted: 21 November 2023; Published: 20 December 2023.


Copyright © 2023 Lu, Zhang and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yangjie Li, [email protected]


PISA Research, Development and Innovation (RDI) Programme

The Research, Development and Innovation (RDI) programme established by the PISA Governing Board in 2018 explores how different areas of the assessment programme (e.g. test design, scoring methodologies) can be improved.

Since 2012, PISA has assessed students in an innovative domain in every cycle. The innovative domain assessments target interdisciplinary, 21st century competences (e.g. creative thinking), providing PISA countries/economies with a more comprehensive outlook on their students’ ‘readiness for life’. This work drives innovation not only in what PISA assesses, but also in the test format and in how results are reported.

PISA Innovative Domain Assessments, by cycle/year

In addition to assessing students’ literacies in reading, mathematics and science, PISA develops an innovative domain assessment for each cycle that targets a new and relevant 21st century competence. The innovative domains to date are:

PISA 2025 Learning in the Digital World

The PISA 2025 Learning in the Digital World assessment will provide international data on students’ capacity to engage in an iterative process of knowledge building and problem solving using computational tools. The data will strengthen our understanding of the skills and attitudes students need to become autonomous learners in increasingly digital education and work environments.

PISA 2022 Creative Thinking

The PISA 2022 Creative Thinking assessment examines students' capacity to generate diverse and original ideas, and to evaluate and improve ideas, across a range of contexts through open-ended communication and problem solving tasks.

PISA 2018 Global Competence

The PISA 2018 Global Competence assessment provides an overview of education systems’ efforts to create learning environments that invite young people to understand the world beyond their immediate environment, interact with others with respect for their rights and dignity, and take action towards building sustainable and thriving communities.

PISA 2015 Collaborative Problem Solving

The PISA 2015 collaborative problem solving assessment provides internationally comparable data on students' ability to be collaborative team players when solving problems in an increasingly interconnected world.

PISA 2012 Creative Problem Solving

The PISA 2012 Creative Problem Solving assessment provides internationally comparable data on students' capacity to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one’s potential as a constructive and reflective citizen.

Ongoing RDI projects

PISA is currently undertaking several RDI projects: improving accessibility, measuring ESCS, automatic coding of text responses, assessing critical media literacy, developing PISA quality standards.

Improving accessibility

In order to ensure the validity and comparability of the results of the assessment, PISA has maintained strict guidelines on the participation of students with special education needs, providing limited possibilities to accommodate them. As a result, some students are currently excluded from PISA and, in some countries, exclusion rates are growing as more and more students are recognised as having disabilities or needs. Given the breadth and policy relevance of PISA, it is important that it gives every student the opportunity to demonstrate their skills, and that it generates information on the learning outcomes and context that represents all students.

This project aims to improve accessibility in PISA by i) taking stock of the situation and developing instruments to assess PISA’s accessibility; ii) identifying and testing promising accommodations to address special education needs; iii) improving the existing instruments through the definition of inclusive design principles to reduce the need for accommodations over time.

Contributors

  • Elodie Persem (Head of the Accessibility, Innovation and Research Pole at the DEPP, France)
  • Countries and economies: Brazil, Colombia, Denmark, Finland, France, Norway, Peru, Portugal, Spain, Sweden, United Kingdom

Measuring ESCS

Students’ economic, cultural, and social status (ESCS) represents a key variable for PISA and its intended use as a comparative assessment of the performance of education systems. It makes it possible to contextualize results of the student assessment within broader societal differences, and to address, within each system, questions about differences in educational opportunity, inequalities in learning outcomes, and how these differences and inequalities have evolved over time.

Given its importance to understanding PISA results, this project aims to improve the measurement of students' socio-economic status while maintaining comparability with the measure available for previous cycles, and to construct new indicators of equity in education and of material deprivation (and material well-being) of children.

Automatic coding of text responses in the PISA test and questionnaire

Constructed-response (‘open-ended’) items constitute an integral part of the PISA assessment: they allow the assessment of higher-order cognitive skills in some domains, offer a change of pace from closed-response (e.g. multiple-choice) items, allow for partial grading, prevent students from guessing correctly, and make it possible to gather information that would not be available using a closed-response format. With the exception of some short, numeric answers, the resulting text responses require human coding before the data can be analysed. The coding process is, however, error-prone, time-consuming, and expensive.

This project aims to introduce a system for automatically coding open-text responses into the operational procedures in PISA. It will also evaluate the feasibility of using artificial intelligence (AI), and in particular natural-language processing (NLP) methods for coding longer text responses. 
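
As a rough illustration of the idea (not PISA’s operational system), a minimal NLP-style coder can assign each new response the score code of its most similar human-coded example; the responses and codes below are invented:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a)  # missing keys count as 0
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / (denom or 1.0)

def code_response(response, coded_examples):
    """Assign the code of the most similar human-coded response (1-NN)."""
    vec = bow(response)
    best_code, best_sim = None, -1.0
    for text, code in coded_examples:
        sim = cosine(vec, bow(text))
        if sim > best_sim:
            best_code, best_sim = code, sim
    return best_code

# Hypothetical human-coded training pairs: (student answer, score code)
examples = [
    ("the graph rises because temperature increases demand", "full_credit"),
    ("i dont know", "no_credit"),
]
print(code_response("demand increases when temperature rises", examples))  # → full_credit
```

Operational systems use far richer models (transformer embeddings, calibrated confidence thresholds with human fallback), but the workflow is the same: learn from human-coded responses, then code new ones automatically.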

  • Gisele Alves (Instituto Ayrton Senna, Brazil)
  • Nico Andersen (DIPF, Germany)
  • Roger Beaty (Pennsylvania State University, United States)
  • Mathias Benedek (Universität Graz, Austria)
  • Denis Dumas (University of Georgia, United States)
  • Peter Organisciak (University of Denver, United States)
  • John D Patterson (Pennsylvania State University, United States)
  • Ricardo Primi (Universidade São Francisco, Brazil)
  • Ricelli Silva (University of São Paulo, Brazil)
  • Fabian Zehner (DIPF, Germany)

Assessing critical media literacy

Every day, people access the web to stay informed on what is happening around them, to learn about new topics and to interact with their peers. However, information on the web (and particularly social media platforms) is not always reliable and could reinforce biased perspectives. In this context, it becomes essential that students develop the skills needed to critically evaluate and use online information, as well as learn to use social media in a critical and responsible way. 

This project aims to begin the development of a new assessment of online media literacy. This complex construct includes active inquiry processes in open and interactive environments and aspects of responsible decision-making, such as deciding what information to share and understanding the consequences of these decisions. Knowledge of media tools and networks is also an important element of the construct.

  • Magdalena Pokropek (Uniwersytet Warszawski)
  • Luis Francisco Vargas-Madriz (McGill University)

Defining comprehensive PISA standards

The PISA technical standards focus on the procedures that ensure the consistent implementation of PISA in different countries, but do not address other phases of the project cycle (e.g. instrument development, analysis and reporting) or broader quality dimensions such as validity, reliability, cross-national and cross-cycle comparability, and fairness.

This project aims to develop additional guidelines to guide the development of high-quality instruments and ensure that PISA results are accurately interpreted and used. 

  • Nina Jude (University of Heidelberg, Institute for Educational Science)
  • Peter van Rijn (Educational Testing Service)
  • Leslie Rutkowski (Indiana University, School of Education)
  • Stephen G. Sireci (University of Massachusetts at Amherst, College of Education)
  • Javier Suarez-Alvarez (University of Massachusetts at Amherst, College of Education)
  • Megan Welsh (UC Davis, School of Education)
  • Sabine Meinck (International Association for the Evaluation of Educational Achievement – IEA)
  • Christian Monseur (University of Liège, Department of Education Sciences)
  • Elica Krajceva (cApStAn)
  • Jonas Bertling (Educational Testing Service)
  • Ketan (University of Massachusetts at Amherst, College of Education)
  • Lucia Tramonte (University of New Brunswick, Canadian Research Institute for Social Policy)

Completed RDI projects

In previous years, the Programme has conducted research on multi-stage adaptive testing, measuring test engagement, and using process data.

Increasing efficiency through multi-stage adaptive testing

Starting with the 2018 cycle, PISA uses a multi-stage adaptive testing (MSAT) algorithm to assign different test forms to students with different abilities. This helped PISA address test-fairness concerns and reduced measurement error, especially for students with exceptionally low or high performance.

This project explored PISA’s current approach to adaptive testing, identified other adaptive designs that can be incorporated into the instrument, and provided recommendations regarding future test designs.
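
The routing step of a much simplified two-stage adaptive design can be sketched as follows; the cut-offs and module labels are illustrative assumptions, not PISA’s operational parameters:

```python
def route_second_stage(core_score, n_core_items, low_cut=0.4, high_cut=0.7):
    """Route a student to an easy/medium/hard second-stage module based on
    the proportion correct in the core (routing) module.
    Cut-offs are illustrative, not PISA's operational values."""
    p = core_score / n_core_items
    if p < low_cut:
        return "easy"
    if p < high_cut:
        return "medium"
    return "hard"

print(route_second_stage(3, 10))  # → easy
print(route_second_stage(5, 10))  # → medium
print(route_second_stage(8, 10))  # → hard
```

Matching second-stage difficulty to the provisional ability estimate is what reduces measurement error at the extremes of the performance distribution.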

Related outputs

  • Invited Symposium
  • Buchholz J., Shin H. J. and M. Bolsinova (forthcoming); “Test engagement in multistage adaptive testing - A causal analysis based on PISA 2018”
  • Chang H. and Y. Zheng (forthcoming); “On-the-fly Multistage Testing: An Alternative Design for PISA”
  • Frey A., C. König and A. Fink (forthcoming); “A highly adaptive testing design for PISA”
  • Shin H. J., C. König, F. Robin, K. Yamamoto and A. Frey (forthcoming); “Robustness of Multistage Adaptive Testing Designs in Educational Large-Scale Assessments”
  • Van Rijn P., U. Ali, H. J. Shin and F. Robin (forthcoming); “Stepwise Assembly for Multistage Adaptive Testing: An Application to PISA 2022 Mathematics”

Developing measures of engagement

Because performance in PISA has limited consequences for students who sit the test, their motivation to show what they know and can do might not always conform to the expectations of test developers and policy makers who rely on PISA data for decision making. Student motivation, effort, and engagement might influence not only their performance on the PISA test, but also the reliability of the responses in the student questionnaire. Systematic differences in student engagement between groups of students thus hold the potential to distort valid comparisons between these groups.

This project aimed to enhance our understanding about the phenomenon based on previous cycles of PISA, and to inform the development and validation of PISA instruments as well as the use and communication of PISA data in the future.
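
One common proxy for test-taking disengagement is rapid guessing: responses submitted faster than some time threshold. A minimal sketch, with an illustrative threshold rather than any official PISA rule:

```python
def disengagement_rate(response_times_s, threshold_s=5.0):
    """Share of item responses faster than a rapid-guessing threshold.
    The 5-second cut-off is an illustrative choice, not PISA's rule."""
    rapid = sum(1 for t in response_times_s if t < threshold_s)
    return rapid / len(response_times_s)

# Invented per-item response times (seconds) for one student
times = [42.0, 3.1, 60.5, 2.2, 18.0]
print(disengagement_rate(times))  # → 0.4
```

Comparing such rates across student groups is one way systematic engagement differences, which can distort group comparisons, become visible in the data.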

  • Avvisati F., J. Buchholz, M. Piacentini and L. F. Vargas Madrid (2023); “Item Characteristics and Test-Taker Disengagement in PISA”
  • Buchholz J., M. Cignetti and M. Piacentini (2022); “Developing measures of engagement in PISA”. A subset of the results was also published in the PISA in Focus series (no. 119).
  • Buchholz J., H. J. Shin and M. Bolsinova (forthcoming); “Test engagement in multistage adaptive testing - A causal analysis based on PISA 2018”
  • Ulitzsch E., J. Buchholz, H. J. Shin, O. Lüdtke and J. Bertling (forthcoming); “Scale format and careless responding”

Using process data to augment measures of student performance

The transition to technology-based assessments has introduced new opportunities to incorporate more open, interactive and engaging tasks that require students to engage in complex response processes. These opportunities come with the possibility to digitally capture fine-grained data on students’ actions throughout the tasks, thus broadening the possibilities to make inferences on students’ thinking processes and behaviours.

This project aimed to illustrate the different possible uses of process data in PISA and to demonstrate best practices in the design of tasks and scoring models that integrate process data as a source of evidence.
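
A small example of turning a timestamped event log into process features; the event schema here is a hypothetical illustration, not PISA’s logging format:

```python
def process_features(events):
    """Derive simple process features from a time-ordered event log:
    time on task, number of actions, and time to first action.
    The event schema ({"type", "t"}) is an illustrative assumption."""
    actions = [e for e in events if e["type"] == "action"]
    start = events[0]["t"]
    end = events[-1]["t"]
    return {
        "time_on_task_s": end - start,
        "n_actions": len(actions),
        "time_to_first_action_s": (actions[0]["t"] - start) if actions else None,
    }

# An invented log for one interactive task
log = [
    {"type": "item_shown", "t": 0.0},
    {"type": "action", "t": 12.5},
    {"type": "action", "t": 30.0},
    {"type": "submit", "t": 55.0},
]
print(process_features(log))
```

Features like these can serve as additional evidence in scoring models, complementing the final response with information about how the student reached it.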

  • Bobrowicz K., S. Greiff, A. Han and A. Weber (2023) “Self-regulated learning in the PISA 2025 Learning in the Digital World Assessment: From conceptual background to data analysis”
  • Bumbacher E. and R. Davis (2022) “Exploring the use of process data in a new assessment of computational problem solving”
  • Ercikan K., H. Guo, and H. H. Por (2023) “Use of process data in advancing the practice and science of technology-rich assessments”
  • Guez A., E. Linsenmayer and M. Piacentini (2023) “The uses of process data in PISA”
  • Maddox B. (2023) “The uses of process data in large-scale educational assessments”
  • Sabatini J., X. Hu, M. Piacentini and N. Foster (2023) “Designing innovative tasks and test environments”
  • Scalise K., C. Malcom and E. Kaylor (2023) “Analysing and integrating new sources of data reliably in innovative assessments”

Platform for Innovative Learning Assessments

PILA illustration

The open-source  Platform for Innovative Learning Assessments  (PILA) offers students and teachers interactive, customisable learning experiences with built-in assessment. PILA supports personalised learning of important 21st century competences, as well as research on digital learning and assessment. 

To use PILA in your classroom, email  edu.pila@oecd.org  with your full name and the name of the school at which you work for an access code. 

How to get involved and support innovation in PISA

Our experts

Find out more about the community of experts that support innovation in PISA.


Our RDI partners

Innovation in PISA is supported by PISA participating countries and the following institutions:



We welcome any feedback and/or questions regarding the innovative work in PISA from education stakeholders, scholars, students or other interested individuals. For more information, please feel free to reach out to the PISA innovative assessments team at  edu.pisainnovation@oecd.org . 


Collaborative Problem Solving in Schools

Collaborative Problem Solving ® (CPS) is an evidence-based, trauma-informed practice that helps students meet expectations, reduces concerning behavior, builds students’ skills, and strengthens their relationships with educators.

Collaborative Problem Solving is designed to meet the needs of all children, including those with social, emotional, and behavioral challenges. It promotes the understanding that students who have trouble meeting expectations or managing their behavior lack the skill—not the will—to do so. These students struggle with skills related to problem-solving, flexibility, and frustration tolerance. Collaborative Problem Solving has been shown to help build these skills.

Collaborative Problem Solving avoids using power, control, and motivational procedures. Instead, it focuses on collaborating with students to solve the problems that lead to unmet expectations and concerning behavior. This trauma-informed approach provides staff with actionable strategies for trauma-sensitive education and aims to mitigate implicit bias’s impact on school discipline. It integrates with MTSS frameworks, PBIS, restorative practices, and SEL approaches such as RULER. Collaborative Problem Solving reduces challenging behavior and teacher stress while building future-ready skills and relationships between educators and students.

Transform School Discipline

Traditional school discipline is broken: it doesn’t result in improved behavior or improved relationships between educators and students. In addition, it has been shown to be disproportionately applied to students of color. The Collaborative Problem Solving approach is an equitable and effective form of relational discipline that reduces concerning behavior and teacher stress while building skills and relationships between educators and students.


Collaborative Problem Solving and SEL

Collaborative Problem Solving aligns with CASEL’s five core competencies by building relationships between teachers and students using everyday situations. Students develop the skills they need to prepare for the real world, including problem-solving, collaboration and communication, flexibility, perspective-taking, and empathy. Collaborative Problem Solving makes social-emotional learning actionable.

Collaborative Problem Solving and MTSS

The Collaborative Problem Solving approach integrates with Multi-Tiered Systems of Support (MTSS) in educational settings. CPS benefits all students and can be implemented across the three tiers of support within an MTSS framework to effectively identify and meet the diverse social, emotional, and behavioral needs of students in schools.


The Results

Our research has shown that the Collaborative Problem Solving approach helps kids and adults build crucial social-emotional skills and leads to dramatic decreases in behavior problems across various settings. Results in schools include remarkable reductions in time spent out of class, detentions, suspensions, injuries, teacher stress, and alternative placements as well as increases in emotional safety, attendance, academic growth, and family participation.


Educators, join us in this introductory course and develop your behavioral growth mindset!

This 2-hour, self-paced course introduces the principles of Collaborative Problem Solving® while outlining how the approach is uniquely suited to the needs of today's educators and students. Tuition: $39. Enroll Now

Bring CPS to Your School

We can help you bring a more accurate, compassionate, and effective approach to working with children to your school or district.

What Our Clients Say




Collaborative Problem Solving: What It Is and How to Do It

What Is Collaborative Problem Solving?

Problems arise. That's a well-known fact of life and business. When they do, it may seem more straightforward to take individual ownership of the problem and immediately run with trying to solve it. However, the most effective problem-solving solutions often come through collaborative problem solving.

As defined by Webster's Dictionary , the word collaborate is to work jointly with others or together, especially in an intellectual endeavor. Therefore, collaborative problem solving (CPS) is essentially solving problems by working together as a team. While problems can and are solved individually, CPS often brings about the best resolution to a problem while also developing a team atmosphere and encouraging creative thinking.

How to Solve Problems as a Team

Because collaborative problem solving involves multiple people and ideas, there are some techniques that can help you stay on track, engage efficiently, and communicate effectively during collaboration.

  • Set Expectations. From the very beginning, expectations for openness and respect must be established for CPS to be effective. Everyone participating should feel that their ideas will be heard and valued.
  • Provide Variety. Invite perspectives from beyond the core team, including people outside the organization who are affected by the problem. This may mean involving various levels of leadership, from the ground floor to the top of the organization, or bringing someone from bookkeeping into a marketing problem-solving session. A perspective from someone not involved in the day-to-day of the problem can often provide valuable insight.
  • Communicate Clearly.  If the problem is not well-defined, the solution can't be. By clearly defining the problem, the framework for collaborative problem solving is narrowed and more effective.
  • Expand the Possibilities.  Think beyond what is offered. Take a discarded idea and expand upon it. Turn it upside down and inside out. What is good about it? What needs improvement? Sometimes the best ideas are discarded ones that get reworked.
  • Encourage Creativity.  Out-of-the-box thinking is one of the great benefits of collaborative problem-solving. This may mean that solutions are proposed that have no way of working, but a small nugget makes its way from that creative thought to evolution into the perfect solution.
  • Provide Positive Feedback. There are many reasons participants may hold back in a collaborative problem-solving meeting. Fear of performance evaluation, lack of confidence, lack of clarity, and hierarchy concerns are just a few of the reasons people may not initially participate in a meeting. Positive public feedback early on in the meeting will eliminate some of these concerns and create more participation and more possible solutions.
  • Consider Solutions. Once several possible ideas have been identified, discuss the advantages and drawbacks of each until a consensus is reached.
  • Assign Tasks.  A problem identified and a solution selected is not a problem solved. Once a solution is determined, assign tasks to work towards a resolution. A team that has been invested in the creation of the solution will be invested in its resolution. The best time to act is now.
  • Evaluate the Solution. Reconnect as a team once the solution is implemented and the problem is solved. What went well? What didn't? Why? Collaboration doesn't necessarily end when the problem is solved. The solution to the problem is often the next step towards a new collaboration.
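The sequence above can be sketched as a simple checklist tracker. The step names paraphrase the techniques listed here, and the class and its methods are purely illustrative assumptions, not part of any real CPS tool or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Step names paraphrase the techniques above; order matters for next_step().
CPS_STEPS = [
    "Set expectations",
    "Provide variety",
    "Communicate clearly",
    "Expand the possibilities",
    "Encourage creativity",
    "Provide positive feedback",
    "Consider solutions",
    "Assign tasks",
    "Evaluate the solution",
]

@dataclass
class ProblemSolvingSession:
    problem: str
    completed: List[str] = field(default_factory=list)

    def complete(self, step: str) -> None:
        """Mark a step as done, rejecting unknown step names."""
        if step not in CPS_STEPS:
            raise ValueError(f"unknown step: {step}")
        if step not in self.completed:
            self.completed.append(step)

    def next_step(self) -> Optional[str]:
        """Return the first step not yet completed, or None when finished."""
        for step in CPS_STEPS:
            if step not in self.completed:
                return step
        return None

session = ProblemSolvingSession("declining newsletter sign-ups")
session.complete("Set expectations")
```

A facilitator could call `next_step()` at the start of each meeting to see where the group left off; the point is only that the techniques form an ordered workflow, not a bag of independent tips.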

Celebrating Success as a Team

The burden that is lifted when a problem is solved is enough victory for some. However, a team that plays together should celebrate together. It's not only collaboration that brings unity to a team. It's also the combined celebration of a unified victory—the moment you look around and realize the collectiveness of your success.

We can help

Check out MindManager to learn more about how you can ignite teamwork and innovation by providing a clearer perspective on the big picture with a suite of sharing options and collaborative tools.

Need to Download MindManager?

Try the full version of MindManager free for 30 days.

Critical Thinking for Team Collaboration: A Guide to Effective Problem-Solving

Critical Thinking for Team Collaboration

Critical thinking is an essential skill that enhances a team’s ability to collaborate efficiently and effectively. By honing their critical thinking skills, team members can analyze information, solve problems, and make well-informed decisions. In the context of teamwork, critical thinking also plays a crucial role in improving communication, generating creativity, and fostering a shared understanding among members.

Furthermore, critical thinking in a team setting goes beyond addressing complex problems. Incorporating these skills in everyday communication and decision-making processes can yield significant benefits for professional development and remote work environments. Encouraging a culture that values critical thinking will not only promote enhanced collaboration but also prepare individuals for future challenges and opportunities within their respective fields.

Key Takeaways

Understanding Critical Thinking

Critical thinking is a vital skill for effective team collaboration. It involves the ability to analyze information, question assumptions and biases, and reflect on one’s beliefs in order to make informed decisions and foster innovation. This skill set can greatly enhance a team’s ability to solve problems and reach their goals.

One important aspect of critical thinking is recognizing and challenging one’s own biases and assumptions. All individuals possess a unique set of beliefs that can potentially cloud their judgment and decision-making. Within a team, acknowledging and addressing these biases can lead to more effective collaboration, as team members learn to consider diverse perspectives and views.

Another key component of critical thinking is the ability to analyze information. Conducting a thorough analysis of information enables teams to evaluate the relevance, validity, and reliability of facts. This helps the team make informed conclusions, ensuring that decisions are based on accurate and trustworthy data.

In addition, critical thinkers excel at drawing inferences from available data. Making accurate inferences is an essential skill for problem-solving and decision-making, as it allows team members to make connections between seemingly unrelated information in order to generate new ideas or solutions.

The Role of Critical Thinking in Team Collaboration

In a collaborative environment, teamwork and cooperation are key factors that contribute to the overall success of the team. Team members should be able to think critically to evaluate different options, prioritize tasks, and allocate resources efficiently. This way, they can optimize their efforts and time to achieve the set goals in a timely manner.

In conclusion, the integration of critical thinking in team collaboration not only enhances productivity but also promotes innovation, effective decision-making, and open communication. By developing these essential skills, teams can work together more cohesively, ultimately achieving their desired goals.

Communication and Critical Thinking

Information and communication technologies, such as collaborative tools and digital platforms, play a significant role in facilitating communication and critical thinking in team settings. They help streamline processes, enable the sharing of resources, and support remote team members in staying connected. Utilizing such technologies can lead to more efficient decision-making and problem-solving, ultimately enhancing overall team performance.

In summary, nurturing both communication and critical thinking skills within a team leads to more effective collaboration and increased productivity. By incorporating open dialogue, constructive feedback, and the use of information and communication technologies, team members can create a supportive environment that fosters growth and promotes success.

Generating Creativity in Team Collaboration

During brainstorming sessions, it’s important for participants to keep an open mind and be willing to explore different paths before settling on a specific strategy. This process of exploration allows for the emergence of unique and unconventional ideas, which are key ingredients of creativity. Encouraging team members to think divergently and approach problems from various angles can lead to more effective and innovative solutions.

While exploring different ideas, it’s also crucial to ensure that team members maintain a neutral and objective mindset. This helps in critically evaluating each idea and selecting the most viable option, while keeping biases and personal preferences at bay.

Tools and Resources for Critical Thinking

Technology plays a significant role in enhancing critical thinking within a team. Online platforms such as LinkedIn offer various resources on how to encourage critical thinking through the use of peer reviews, surveys, polls, brainstorming sessions, debriefs, and retrospectives. These tools enable team members to exchange ideas, evaluate different approaches, and draw conclusions based on the collective wisdom of the group.

Organizational infrastructure also plays a crucial role in fostering critical thinking. Creating a culture of open communication and collaboration is essential in enabling team members to engage in constructive debate, express their thoughts, and evaluate different perspectives. Establishing channels for feedback, such as regular team meetings and one-on-one sessions, can help reinforce critical thinking behaviors.

In conclusion, leveraging technology, education, knowledge management systems, and the right organizational infrastructure can significantly impact a team’s ability to think critically and collaborate effectively. By providing the necessary tools and resources, organizations can foster a culture that values critical thinking and ultimately improve team performance.

Experience and Perspective in Critical Thinking

In a collaborative setting, considering multiple perspectives allows the team to weigh different options and contemplate a range of possible outcomes. Each team member’s unique background and personal experiences can provide new insights that might not have been considered otherwise. As individuals synthesize information and share their opinions, they effectively expand the entire team’s collective knowledge base.

Collaborative critical thinking thus greatly benefits from the richness of team member experiences and the varied opinions they bring to the table. By thoroughly examining these perspectives and objectively synthesizing the information, teams can ensure that their decisions are both robust and well-considered.

Decision Making and Problem Solving Through Critical Thinking

One essential aspect of critical thinking in decision-making is the evaluation of pros and cons. By thoroughly examining the strengths and weaknesses of different alternatives, teams can make informed decisions aligned with their objectives. They can also anticipate and mitigate potential negative consequences, thereby supporting a stronger and more effective collaboration.

These techniques enable teams to gather diverse perspectives, analyze information, and decide on the most appropriate course of action.

Critical Thinking in Remote Work

A crucial aspect of fostering critical thinking in remote teams is ensuring that team members have a clear understanding of their roles and responsibilities. With increased autonomy, remote workers must be able to analyze tasks, identify potential challenges, and make informed decisions. Open communication channels, regular check-ins, and performance evaluations can support this process.

By focusing on these aspects of remote work, employers can create an environment where critical thinking flourishes. Teams with strong critical thinking abilities tend to produce better quality work, make more informed decisions, and collaborate more effectively, ultimately benefiting both the individual team members and the organization as a whole.

Benefits of Critical Thinking for Professional Development

In terms of productivity, incorporating critical thinking in team collaboration leads to streamlined operations and reduces time spent on unnecessary tasks. Collaborative learning and critical thinking go hand in hand, fostering an environment where team members effectively communicate, share ideas, and work together to solve problems. This increased efficiency leads to higher overall productivity.

Finally, critical thinking enhances individual accountability by encouraging a thoughtful, reflective approach to work. This mindset promotes taking responsibility for one’s actions and decisions, and understanding the impact on the team and organization as a whole. Engaging in critical thinking practices keeps professionals grounded and focused on their actions’ consequences.

Frequently Asked Questions

What skills are essential for collaborative critical thinking?

How can critical thinking be applied in a team setting?

Applying critical thinking in a team setting involves asking the right questions, challenging assumptions, evaluating evidence, and fostering a culture of open-mindedness. Teams must encourage members to think critically by creating an environment that promotes the sharing of diverse perspectives, fosters openness and curiosity, and emphasizes clear and concise reasoning.

How does collaboration promote critical thinking?

Why is critical thinking important for teamwork?

What are some effective critical thinking training activities for teams?

Effective critical thinking training activities for teams may include workshops on problem-solving and decision-making strategies, group brainstorming sessions, role playing exercises, and team building activities that promote problem-solving and decision-making skills . These activities encourage team members to think critically, collaborate, and learn from one another in a supportive environment.

Can you recommend any books or resources on critical thinking for team collaboration?


Collaborative Problem-Solving in Knowledge-Rich Domains: A Multi-Study Structural Equation Model

  • Open access
  • Published: 24 June 2024



  • Laura Brandl   ORCID: orcid.org/0000-0001-7974-7892 1 ,
  • Matthias Stadler 1 , 2 ,
  • Constanze Richters 1 ,
  • Anika Radkowitsch 3 ,
  • Martin R. Fischer 2 ,
  • Ralf Schmidmaier 4 &
  • Frank Fischer 1  

233 Accesses


Collaborative skills are crucial in knowledge-rich domains, such as medical diagnosing. The Collaborative Diagnostic Reasoning (CDR) model emphasizes the importance of high-quality collaborative diagnostic activities (CDAs; e.g., evidence elicitation and sharing), influenced by content and collaboration knowledge as well as more general social skills, to achieve accurate, justified, and efficient diagnostic outcomes (Radkowitsch et al., 2022). However, it has not yet been empirically tested, and the relationships between individual characteristics, CDAs, and diagnostic outcomes remain largely unexplored. The aim of this study was to test the CDR model by analyzing data from three studies in a simulation-based environment and to better understand the construct and the processes involved ( N = 504 intermediate medical students) using a structural equation model including indirect effects. We found various stable relationships between individual characteristics and CDAs, and between CDAs and diagnostic outcome, highlighting the multidimensional nature of CDR. While both content and collaboration knowledge were important for CDAs, none of the individual characteristics directly related to diagnostic outcome. The study suggests that CDAs are important factors in achieving successful diagnoses in collaborative contexts, particularly in simulation-based settings. CDAs are influenced by content and collaboration knowledge, highlighting the importance of understanding collaboration partners’ knowledge. We propose revising the CDR model by assigning higher priority to collaboration knowledge compared with social skills, and dividing the CDAs into information elicitation and sharing, with sharing being more transactive. Training should focus on the development of CDAs to improve CDR skills.
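The structural equation model with indirect effects described in the abstract can be illustrated with a toy mediation analysis: individual knowledge predicts the quality of collaborative diagnostic activities (CDAs), which in turn predicts the diagnostic outcome. The data below are synthetic and the path coefficients arbitrary; this is a sketch of the indirect-effect logic, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 504  # matches the pooled sample size reported in the abstract

# Synthetic data for the hypothesized chain (coefficients are arbitrary):
# knowledge -> CDA quality -> diagnostic outcome
knowledge = rng.normal(size=n)
cda_quality = 0.5 * knowledge + rng.normal(scale=0.8, size=n)
outcome = 0.6 * cda_quality + rng.normal(scale=0.8, size=n)

def ols(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

a = ols([knowledge], cda_quality)[1]                  # knowledge -> CDA path
b = ols([knowledge, cda_quality], outcome)[2]         # CDA -> outcome, controlling for knowledge
c_prime = ols([knowledge, cda_quality], outcome)[1]   # direct effect of knowledge on outcome
indirect = a * b                                      # mediated effect of knowledge via CDAs
```

In this toy setup the direct path `c_prime` is near zero while `indirect` is substantial, mirroring the abstract's finding that none of the individual characteristics related directly to the diagnostic outcome. A dedicated SEM package would specify the same chain as latent paths and test the indirect effect with bootstrapped confidence intervals.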



Introduction

Collaborative skills are highly relevant in many situations, ranging from computer-supported collaborative learning to collaborative problem-solving in professional practice (Fiore et al., 2018). While several broad collaborative problem-solving frameworks exist (OECD, 2017), most are situated in knowledge-lean settings. One example of collaborative problem-solving in a knowledge-rich domain is collaborative diagnostic reasoning (CDR; Radkowitsch et al., 2022), which aligns closely with medical practice, a prototypical knowledge-rich domain requiring strong collaboration skills in daily practice. In daily professional practice, physicians often need to collaborate with colleagues from other specialties and subdisciplines to solve complex problems, such as diagnosing, that is, determining the causes of a patient's problem. Moreover, research in medical education and computer-supported collaborative learning suggests that the acquisition of medical knowledge and skills is significantly enhanced by collaborative problem-solving (Hautz et al., 2015; Koschmann et al., 1992). For problem-solving and learning, it is crucial that all relevant information (e.g., evidence and hypotheses) is elicited from and shared with the collaboration partner (Schmidt & Mamede, 2015). However, CDR is not unique to the medical field; it is also relevant in other domains, such as teacher education (Heitzmann et al., 2019).

The CDR model has been the basis of empirical studies and describes how individual characteristics and the diagnostic process are related to the diagnostic outcome. However, it has not yet been empirically tested, and the relationships between individual characteristics, diagnostic process, and diagnostic outcome remain mostly unexplored (Fink et al., 2023). The aim of this study is to test the CDR model by analyzing data from three studies with similar samples and tasks investigating CDR in a simulation-based environment. By undertaking these conceptual replications, we aspire to better understand the construct and the processes involved. As prior research has shown, collaboration needs to be performed at a high quality to achieve accurate problem solutions and learning outcomes (Pickal et al., 2023).

Collaborative Diagnostic Reasoning (CDR) Model

Diagnosing can be understood as the process of solving complex diagnostic problems through “goal-oriented collection and interpretation of case-specific or problem-specific information to reduce uncertainty” in decision-making through performing diagnostic activities at a high quality (Heitzmann et al., 2019 , p. 4). To solve diagnostic problems, that is, to identify the causes of an undesired state, it is increasingly important to collaborate with experts from different fields, as these problems become too complex to be solved individually (Abele, 2018 ; Fiore et al., 2018 ). Collaboration provides advantages such as the division of labor, access to diverse perspectives and expertise, and enhanced solution quality through collaborative sharing of knowledge and skills (Graesser et al., 2018 ).

The CDR model is a theoretical model focusing on the diagnostic process in collaborative settings within knowledge-rich domains (Radkowitsch et al., 2022 ). The CDR model is based on scientific discovery as a dual-search model (SDDS; Klahr & Dunbar, 1988 ) and its further development by van Joolingen and Jong ( 1997 ). The SDDS model describes individual reasoning as the coordinated search through hypothetical evidence and hypotheses spaces and indicates that for successful reasoning it is important not only that high-quality cognitive activities within these spaces are performed but also that one is able to coordinate between them (Klahr & Dunbar, 1988 ). In the extended SDDS model (van Joolingen & Jong, 1997 ) focusing on learning in knowledge-rich domains, a learner hypothesis space was added including all the hypotheses one can search for without additional knowledge. Although Dunbar ( 1995 ) found that conceptual change occurs more often in groups than in individual work, emphasizing the importance of collaborative processes in scientific thinking and knowledge construction, the SDDS model has hardly been systematically applied in computer-supported collaborative learning and collaborative problem-solving.

Thus, the CDR model builds upon these considerations and describes the relationship between individual characteristics, the diagnostic process, and the diagnostic outcome. As in the SDDS model, we assume that CDR involves activities within an evidence space and a hypotheses space; unlike in the SDDS model, however, these spaces are understood as cognitive storages of information, which aligns more closely with the extended dual-search model of scientific discovery learning (van Joolingen & Jong, 1997). In summary, we assume that coordinating between evidence (data) and hypotheses (theory) is essential for successful diagnosing. Further, the CDR model extends beyond individual to collaborative cognitive activities and describes the interaction of epistemic activities (F. Fischer et al., 2014) and collaborative activities (Liu et al., 2016) in constructing a shared problem representation (Roschelle & Teasley, 1995) and collaborating effectively. Thus, we define CDR as a set of skills for solving a complex problem collaboratively “by generating and evaluating evidence and hypotheses that can be shared with, elicited from, or negotiated among collaborators” (Radkowitsch et al., 2020, p. 2). The CDR model also makes assumptions about the factors necessary for successful CDR. In the following, we look at what successful CDR means, why people differ, and what the mediating processes are.

Diagnostic Outcome: Accuracy, Justification, and Efficiency

The primary outcome of diagnostic processes, such as CDR, is the accuracy of the given diagnosis, which indicates problem-solving performance or expertise (Boshuizen et al., 2020 ). However, competence in diagnostic reasoning, whether it is done individually or collaboratively, also includes justifying the diagnosis and reaching it effectively. This is why, in addition to diagnostic accuracy, diagnostic justification and diagnostic efficiency should also be considered as secondary outcomes of the diagnostic reasoning process (Chernikova et al., 2022 ; Daniel et al., 2019 ). Diagnostic justification makes the reasoning behind the decision transparent and understandable for others (Bauer et al., 2022 ). Good reasoning entails a justification including evidence, which supports the reasoning (Hitchcock, 2005 ). Diagnostic efficiency is related to how much time and effort is needed to reach the correct diagnosis; this is important for CDR, as diagnosticians in practice are usually under time pressure (Braun et al., 2017 ). Both diagnostic justification and diagnostic efficiency are thus indicators of a structured and high-quality reasoning process. So, while in many studies, the focus of assessments regarding diagnostic reasoning is on the accuracy of the given diagnosis (Daniel et al., 2019 ), the CDR model considers all three facets of the diagnostic outcome as relevant factors.
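The three outcome facets can be captured in a simple record. The 0-to-1 scaling, field names, and composite weights below are illustrative assumptions for the sketch, not measures taken from the CDR studies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiagnosticOutcome:
    """One diagnostician's result on a case; all fields scaled 0-1 (illustrative)."""
    accuracy: float       # 1.0 = correct diagnosis, 0.0 = incorrect
    justification: float  # how well the diagnosis is backed by cited evidence
    efficiency: float     # higher = less time/effort to reach the diagnosis

    def composite(self, w_acc: float = 0.5, w_just: float = 0.3,
                  w_eff: float = 0.2) -> float:
        """Weighted summary score; the weights are arbitrary placeholders."""
        return (w_acc * self.accuracy
                + w_just * self.justification
                + w_eff * self.efficiency)

# A correct but weakly justified, slow diagnosis still scores below a
# correct, well-justified, efficient one under any positive weights.
case = DiagnosticOutcome(accuracy=1.0, justification=0.7, efficiency=0.5)
```

Keeping the facets separate, rather than collapsing them into a single score up front, matches the model's treatment of accuracy as the primary outcome and justification and efficiency as secondary ones.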

Individual Characteristics: Knowledge and Social Skills

Research has shown that content knowledge, social skills, and, in particular, collaboration knowledge are important prerequisites for, and outcomes of, computer-supported collaborative learning (Jeong et al., 2019; Vogel et al., 2017). The CDR model integrates these dependencies into its structure: it assumes that people engaging in CDR differ with respect to their content knowledge, collaboration knowledge, and domain-general social skills.

Content knowledge refers to conceptual and strategic knowledge in a specific domain (Förtsch et al., 2018 ). Conceptual knowledge encompasses factual understanding of domain-specific concepts and their interrelations. Strategic knowledge entails contextualized knowledge regarding problem-solving during the diagnostic process (Stark et al., 2011 ). During expertise development, large amounts of content knowledge are acquired and restructured through experience with problem-solving procedures and routines (Boshuizen et al., 2020 ). Research has repeatedly shown that having high conceptual and strategic knowledge is associated with the diagnostic outcome (e.g., Kiesewetter et al., 2020 ; cf. Fink et al., 2023 ).

In addition to content knowledge, the CDR model assumes that collaborators need collaboration knowledge, that is, awareness of how knowledge is distributed in the group (Noroozi et al., 2013). A key function of collaboration knowledge is the pooling and processing of unshared information: research shows that a lack of collaboration knowledge impairs information sharing, which in turn impairs performance (Stasser & Titus, 1985).

Finally, general social skills influence the CDR process. These skills mainly affect the collaborative aspect of collaborative problem-solving rather than the problem-solving aspect (Graesser et al., 2018). Social skills are considered particularly important when collaboration knowledge is low (F. Fischer et al., 2013). The CDR model assumes that, in particular, the abilities to share and negotiate ideas, to coordinate, and to take the perspective of others are relevant for the diagnostic process and the diagnostic outcome (Radkowitsch et al., 2022; see also Liu et al., 2016, and Hesse et al., 2015).

Diagnostic Process: Collaborative Diagnostic Activities

The diagnostic process is thought to mediate the effect of individual characteristics on the diagnostic outcome and is described in the CDR model in terms of collaborative diagnostic activities (CDAs), such as evidence elicitation, evidence sharing, and hypotheses sharing (Heitzmann et al., 2019; Radkowitsch et al., 2022). One of the main functions of CDAs is to construct a shared problem representation (Roschelle & Teasley, 1995) by sharing and eliciting relevant information, as information may initially not be equally distributed among all collaborators. To perform these CDAs at high quality, each collaborator needs to identify which information is relevant to share with the collaboration partner and which information they need from the partner (OECD, 2017).

Evidence elicitation involves requesting information from a collaboration partner to access additional knowledge resources (Weinberger & Fischer, 2006 ). Evidence sharing and hypothesis sharing involve identifying the information needed by the collaborator to build a shared problem representation. This externalization of relevant information can be understood as the novelty aspect of transactivity (Vogel et al., 2023 ). However, challenges arise from a lack of relevant information due to deficient sharing, which can result from imprecise justification and insufficient clustering of information. In particular, research has shown that collaborators often lack essential information-sharing skills, such as identifying information relevant for others from available data, especially in the medical domain (Kiesewetter et al., 2017 ; Tschan et al., 2009 ).

It is crucial for the diagnostic outcome that all evidence and hypotheses relevant for the specific collaborators are elicited and shared (Tschan et al., 2009). However, diagnostic outcomes seem to be influenced more by the relevance and quality of the shared information than by its quantity (Kiesewetter et al., 2017; Tschan et al., 2009). In addition, recent research has shown that the diagnostic process is not only an embodiment of individual characteristics but also makes a unique contribution to the diagnostic outcome (Fink et al., 2023). However, it remains difficult to assess and foster CDAs.

Collaboration in Knowledge-Rich Domains: Agent-Based Simulations

There are several challenges when it comes to modelling collaborative settings in knowledge-rich domains for both learning and research endeavors. First, many situations are not easily accessible, as they may be scarce (e.g., natural disasters) or too critical or overwhelming to be approached by novices (e.g., some medical procedures). In these cases, the use of simulation-based environments allows authentic situations approximating real-life diagnostic problems to be provided (Cook et al., 2013 ; Heitzmann et al., 2019 ). Further, the use of technology-enhanced simulations allows data from the ongoing CDR process to be collected in log files. This enables researchers to analyze process data without the need for additional assessments with dedicated tests. Analyzing process data instead of only product data (the assessment’s outcome) permits insights into the problem-solving processes leading to the eventual outcome (e.g., Goldhammer et al., 2017 ). Second, when using human-to-human collaboration, the results of one individual are typically influenced by factors such as group composition or motivation of the collaboration partner (Radkowitsch et al., 2022 ). However, we understand CDR as an individual set of skills enabling collaboration, as indicated by the broader definition of collaborative problem-solving (OECD, 2017 ). Thus, the use of simulated agents as collaboration partners allows a standardized and controlled setting to be created that would otherwise be hard to establish in collaborations among humans (Rosen, 2015 ). There is initial research showing that performance in simulations using computerized agents is moderately related to collaborative skills in other operationalizations (Stadler & Herborn et al., 2020 ). Thus, computerized agents allow for enhanced control over the collaborative process without significantly diverging from human-to-human interaction (Graesser et al., 2018 ; Herborn et al., 2020 ). 
Third, in less controlled settings it is hard to ensure that a specific process takes place during collaborative problem-solving. For example, in a human-to-human setting, an activity we intend to measure or foster (e.g., hypotheses sharing) may simply not be performed by the student. By using an agent-based simulated collaboration partner, we can ensure that all required processes take place while the problem is being solved (Rosen, 2015).

In summary, by providing a consistent and controlled setting, simulated agents facilitate the accurate measurement and enhancement of collaborative problem-solving. Evidence supporting the use of simulated agents spans a variety of contexts, including tutoring, collaborative learning, knowledge co-construction, and collaborative problem-solving itself, underscoring their versatility and effectiveness in educational settings (Graesser et al., 2018; Rosen, 2015).

Research Question and Current Study

In computer-supported collaborative learning, a distinction has been drawn between approaches addressing collaboration to learn and approaches focusing on learning to collaborate. Our study is best understood as following the second approach, learning to collaborate: we want to better understand CDR in order to facilitate collaborative problem-solving skills in learners. Thus, in this paper, we examine what it takes to collaborate in the professional practice of knowledge-rich domains, such as medical diagnosing.

When solving diagnostic problems, such as diagnosing a patient, it is often necessary to collaborate with experts from different fields (Radkowitsch et al., 2022 ). In CDR, the diagnostic outcome depends on effectively eliciting and sharing relevant evidence and hypotheses among collaborators, who often lack information-sharing skills (Tschan et al., 2009 ). Thus, the CDR model emphasizes the importance of high-quality CDAs influenced by content and collaboration knowledge as well as social skills to achieve accurate, justified, and efficient diagnostic outcomes (Radkowitsch et al., 2022 ).

This study reviews the relationships postulated in the CDR model across three studies to test them empirically and to investigate the extent to which these relationships hold across studies. By addressing this research question, the current study contributes to a better understanding of the underlying processes in collaborative problem-solving.

We derived a model (Fig. 1 ) from the postulated relationships made by the CDR model. We assume that the individual characteristics are positively related to the CDAs (Hypotheses 1–3), as well as that the CDAs are positively related to the diagnostic outcome (Hypotheses 4–6). Further, we expect that the relationship between the individual characteristics and the diagnostic outcome is partially mediated by the CDAs (Hypotheses 7–15).

figure 1

Visualization of hypothesized relationships between individual characteristics, collaborative diagnostic activities, and diagnostic outcome

We used data from three studies with similar samples and tasks investigating CDR in an agent-based simulation in the medical domain. The studies can therefore be considered conceptual replication studies. Furthermore, we decided to use an agent-based simulation of a typical collaboration setting in diagnostic reasoning, namely the interdisciplinary collaboration between an internist and a radiologist (Radkowitsch et al., 2022 ).

To test the hypotheses, three studies were analyzed. Study A was carried out in a laboratory setting in 2019 and included medical students in their third to sixth years. Study B included medical students in their fifth to sixth years; data collection for this study was conducted online due to the pandemic situation in 2020 and 2021. In both studies, participation was voluntary, and participants were paid €10 per hour. Study C was embedded as an online session in the third-year curriculum of medical school in 2022. Participation was mandatory, but permission to use the data for research purposes was given voluntarily. All participants took part in only one of the three studies. All three studies received ethical approval from LMU Munich (approval numbers 18-261, 18-262 & 22-0436). For a sample description of each study, see Table 1. We would like to emphasize that none of the students were specializing in internal medicine, ensuring that the study results reflect the competencies of regular medical students without specialized expertise.

Each of the three studies was organized in the same way, with participants first completing a pretest that included a prior knowledge test, socio-demographic questions, and questions about individual motivational-affective characteristics (e.g., social skills, interest, and motivation). Participants then moved on to the CDR simulation and worked on the patient case. The patient case was the same for studies B and C, but was different for study A. The complexity and difficulty of the patient case did not vary systematically between the patient cases.

Simulation and Task

In the CDR simulation, which is also used as a learning environment, the task was to take on the role of an internist and collaborate with an agent-based radiologist, requesting radiological examinations to obtain further information and diagnose fictitious patient cases with the chief symptom of fever. Medical experts from internal medicine, radiology, and general medicine constructed the patient cases. Each case was structured in the same way: participants first studied the medical record individually, then collaborated with the agent-based radiologist, and finally reported the final diagnosis and its justification, again individually. For a detailed description of the development and validation of the simulation, see Radkowitsch and colleagues (2020).

Before working in the simulation, participants were presented with an instruction to the simulated scenario and informed about their task. Then, we instructed participants how to access further information in the medical record by clicking on hyperlinks, as well as how to use the toolbar to take notes for later in the process. Furthermore, we showed the students how to request further information by collaborating with the radiologist.

During the collaboration with the agent-based radiologist, participants were asked to fill out request forms to obtain further evidence from radiological examinations needed to diagnose the patient case. To collaborate effectively with radiologists, it is crucial for internists to clearly communicate the type of evidence required to reduce uncertainty (referred to as “evidence elicitation”) and to share any relevant patient information such as signs, symptoms, and medical history (referred to as “evidence sharing”) as well as suspected diagnoses under investigation (referred to as “hypotheses sharing”) that may impact the radiologist’s diagnostic process. Only when participants shared evidence and hypotheses appropriate for their requested examination did they receive a description and evaluation of the radiologic findings. What was considered appropriate was determined by medical experts for each case and examination when preparing the cases. This scenario therefore involves more than a simple division of tasks, as the quality of one person’s activity (i.e., the description and evaluation of the radiologic findings) depends on the collaborative efforts (i.e., CDAs) of the other person (OECD, 2017).
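The gating rule described above (radiologic findings are returned only for appropriately justified requests) can be sketched as follows. This is an illustrative reconstruction, not the simulation's actual code; all function names, the example expert solution, and the example items are our assumptions:

```python
def radiologist_response(examination, shared_evidence, shared_hypotheses, expert_solution):
    """Return findings only if the shared evidence and hypotheses are
    appropriate for the requested examination, per a (hypothetical) expert solution."""
    required = expert_solution[examination]
    # all evidence deemed necessary by the experts must have been shared
    evidence_ok = required["evidence"].issubset(shared_evidence)
    # at least one hypothesis deemed fitting by the experts must have been shared
    hypotheses_ok = bool(required["hypotheses"] & shared_hypotheses)
    if evidence_ok and hypotheses_ok:
        return "findings: description and evaluation by the radiologist"
    return "request rejected: evidence or hypotheses not appropriate"
```

A request that omits required evidence would thus be rejected, mirroring the interdependence between the internist's CDAs and the radiologist's contribution.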

Measures—Individual Characteristics

The individual characteristics were measured in the pretest. The internal consistencies of each measure per study are displayed in Table 4 in the Results section. We want to point out that the internal consistency of knowledge as a construct—determined by the intercorrelations among knowledge pieces—typically exhibits a moderate level. Importantly, recent research argues that a moderate level of internal consistency does not undermine the constructs’ capacity to explain a significant amount of variance (Edelsbrunner, 2024 ; Stadler et al., 2021 ; Taber, 2018 ).

Content knowledge was separated into radiology and internal medicine knowledge, as these two disciplines play a major role in the diagnosis of the simulated patient cases. For each discipline, conceptual and strategic knowledge was assessed (Kiesewetter et al., 2020 ; Stark et al., 2011 ). The items in each construct were presented in a randomized way in each study. However, the items for study C were shortened due to the embedding of the data collection in the curriculum. Therefore, items with a very high or low item difficulty in previous studies were excluded (Table 2 ).

Conceptual knowledge was measured using single-choice questions including five options adapted from a database of examination questions from the Medical Faculty of the LMU Munich, focusing on relevant and closely related diagnoses of the patient cases used in the simulation. A mean score of 0–1 was calculated, representing the percentage of correct answers and indicating the average conceptual knowledge of the participant per medical knowledge domain.

Strategic content knowledge was measured contextually using key feature questions (M. R. Fischer et al., 2005). Short cases were introduced, followed by two to three follow-up questions (e.g., What is your most likely suspected diagnosis? What is your next examination? What treatment do you choose?). Each question had eight possible answers, from which the learners were asked to choose one. Again, a mean score of 0–1 was calculated, representing the percentage of correct responses and indicating the average strategic content knowledge of the participant per domain.

The measure of collaboration knowledge was consistent across the three studies and specific to the simulated task. Participants were asked to select all relevant information for seven different patient cases with the cardinal symptom of fever (internal medicine). The patient cases were presented in a randomized order and always included 12 pieces of information regarding the chief complaints, medical history, and physical examination of the patient cases. We then assessed whether each piece of information was handled correctly (i.e., relevant information shared and irrelevant information not shared), awarded 1 point per correct decision, and divided by the maximum of 12 points to standardize the measure to a range of 0–1. We then calculated a mean score for each case and then across all cases, resulting in a 0–1 score indicating the participant’s collaboration knowledge.
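The scoring just described can be summarized in a short sketch; the function names and example data are illustrative assumptions, not the study's materials:

```python
def score_case(shared, relevant, all_items):
    """1 point per correctly handled item (relevant and shared, or
    irrelevant and not shared), divided by the number of items (12 in the study)."""
    correct = sum(1 for item in all_items if (item in relevant) == (item in shared))
    return correct / len(all_items)

def collaboration_knowledge(cases):
    """Mean of per-case scores across all cases; range 0-1."""
    return sum(score_case(shared, relevant, items) for shared, relevant, items in cases) / len(cases)
```

For instance, with 12 information pieces of which 3 are relevant, sharing two relevant and one irrelevant piece yields 10 correct decisions, i.e., a case score of 10/12.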

The construct of social skills was consistent across the three data collections and was measured on the basis of self-report on a 6-point Likert scale ranging from total disagreement to total agreement. The construct was measured using 23 questions divided into five subscales; for example items, see Table 3 . Five questions aimed to measure the overall construct, and the other four subscales were identified using the complex problem-solving frameworks of Liu et al. ( 2016 ) and Hesse et al. ( 2015 ): perspective taking (four questions), information sharing (five questions), negotiation (four questions), and coordination (five questions). For the final score, the mean of all subcategories was calculated, ranging from 1 to 6, representing general social skills.

Measures—Collaborative Diagnostic Activities (CDA)

We operationalize CDAs in the pretest case in terms of quality of evidence elicitation, evidence sharing, and hypotheses sharing. The internal consistencies of each measure per study are displayed in Table 4 in the Results section.

The quality of evidence elicitation was measured by assessing the appropriateness of the requested radiological examination for the indicated diagnosis. An expert solution was developed to indicate which radiological examinations were appropriate for each of the possible diagnoses. If participants requested an appropriate radiological examination for the indicated diagnoses, they received 1 point for that request attempt. Finally, a mean score across all request attempts (maximum of three) was calculated. Due to the categorical nature of the original data and its skewed distribution, with a majority of responses concentrated in a single category, the final mean score was transformed into a binary indicator, with 1 indicating that all requested radiological examinations were appropriate and 0 indicating that inappropriate radiological examinations were also requested.

The quality of evidence sharing was measured using a precision indicator. This was calculated as the proportion of shared relevant evidence out of all shared evidence. Relevant evidence is defined per case and per diagnosis and indicated by the expert solution. The precision indicator was first calculated per radiological request. We then calculated the mean score, summarizing all attempts in that patient case. This resulted in a range from 0 points, indicating that only irrelevant evidence was shared, to 1 point, indicating that only relevant evidence was shared.

The quality of hypotheses sharing was also measured using a precision indicator. For each patient case, the proportion of shared diagnoses relevant for the respective patient case to all shared diagnoses was calculated. Which diagnoses were considered relevant for a specific case was determined by an expert solution. As with evidence elicitation, due to the categorical nature of the original data and its skewed distribution, with a majority of responses concentrated in a single category, this score was converted into a binary variable, where 1 indicated that only relevant diagnoses were shared and 0 indicated that irrelevant diagnoses were also shared.
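The three CDA quality scores follow a common pattern: precision against an expert solution, binarized for evidence elicitation and hypotheses sharing. A minimal sketch, assuming our own function names and data encoding (not the study's code):

```python
def precision(shared, relevant):
    """Proportion of shared items that are relevant (0 if nothing was shared)."""
    return len(shared & relevant) / len(shared) if shared else 0.0

def binarize(score):
    """1 only for a perfect score, 0 otherwise."""
    return 1 if score == 1.0 else 0

def elicitation_quality(appropriate_flags):
    """Mean appropriateness over up to three request attempts, then binarized."""
    return binarize(sum(appropriate_flags) / len(appropriate_flags))

def evidence_sharing_quality(requests):
    """Mean precision across all request attempts in a case (continuous, 0-1)."""
    return sum(precision(shared, relevant) for shared, relevant in requests) / len(requests)
```

Evidence sharing thus stays continuous, while the other two scores are reduced to all-or-nothing indicators.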

Measures—Diagnostic Outcome

We operationalize diagnostic outcome in the pretest case in terms of diagnostic accuracy, diagnostic justification, and diagnostic efficiency.

For diagnostic accuracy, a main diagnosis was assigned to each patient case as the expert solution. After working on the patient case and requesting the radiological examinations, participants indicated their final diagnosis. To do this, they typed in the first three letters of their desired diagnosis and then received suggestions from a list of 249 possible diagnoses. Diagnostic accuracy was then calculated by coding the agreement between the final diagnosis given and the expert solution. Accurate diagnoses (e.g., hospital-acquired pneumonia) were coded as 1, correct but inaccurate diagnoses (e.g., pneumonia) were coded as 0.5, and incorrect diagnoses were coded as 0. Due to the categorical nature of the original data and its skewed distribution, with a majority of responses concentrated in a single category, a binary indicator was used for the final diagnostic accuracy score, with 0 indicating an incorrect diagnosis and 1 indicating at least a correct but inaccurate diagnosis.

A prerequisite for diagnostic justification and diagnostic efficiency is the provision of at least a correct but inaccurate diagnosis. If a participant provided an incorrect diagnosis (coded as 0), diagnostic justification and diagnostic efficiency were immediately scored as 0.

After choosing a final diagnosis, participants were asked to justify their decision in an open text field. Diagnostic justification was then calculated as the proportion of relevant reported information out of all relevant information that would have fully justified the final accurate diagnosis. Again, medical experts agreed on an expert solution that included all relevant information to justify the correct diagnosis. The participants’ solution was coded by two independent coders, each coding the full data, and differences in coding were discussed until the coders agreed. We obtained a range from 0 points, indicating a completely inadequate justification, to 1 point, indicating a completely adequately justified final diagnosis.

Diagnostic efficiency was defined as diagnostic accuracy (non-binary version) divided by the minutes required to solve the case.
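Taken together, the outcome scores can be sketched as follows; the coding scheme (1 / 0.5 / 0) follows the text, while the function names and example diagnoses are illustrative assumptions:

```python
def accuracy_code(final_dx, accurate_dx, correct_but_inaccurate):
    """1 = accurate, 0.5 = correct but inaccurate, 0 = incorrect."""
    if final_dx == accurate_dx:
        return 1.0
    if final_dx in correct_but_inaccurate:
        return 0.5
    return 0.0

def accuracy_binary(code):
    """0 = incorrect; 1 = at least a correct but inaccurate diagnosis."""
    return 1 if code > 0 else 0

def efficiency(code, minutes):
    """Non-binary accuracy divided by time on the case; 0 for incorrect diagnoses."""
    return code / minutes if code > 0 else 0.0
```

For example, giving "pneumonia" for a case whose expert solution is "hospital-acquired pneumonia" would be coded 0.5, count as 1 in the binary indicator, and yield an efficiency of 0.05 after 10 minutes.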

Statistical Analyses

To answer the research question, a structural equation model (SEM) was estimated using MPlus Editor, version 8 (Muthén & Muthén, 2017). We decided to use an SEM because it is a comprehensive statistical approach widely used in psychology and the educational sciences for its ability to model complex relationships among observed and latent variables while accounting for measurement error (Hilbert & Stadler, 2017). SEMs support the development and verification of theoretical models, enabling scholars to refine theories and interventions in psychology and education based on empirical evidence, as not just a single relationship but a whole system of regressions is considered simultaneously (Nachtigall et al., 2003).

We included all links between the variables and applied a two-step approach, using mean- and variance-adjusted unweighted least squares (ULSMV; Savalei & Rhemtulla, 2013) as the estimator and THETA parametrization, first examining the measurement model and then the structural model. The assessment of model fit was based on chi-square (χ2), the root mean square error of approximation (RMSEA), and the comparative fit index (CFI). Model fit is generally indicated by small chi-square values, RMSEA values < 0.08 (acceptable) or < 0.06 (excellent), and CFI values ≥ 0.90. We do not consider the standardized root mean squared residual (SRMR) because, according to the definition used in MPlus, this index is not appropriate when the sample size is 200 or less, as natural variation in such small samples contributes to larger SRMR values (Asparouhov & Muthén, 2018). For Hypotheses 1–6, we excluded path coefficients < 0.1 from our interpretation, as they are relatively small. In addition, to consider a hypothesis supported, we required at least two interpretable path coefficients, of which at least one had to be statistically significant. For Hypotheses 7–15, specific indirect effects (the effect of an individual characteristic on the diagnostic outcome through a specific CDA) and total indirect effects (the mediation of the effect of an individual characteristic on the diagnostic outcome through all mediators) were estimated.
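The decision rule for Hypotheses 1–6 can be made explicit in a short sketch. The thresholds follow the text; the encoding of each hypothesis as (coefficient, p-value) pairs across the three studies is our assumption:

```python
def hypothesis_supported(paths, alpha=0.05, min_coef=0.1):
    """paths: list of (standardized coefficient, p-value), one pair per study.
    Supported if at least two coefficients are interpretable (>= 0.1)
    and at least one interpretable coefficient is statistically significant."""
    interpretable = [(b, p) for b, p in paths if b >= min_coef]
    return len(interpretable) >= 2 and any(p < alpha for _, p in interpretable)
```

For example, a path that is positive and > 0.1 in all three studies and significant in two of them would count as supported, whereas two interpretable but non-significant coefficients would not.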

We reported all measures in the study and outlined differences between the three samples. All data and analysis code have been made publicly available at the Open Science Framework (OSF) and can be accessed at https://osf.io/u8t62 . Materials for this study are available by email through the corresponding author. This study’s design and its analysis were not pre-registered.

The descriptive statistics of each measure per study are displayed in Table 4 . The intercorrelations between the measures per study can be found in Appendix Table 7 .

Overall Results of the SEM

All loadings were in the expected directions and statistically significant, except for conceptual knowledge in internal medicine in study C (λ = 0.241, p  = .120), conceptual knowledge in radiology in study A (λ = 0.398, p  = .018), and strategic knowledge in internal medicine (λ = 0.387, p  = .206) and radiology (λ = -0.166, p  = .302) in study B. Standardized factor loadings of the measurement model are shown in Appendix Table 8 .

The SEM had a good fit for study A [χ2(75) = 74.086, p = .508, RMSEA = 0.000, CFI = 1.00], study B [χ2(75) = 68.309, p = .695, RMSEA = 0.000, CFI = 1.00], and study C [χ2(75) = 93.816, p = .070, RMSEA = 0.036, CFI = 1.00].

Paths between Individual Characteristics, CDAs, and Diagnostic Outcome

The standardized path coefficients and hypotheses tests for the theoretical model are reported in Table 5 . An overview of the paths supported by the data is shown in Fig. 2 .

figure 2

Evidence on supported relationships between individual characteristics, collaborative diagnostic activities, and diagnostic outcome

Overall, the R² values for the CDAs ranged from medium to high for evidence elicitation and evidence sharing, depending on the study, and were consistently low for hypotheses sharing across all three studies. For the diagnostic outcome, R² was consistently large for diagnostic accuracy and medium to large for diagnostic justification and diagnostic efficiency (Table 6).

The path from content knowledge to evidence elicitation was positive and > 0.1 in all three studies, as well as statistically significant in two of them; therefore, we consider Hypothesis 1a supported. The path from content knowledge to evidence sharing was positive and > 0.1 in two studies, as well as statistically significant in one of them; therefore, Hypothesis 1b is also supported. In contrast, the path from content knowledge to hypotheses sharing was indeed also positive in two studies, but as neither was statistically significant, we conclude that Hypothesis 1c was not supported. The path from collaboration knowledge to evidence elicitation was positive and > 0.1 in only one study, but also not statistically significant. Thus, we found that Hypothesis 2a was not supported. For the path from collaboration knowledge to evidence sharing, we found relevant positive and statistically significant coefficients in all three studies. Hypothesis 2b is therefore fully supported by the data. This is not the case for Hypothesis 2c, for which we found no coefficient > 0.1 for the path from collaboration knowledge to hypotheses sharing. For the path from social skills to evidence elicitation, we found positive coefficients > 0.1 in two out of three studies, of which one was also statistically significant. Thus, we consider Hypothesis 3a to be supported. For the path from social skills to evidence sharing, we again found one statistically significant positive coefficient, but in the other two studies it was < 0.1. Therefore, we do not consider Hypothesis 3b to be supported by the data. The same applies to the path from social skills to hypotheses sharing, where the coefficient is < 0.1 in two studies. We therefore do not consider Hypothesis 3c to be supported.

The path from evidence elicitation to diagnostic accuracy was statistically significant and large in magnitude in two out of three studies. Hypothesis 4a is therefore supported. The path from evidence elicitation to diagnostic justification was positive and > 0.1 in only one study, and not statistically significant there. Therefore, we find no support for Hypothesis 4b. In contrast, the path from evidence elicitation to diagnostic efficiency was positive and statistically significant in two out of three studies, with one large effect. Hypothesis 4c is therefore supported. The path from evidence sharing to diagnostic accuracy was positive and reasonably large in only one study. Therefore, we do not find support for Hypothesis 5a. The path from evidence sharing to diagnostic justification was positive and > 0.1 in two studies as well as statistically significant in one of them, so Hypothesis 5b is supported. In contrast, we did not find a positive coefficient > 0.1 for the path from evidence sharing to diagnostic efficiency. Therefore, Hypothesis 5c is not supported by the data. Although we found coefficients > 0.1 in two studies for the path from hypotheses sharing to diagnostic accuracy, neither was statistically significant, so we found no support for Hypothesis 6a. This is different for Hypothesis 6b, as we found two positive paths from hypotheses sharing to diagnostic justification, one of which was statistically significant and large. Finally, we found two positive paths from hypotheses sharing to diagnostic efficiency across the three studies, one of which was statistically significant. Hypothesis 6c is therefore supported.

Indirect Effects between Individual Characteristics, CDA, and Diagnostic Outcome

Indirect effects of the CDAs on the relationship between individual characteristics and the diagnostic outcome in CDR were estimated to test Hypotheses 7–15. Although in study C we found a total indirect effect of content knowledge on diagnostic accuracy through all CDAs (β = .31, p = .008), and specifically through evidence elicitation (β = .27, p = .021), as well as some significant overall and direct effects for other relationships (Appendix Table 9), none of these effects were consistent across all three studies. Thus, we find no consistent support for any of Hypotheses 7–15.

The aim of the current study was to investigate the extent to which the relationships specified in the CDR model (Radkowitsch et al., 2022) are applicable across studies, in order to better understand the processes underlying CDR in knowledge-rich domains. This exploration is crucial not only for the medical field and collaborative problem-solving in knowledge-rich domains; it also offers valuable insights for computer-supported collaborative learning research. Despite CDR’s specific focus, the principles and findings have relevant implications for understanding and enhancing collaborative processes in various educational and professional settings.

Specifically, we investigated how individual learner characteristics, the CDAs, and the diagnostic outcome are related. We therefore analyzed data from three independent studies, all from the same context, a simulation-based environment in the medical domain. Our study found positive relationships between content knowledge and the quality of evidence elicitation as well as the quality of evidence sharing, but not for the quality of hypotheses sharing. Furthermore, collaboration knowledge is positively related to the quality of evidence sharing, but not to the quality of evidence elicitation and the quality of hypotheses sharing. Social skills are only positively related to the quality of evidence elicitation. This underscores the multifaceted nature of collaborative problem-solving situations. Thus, effective CDR, a form of collaborative problem-solving, necessitates a nuanced understanding of the interplay between individual characteristics and CDAs.

The relevance of content knowledge for diagnostic competence is well established in research (Chernikova et al., 2020). To develop diagnostic skills in knowledge-rich domains, learners need to acquire large amounts of knowledge and restructure it through experience with problem-solving procedures and routines (Boshuizen et al., 2020). In CDR, this enables the diagnostician to come up with an initial suspected diagnosis, which is likely to be relevant information for the collaboration partner and to guide the further CDAs effectively. That content knowledge relates most clearly to the quality of evidence elicitation can be explained by the fact that evidence elicitation is the least transactive CDA within the collaborative decision-making process. When eliciting evidence, the collaboration partner is used as an external knowledge resource (Weinberger & Fischer, 2006). So, despite being a collaborative activity, evidence elicitation is about what information from the collaboration partner is needed rather than about what the collaboration partner needs. Thus, elicitation is less transactive than sharing, which focuses on what the collaboration partner needs.

Not only content knowledge but also collaboration knowledge is related to the quality of evidence sharing. This finding implies that collaboration knowledge may influence CDR above and beyond individual content knowledge. It also supports the differentiation of knowledge types made in the CDR model (Radkowitsch et al., 2022). Thus, it is important to learn not only the conceptual and strategic medical knowledge required for diagnosing but also knowledge about what information is relevant for specific collaboration partners when diagnosing collaboratively. This finding underpins the importance of being aware of the knowledge distribution among collaboration partners and the relevance of transactive memory (Wegner, 1987). Knowledge and information awareness is thus crucial for collaborative problem-solving in knowledge-rich domains, as it is for computer-supported collaborative learning more generally (Engelmann & Hesse, 2010).

The relevance of collaboration knowledge in collaborative problem-solving is thus an important finding of our study, highlighting that such knowledge is critical in facilitating effective collaborative processes and outcomes. The current findings emphasize the need for educational strategies that explicitly target the development of collaboration knowledge, to ensure that learners have the knowledge and skills necessary to participate in productive collaborative problem-solving and computer-supported collaborative learning processes. In doing so, the CDR model emphasizes the need for learners to master collaborative skills and build shared problem representations to take full advantage of collaborative learning opportunities.

As CDR is conceptualized as an interplay of cognitive and social skills (Hesse et al., 2015), we also assumed that social skills are related to CDAs. However, we found the expected relationship only for the quality of evidence elicitation. One explanation could be that collaboration knowledge was relatively high in all three samples, outweighing the influence of general skills. This is consistent with the assumption of the CDR model that the influence of more general social skills diminishes with an increasing level of professional collaboration knowledge (Radkowitsch et al., 2022). When collaboration knowledge is available to diagnosticians, it becomes more important than social skills. This finding again underlines the importance of collaboration knowledge, which can be seen as a domain- and profession-specific development of social skills. Another explanation could be that the effect of social skills decreases when collaborating with an agent, as the agent was not programmed to respond to social nuances; the design of the simulation would thus buffer against the effect of social skills. Although Herborn et al. (2020) found no differences between human-to-human and human-to-agent collaboration, this does not rule out that outcomes vary with the social abilities incorporated into the agent. A thorough investigation of the impact of social skills would require an agent with variable social abilities, enabling the importance of basic social skills for successful collaboration to be varied.

Further, we found no stable support for a relationship between any of the individual characteristics and the quality of hypotheses sharing, so we must conclude that there is no support for such a relationship. One possible explanation could be that the binary precision measure used to operationalize quality in hypotheses sharing is not sensitive enough or does not capture the relevant aspect of quality in that activity. Another explanation could be that there is no direct relationship between the individual characteristics and hypotheses sharing, as this relationship is mediated by evidence sharing and thus influenced by the activated knowledge scripts (Schmidt & Rikers, 2007).
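To illustrate why a binary precision measure may miss relevant aspects of quality, the following minimal sketch (with hypothetical diagnosis labels, not the study's coding scheme) contrasts precision with sensitivity for a shared set of hypotheses:

```python
# Quality of hypotheses sharing as set overlap with an expert solution.
# Diagnosis labels are hypothetical, not the study's coding scheme.
expert = {"pneumonia", "pulmonary embolism"}

def sharing_precision(shared, correct):
    """Fraction of shared hypotheses that appear in the expert solution."""
    return len(shared & correct) / len(shared) if shared else 0.0

def sharing_sensitivity(shared, correct):
    """Fraction of expert hypotheses that were actually shared."""
    return len(shared & correct) / len(correct)

shared = {"pneumonia"}  # one correct hypothesis shared, one missed
print(sharing_precision(shared, expert))    # → 1.0
print(sharing_sensitivity(shared, expert))  # → 0.5
```

The example shows a diagnostician scoring perfect precision while missing half of the expert solution, which is one way a precision-only operationalization could obscure real quality differences.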

Looking at the relationships between CDAs and the diagnostic outcome, the current results highlight the need to distinguish between primary (diagnostic accuracy) and secondary (diagnostic justification and efficiency) outcomes of diagnostic reasoning (Daniel et al., 2019 ). Achieving diagnostic accuracy, a purely quantitative outcome measure, is less transactive than other aspects of the diagnostic outcome. This is also where we find the link to evidence elicitation, as we consider this to be the least transactive CDA within the collaborative decision-making process. However, the ability to justify and reach this decision efficiently is then highly dependent on evidence sharing and hypotheses sharing, activities that are more focused on transactivity within CDR (Weinberger & Fischer, 2006 ).

Although individual learner characteristics were found to affect CDAs, and CDAs affect the diagnostic outcome, the effect of learner characteristics on the outcome was not consistently mediated by CDAs across studies. Thus, we assume that, for effective collaborative problem-solving in knowledge-rich domains such as CDR, it is not enough to have sufficient content and collaboration knowledge; it is also necessary to engage in high-quality CDAs to achieve a high-quality diagnostic outcome. This is consistent with research on individual diagnostic reasoning, which shows that diagnostic activities make a unique contribution to the diagnostic outcome after controlling for content knowledge (Fink et al., 2023).

In summary, we explored evidence elicitation, evidence sharing, and hypotheses sharing as crucial CDAs. The findings revealed diverse associations of these CDAs with individual characteristics and facets of the diagnostic outcome, supporting the notion that the CDR process involves a variety of different skills rather than one overarching skill. On the basis of these results, we propose categorizing CDAs into activities primarily focused on individual goals and needs (e.g., elicitation) and more transactive activities directly targeted at the collaborator (e.g., sharing). To enhance the quality of CDAs, instructional support should be considered. For instance, providing learners with an adaptive collaboration script has been shown to improve evidence sharing quality and promote the internalization of collaboration scripts, fostering the development of collaboration knowledge (Radkowitsch et al., 2021). Further, group awareness tools, such as shared concept maps, should be considered to compensate for deficits in one's collaboration knowledge (Engelmann & Hesse, 2010). However, what is required to engage in high-quality CDAs remains an open question. One starting point is domain-general cognitive skills, which could influence CDAs particularly in the early stages of skill development (Hetmanek et al., 2018). Previous research showed that, in diagnostic reasoning, instructional support is more beneficial when it is domain-specific rather than domain-general (Schons et al., 2022). Thus, there is still a need for further research on what such instructional support might look like.

Future Research

Although we used data from three studies, all were from the same domain; thus, it remains an open question whether these findings apply across domains. The CDR model claims that the described relationships are not limited to the medical domain but are valid for collaboratively solving complex problems in knowledge-rich domains in general. Future research should explore generalizability, for example, in teacher education, a distinct field that also requires diagnosing and complex problem-solving (Heitzmann et al., 2019).

Regardless of domain, the finding that CDAs did not mediate between individual characteristics and diagnostic outcomes, together with the observed effects of the CDAs in the current study, suggests that an isolated analysis of CDAs does not fully represent the complex interactions among activities, individual characteristics, and diagnostic outcomes. Future studies might assess CDAs as a bundle of necessary activities, including a focus on their possible non-linear interactions. We propose using process data analysis to account for the inherent complexity of the data, as different activities in different sequences can lead to the same outcome (Y. Chen et al., 2019). More exploratory analyses of fine-grained, theory-based sequence data are needed to provide insights into the more general and more specific processes involved in successfully solving complex problems collaboratively (Stadler et al., 2020).
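As a toy illustration of such sequence-oriented analysis (the activity codes and log entries are hypothetical), one can tally which orderings of CDAs co-occur with correct diagnoses before applying more sophisticated process-mining or event-history methods:

```python
from collections import Counter

# Hypothetical logged CDA sequences (E = elicit evidence, S = share
# evidence, H = share hypothesis), each paired with whether the final
# diagnosis was correct.
logs = [
    (("E", "S", "H"), True),
    (("E", "H", "S"), True),
    (("S", "H"), False),
    (("E", "S", "H"), True),
    (("H",), False),
]

# Tally sequence/outcome pairs: different orderings can lead to the same
# outcome, which is why sequence-level analysis is needed rather than
# activity frequencies alone.
by_outcome = Counter((seq, ok) for seq, ok in logs)
for (seq, ok), n in sorted(by_outcome.items()):
    print("->".join(seq), "correct" if ok else "incorrect", n)
```

A real analysis would of course work on time-stamped log data and model sequence effects statistically (e.g., with event history analysis, as Y. Chen et al., 2019, propose), but the data structure is essentially this.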

As our results have shown, collaboration knowledge, and thus awareness of the knowledge distribution among collaboration partners, is highly relevant. While a recent meta-analysis showed a moderate effect of group awareness on students' performance in computer-supported collaborative learning (D. Chen et al., 2024), this has so far not been systematically investigated in collaborative problem-solving. Thus, more research on the influence of collaboration knowledge in collaborative problem-solving is needed.

Further, additional factors associated with success in collaborative problem-solving—not yet incorporated into the model and thus not yet investigated systematically—include communication skills (OECD, 2017 ), the self-concept of problem-solving ability (Scalise et al., 2016 ), and positive activating emotions during problem-solving tasks (Camacho-Morles et al., 2019 ).

Limitations

There are, however, some limitations to be considered. One is that we considered only CDAs and how they relate to individual characteristics and outcomes. However, the CDR model also introduces individual diagnostic activities, such as the generation of evidence and the drawing of conclusions. These occur before and after the CDAs and may therefore also have an impact on the described relationships. We nevertheless decided to focus on the CDAs within the CDR process because they are particularly relevant for constructing a shared problem representation, which is central to CDR. Future research might consider these individual diagnostic activities, as they could, for example, further explain how content knowledge is related to the diagnostic outcome.

Another limitation of the current analyses is the operationalization of quality for the CDAs. We chose the appropriateness of the radiological examination for the indicated diagnosis as the quality of evidence elicitation, and precision as the quality of evidence sharing and hypotheses sharing. However, each of these sheds light on only one perspective of the respective activity, while possibly obscuring others. For example, content knowledge may not be related to the precision of hypotheses sharing but may be related to other quality indicators, such as sensitivity or specificity. We nevertheless decided to use the precision aspect of activities, as research shows that collaborators often fail to identify relevant information and that the amount of information is not related to performance (Tschan et al., 2009). Future research may explore a broader variety of quality indicators to assess the quality of CDAs as comprehensively as possible. It should also be noted that in study B a suppression effect (Horst, 1941) between hypotheses sharing and evidence elicitation artificially inflated the observed effect size. This is to be expected with process data, which can be highly correlated, and it needs to be considered when interpreting the effect sizes.
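The suppression effect mentioned here can be illustrated on simulated data (not the study's data): a variable that is unrelated to the outcome, but correlated with another predictor's irrelevant variance, inflates that predictor's coefficient when both enter the regression together.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
signal = rng.normal(size=n)
noise = rng.normal(size=n)
x1 = signal + noise                  # observed predictor: signal plus irrelevant variance
x2 = noise                           # suppressor: unrelated to y, shares x1's noise
y = signal + 0.5 * rng.normal(size=n)

def slopes(X, y):
    """Regression coefficients (excluding the intercept) via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

b_alone = slopes(x1.reshape(-1, 1), y)[0]
b_with_suppressor = slopes(np.column_stack([x1, x2]), y)[0]
# The suppressor soaks up x1's irrelevant variance, so x1's coefficient
# roughly doubles even though x2 itself does not predict y.
print(f"x1 alone: {b_alone:.2f}, with suppressor: {b_with_suppressor:.2f}")
```

This is the mechanism by which highly correlated process measures, such as hypotheses sharing and evidence elicitation, can artificially inflate each other's apparent effects.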

In addition, it should be noted that the omega values obtained for the conceptual and strategic knowledge measures were below the commonly accepted threshold of 0.7. While we chose omega as a more appropriate measure of reliability in our context, given the complex and multifaceted nature of the knowledge constructs, these lower-than-expected values raise important questions about the quality of the data and the robustness of the findings. It is important to understand that knowledge constructs, by their very nature, may not always exhibit high levels of internal consistency because of the diverse and interrelated components they encompass (Edelsbrunner, 2024; Stadler et al., 2021; Taber, 2018). This complexity may be reflected in the moderate omega values observed, which, while seemingly counterintuitive, do not invalidate the potential of the constructs to account for substantial variance in related outcomes. Nevertheless, findings related to these constructs should be interpreted with caution, and the results presented should be considered tentative. Future research should further explore the implications of using different reliability coefficients in assessing complex constructs within the learning sciences, potentially providing deeper insights into the nuanced nature of knowledge and its measurement.
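For a unidimensional measure, McDonald's omega (total) is (Σλ)² / ((Σλ)² + Σθ), the share of total score variance attributable to the common factor. A small sketch with hypothetical standardized loadings shows how heterogeneous items, as expected for knowledge constructs, yield a moderate omega even when every item is meaningful:

```python
# McDonald's omega (total) for a congeneric one-factor model:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
def mcdonalds_omega(loadings, error_variances):
    common = sum(loadings) ** 2
    return common / (common + sum(error_variances))

# Hypothetical standardized loadings of a knowledge test with
# heterogeneous items; theta = 1 - lambda^2 under standardization.
loadings = [0.6, 0.5, 0.4, 0.3]
errors = [1 - l**2 for l in loadings]
print(round(mcdonalds_omega(loadings, errors), 2))  # → 0.51
```

With loadings this uneven, omega lands near 0.5, below the conventional 0.7 threshold, which is the pattern the limitation above describes.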

Another limitation of this study is related to the agent-based collaboration, as a predictive validation of collaborative problem-solving for later human-to-human collaboration in comparable contexts has not yet been systematically conducted. Although the agent-based collaboration situation used has been validated in terms of perceived authenticity, it still does not fully correspond to a real collaboration situation (Rosen, 2015 ). This could be an explanation for the low influence of social skills, as the setting might not require the application of a broad set of social skills (Hesse et al., 2015 ; Radkowitsch et al., 2020 ). In a real-life collaboration, the effects of social skills might be more pronounced. However, research showed that the human-to-agent approach did not lead to different results in collaborative problem-solving than the human-to-human approach in the 2015 PISA study, and correlations with other measures of collaborative skills have been found (Herborn et al., 2020 ; Stadler, Herborn et al., 2020 ). Future studies should specifically test the relevance of social skills for CDR in a human-to-human setting to strengthen the generalizability of our findings.

In conclusion, the current study highlights the importance of individual characteristics and CDAs as independent predictors of good diagnoses in collaborative contexts, at least in the simulation-based settings used in the studies included in our analysis. Collaboration knowledge emerged as a critical factor, demonstrating its importance over early acquired, general social skills. Therefore, the CDR approach should be revised to give higher priority to collaboration knowledge than to social skills. Furthermore, we conclude that, in simulation-based CDR, content knowledge does not play as crucial a role in predicting diagnostic success as in many other educational settings, most probably because of the ample opportunities for retrying and revising in simulation-based learning environments.

With respect to CDAs, we suggest refining the perspective on their quality and revising the CDR model by summarizing CDAs as information elicitation and information sharing, with the former being less transactive, and thus less demanding, than the latter. Adequate performance in both types of CDA is presumed to result in a high-quality shared problem representation and, in turn, a good diagnostic outcome. Collaborative problem-solving skills are highly relevant in the professional practice of knowledge-rich domains, highlighting the need to strengthen these skills in students engaged in CDR and to provide learning opportunities accordingly. Further, the ability to collaborate effectively and construct shared problem representations is important not only in CDR but also in collaborative problem-solving and computer-supported collaborative learning more generally, underscoring the need to integrate such skills into curricula and instructional design.

By emphasizing these aspects, we can improve the diagnostic skills of individuals in collaborative settings. Through advancing our understanding of CDR, we are taking a key step forward in optimizing collaborative problem-solving and ultimately contributing to improved diagnostic outcomes in various professional domains beyond CDR in medical education. In particular, integrating collaboration knowledge and skills into computer-supported collaborative learning environments can enrich learning experiences and outcomes in various knowledge-rich domains.

Please note that the data employed in this study have been used in previous publications (e.g., Brandl et al., 2021; Radkowitsch et al., 2021; Richters et al., 2022). However, the research question and the results reported here are unique to this study.

Abele, S. (2018). Diagnostic problem-solving process in professional contexts: theory and empirical investigation in the context of car mechatronics using computer-generated log-files. Vocations and Learning, 11 (1), 133–159. https://doi.org/10.1007/s12186-017-9183-x

Asparouhov, T., & Muthén, B. (2018). SRMR in Mplus . https://www.statmodel.com/download/SRMR2.pdf

Bauer, E., Sailer, M., Kiesewetter, J., Fischer, M. R., & Fischer, F. (2022). Diagnostic argumentation in teacher education: Making the case for justification, disconfirmation, and transparency. Frontiers in Education , 7 , Article 977631. https://doi.org/10.3389/feduc.2022.977631

Boshuizen, H. P., Gruber, H., & Strasser, J. (2020). Knowledge restructuring through case processing: the key to generalise expertise development theory across domains? Educational Research Review, 29 , 100310. https://doi.org/10.1016/j.edurev.2020.100310

Brandl, L., Richters, C., Radkowitsch, A., Obersteiner, A., Fischer, M. R., Schmidmaier, R., Fischer, F., & Stadler, M. (2021). Simulation-based learning of complex skills: Predicting performance with theoretically derived process features. Psychological Test and Assessment Modeling, 63 (4), 542–560. https://www.psychologie-aktuell.com/fileadmin/Redaktion/Journale/ptam-2021-4/PTAM__4-2021_6_kor.pdf

Braun, L. T., Zottmann, J. M., Adolf, C., Lottspeich, C., Then, C., Wirth, S., Fischer, M. R., & Schmidmaier, R. (2017). Representation scaffolds improve diagnostic efficiency in medical students. Medical Education, 51 (11), 1118–1126. https://doi.org/10.1111/medu.13355

Camacho-Morles, J., Slemp, G. R., Oades, L. G., Morrish, L., & Scoular, C. (2019). The role of achievement emotions in the collaborative problem-solving performance of adolescents. Learning and Individual Differences, 70 , 169–181. https://doi.org/10.1016/j.lindif.2019.02.005

Chen, D., Zhang, Y., Luo, H., Zhu, Z., Ma, J., & Lin, Y. (2024). Effects of group awareness support in CSCL on students’ learning performance: a three-level meta-analysis. International Journal of Computer-Supported Collaborative Learning, 19 (1), 97–129. https://doi.org/10.1007/s11412-024-09418-3

Chen, Y., Li, X., Liu, J., & Ying, Z. (2019). Statistical analysis of complex problem-solving process data: an event history analysis approach. Frontiers in Psychology , 10 , Article 486. https://doi.org/10.3389/fpsyg.2019.00486

Chernikova, O., Heitzmann, N., Fink, M. C., Timothy, V., Seidel, T., & Fischer, F. (2020). Facilitating diagnostic competences in higher education—a meta-analysis in medical and teacher education. Educational Psychology Review, 32 (1), 157–196. https://doi.org/10.1007/s10648-019-09492-2

Chernikova, O., Heitzmann, N., Opitz, A., Seidel, T., & Fischer, F. (2022). A theoretical framework for fostering diagnostic competences with simulations in higher education. In F. Fischer & A. Opitz (Eds.),  Learning to Diagnose with Simulations . Springer, Cham. https://doi.org/10.1007/978-3-030-89147-3_2

Cook, D. A., Brydges, R., Zendejas, B., Hamstra, S. J., & Hatala, R. M. (2013). Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality. Academic Medicine: Journal of the Association of American Medical Colleges, 88 (6), 872–883. https://doi.org/10.1097/ACM.0b013e31828ffdcf

Daniel, M., Rencic, J., Durning, S. J., Holmboe, E. S., Santen, S. A., Lang, V., Ratcliffe, T., Gordon, D., Heist, B., Lubarsky, S., Estrada, C. A., Ballard, T., Artino, A. R., Da Sergio Silva, A., Cleary, T., Stojan, J., & Gruppen, L. D. (2019). Clinical reasoning assessment methods: a scoping review and practical guidance. Academic Medicine: Journal of the Association of American Medical Colleges, 94 (6), 902–912. https://doi.org/10.1097/ACM.0000000000002618

Dunbar, K. (1995). How scientists really reason: scientific reasoning in real-world laboratories. In R. J. Sternberg & J. E. Davidson (Eds.), The nature of insight (pp. 365–395). MIT Press.

Edelsbrunner, P. A. (2024). Does interference between intuitive conceptions and scientific concepts produce reliable inter-individual differences? Science & Education. Advance online publication. https://doi.org/10.1007/s11191-024-00500-8

Engelmann, T., & Hesse, F. W. (2010). How digital concept maps about the collaborators’ knowledge and information influence computer-supported collaborative problem solving. International Journal of Computer-Supported Collaborative Learning, 5 (3), 299–319. https://doi.org/10.1007/s11412-010-9089-1

Fink, M. C., Heitzmann, N., Reitmeier, V., Siebeck, M., Fischer, F., & Fischer, M. R. (2023). Diagnosing virtual patients: the interplay between knowledge and diagnostic activities. Advances in Health Sciences Education: Theory and Practice, 1–20. https://doi.org/10.1007/s10459-023-10211-4

Fiore, S. M., Graesser, A. C., & Greiff, S. (2018). Collaborative problem-solving education for the twenty-first-century workforce. Nature Human Behaviour, 2 (6), 367–369. https://doi.org/10.1038/s41562-018-0363-y

Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48 (1), 56–66. https://doi.org/10.1080/00461520.2012.748005

Fischer, F., Kollar, I., Ufer, S., Sodian, B., Hussmann, H., Pekrun, R., Neuhaus, B. J., Dorner, B., Pankofer, S., Fischer, M. R., Strijbos, J.‑W., Heene, M., & Eberle, J. (2014). Scientific reasoning and argumentation: advancing an interdisciplinary research agenda in education. Frontline Learning Research , 2 (3), 28–45. https://doi.org/10.14786/flr.v2i2.96

Fischer, M. R., Kopp, V., Holzer, M., Ruderich, F., & Jünger, J. (2005). A modified electronic key feature examination for undergraduate medical students: validation threats and opportunities. Medical Teacher, 27 (5), 450–455. https://doi.org/10.1080/01421590500078471

Förtsch, C., Sommerhoff, D., Fischer, F., Fischer, M. R., Girwidz, R., Obersteiner, A., Reiss, K., Stürmer, K., Siebeck, M., Schmidmaier, R., Seidel, T., Ufer, S., Wecker, C., & Neuhaus, B. J. (2018). Systematizing professional knowledge of medical doctors and teachers: development of an interdisciplinary framework in the context of diagnostic competences. Education Sciences, 8 (4), 207. https://doi.org/10.3390/educsci8040207

Goldhammer, F., Naumann, J., Rölke, H., Stelter, A., & Tóth, K. (2017). Relating product data to process data from computer-based competency assessment. In D. Leutner, J. Fleischer, J. Grünkorn, & E. Klieme (Eds.), Methodology of educational measurement and assessment. competence assessment in education (pp. 407–425). Springer International Publishing. https://doi.org/10.1007/978-3-319-50030-0_24

Graesser, A. C., Fiore, S. M., Greiff, S., Andrews-Todd, J., Foltz, P. W., & Hesse, F. W. (2018). Advancing the science of collaborative problem solving. Psychological Science in the Public Interest: A Journal of the American Psychological Society, 19 (2), 59–92. https://doi.org/10.1177/1529100618808244

Hautz, W. E., Kämmer, J. E., Schauber, S. K., Spies, C. D., & Gaissmaier, W. (2015). Diagnostic performance by medical students working individually or in teams. JAMA, 313 (3), 303–304. https://doi.org/10.1001/jama.2014.15770

Heitzmann, N., Seidel, T., Hetmanek, A., Wecker, C., Fischer, M. R., Ufer, S., Schmidmaier, R., Neuhaus, B. J., Siebeck, M., Stürmer, K., Obersteiner, A., Reiss, K., Girwidz, R., Fischer, F., & Opitz, A. (2019). Facilitating diagnostic competences in simulations in higher education: a framework and a research agenda. Frontline Learning Research , 1–24. https://doi.org/10.14786/flr.v7i4.384

Herborn, K., Stadler, M., Mustafić, M., & Greiff, S. (2020). The assessment of collaborative problem solving in PISA 2015: can computer agents replace humans? Computers in Human Behavior, 104 , 105624. https://doi.org/10.1016/j.chb.2018.07.035

Hesse, F. W., Care, E., Buder, J., Sassenberg, K., & Griffin, P. (2015). A framework for teachable collaborative problem solving skills. In P. Griffin & E. Care (Eds.), Assessment and Teaching of 21 st Century Skills (pp. 37–56). Springer.

Hetmanek, A., Engelmann, K., Opitz, A., & Fischer, F. (2018). Beyond intelligence and domain knowledge. In F. Fischer, C. A. Chinn, K. Engelmann, & J. Osborne (Eds.), Scientific reasoning and argumentation (pp. 203–226). Routledge. https://doi.org/10.4324/9780203731826-12

Hilbert, S., & Stadler, M. (2017). Structural equation models. In V. Zeigler-Hill & T. K. Shackelford (Eds.), Encyclopedia of Personality and Individual differences (pp. 1–9). Springer International Publishing. https://doi.org/10.1007/978-3-319-28099-8_1285-1

Hitchcock, D. (2005). Good reasoning on the Toulmin model. Argumentation, 19 (3), 373–391. https://doi.org/10.1007/s10503-005-4422-y

Horst, P. (1941). The prediction of personnel adjustment. Social Science Research Council Bulletin, 48, 431–436.

Jeong, H., Hmelo-Silver, C. E., & Jo, K. (2019). Ten years of computer-supported collaborative learning: a meta-analysis of CSCL in STEM education during 2005–2014. Educational Research Review, 28 , 100284. https://doi.org/10.1016/j.edurev.2019.100284

Kiesewetter, J., Fischer, F., & Fischer, M. R. (2017). Collaborative clinical reasoning—a systematic review of empirical studies. The Journal of Continuing Education in the Health Professions, 37 (2), 123–128. https://doi.org/10.1097/CEH.0000000000000158

Kiesewetter, J., Sailer, M., Jung, V. M., Schönberger, R., Bauer, E., Zottmann, J. M., Hege, I., Zimmermann, H., Fischer, F., & Fischer, M. R. (2020). Learning clinical reasoning: how virtual patient case format and prior knowledge interact. BMC Medical Education, 20 (1), 73–83. https://doi.org/10.1186/s12909-020-1987-y

Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12 (1), 1–48. https://doi.org/10.1207/s15516709cog1201_1

Koschmann, T. D., Feltovich, P. J., Myers, A. C., & Barrows, H. S. (1992). Implications of CSCL for problem-based learning. ACM SIGCUE Outlook, 21 (3), 32–35. https://doi.org/10.1145/130893.130902

Liu, L., Hao, J., Davier, A. A. von, Kyllonen, P., & Zapata-Rivera, J.‑D. (2016). A tough nut to crack: measuring collaborative problem solving. In J. Keengwe, Y. Rosen, S. Ferrara, & M. Mosharraf (Eds.), Handbook of Research on Technology Tools for Real-World Skill Development (pp. 344–359). IGI Global. https://doi.org/10.4018/978-1-4666-9441-5.ch013

Muthén, L. K., & Muthén, B. O. (2017). Mplus: Statistical Analysis with Latent Variables: User’s Guide (Version 8) [Computer software]. Authors.

Nachtigall, C., Kroehne, U., Funke, F., & Steyer, R. (2003). (Why) should we use SEM? Pros and cons of structural equation modeling. Methods of Psychological Research Online, 8 (2), 1–22.

Noroozi, O., Biemans, H. J., Weinberger, A., Mulder, M., & Chizari, M. (2013). Scripting for construction of a transactive memory system in multidisciplinary CSCL environments. Learning and Instruction, 25 , 1–12. https://doi.org/10.1016/j.learninstruc.2012.10.002

OECD. (2017). PISA 2015 Assessment and Analytical Framework: Science, Reading, Mathematic, Financial Literacy and Collaborative Problem Solving, revised edition. PISA, OECD Publishing. https://doi.org/10.1787/9789264281820-en

Pickal, A. J., Engelmann, K., Chinn, C. A., Girwidz, R., Neuhaus, B. J., & Wecker, C. (2023). Fostering the collaborative diagnosis of cross-domain skills in video-based simulations. In Proceedings of the International Conference on Computer-supported for Collaborative Learning, Proceedings of the 16 th International Conference on Computer-Supported Collaborative Learning — CSCL 2023 (pp. 139–146). International Society of the Learning Sciences. https://doi.org/10.22318/cscl2023.638463

Radkowitsch, A., Fischer, M. R., Schmidmaier, R., & Fischer, F. (2020). Learning to diagnose collaboratively: Validating a simulation for medical students. GMS Journal for Medical Education, 37(5), Doc51. https://doi.org/10.3205/zma001344

Radkowitsch, A., Sailer, M., Fischer, M. R., Schmidmaier, R., & Fischer, F. (2022). Diagnosing collaboratively: A theoretical model and a simulation-based learning environment. In F. Fischer & A. Opitz (Eds.),  Learning to Diagnose with Simulations . Springer, Cham. https://doi.org/10.1007/978-3-030-89147-3_10

Radkowitsch, A., Sailer, M., Schmidmaier, R., Fischer, M. R., & Fischer, F. (2021). Learning to diagnose collaboratively—effects of adaptive collaboration scripts in agent-based medical simulations. Learning and Instruction, 75 , 101487. https://doi.org/10.1016/j.learninstruc.2021.101487

Richters, C., Stadler, M., Radkowitsch, A., Behrmann, F., Weidenbusch, M., Fischer, M. R., Schmidmaier, R., & Fischer, F. (2022). Making the rich even richer? Interaction of structured reflection with prior knowledge in collaborative medical simulations. In A. Weinberger, W. Chen, D. Hernández-Leo, & B. Chen (Eds.), International Society of the Learning Sciences. Hiroshima, Japan.

Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In C. O’Malley (Ed.), Computer-Supported Collaborative Learning (pp. 66–97). Springer.

Rosen, Y. (2015). Computer-based assessment of collaborative problem solving: exploring the feasibility of human-to-agent approach. International Journal of Artificial Intelligence in Education, 25 (3), 380–406. https://doi.org/10.1007/s40593-015-0042-3

Savalei, V., & Rhemtulla, M. (2013). The performance of robust test statistics with categorical data. British Journal of Mathematical and Statistical Psychology, 66 (2), 201–223. https://doi.org/10.1111/j.2044-8317.2012.02049.x

Scalise, K., Mustafic, M., & Greiff, S. (2016). Dispositions for collaborative problem solving. In S. Kuger, E. Klieme, N. Jude, & D. Kaplan (Eds.), Methodology of Educational Measurement and Assessment. Assessing Contexts of Learning (pp. 283–299). Springer International Publishing. https://doi.org/10.1007/978-3-319-45357-6_11

Schmidt, H. G., & Mamede, S. (2015). How to improve the teaching of clinical reasoning: a narrative review and a proposal. Medical Education, 49 (10), 961–973. https://doi.org/10.1111/medu.12775

Schmidt, H. G., & Rikers, R. M. J. P. (2007). How expertise develops in medicine: knowledge encapsulation and illness script formation. Medical Education, 41 (12), 1133–1139. https://doi.org/10.1111/j.1365-2923.2007.02915.x

Schons, C., Obersteiner, A., Reinhold, F., Fischer, F., & Reiss, K. (2022). Developing a simulation to foster prospective mathematics teachers’ diagnostic competencies: the effects of scaffolding . Advance online publication. https://doi.org/10.1007/s13138-022-00210-0

Stadler, M., Herborn, K., Mustafić, M., & Greiff, S. (2020). The assessment of collaborative problem solving in PISA 2015: an investigation of the validity of the PISA 2015 CPS tasks. Computers & Education, 157 , 103964. https://doi.org/10.1016/j.compedu.2020.103964

Stadler, M., Hofer, S., & Greiff, S. (2020). First among equals: log data indicates ability differences despite equal scores. Computers in Human Behavior, 111 , 106442. https://doi.org/10.1016/j.chb.2020.106442

Stadler, M., Sailer, M., & Fischer, F. (2021). Knowledge as a formative construct: a good alpha is not always better. New Ideas in Psychology , 60. https://doi.org/10.1016/j.newideapsych.2020.100832

Stark, R., Kopp, V., & Fischer, M. R. (2011). Case-based learning with worked examples in complex domains: two experimental studies in undergraduate medical education. Learning and Instruction, 21 (1), 22–33. https://doi.org/10.1016/j.learninstruc.2009.10.001

Stasser, G., & Titus, W. (1985). Pooling of unshared infomration in group decision making: biased information sampling during discussion. Journal of Personality and Social Psychology, 48 (6), 1467–1478.

Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48 (6), 1273–1296. https://doi.org/10.1007/s11165-016-9602-2

Tschan, F., Semmer, N. K., Gurtner, A., Bizzari, L., Spychiger, M., Breuer, M., & Marsch, S. U. (2009). Explicit reasoning, confirmation bias, and illusory transactive memory: a simulation study of group medical decision making. Small Group Research, 40 (3), 271–300. https://doi.org/10.1177/1046496409332928

van Joolingen, W. R., & de Jong, T. (1997). An extended dual search space model of scientific discovery learning. Instructional Science, 25 (5), 307–346. https://doi.org/10.1023/A:1002993406499

Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2017). Socio-cognitive scaffolding with computer-supported collaboration scripts: a meta-analysis. Educational Psychology Review, 29 (3), 477–511. https://doi.org/10.1007/s10648-016-9361-7

Vogel, F., Weinberger, A., Hong, D., Wang, T., Glazewski, K., Hmelo-Silver, C. E., Uttamchandani, S., Mott, B., Lester, J., Oshima, J., Oshima, R., Yamashita, S., Lu, J., Brandl, L., Richters, C., Stadler, M., Fischer, F., Radkowitsch, A., Schmidmaier, R., . . . Noroozi, O. (2023). Transactivity and knowledge co-construction in collaborative problem solving. In Proceedings of the International Conference on Computer-supported for Collaborative Learning, Proceedings of the 16 th International Conference on Computer-Supported Collaborative Learning — CSCL 2023 (pp. 337–346). International Society of the Learning Sciences. https://doi.org/10.22318/cscl2023.646214

Wegner, D. M. (1987). transactive memory: a contemporary analysis of the group mind. In B. Mullen & G. R. Goethals (Eds.), Theories of Group Behavior (pp. 185–208). Springer New York. https://doi.org/10.1007/978-1-4612-4634-3_9

Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46 (1), 71–95. https://doi.org/10.1016/j.compedu.2005.04.003


Open Access funding enabled and organized by Projekt DEAL. The research presented in this contribution was funded by a grant of the Deutsche Forschungsgemeinschaft (DFG, FOR 2385) to Frank Fischer, Martin R. Fischer and Ralf Schmidmaier (FI 792/11-1 & FI 792/11-2).

Author information

Authors and Affiliations

Department of Psychology, Ludwig-Maximilians-Universität München, Leopoldstr. 13, 80802, Munich, Germany

Laura Brandl, Matthias Stadler, Constanze Richters & Frank Fischer

Institute of Medical Education, LMU University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany

Matthias Stadler & Martin R. Fischer

IPN Leibniz Institute for Science and Mathematics Education, Department of Mathematics Education, Kiel, Germany

Anika Radkowitsch

Medizinische Klinik und Poliklinik IV, LMU University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany

Ralf Schmidmaier


Corresponding author

Correspondence to Laura Brandl .

Ethics declarations

Conflict of interest statement

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Please note that the data employed in this study have been used in previous publications (e.g., Brandl et al., 2021; Radkowitsch et al., 2021; Richters et al., 2022). However, the research question and the results reported here are unique to this study. An initial version of this article was presented as a poster at ISLS 2024.

See Tables 7, 8 and 9.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Brandl, L., Stadler, M., Richters, C. et al. Collaborative Problem-Solving in Knowledge-Rich Domains: A Multi-Study Structural Equation Model. Intern. J. Comput.-Support. Collab. Learn (2024). https://doi.org/10.1007/s11412-024-09425-4

Download citation

Received: 18 September 2023

Accepted: 09 May 2024

Published: 24 June 2024

DOI: https://doi.org/10.1007/s11412-024-09425-4


  • Collaborative Problem-solving
  • Simulation-based Learning Environment
  • Diagnostic Activities
  • Diagnostic Reasoning
  • Medical Education

More From Forbes

Unlocking Better Solutions Through Reframing


If you’re looking to solve problems more effectively, the key might not be finding the right solution but asking the right question. Reframing how we see problems can unlock radically better solutions and foster innovation, as emphasized by Thomas Wedell-Wedellsborg , a globally recognized expert and author on problem-solving and innovation.

Framing vs. Analysis: Understanding the Difference

Wedell-Wedellsborg has found that even experienced problem solvers overlook an important distinction: the difference between framing and analysis.

“The mistake of jumping very quickly to the solution is common,” Wedell-Wedellsborg says, “and you might be doing a careful analysis of the wrong problem.”

He shared an example: tenants in an office building complained about the slow elevator. Rather than making the elevator faster, a simple mirror installed next to it solved the problem by making the wait feel shorter. The example highlights how a nuanced understanding of a problem can lead to more effective solutions.

The importance of framing cannot be overstated. Wedell-Wedellsborg suggests there is a misconception that deep thinking and careful problem analysis require a lot of time.

“It’s super dangerous to think framing has to take a lot of time,” he warns. His research suggests that mastering rapid reframing can integrate it into everyday problem-solving, making it a powerful tool that can be used frequently and effectively.

Overcoming Biases with Collaborative Problem-Solving

Wedell-Wedellsborg emphasizes a critical bias, which he calls the ‘law of the hammer.’ It’s where people tend to frame problems to match their preferred solutions. Ultimately, it’s about using the tools you are comfortable with.


“We all have our own hammers because we’ve experienced them working well for many problems,” he said. The danger lies in stubbornly applying the same approach even when it is ineffective, rather than collaborating. After all, as Abraham Maslow wrote in 1966, “If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.”

“Biases are very hard to overcome even if you are aware of them,” Wedell-Wedellsborg notes. “Involving other people in the problem-definition stage, not just in finding the solution, can help.”

This collective approach transforms problem-solving into a team effort. For example, in a technology company, engineers might approach a software bug with their technical expertise. However, involving customer service representatives who interact with users daily can provide new perspectives that lead to more effective solutions.

Psychological Safety and Resistance to Reframing

Wedell-Wedellsborg has identified a significant obstacle to effective problem-solving: resistance, often stemming from a lack of psychological safety.

“The role of psychological safety cannot be ignored,” he asserts. Creating an environment where team members feel comfortable challenging ideas is vital.

“If you can’t get full psychological safety with your entire team, at least ensure that one colleague can oppose you behind the scenes.” This small step can significantly impact the team’s ability to engage in honest and constructive problem discussions.

Furthermore, stealth tactics can be effective when facing resistance from more senior team members.

Wedell-Wedellsborg shared a story from his excellent book, “What’s Your Problem?” about a consultant who faced resistance from a client team. The consultant gathered evidence that the client’s diagnosis was incorrect by conducting anonymous interviews, eventually convincing the team without direct confrontation. This approach emphasizes the importance of creating safe spaces for honest feedback and validation.

Reframing Goals: Flexibility and Adaptability

Wedell-Wedellsborg shared a story from creativity scholar Robert Sternberg about an executive who loved his job but hated his boss.

The executive’s contempt for his boss was so strong that he contacted a headhunter who said that finding a similar job elsewhere would be easy. The same evening, the executive spoke to his wife, who happened to be an expert on reframing. This led to a better approach.

In Sternberg’s words: “He returned to the headhunter and gave the headhunter his boss’s name. The headhunter found a new job for the executive’s boss, which the boss—having no idea of what was going on—accepted. The executive then got his boss’s job.”

It seems some of us in leadership roles could do with a little reframing.

“You need to understand what a good outcome looks like and be willing to rethink that outcome,” Wedell-Wedellsborg emphasized.

Flexibility in problem-solving is crucial, and it involves moving from fixed theories to working hypotheses. “You shouldn’t just have a single hypothesis; you should develop multiple explanations for what you’re seeing,” he advised. This approach, backed by research, increases the chances of getting it right and allows for adaptability and innovation.

Evidence and Experience: The Role of Past Events

Wedell-Wedellsborg also stresses the importance of analyzing past events and systemic factors when reframing problems. He shared an example of a couple who solved 80% of their conflicts by recognizing that discussions after 10:00 PM led to fights.

By implementing a rule to address contentious topics in the morning, they dramatically improved their relationship. “Past experiences can provide valuable insights into current problems,” he noted. (Contributor’s comment: I’ve tried it and it works!)

To Reframe Is To Transform

Reframing is a powerful tool for leaders seeking innovative solutions to complex problems. By understanding the distinction between framing and analysis, overcoming biases through collaboration, ensuring psychological safety, and maintaining flexibility in goals, you can unlock new pathways to success.

Watch the full interview with Thomas Wedell-Wedellsborg and Dan Pontefract on the Leadership NOW program below, or listen to it on your favorite podcast platform.

Dan Pontefract




Improving collaborative problem-solving skills via automated feedback and scaffolding: a quasi-experimental study with CPSCoach 2.0

Authors: D’Mello, S.K., Duran, N., Michaels, A., Stewart, A.E.B.

Publication Date: February, 2024

Type: Journal Article 



How Do Private Online Middle Schools Promote Critical Thinking and Problem-Solving Skills?


Private online middle schools promote critical thinking and problem-solving skills through interactive, personalized curricula, project-based learning, and real-world applications. They utilize digital tools and collaborative platforms to engage students in analytical discussions, fostering independent thought and innovative solutions. This approach nurtures intellectual curiosity and adaptability in diverse learning environments.


COMMENTS

  1. The effectiveness of collaborative problem solving in promoting

    Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field ...

  2. Advancing the Science of Collaborative Problem Solving

    Collaborative problem solving (CPS) is an important 21st century skill that is increasingly recognized as being critical to efficiency, effectiveness, and innovation in the modern global economy (Fiore, Graesser, & Greiff, 2018; Organisation for Economic Co-operation and Development [OECD], 2017a).

  3. Comparative analysis of student performance in collaborative problem

    In 2015, the Organisation for Economic Cooperation and Development, in their Programme for International Student Assessment assessed 15-year-old students' collaborative problem solving achievement, with the use of computer-simulated agents, aiming to address the lack of internationally comparable data in this field.

  4. Full article: Measuring collaborative problem solving: research agenda

    Defining collaborative problem solving. Collaborative problem solving refers to "problem-solving activities that involve interactions among a group of individuals" (O'Neil et al., Citation 2003, p. 4; Zhang, Citation 1998, p. 1).In a more detailed definition, "CPS in educational setting is a process in which two or more collaborative parties interact with each other to share and ...

  5. A Literature Review on Collaborative Problem Solving for College and

    The literature and the employee and workforce surveys rank collaborative problem solving (CPS) among the top 5 most critical skills necessary for success in college and the workforce. This paper provides a review of the literature on CPS and related terms, including a discussion of their definitions, importance to higher education and workforce ...

  6. Understanding student teachers' collaborative problem solving: Insights

    Collaborative problem solving, as a key competency in the 21st century, includes both social and cognitive processes with interactive, interdependent, and periodic characteristics, so it is difficult to analyze collaborative problem solving by traditional coding and counting methods. ... An analysis of student collaborative problem solving ...

  7. What makes peer collaborative problem solving productive or

    Global demands for collaborative problem solving (CPS) have sparked investigations of peer collaboration in the educational context. The aim of this systematic review was to identify and systematize research findings on (a) characteristics of productive and unproductive face-to-face (f2f) or synchronous CPS via digital devices among adolescents in the educational context, (b) training and ...

  8. PDF The effectiveness of collaborative problem solving in promoting

    collaborative problem solving can result in a rise or decrease in critical thinking. The ndings show that (1) collaborative problem solving is an effective teaching approach to foster stu-

  9. How to ace collaborative problem solving

    To solve any problem—whether personal (eg, deciding where to live), business-related (eg, raising product prices), or societal (eg, reversing the obesity epidemic)—it's crucial to first define the problem. In a team setting, that translates to establishing a collective understanding of the problem, awareness of context, and alignment of ...

  10. Effectiveness of online collaborative problem‐solving method on

    The findings show that (1) collaborative problem solving is an effective teaching approach to foster students' critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P ...

  11. An analysis of collaborative problem‐solving activities mediated by

    Researchers have indicated that the collaborative problem-solving space afforded by the collaborative systems significantly impact the problem-solving process. However, recent investigations into collaborative simulations, which allow a group of students to jointly manipulate a problem in a shared problem space, have yielded divergent results ...

  12. PDF Advancing the Science of Collaborative Problem Solving

    Collaborative problem solving (CPS) is an important 21st century skill that is increasingly recognized as being critical to efficiency, effectiveness, and innovation in the

  13. Assessing and Teaching 21st Century Skills: Collaborative Problem

    A combination of collaboration, critical thinking, communication, and problem solving can be thought of as collaborative problem solving (CPS). The Assessment and Teaching of 21st Century Skills (ATC21S) project (Griffin, Care, & McGaw, 2012 ) set about defining ways of measuring individual person skills in collaborative problem solving.

  14. What Is Collaborative Problem Solving and Why Use the Approach?

    The Collaborative Problem Solving Approach. The Collaborative Problem Solving (CPS) approach represents a novel, practical, compassionate, and highly effective model for helping challenging children and those who work and live with them. The CPS approach was first articulated in the widely read book, The Explosive Child [ 3 ], and subsequently ...

  15. The effectiveness of collaborative problem solving in promoting

    Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field of education as well as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students' critical thinking remains uncertain.

  16. Visual analysis of commognitive conflict in collaborative problem

    In today's knowledge-intensive and digital society, collaborative problem-solving (CPS) is considered a critical skill for students to develop. Moreover, international education research has embraced a new paradigm of communication-focused inquiry, and the commognitive theory helps enhance the understanding of CPS work.

  17. Collaborative Problem Solving

    The PISA 2015 Collaborative Problem Solving assessment was the first large-scale, international assessment to evaluate students' competency in collaborative problem solving. It required students to interact with simulated (computer) in order to solve problems. These dynamic, simulated agents were designed to represent different profiles of ...

  18. Collaborative Problem Solving

    Abstract. Collaborative problem solving (CPS) has been deemed a competency critical for success in today's world given that many of the challenges of today require individuals to come together to find solutions to novel problems. This has made developing and implementing ways to assess CPS an important endeavor.

  19. Think:Kids : Collaborative Problem Solving in Schools

    The Results. Our research has shown that the Collaborative Problem Solving approach helps kids and adults build crucial social-emotional skills and leads to dramatic decreases in behavior problems across various settings. Results in schools include remarkable reductions in time spent out of class, detentions, suspensions, injuries, teacher ...

  20. Collaborative Problem Solving: The Ultimate Guide

    Because collaborative problem solving involves multiple people and ideas, there are some techniques that can help you stay on track, engage efficiently, and communicate effectively during collaboration. Set Expectations. From the very beginning, expectations for openness and respect must be established for CPS to be effective.

  21. Critical Thinking for Team Collaboration: A Guide to Effective Problem

    Various skills are necessary for collaborative critical thinking, including effective communication, active listening, empathy, open-mindedness, problem-solving, and decision-making. These skills help team members share diverse perspectives, identify biases, and address issues from multiple angles, fostering well-rounded and effective ...

  22. Collaborative Problem-Solving in Knowledge-Rich Domains: A ...

    Collaborative problem-solving skills are highly relevant in professional practice of knowledge-rich domains, highlighting the need to strengthen these skills in students engaged in CDR and to provide learning opportunities accordingly. ... Statistical analysis of complex problem-solving process data: an event history analysis approach ...

  23. PDF Behavior Sequence Patterns in Collaborative Problem Solving: A Multiple

    Collaborative problem solving (CPS) has been recognized as a pivotal skill, serving as a catalyst for successful job performance and active learning . The acknowledgment of CPS's significance has spurre d researchers to delve into its assessment, analysis, and development. Regarding analysis, some existing studies have used machine

  24. Assessing individual contributions to Collaborative Problem Solving: A

    Collaborative Problem Solving (CPS) is an interactive, interdependent, and temporal process. However, current methods for measuring the CPS processes of individuals, such as coding and counting, treat these processes as sets of isolated and independent events. In contrast, Epistemic Network Analysis (ENA) models how the contributions of a given ...

  25. Unlocking Better Solutions Through Reframing

    Overcoming Biases with Collaborative Problem-Solving Wedell-Wedellsborg emphasizes a critical bias, which he calls the 'law of the hammer.' It's where people tend to frame problems to match ...

  26. Improving collaborative problem-solving skills via automated feedback

    Improving collaborative problem-solving skills via automated feedback and scaffolding: a quasi-experimental study with CPSCoach 2.0. Authors: D'Mello, S.K., Duran ...

  27. How Do Private Online Middle Schools Promote Critical Thinking and

    Private online middle schools promote critical thinking and problem-solving skills through interactive, personalized curricula, project-based learning, and real-world applications. They utilize digital tools and collaborative platforms to engage students in analytical discussions, fostering independent thought and innovative solutions.