• Utility Menu

University Logo

GA4 Tracking Code

Home

fa51e2b1dc8cca8f7467da564e77b5ea

  • Make a Gift
  • Join Our Email List
  • Problem Solving in STEM

Solving problems is a key component of many science, math, and engineering classes.  If a goal of a class is for students to emerge with the ability to solve new kinds of problems or to use new problem-solving techniques, then students need numerous opportunities to develop the skills necessary to approach and answer different types of problems.  Problem solving during section or class allows students to develop their confidence in these skills under your guidance, better preparing them to succeed on their homework and exams. This page offers advice about strategies for facilitating problem solving during class.

How do I decide which problems to cover in section or class?

In-class problem solving should reinforce the major concepts from the class and provide the opportunity for theoretical concepts to become more concrete. If students have a problem set for homework, then in-class problem solving should prepare students for the types of problems that they will see on their homework. You may wish to include some simpler problems both in the interest of time and to help students gain confidence, but it is ideal if the complexity of at least some of the in-class problems mirrors the level of difficulty of the homework. You may also want to ask your students ahead of time which skills or concepts they find confusing, and include some problems that are directly targeted to their concerns.

You have given your students a problem to solve in class. What are some strategies to work through it?

  • Try to give your students a chance to grapple with the problems as much as possible.  Offering them the chance to do the problem themselves allows them to learn from their mistakes in the presence of your expertise as their teacher. (If time is limited, they may not be able to get all the way through multi-step problems, in which case it can help to prioritize giving them a chance to tackle the most challenging steps.)
  • When you do want to teach by solving the problem yourself at the board, talk through the logic of how you choose to apply certain approaches to solve certain problems.  This way you can externalize the type of thinking you hope your students internalize when they solve similar problems themselves.
  • Start by setting up the problem on the board (e.g you might write down key variables and equations; draw a figure illustrating the question).  Ask students to start solving the problem, either independently or in small groups.  As they are working on the problem, walk around to hear what they are saying and see what they are writing down. If several students seem stuck, it might be a good to collect the whole class again to clarify any confusion.  After students have made progress, bring the everyone back together and have students guide you as to what to write on the board.
  • It can help to first ask students to work on the problem by themselves for a minute, and then get into small groups to work on the problem collaboratively.
  • If you have ample board space, have students work in small groups at the board while solving the problem.  That way you can monitor their progress by standing back and watching what they put up on the board.
  • If you have several problems you would like to have the students practice, but not enough time for everyone to do all of them, you can assign different groups of students to work on different – but related - problems.

When do you want students to work in groups to solve problems?

  • Don’t ask students to work in groups for straightforward problems that most students could solve independently in a short amount of time.
  • Do have students work in groups for thought-provoking problems, where students will benefit from meaningful collaboration.
  • Even in cases where you plan to have students work in groups, it can be useful to give students some time to work on their own before collaborating with others.  This ensures that every student engages with the problem and is ready to contribute to a discussion.

What are some benefits of having students work in groups?

  • Students bring different strengths, different knowledge, and different ideas for how to solve a problem; collaboration can help students work through problems that are more challenging than they might be able to tackle on their own.
  • In working in a group, students might consider multiple ways to approach a problem, thus enriching their repertoire of strategies.
  • Students who think they understand the material will gain a deeper understanding by explaining concepts to their peers.

What are some strategies for helping students to form groups?  

  • Instruct students to work with the person (or people) sitting next to them.
  • Count off.  (e.g. 1, 2, 3, 4; all the 1’s find each other and form a group, etc)
  • Hand out playing cards; students need to find the person with the same number card. (There are many variants to this.  For example, you can print pictures of images that go together [rain and umbrella]; each person gets a card and needs to find their partner[s].)
  • Based on what you know about the students, assign groups in advance. List the groups on the board.
  • Note: Always have students take the time to introduce themselves to each other in a new group.

What should you do while your students are working on problems?

  • Walk around and talk to students. Observing their work gives you a sense of what people understand and what they are struggling with. Answer students’ questions, and ask them questions that lead in a productive direction if they are stuck.
  • If you discover that many people have the same question—or that someone has a misunderstanding that others might have—you might stop everyone and discuss a key idea with the entire class.

After students work on a problem during class, what are strategies to have them share their answers and their thinking?

  • Ask for volunteers to share answers. Depending on the nature of the problem, student might provide answers verbally or by writing on the board. As a variant, for questions where a variety of answers are relevant, ask for at least three volunteers before anyone shares their ideas.
  • Use online polling software for students to respond to a multiple-choice question anonymously.
  • If students are working in groups, assign reporters ahead of time. For example, the person with the next birthday could be responsible for sharing their group’s work with the class.
  • Cold call. To reduce student anxiety about cold calling, it can help to identify students who seem to have the correct answer as you were walking around the class and checking in on their progress solving the assigned problem. You may even want to warn the student ahead of time: "This is a great answer! Do you mind if I call on you when we come back together as a class?"
  • Have students write an answer on a notecard that they turn in to you.  If your goal is to understand whether students in general solved a problem correctly, the notecards could be submitted anonymously; if you wish to assess individual students’ work, you would want to ask students to put their names on their notecard.  
  • Use a jigsaw strategy, where you rearrange groups such that each new group is comprised of people who came from different initial groups and had solved different problems.  Students now are responsible for teaching the other students in their new group how to solve their problem.
  • Have a representative from each group explain their problem to the class.
  • Have a representative from each group draw or write the answer on the board.

What happens if a student gives a wrong answer?

  • Ask for their reasoning so that you can understand where they went wrong.
  • Ask if anyone else has other ideas. You can also ask this sometimes when an answer is right.
  • Cultivate an environment where it’s okay to be wrong. Emphasize that you are all learning together, and that you learn through making mistakes.
  • Do make sure that you clarify what the correct answer is before moving on.
  • Once the correct answer is given, go through some answer-checking techniques that can distinguish between correct and incorrect answers. This can help prepare students to verify their future work.

How can you make your classroom inclusive?

  • The goal is that everyone is thinking, talking, and sharing their ideas, and that everyone feels valued and respected. Use a variety of teaching strategies (independent work and group work; allow students to talk to each other before they talk to the class). Create an environment where it is normal to struggle and make mistakes.
  • See Kimberly Tanner’s article on strategies to promoste student engagement and cultivate classroom equity. 

A few final notes…

  • Make sure that you have worked all of the problems and also thought about alternative approaches to solving them.
  • Board work matters. You should have a plan beforehand of what you will write on the board, where, when, what needs to be added, and what can be erased when. If students are going to write their answers on the board, you need to also have a plan for making sure that everyone gets to the correct answer. Students will copy what is on the board and use it as their notes for later study, so correct and logical information must be written there.

For more information...

Tipsheet: Problem Solving in STEM Sections

Tanner, K. D. (2013). Structure matters: twenty-one teaching strategies to promote student engagement and cultivate classroom equity . CBE-Life Sciences Education, 12(3), 322-331.

  • Designing Your Course
  • A Teaching Timeline: From Pre-Term Planning to the Final Exam
  • The First Day of Class
  • Group Agreements
  • Classroom Debate
  • Flipped Classrooms
  • Leading Discussions
  • Polling & Clickers
  • Teaching with Cases
  • Engaged Scholarship
  • Devices in the Classroom
  • Beyond the Classroom
  • On Professionalism
  • Getting Feedback
  • Equitable & Inclusive Teaching
  • Advising and Mentoring
  • Teaching and Your Career
  • Teaching Remotely
  • Tools and Platforms
  • The Science of Learning
  • Bok Publications
  • Other Resources Around Campus

Teachers Institute

The Problem Solving Approach in Science Education

problem solving in science

Table of Contents

Have you ever wondered how science , with its vast array of facts and figures, becomes so deeply integrated into our understanding of the world? It isn’t just about memorizing data; it’s about engaging with problems and seeking solutions through a systematic approach. This is where the problem\-solving approach in science education takes the spotlight. It transforms passive listeners into active participants, nurturing the next generation of critical thinkers and innovators.

What is the Problem-Solving Approach?

At its core, the problem-solving approach is a student\-centered method that encourages learners to tackle scientific problems with curiosity and rigor. It isn’t just a teaching strategy; it’s a journey that begins with recognizing a problem and ends with reaching a conclusion through investigation and reasoning.

Step 1: Identifying the Problem

Every scientific journey begins with a question. In the classroom, this means fostering an environment where students are prompted to observe phenomena and articulate their curiosities in the form of clear, concise problems. This might look like a teacher demonstrating an unexpected result in an experiment and asking students to ponder why it occurred.

Step 2: Gathering Information

Once the problem is set, the next step is to gather relevant information. Here, students exercise their research skills, looking through textbooks, scientific journals, and credible internet sources to understand the context of their problem. They learn to differentiate between reliable and unreliable information—a skill with far-reaching implications.

Step 3: Formulating Hypotheses

Armed with information, students then formulate hypotheses. A hypothesis is an educated guess that can be tested through experiments. Encouraging learners to come up with their hypotheses promotes creativity and ownership of the learning process.

Step 4: Conducting Experiments

What sets science apart is its reliance on empirical evidence . In this step, students design and conduct experiments to test their hypotheses. They learn about controls, variables, and the importance of replicability. This hands-on experience is invaluable and often the most engaging part of the approach.

Step 5: Analyzing Data

After the experiment, comes the analysis. Students examine their results, often using statistical methods , to see if the data supports or refutes their hypotheses. This is where critical thinking is paramount, as they must interpret the data without bias.

Step 6: Drawing Conclusions

The final step in the process is drawing conclusions. Here, students evaluate the entirety of their work and determine the implications of their findings. Whether their hypotheses were supported or not, they gain insights into the scientific process and develop the ability to argue their conclusions based on evidence.

The Benefits of Problem Solving in Science Education

This methodology goes beyond knowledge acquisition; it’s about instilling a scientific mindset. Let’s explore how this approach benefits learners:

Develops Higher-Order Thinking Skills

By grappling with complex problems, students develop higher\-order thinking skills such as analysis, synthesis, and evaluation. These are not only vital in science but in everyday decision-making as well.

Encourages Active Learning

Active engagement in learning through problem-solving keeps students invested in their education. They’re not passive receivers of information but active participants in their learning journey.

Promotes Autonomy and Confidence

As students navigate through problems on their own, they build autonomy and confidence in their ability to tackle challenges. This self-assurance can translate to various aspects of their lives.

Fosters a Deeper Understanding of Scientific Principles

By connecting theoretical knowledge to practical problems, students develop a more nuanced understanding of scientific principles. It’s one thing to read about a concept; it’s another to see it in action.

Improves Collaboration Skills

Problem-solving often involves teamwork, allowing students to improve their collaborative skills . They learn to communicate ideas, share tasks, and respect different viewpoints.

Enhances Persistence and Resilience

Not every experiment will go as planned, and not every hypothesis will be correct. Navigating these challenges teaches learners persistence and resilience —qualities that are essential in science and in life.

Bringing Problem Solving Into the Classroom

Integrating the problem-solving approach into science education requires careful planning and a shift in mindset. Teachers become facilitators rather than lecturers, guiding students through the process and providing support when needed. Classrooms become active learning environments where mistakes are seen as learning opportunities.

The problem-solving approach in science education is more than a teaching strategy; it’s a blueprint for developing curious, independent, and analytical thinkers. By engaging learners in this manner, we’re not just teaching them science; we’re equipping them with the tools to solve the complex problems of tomorrow.

What do you think? How can we further encourage problem-solving skills in students from an early age? Do you believe that the problem-solving approach should be applied to other subjects beyond science? Share your thoughts and experiences with this dynamic educational strategy.

How useful was this post?

Click on a star to rate it!

Average rating 0 / 5. Vote count: 0

No votes so far! Be the first to rate this post.

We are sorry that this post was not useful for you!

Let us improve this post!

Tell us how we can improve this post?

Submit a Comment Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

Submit Comment

Pedagogy of Science

1 Science – Perspectives and Nature

  • Understanding Science
  • Myths about Nature of Science
  • Understanding Nature of Science
  • Domains of Science

2 Aims and Objectives of Science Teaching-Learning

  • Aims of Science Education
  • Objectives of Science Teaching-Learning
  • Developing Learning Objectives
  • Shift in Pedagogic Approach

3 Process Skills in Science

  • Process Skills in Science
  • Basic Process Skills in Science
  • Developing Scientific Attitude and Scientific Temper
  • Nurturing Aesthetic Sense and Curiosity
  • Interdependence of Different Aspects of Nature of Science

4 Science in School Curriculum

  • Historical Development of Science Education in India
  • Teaching of Science as Recommended in National Curriculum Framework-2005
  • Correlation of Science with Other Subjects/Disciplines

5 Organizing Teaching – Learning Experiences

  • Linking Process Skills with Content
  • Formulating Learning Objectives
  • Unit Planning in Science
  • Lesson Planning in Science
  • Using Laboratory for Teaching-Learning

6 Approaches in Science Teaching – Learning

  • Science as a Process of Construction of Knowledge
  • Inquiry Approach
  • Problem Solving Approach
  • Cooperative Learning Approach
  • Experiential Learning Approach
  • Concept Mapping as an Approach for Planning and Transaction
  • Adopting Critical Pedagogy in Science Teaching-Learning

7 Methods in Science Teaching – Learning

  • Teacher Centric Methods
  • Learner Centric Methods
  • Cooperative Learning Methods
  • Inclusion in Science Classroom
  • Adopting Critical Pedagogy

8 Learning Resources in Science

  • Identifying Appropriate Learning Resource
  • Various Learning Resources
  • Classroom Learning Resources
  • ICT as Learning Resource
  • Developing Learning Resource Centres
  • Importance of Various Activities in Science Teaching-Learning
  • Innovations in Science Laboratories
  • Role of Innovation and Research in Science
  • Professional Development of Science Teachers

9 Assessment in Science

  • Nature of Assessment in Science
  • Assessment Indicators in Science
  • Tools and Techniques for Assessment
  • Diagnostics Assessment in Science
  • Schemes for Promoting Scientific Attitude
  • Components of Food
  • How to Get Higher Yields
  • Animal Husbandry

11 Material

  • Classification of Substances
  • States of Material
  • Mole Valency and Equivalence
  • Types of Chemical Reactions
  • Basic Metallurgical Processes

12 The Living World

  • Diversity in Plants and Animals
  • Nomenclature Scientific Names and Hierarchy
  • Cell and Cell Organelles
  • Life Processes

13 How Things Work

  • Electric Current and Electric Circuit
  • Electric Potential and Potential Difference
  • Combination of Resistors — Series and Parallel
  • Electric Power
  • Heating Effects of Electric Current
  • Magnetic Effects of Electric Current
  • Electric Motor
  • Electromagnetic Induction
  • Electric Generator
  • Domestic Electric Circuits

14 Moving Things, People and Ideas

  • Newton’s Law of Motion
  • Conservation of Momentum
  • Kinetic and Potential Energy

15 Natural Phenomenon

  • Light as a Natural Phenomenon
  • Water Cycle
  • Conservation of Water Bodies
  • Natural Disasters
  • Waste Management

16 Natural Resources

  • Physical Resources and their Utilization
  • Pollution and Role of Human Being
  • Bio-Geo-Chemical Cycles in Nature
  • Natural Resource Management

Share on Mastodon

STEM Problem Solving: Inquiry, Concepts, and Reasoning

  • Published: 29 January 2022
  • Volume 32 , pages 381–397, ( 2023 )

Cite this article

  • Aik-Ling Tan   ORCID: orcid.org/0000-0002-4627-4977 1 ,
  • Yann Shiou Ong   ORCID: orcid.org/0000-0002-6092-2803 1 ,
  • Yong Sim Ng   ORCID: orcid.org/0000-0002-8400-2040 1 &
  • Jared Hong Jie Tan 1  

9447 Accesses

8 Citations

2 Altmetric

Explore all metrics

Balancing disciplinary knowledge and practical reasoning in problem solving is needed for meaningful learning. In STEM problem solving, science subject matter with associated practices often appears distant to learners due to its abstract nature. Consequently, learners experience difficulties making meaningful connections between science and their daily experiences. Applying Dewey’s idea of practical and science inquiry and Bereiter’s idea of referent-centred and problem-centred knowledge, we examine how integrated STEM problem solving offers opportunities for learners to shuttle between practical and science inquiry and the kinds of knowledge that result from each form of inquiry. We hypothesize that connecting science inquiry with practical inquiry narrows the gap between science and everyday experiences to overcome isolation and fragmentation of science learning. In this study, we examine classroom talk as students engage in problem solving to increase crop yield. Qualitative content analysis of the utterances of six classes of 113 eighth graders and their teachers were conducted for 3 hours of video recordings. Analysis showed an almost equal amount of science and practical inquiry talk. Teachers and students applied their everyday experiences to generate solutions. Science talk was at the basic level of facts and was used to explain reasons for specific design considerations. There was little evidence of higher-level scientific conceptual knowledge being applied. Our observations suggest opportunities for more intentional connections of science to practical problem solving, if we intend to apply higher-order scientific knowledge in problem solving. Deliberate application and reference to scientific knowledge could improve the quality of solutions generated.

Similar content being viewed by others

problem solving in science

Science Camps for Introducing Nature of Scientific Inquiry Through Student Inquiries in Nature: Two Applications with Retention Study

G. Leblebicioglu, N. M. Abik, … R. Schwartz

problem solving in science

Ways of thinking in STEM-based problem solving

Lyn D. English

problem solving in science

Framing and Assessing Scientific Inquiry Practices

Avoid common mistakes on your manuscript.

1 Introduction

As we enter to second quarter of the twenty-first century, it is timely to take stock of both the changes and demands that continue to weigh on our education system. A recent report by World Economic Forum highlighted the need to continuously re-position and re-invent education to meet the challenges presented by the disruptions brought upon by the fourth industrial revolution (World Economic Forum, 2020 ). There is increasing pressure for education to equip children with the necessary, relevant, and meaningful knowledge, skills, and attitudes to create a “more inclusive, cohesive and productive world” (World Economic Forum, 2020 , p. 4). Further, the shift in emphasis towards twenty-first century competencies over mere acquisition of disciplinary content knowledge is more urgent since we are preparing students for “jobs that do not yet exist, technology that has not yet been invented, and problems that has yet exist” (OECD, 2018 , p. 2). Tan ( 2020 ) concurred with the urgent need to extend the focus of education, particularly in science education, such that learners can learn to think differently about possibilities in this world. Amidst this rhetoric for change, the questions that remained to be answered include how can science education transform itself to be more relevant; what is the role that science education play in integrated STEM learning; how can scientific knowledge, skills and epistemic practices of science be infused in integrated STEM learning; what kinds of STEM problems should we expose students to for them to learn disciplinary knowledge and skills; and what is the relationship between learning disciplinary content knowledge and problem solving skills?

In seeking to understand the extent of science learning that took place within integrated STEM learning, we dissected the STEM problems that were presented to students and examined in detail the sense making processes that students utilized when they worked on the problems. We adopted Dewey’s ( 1938 ) theoretical idea of scientific and practical/common-sense inquiry and Bereiter’s ideas of referent-centred and problem-centred knowledge building process to interpret teacher-students’ interactions during problem solving. There are two primary reasons for choosing these two theoretical frameworks. Firstly, Dewey’s ideas about the relationship between science inquiry and every day practical problem-solving is important in helping us understand the role of science subject matter knowledge and science inquiry in solving practical real-world problems that are commonly used in STEM learning. Secondly, Bereiter’s ideas of referent-centred and problem-centred knowledge augment our understanding of the types of knowledge that students can learn when they engage in solving practical real-world problems.

Taken together, Dewey’s and Bereiter’s ideas enable us to better understand the types of problems used in STEM learning and their corresponding knowledge that is privileged during the problem-solving process. As such, the two theoretical lenses offered an alternative and convincing way to understand the actual types of knowledge that are used within the context of integrated STEM and help to move our understanding of STEM learning beyond current focus on examining how engineering can be used as an integrative mechanism (Bryan et al., 2016 ) or applying the argument of the strengths of trans-, multi-, or inter-disciplinary activities (Bybee, 2013 ; Park et al., 2020 ) or mapping problems by the content and context as pure STEM problems, STEM-related problems or non-STEM problems (Pleasants, 2020 ). Further, existing research (for example, Gale et al., 2000 ) around STEM education focussed largely on description of students’ learning experiences with insufficient attention given to the connections between disciplinary conceptual knowledge and inquiry processes that students use to arrive at solutions to problems. Clarity in the role of disciplinary knowledge and the related inquiry will allow for more intentional design of STEM problems for students to learn higher-order knowledge. Applying Dewey’s idea of practical and scientific inquiry and Bereiter’s ideas of referent-centred and problem-centred knowledge, we analysed six lessons where students engaged with integrated STEM problem solving to propose answers to the following research questions: What is the extent of practical and scientific inquiry in integrated STEM problem solving? and What conceptual knowledge and problem-solving skills are learnt through practical and science inquiry during integrated STEM problem solving?

2 Inquiry in Problem Solving

Inquiry, according to Dewey ( 1938 ), involves the direct control of unknown situations to change them into a coherent and unified one. Inquiry usually encompasses two interrelated activities—(1) thinking about ideas related to conceptual subject-matter and (2) engaging in activities involving our senses or using specific observational techniques. The National Science Education Standards released by the National Research Council in the US in 1996 defined inquiry as “…a multifaceted activity that involves making observations; posing questions; examining books and other sources of information to see what is already known; planning investigations; reviewing what is already known in light of experimental evidence; using tools to gather, analyze, and interpret data; proposing answers, explanations, and predictions; and communicating the results. Inquiry requires identification of assumptions, use of critical and logical thinking, and consideration of alternative explanations” (p. 23). Planning investigation; collecting empirical evidence; using tools to gather, analyse and interpret data; and reasoning are common processes shared in the field of science and engineering and hence are highly relevant to apply to integrated STEM education.

In STEM education, establishing the connection between general inquiry and its application helps to link disciplinary understanding to epistemic knowledge. For instance, methods of science inquiry are popular in STEM education due to the familiarity that teachers have with scientific methods. Science inquiry, a specific form of inquiry, has appeared in many science curriculum (e.g. NRC, 2000 ) since Dewey proposed in 1910 that learning of science should be perceived as both subject-matter and a method of learning science (Dewey, 1910a , 1910b ). Science inquiry which involved ways of doing science should also encompass the ways in which students learn the scientific knowledge and investigative methods that enable scientific knowledge to be constructed. Asking scientifically orientated questions, collecting empirical evidence, crafting explanations, proposing models and reasoning based on available evidence are affordances of scientific inquiry. As such, science should be pursued as a way of knowing rather than merely acquisition of scientific knowledge.

Building on these affordances of science inquiry, Duschl and Bybee ( 2014 ) advocated the 5D model that focused on the practice of planning and carrying out investigations in science and engineering, representing two of the four disciplines in STEM. The 5D model includes science inquiry aspects such as (1) deciding on what and how to measure, observe and sample; (2) developing and selecting appropriate tools to measure and collect data; (3) recording the results and observations in a systematic manner; (4) creating ways to represent the data and patterns that are observed; and (5) determining the validity and the representativeness of the data collected. The focus on planning and carrying out investigations in the 5D model is used to help teachers bridge the gap between the practices of building and refining models and explanation in science and engineering. Indeed, a common approach to incorporating science inquiry in integrated STEM curriculum involves student planning and carrying out scientific investigations and making sense of the data collected to inform engineering design solution (Cunningham & Lachapelle, 2016 ; Roehrig et al., 2021 ). Duschl and Bybee ( 2014 ) argued that it is needful to design experiences for learners to appreciate that struggles are part of problem solving in science and engineering. They argued that “when the struggles of doing science is eliminated or simplified, learners get the wrong perceptions of what is involved when obtaining scientific knowledge and evidence” (Duschl & Bybee, 2014 , p. 2). While we concur with Duschl and Bybee about the need for struggles, in STEM learning, these struggles must be purposeful and grade appropriate so that students will also be able to experience success amidst failure.

The peculiar nature of science inquiry was scrutinized by Dewey ( 1938 ) when he cross-examined the relationship between science inquiry and other forms of inquiry, particularly common-sense inquiry. He positioned science inquiry along a continuum with general or common-sense inquiry that he termed as “logic”. Dewey argued that common-sense inquiry serves a practical purpose and exhibits features of science inquiry such as asking questions and a reliance on evidence although the focus of common-sense inquiry tends to be different. Common-sense inquiry deals with issues or problems that are in the immediate environment where people live, whereas the objects of science inquiry are more likely to be distant (e.g. spintronics) from familiar experiences in people’s daily lives. While we acknowledge the fundamental differences (such as novel discovery compared with re-discovering science, ‘messy’ science compared with ‘sanitised’ science) between school science and science that is practiced by scientists, the subject of interest in science (understanding the world around us) remains the same.

The unfamiliarity between the functionality and purpose of science inquiry to improve the daily lives of learners does little to motivate learners to learn science (Aikenhead, 2006 ; Lee & Luykx, 2006 ) since learners may not appreciate the connections of science inquiry in their day-to-day needs and wants. Bereiter ( 1992 ) has also distinguished knowledge into two forms—referent-centred and problem-centred. Referent-centred knowledge refers to subject-matter that is organised around topics such as that in textbooks. Problem-centred knowledge is knowledge that is organised around problems, whether they are transient problems, practical problems or problems of explanations. Bereiter argued that referent-centred knowledge that is commonly taught in schools is limited in their applications and meaningfulness to the lives of students. This lack of familiarity and affinity to referent-centred knowledge is likened to the science subject-matter knowledge that was mentioned by Dewey. Rather, it is problem-centred knowledge that would be useful when students encounter problems. Learning problem-centred knowledge will allow learners to readily harness the relevant knowledge base that is useful to understand and solve specific problems. This suggests a need to help learners make the meaningful connections between science and their daily lives.

Further, Dewey opined that while the contexts in which scientific knowledge arise could be different from our daily common-sense world, careful consideration of scientific activities and applying the resultant knowledge to daily situations for use and enjoyment is possible. Similarly, in arguing for problem-centred knowledge, Bereiter ( 1992 ) questioned the value of inert knowledge that plays no role in helping us understand or deal with the world around us. Referent-centred knowledge has a higher tendency to be inert due to the way that the knowledge is organised and the way that the knowledge is encountered by learners. For instance, learning about the equation and conditions for photosynthesis is not going to help learners appreciate how plants are adapted for photosynthesis and how these adaptations can allow plants to survive changes in climate and for farmers to grow plants better by creating the best growing conditions. Rather, students could be exposed to problems of explanations where they are asked to unravel the possible reasons for low crop yield and suggest possible ways to overcome the problem. Hence, we argue here that the value of the referent knowledge is that they form the basis and foundation for the students to be able to discuss or suggest ways to overcome real life problems. Referent-centred knowledge serves as part of the relevant knowledge base that can be harnessed to solve specific problems or as foundational knowledge students need to progress to learn higher-order conceptual knowledge that typically forms the foundations or pillars within a discipline. This notion of referent-centred knowledge serving as foundational knowledge that can be and should be activated for application in problem-solving situation is shown by Delahunty et al. ( 2020 ). They found that students show high reliance on memory when they are conceptualising convergent problem-solving tasks.

While Bereiter argues for problem-centred knowledge, he cautioned that engagement should be with problems of explanation rather than transient or practical problems. He opined that if learners only engage in transient or practical problem alone, they will only learn basic-category types of knowledge and fail to understand higher-order conceptual knowledge. For example, for photosynthesis, basic-level types of knowledge included facts about the conditions required for photosynthesis, listing the products formed from the process of photosynthesis and knowing that green leaves reflect green light. These basic-level knowledges should intentionally help learners learn higher-level conceptual knowledge that include learners being able to draw on the conditions for photosynthesis when they encounter that a plant is not growing well or is exhibiting discoloration of leaves.

Transient problems disappear once a solution becomes available and there is a high likelihood that we will not remember the problem after that. Practical problems, according to Bereiter are “stuck-door” problems that could be solved with or without basic-level knowledge and often have solutions that lacks precise definition. There are usually a handful of practical strategies, such as pulling or pushing the door harder, kicking the door, etc. that will work for the problems. All these solutions lack a well-defined approach related to general scientific principles that are reproducible. Problems of explanations are the most desirable types of problems for learners since these are problems that persist and recur such that they can become organising points for knowledge. Problems of explanations consist of the conceptual representations of (1) a text base that serves to represent the text content and (2) a situation model that shows the portion of the world in which the text is relevant. The idea of text base to represent text content in solving problems of explanations is like the idea of domain knowledge and structural knowledge (refers to knowledge of how concepts within a domain are connected) proposed by Jonassen ( 2000 ). He argued that both types of knowledges are required to solve a range of problems from well-structured problems to ill-structured problems with a simulated context, to simple ill-structured problems and to complex ill-structured problems.

Jonassen indicated that complex ill-structured problems are typically design problems and are likely to be the most useful forms of problems for learners to be engaged in inquiry. Complex ill-structured design problems are the “wicked” problems that Buchanan ( 1992 ) discussed. Buchanan’s idea is that design aims to incorporate knowledge from different fields of specialised inquiry to become whole. Complex or wicked problems are akin to the work of scientists who navigate multiple factors and evidence to offer models that are typically oversimplified, but they apply them to propose possible first approximation explanations or solutions and iteratively relax constraints or assumptions to refine the model. The connections between the subject matter of science and the design process to engineer a solution are delicate. While it is important to ensure that practical concerns and questions are taken into consideration in designing solutions (particularly a material artefact) to a practical problem, the challenge here lies in ensuring that creativity in design is encouraged even if students initially lack or neglect the scientific conceptual understanding to explain/justify their design. In his articulation of wicked problems and the role of design thinking, Buchanan ( 1992 ) highlighted the need to pay attention to category and placement. Categories “have fixed meanings that are accepted within the framework of a theory or a philosophy and serve as the basis for analyzing what already exist” (Buchanan, 1992 , p. 12). Placements, on the other hand, “have boundaries to shape and constrain meaning, but are not rigidly fixed and determinate” (p. 12).

The difference in the ideas presented by Dewey and Bereiter lies in the problem design. For Dewey, scientific knowledge could be learnt from inquiring into practical problems that learners are familiar with. After all, Dewey viewed “modern science as continuous with, and to some degree an outgrowth and refinement of, practical or ‘common-sense’ inquiry” (Brown, 2012 ). For Bereiter, he acknowledged the importance of familiar experiences, but instead of using them as starting points for learning science, he argued that practical problems are limiting in helping learners acquire higher-order knowledge. Instead, he advocated for learners to organize their knowledge around problems that are complex, persistent and extended and requiring explanations to better understand the problems. Learners are to have a sense of the kinds of problems to which the specific concept is relevant before they can be said to have grasp the concept in a functionally useful way.

To connect between problem solving, scientific knowledge and everyday experiences, we need to examine ways to re-negotiate the disciplinary boundaries (such as epistemic understanding, object of inquiry, degree of precision) of science and make relevant connections to common-sense inquiry and to the problem at hand. Integrated STEM appears to be one way in which the disciplinary boundaries of science can be re-negotiated to include practices from the fields of technology, engineering and mathematics. In integrated STEM learning, inquiry is seen more holistically as a fluid process in which the outcomes are not absolute but are tentative. The fluidity of the inquiry process is reflected in the non-deterministic inquiry approach. This means that students can use science inquiry, engineering design, design process or any other inquiry approaches that fit to arrive at the solution. This hybridity of inquiry between science, common-sense and problems allows for some familiar aspects of the science inquiry process to be applied to understand and generate solutions to familiar everyday problems. In attempting to infuse elements of common-sense inquiry with science inquiry in problem-solving, logic plays an important role to help learners make connections. Hypothetically, we argue that with increasing exposure to less familiar ways of thinking such as those associated with science inquiry, students’ familiarity with scientific reasoning increases, and hence such ways of thinking gradually become part of their common-sense, which students could employ to solve future relevant problems. The theoretical ideas related to complexities of problems, the different forms of inquiry afforded by different problems and the arguments for engaging in problem solving motivated us to examine empirically how learners engage with ill-structured problems to generate problem-centred knowledge. Of particular interest to us is how learners and teachers weave between practical and scientific reasoning as they inquire to integrate the components in the original problem into a unified whole.

3.1 Context

The integrated STEM activity in our study was planned using the S-T-E-M quartet instructional framework (Tan et al., 2019 ). The S-T-E-M quartet instructional framework positions complex, persistent and extended problems at its core and focusses on the vertical disciplinary knowledge and understanding of the horizontal connections between the disciplines that could be gained by learners through solving the problem (Tan et al., 2019 ). Figure  1 depicts the disciplinary aspects of the problem that was presented to the students. The activity has science and engineering as the two lead disciplines. It spanned three 1-h lessons and required students to both learn and apply relevant scientific conceptual knowledge to solve a complex, real-world problem through processes that resemble the engineering design process (Wheeler et al., 2019 ).

figure 1

Connections across disciplines in integrate STEM activity

figure 2

Frequency of different types of reasoning

In the first session (1 h), students were introduced to the problem and its context. The problem pertains to the issue of limited farmland in a land scarce country that imports 90% of food (Singapore Food Agency [SFA], 2020 ). The students were required to devise a solution by applying knowledge of the conditions required for photosynthesis and plant growth to design and build a vertical farming system to help farmers increase crop yield with limited farmland. This context was motivated by the government’s effort to generate interests and knowledge in farming to achieve the 30 by 30 goal—supplying 30% of country’s nutritional needs by 2030. The scenario was a fictitious one where they were asked to produce 120 tonnes of Kailan (a type of leafy vegetable) with two hectares of land instead of the usual six hectares over a specific period. In addition to the abovementioned constraints, the teacher also discussed relevant success criteria for evaluating the solution with the students. Students then researched about existing urban farming approaches. They were given reading materials pertaining to urban farming to help them understand the affordances and constraints of existing solutions. In the second session (6 h), students engaged in ideation to generate potential solutions. They then designed, built and tested their solution and had opportunities to iteratively refine their solution. Students were given a list of materials (e.g. mounting board, straws, ice-cream stick, glue, etc.) that they could use to design their solutions. In the final session (1 h), students presented their solution and reflected on how well their solution met the success criteria. The prior scientific conceptual knowledge that students require to make sense of the problem include knowledge related to plant nutrition, namely, conditions for photosynthesis, nutritional requirements of Kailin and growth cycle of Kailin. The problem resembles a real-world problem that requires students to engage in some level of explanation of their design solution.

A total of 113 eighth graders (62 boys and 51 girls), 14-year-olds, from six classes and their teachers participated in the study. The students and their teachers were recruited as part of a larger study that examined the learning experiences of students when they work on integrated STEM activities that either begin with a problem, a solution or are focused on the content. Invitations were sent to schools across the country and interested schools opted in for the study. For the study reported here, all students and teachers were from six classes within a school. The teachers had all undergone 3 h of professional development with one of the authors on ways of implementing the integrated STEM activity used in this study. During the professional development session, the teachers learnt about the rationale of the activity, familiarize themselves with the materials and clarified the intentions and goals of the activity. The students were mostly grouped in groups of three, although a handful of students chose to work independently. The group size of students was not critical for the analysis of talk in this study as the analytic focus was on the kinds of knowledge applied rather than collaborative or group think. We assumed that the types of inquiry adopted by teachers and students were largely dependent on the nature of problem. Eighth graders were chosen for this study since lower secondary science offered at this grade level is thematic and integrated across biology, chemistry and physics. Furthermore, the topic of photosynthesis is taught under the theme of Interactions at eighth grade (CPDD, 2021 ). This thematic and integrated nature of science at eighth grade offered an ideal context and platform for integrated STEM activities to be trialled.

The final lessons in a series of three lessons in each of the six classes was analysed and reported in this study. Lessons where students worked on their solutions were not analysed because the recordings had poor audibility due to masking and physical distancing requirements as per COVID-19 regulations. At the start of the first lesson, the instructions given by the teacher were:

You are going to present your models. Remember the scenario that you were given at the beginning that you were tasked to solve using your model. …. In your presentation, you have to present your prototype and its features, what is so good about your prototype, how it addresses the problem and how it saves costs and space. So, this is what you can talk about during your presentation. ….. pay attention to the presentation and write down questions you like to ask the groups after the presentation… you can also critique their model, you can evaluate, critique and ask questions…. Some examples of questions you can ask the groups are? Do you think your prototype can achieve optimal plant growth? You can also ask questions specific to their models.

3.2 Data collection

Parental consent was sought a month before the start of data collection. The informed consent adhered to confidentiality and ethics guidelines as described by the Institutional Review Board. The data collection took place over a period of one month with weekly video recording. Two video cameras, one at the front and one at the back of the science laboratory were set up. The front camera captured the students seated at the front while the back video camera recorded the teacher as well as the groups of students at the back of the laboratory. The video recordings were synchronized so that the events captured from each camera can be interpreted from different angles. After transcription of the raw video files, the identities of students were substituted with pseudonyms.

3.3 Data analysis

The video recordings were analysed using the qualitative content analysis approach. Qualitative content analysis allows for patterns or themes and meanings to emerge from the process of systematic classification (Hsieh & Shannon, 2005 ). Qualitative content analysis is an appropriate analytic method for this study as it allows us to systematically identify episodes of practical inquiry and science inquiry to map them to the purposes and outcomes of these episodes as each lesson unfolds.

In total, six h of video recordings where students presented their ideas while the teachers served as facilitator and mentor were analysed. The video recordings were transcribed, and the transcripts were analysed using the NVivo software. Our unit of analysis is a single turn of talk (one utterance). We have chosen to use utterances as proxy indicators of reasoning practices based on the assumption that an utterance relates to both grammar and context. An utterance is a speech act that reveals both meaning and intentions of the speaker within specific contexts (Li, 2008 ).

Our research analytical lens is also interpretative in nature and the validity of our interpretation is through inter-rater discussion and agreement. Each utterance at the speaker level in transcripts was examined and coded either as relevant to practical reasoning or scientific reasoning based on the content. The utterances could be a comment by the teacher, a question by a student or a response by another student. Deductive coding is deployed with the two codes, practical reasoning and scientific reasoning derived from the theoretical ideas of Dewey and Bereiter as described earlier. Practical reasoning refers to utterances that reflect commonsensical knowledge or application of everyday understanding. Scientific reasoning refers to utterances that consist of scientifically oriented questions, scientific terms, or the use of empirical evidence to explain. Examples of each type of reasoning are highlighted in the following section. Each coded utterance is then reviewed for detailed description of the events that took place that led to that specific utterance. The description of the context leading to the utterance is considered an episode. The episodes and codes were discussed and agreed upon by two of the authors. Two coders simultaneously watched the videos to identify and code the episodes. The coders interpreted the content of each utterance, examine the context where the utterance was made and deduced the purpose of the utterance. Once each coder has established the sense-making aspect of the utterance in relation to the context, a code of either practical reasoning or scientific reasoning is assigned. Once that was completed, the two coders compared their coding for similarities and differences. They discussed the differences until an agreement was reached. Through this process, an agreement of 85% was reached between the coders. Where disagreement persisted, codes of the more experienced coder were adopted.

4 Results and Discussion

The specific STEM lessons analysed were taken from the lessons whereby students presented the model of their solutions to the class for peer evaluation. Every group of students stood in front of the class and placed their model on the bench as they presented. There was also a board where they could sketch or write their explanations should they want to. The instructions given by the teacher to the students were to explain their models and state reasons for their design.

4.1 Prevalence of Reasoning

The 6h of videos consists of 1422 turns of talk. Three hundred four turns of talk (21%) were identified as talk related to reasoning, either practical reasoning or scientific reasoning. Practical reasoning made up 62% of the reasoning turns while 38% were scientific reasoning (Fig. 2 ).

The two types of reasoning differ in the justifications that are used to substantiate the claims or decisions made. Table 1 describes the differences between the two categories of reasoning.

4.2 Applications of Scientific Reasoning

Instances of engagement with scientific reasoning (for instance, using scientific concepts to justify, raising scientifically oriented questions, or providing scientific explanations) revolved around the conditions for photosynthesis and the concept of energy conversion when students were presenting their ideas or when they were questioned by their peers. For example, in explaining the reason for including fish in their plant system, one group of students made connection to cyclical energy transfer: “…so as the roots of the plants submerged in the water, faeces from the fish will be used as fertilizers so that the plant can grow”. The students considered how organic matter that is still trapped within waste materials can be released and taken up by plants to enhance the growth. The application of scientific reasoning made their design one that is innovative and sustainable as evaluated by the teacher. Some students attempted more ecofriendly designs by considering energy efficiencies through incorporating water turbines in their farming systems. They applied the concept of different forms of energy and energy conversion when their peers inquired about their design. The same scientific concepts were explained at different levels of details by different students. At one level, the students explained in a purely descriptive manner of what happens to the different entities in their prototypes, with implied changes to the forms of energy─ “…spins then generates electricity. So right, when the water falls down, then it will spin. The water will fall on the fan blade thing, then it will spin and then it generates electricity. So, it saves electricity, and also saves water”. At another level, students defended their design through an explanation of energy conversion─ “…because when the water flows right, it will convert gravitational potential energy so, when it reaches the bottom, there is not really much gravitational potential energy”. While these instances of applying scientific reasoning indicated that students have knowledge about the scientific phenomena and can apply them to assist in the problem-solving process, we are not able to establish if students understood the science behind how the dynamo works to generate electricity. Students in eighth grade only need to know how a generator works at a descriptive level and the specialized understanding how a dynamo works is beyond the intended learning outcomes at this grade level.

The application of scientific concepts for justification may not always be accurate. For instance, the naïve conception that students have about plants only respiring at night and not in the day surfaced when one group of students tried to justify the growth rates of Kailan─ “…I mean, they cannot be making food 24/7 and growing 24/7. They have nighttime for a reason. They need to respire”. These students do not appreciate that plants respire in the day as well, and hence respiration occurs 24/7. This naïve conception that plants only respire at night is one that is common among learners of biology (e.g. Svandova, 2014 ) since students learn that plant gives off oxygen in the day and takes in oxygen at night. The hasty conclusion to that observation is that plants carry out photosynthesis in the day and respire at night. The relative rates of photosynthesis and respiration were not considered by many students.

Besides naïve conceptions, engagement with scientific ideas to solve a practical problem offers opportunities for unusual and alternative ideas about science to surface. For instance, another group of students explained that they lined up their plants so that “they can take turns to absorb sunlight for photosynthesis”. These students appear to be explaining that the sun will move and depending on the position of the sun, some plants may be under shade, and hence rates of photosynthesis are dependent on the position of the sun. However, this idea could also be interpreted as (1) the students failed to appreciate that sunlight is everywhere, and (2) plants, unlike animals, particularly humans, do not have the concept of turn-taking. These diverse ideas held by students surfaced when students were given opportunities to apply their knowledge of photosynthesis to solve a problem.

4.3 Applications of Practical Reasoning

Teachers and students used more practical reasoning during an integrated STEM activity requiring both science and engineering practices as seen from 62% occurrence of practical reasoning compared with 38% for scientific reasoning. The intention of the activity to integrate students’ scientific knowledge related to plant nutrition to engineering practice of building a model of vertical farming system could be the reason for the prevalence of practical reasoning. The practical reasoning used related to structural design considerations of the farming system such as how water, light and harvesting can be carried out in the most efficient manner. Students defended the strengths of designs using logic based on their everyday experiences. In the excerpt below (transcribed verbatim), we see students applied their everyday experiences when something is “thinner” (likely to mean narrower), logically it would save space. Further, to reach a higher level, you use a machine to climb up.

Excerpt 1. “Thinner, more space” Because it is more thinner, so like in terms of space, it’s very convenient. So right, because there is – because it rotates right, so there is this button where you can stop it. Then I also installed steps, so that – because there are certain places you can’t reach even if you stop the – if you stop the machine, so when you stop it and you climb up, and then you see the condition of the plants, even though it costs a lot of labour, there is a need to have an experienced person who can grow plants. Then also, when like – when water reach the plants, cos the plants I want to use is soil-based, so as the water reach the soil, the soil will xxx, so like the water will be used, and then we got like – and then there’s like this filter that will filter like the dirt.

In the examples of practical reasoning, we were not able to identify instances where students and teachers engaged with discussion around trade-off and optimisation. Understanding constraints, trade-offs and optimisations are important ideas in informed design matrix for engineering as suggested by Crismond and Adams ( 2012 ). For instance, utterances such as “everything will be reused”, “we will be saving space”, “it looks very flimsy” or “so that it can contains [sic] the plants” were used. These utterances were made both by students while justifying their own prototypes and also by peers who challenged the design of others. Longer responses involving practical reasoning were made based on common-sense, everyday logic─ “…the product does not require much manpower, so other than one or two supervisors like I said just now, to harvest the Kailan, hence, not too many people need to be used, need to be hired to help supervise the equipment and to supervise the growth”. We infer that the higher instances of utterances related to practical reasoning could be due to the presence of more concrete artefacts that is shown, and the students and teachers were more focused on questioning the structure at hand. This inference was made as instructions given by the teacher at the start of students’ presentation focus largely on the model rather than the scientific concepts or reasoning behind the model.

4.4 Intersection Between Scientific and Practical Reasoning

Comparing science subject matter knowledge and problem-solving to the idea of categories and placement (Buchanan, 1992 ), subject matter is analogous to categories where meanings are fixed with well-established epistemic practices and norms. The problem-solving process and design of solutions are likened to placements where boundaries are less rigid, hence opening opportunities for students’ personal experiences and ideas to be presented. Placements allow students to apply their knowledge from daily experiences and common-sense logic to justify decisions. Common-sense knowledge and logic are more accessible, and hence we observe higher frequency of usage. Comparatively, while science subject matter (categories) is also used, it is observed less frequently. This could possibly be due either to less familiarity with the subject matter or lack of appropriate opportunity to apply in practical problem solving. The challenge for teachers during implementation of a STEM problem-solving activity, therefore, lies in the balance of the application of scientific and practical reasoning to deepen understanding of disciplinary knowledge in the context of solving a problem in a meaningful manner.

Our observations suggest that engaging students with practical inquiry tasks with some engineering demands such as the design of modern farm systems offers opportunities for them to convert their personal lived experiences into feasible concrete ideas that they can share in a public space for critique. The peer critique following the sharing of their practical ideas allows for both practical and scientific questions to be asked and for students to defend their ideas. For instance, after one group of students presented their prototype that has silvered surfaces, a student asked a question: “what is the function of the silver panels?”, to which his peers replied : “Makes the light bounce. Bounce the sunlight away and then to other parts of the tray.” This question indicated that students applied their knowledge that shiny silvered surfaces reflect light, and they used this knowledge to disperse the light to other trays where the crops were growing. An example of a practical question asked was “what is the purpose of the ladder?”, to which the students replied: “To take the plants – to refill the plants, the workers must climb up”. While the process of presentation and peer critique mimic peer review in the science inquiry process, the conceptual knowledge of science may not always be evident as students paid more attention to the design constraints such as lighting, watering, and space that was set in the activity. Given the context of growing plants, engagement with the science behind nutritional requirements of plants, the process of photosynthesis, and the adaptations of plants could be more deliberately explored.

5 Conclusion

The goal of our work lies in applying the theoretical ideas of Dewey and Bereiter to better understand reasoning practices in integrate STEM problem solving. We argue that this is a worthy pursue to better understand the roles of scientific reasoning in practical problem solving. One of the goals of integrated STEM education in schools is to enculture students into the practices of science, engineering and mathematics that include disciplinary conceptual knowledge, epistemic practices, and social norms (Kelly & Licona, 2018 ). In the integrated form, the boundaries and approaches to STEM learning are more diverse compared with monodisciplinary ways of problem solving. For instance, in integrated STEM problem solving, besides scientific investigations and explanations, students are also required to understand constraints, design optimal solutions within specific parameters and even to construct prototypes. For students to learn the ways of speaking, doing and being as they participate in integrated STEM problem solving in schools in a meaningful manner, students could benefit from these experiences.

With reference to the first research question of What is the extent of practical and scientific reasoning in integrated STEM problem solving, our analysis suggests that there are fewer instances of scientific reasoning compared with practical reasoning. Considering the intention of integrated STEM learning and adopting Bereiter’s idea that students should learn higher-order conceptual knowledge through engagement with problem solving, we argue for a need for scientific reasoning to be featured more strongly in integrated STEM lessons so that students can gain higher order scientific conceptual knowledge. While the lessons observed were strong in design and building, what was missing in generating solutions was the engagement in investigations, where learners collected or are presented with data and make decisions about the data to allow them to assess how viable the solutions are. Integrated STEM problems can be designed so that science inquiry can be infused, such as carrying out investigations to figure out relationships between variables. Duschl and Bybee ( 2014 ) have argued for the need to engage students in problematising science inquiry and making choices about what works and what does not.

With reference to the second research question , What is achieved through practical and scientific reasoning during integrated STEM problem solving? , our analyses suggest that utterance for practical reasoning are typically used to justify the physical design of the prototype. These utterances rely largely on what is observable and are associated with basic-level knowledge and experiences. The higher frequency of utterances related to practical reasoning and the nature of the utterances suggests that engagement with practical reasoning is more accessible since they relate more to students’ lived experiences and common-sense. Bereiter ( 1992 ) has urged educators to engage learners in learning that is beyond basic-level knowledge since accumulation of basic-level knowledge does not lead to higher-level conceptual learning. Students should be encouraged to use scientific knowledge also to justify their prototype design and to apply scientific evidence and logic to support their ideas. Engagement with scientific reasoning is preferred as conceptual knowledge, epistemic practices and social norms of science are more widely recognised compared with practical reasoning that are likely to be more varied since they rely on personal experiences and common-sense. This leads us to assert that both context and content are important in integrated STEM learning. Understanding the context or the solution without understanding the scientific principles that makes it work makes the learning less meaningful since we “…cannot strip learning of its context, nor study it in a ‘neutral’ context. It is always situated, always relayed to some ongoing enterprise”. (Bruner, 2004 , p. 20).

To further this discussion on how integrated STEM learning experiences harness the ideas of practical and scientific reasoning to move learners from basic-level knowledge to higher-order conceptual knowledge, we propose the need for further studies that involve working with teachers to identify and create relevant problems-of-explanations that focuses on feasible, worthy inquiry ideas such as those related to specific aspects of transportation, alternative energy sources and clean water that have impact on the local community. The design of these problems can incorporate opportunities for systematic scientific investigations and scaffolded such that there are opportunities to engage in epistemic practices of the constitute disciplines of STEM. Researchers could then examine the impact of problems-of-explanations on students’ learning of higher order scientific concepts. During the problem-solving process, more attention can be given to elicit students’ initial and unfolding ideas (practical) and use them as a basis to start the science inquiry process. Researchers can examine how to encourage discussions that focus on making meaning of scientific phenomena that are embedded within specific problems. This will help students to appreciate how data can be used as evidence to support scientific explanations as well as justifications for the solutions to problems. With evidence, learners can be guided to work on reasoning the phenomena with explanatory models. These aspects should move engagement in integrated STEM problem solving from being purely practice to one that is explanatory.

6 Limitations

There are four key limitations of our study. Firstly, the degree of generalisation of our observations is limited. This study sets out to illustrate what how Dewey and Bereiter’s ideas can be used as lens to examine knowledge used in problem-solving. As such, the findings that we report here is limited in its ability to generalise across different contexts and problems. Secondly, the lessons that were analysed came from teacher-frontal teaching and group presentation of solution and excluded students’ group discussions. We acknowledge that there could potentially be talk that could involve practical and scientific reasonings within group work. There are two practical consideration for choosing to analyse the first and presentation segments of the suite of lesson. Firstly, these two lessons involved participation from everyone in class and we wanted to survey the use of practical and scientific reasoning by the students as a class. Secondly, methodologically, clarity of utterances is important for accurate analysis and as students were wearing face masks during the data collection, their utterances during group discussions lack the clarity for accurate transcription and analysis. Thirdly, insights from this study were gleaned from a small sample of six classes of students. Further work could involve more classes of students although that could require more resources devoted to analysis of the videos. Finally, the number of students varied across groups and this could potentially affect the reasoning practices during discussions.

Data Availability

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Aikenhead, G. S. (2006). Science education for everyday life: Evidence-based practice . Teachers College Press.

Google Scholar  

Bereiter, C. (1992). Referent-centred and problem-centred knowledge: Elements of an educational epistemology. Interchange, 23 (4), 337–361.

Article   Google Scholar  

Breiner, J. M., Johnson, C. C., Harkness, S. S., & Koehler, C. M. (2012). What is STEM? A discussion about conceptions of STEM in education and partnership. School Science and Mathematics, 112 (1), 3–11. https://doi.org/10.1111/j.194908594.2011.00109.x

Brown, M. J. (2012). John Dewey’s logic of science. HOPS: The Journal of the International Society for the History of Philosophy of Science, 2 (2), 258–306.

Bruner, J. (2004). The psychology of learning: A short history (pp.13–20). Winter: Daedalus.

Bryan, L. A., Moore, T. J., Johnson, C. C., & Roehrig, G. H. (2016). Integrated STEM education. In C. C. Johnson, E. E. Peters-Burton, & T. J. Moore (Eds.), STEM road map: A framework for integrated STEM education (pp. 23–37). Routledge.

Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8 (2), 5–21.

Bybee, R. W. (2013). The case for STEM education: Challenges and opportunities . NSTA Press.

Crismond, D. P., & Adams, R. S. (2012). The informed design teaching and learning matrix. Journal of Engineering Education, 101 (4), 738–797.

Cunningham, C. M., & Lachapelle, P. (2016). Experiences to engage all students. Educational Designer , 3(9), 1–26. https://www.educationaldesigner.org/ed/volume3/issue9/article31/

Curriculum Planning and Development Division [CPDD] (2021). 2021 Lower secondary science express/ normal (academic) teaching and learning syllabus . Singapore: Ministry of Education.

Delahunty, T., Seery, N., & Lynch, R. (2020). Exploring problem conceptualization and performance in STEM problem solving contexts. Instructional Science, 48 , 395–425. https://doi.org/10.1007/s11251-020-09515-4

Dewey, J. (1938). Logic: The theory of inquiry . Henry Holt and Company Inc.

Dewey, J. (1910a). Science as subject-matter and as method. Science, 31 (787), 121–127.

Dewey, J. (1910b). How we think . D.C. Heath & Co Publishers.

Book   Google Scholar  

Duschl, R. A., & Bybee, R. W. (2014). Planning and carrying out investigations: an entry to learning and to teacher professional development around NGSS science and engineering practices. International Journal of STEM Education, 1 (12). DOI: https://doi.org/10.1186/s40594-014-0012-6 .

Gale, J., Alemder, M., Lingle, J., & Newton, S (2000). Exploring critical components of an integrated STEM curriculum: An application of the innovation implementation framework. International Journal of STEM Education, 7(5), https://doi.org/10.1186/s40594-020-0204-1 .

Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15 (9), 1277–1288.

Jonassen, D. H. (2000). Toward a design theory of problem solving. ETR&D, 48 (4), 63–85.

Kelly, G., & Licona, P. (2018). Epistemic practices and science education. In M. R. Matthews (Ed.), History, philosophy and science teaching: New perspectives (pp. 139–165). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-62616-1 .

Lee, O., & Luykx, A. (2006). Science education and student diversity: Synthesis and research agenda . Cambridge University Press.

Li, D. (2008). The pragmatic construction of word meaning in utterances. Journal of Chinese Language and Computing, 18 (3), 121–137.

National Research Council. (1996). The National Science Education standards . National Academy Press.

National Research Council (2000). Inquiry and the national science education standards: A guide for teaching and learning. Washington, DC: The National Academies Press. https://doi.org/10.17226/9596 .

OECD (2018). The future of education and skills: Education 2030. Downloaded on October 3, 2020 from https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf

Park, W., Wu, J.-Y., & Erduran, S. (2020) The nature of STEM disciplines in science education standards documents from the USA, Korea and Taiwan: Focusing on disciplinary aims, values and practices.  Science & Education, 29 , 899–927.

Pleasants, J. (2020). Inquiring into the nature of STEM problems: Implications for pre-college education. Science & Education, 29 , 831–855.

Roehrig, G. H., Dare, E. A., Ring-Whalen, E., & Wieselmann, J. R. (2021). Understanding coherence and integration in integrated STEM curriculum. International Journal of STEM Education, 8(2), https://doi.org/10.1186/s40594-020-00259-8

SFA (2020). The food we eat . Downloaded on May 5, 2021 from https://www.sfa.gov.sg/food-farming/singapore-food-supply/the-food-we-eat

Svandova, K. (2014). Secondary school students’ misconceptions about photosynthesis and plant respiration: Preliminary results. Eurasia Journal of Mathematics, Science, & Technology Education, 10 (1), 59–67.

Tan, M. (2020). Context matters in science education. Cultural Studies of Science Education . https://doi.org/10.1007/s11422-020-09971-x

Tan, A.-L., Teo, T. W., Choy, B. H., & Ong, Y. S. (2019). The S-T-E-M Quartet. Innovation and Education , 1 (1), 3. https://doi.org/10.1186/s42862-019-0005-x

Wheeler, L. B., Navy, S. L., Maeng, J. L., & Whitworth, B. A. (2019). Development and validation of the Classroom Observation Protocol for Engineering Design (COPED). Journal of Research in Science Teaching, 56 (9), 1285–1305.

World Economic Forum (2020). Schools of the future: Defining new models of education for the fourth industrial revolution. Retrieved on Jan 18, 2020 from https://www.weforum.org/reports/schools-of-the-future-defining-new-models-of-education-for-the-fourth-industrial-revolution/

Download references

Acknowledgements

The authors would like to acknowledge the contributions of the other members of the research team who gave their comment and feedback in the conceptualization stage.

This study is funded by Office of Education Research grant OER 24/19 TAL.

Author information

Authors and affiliations.

Natural Sciences and Science Education, meriSTEM@NIE, National Institute of Education, Nanyang Technological University, Singapore, Singapore

Aik-Ling Tan, Yann Shiou Ong, Yong Sim Ng & Jared Hong Jie Tan

You can also search for this author in PubMed   Google Scholar

Contributions

The first author conceptualized, researched, read, analysed and wrote the article.

The second author worked on compiling the essential features and the variations tables.

The third and fourth authors worked with the first author on the ideas and refinements of the idea.

Corresponding author

Correspondence to Yann Shiou Ong .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article

Tan, AL., Ong, Y.S., Ng, Y.S. et al. STEM Problem Solving: Inquiry, Concepts, and Reasoning. Sci & Educ 32 , 381–397 (2023). https://doi.org/10.1007/s11191-021-00310-2

Download citation

Accepted : 28 November 2021

Published : 29 January 2022

Issue Date : April 2023

DOI : https://doi.org/10.1007/s11191-021-00310-2

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Practical Inquiry
  • Science Inquiry
  • Referent-centered knowledge
  • Problem-centered knowledge
  • Find a journal
  • Publish with us
  • Track your research

Library homepage

  • school Campus Bookshelves
  • menu_book Bookshelves
  • perm_media Learning Objects
  • login Login
  • how_to_reg Request Instructor Account
  • hub Instructor Commons
  • Download Page (PDF)
  • Download Full Book (PDF)
  • Periodic Table
  • Physics Constants
  • Scientific Calculator
  • Reference & Cite
  • Tools expand_more
  • Readability

selected template will load here

This action is not available.

Chemistry LibreTexts

1.2: Scientific Approach for Solving Problems

  • Last updated
  • Save as PDF
  • Page ID 358114

Learning Objectives

  • To identify the components of the scientific method

Scientists search for answers to questions and solutions to problems by using a procedure called the scientific method . This procedure consists of making observations, formulating hypotheses, and designing experiments, which in turn lead to additional observations, hypotheses, and experiments in repeated cycles (Figure \(\PageIndex{1}\)).

imageedit_2_5896776795.jpg

Observations can be qualitative or quantitative. Qualitative observations describe properties or occurrences in ways that do not rely on numbers. Examples of qualitative observations include the following: the outside air temperature is cooler during the winter season, table salt is a crystalline solid, sulfur crystals are yellow, and dissolving a penny in dilute nitric acid forms a blue solution and a brown gas. Quantitative observations are measurements, which by definition consist of both a number and a unit. Examples of quantitative observations include the following: the melting point of crystalline sulfur is 115.21 °C, and 35.9 grams of table salt—whose chemical name is sodium chloride—dissolve in 100 grams of water at 20 °C. An example of a quantitative observation was the initial observation leading to the modern theory of the dinosaurs’ extinction: iridium concentrations in sediments dating to 66 million years ago were found to be 20–160 times higher than normal. The development of this theory is a good exemplar of the scientific method in action (see Figure \(\PageIndex{2}\) below).

After deciding to learn more about an observation or a set of observations, scientists generally begin an investigation by forming a hypothesis , a tentative explanation for the observation(s). The hypothesis may not be correct, but it puts the scientist’s understanding of the system being studied into a form that can be tested. For example, the observation that we experience alternating periods of light and darkness corresponding to observed movements of the sun, moon, clouds, and shadows is consistent with either of two hypotheses:

  • Earth rotates on its axis every 24 hours, alternately exposing one side to the sun, or
  • The sun revolves around Earth every 24 hours.

Suitable experiments can be designed to choose between these two alternatives. For the disappearance of the dinosaurs, the hypothesis was that the impact of a large extraterrestrial object caused their extinction. Unfortunately (or perhaps fortunately), this hypothesis does not lend itself to direct testing by any obvious experiment, but scientists collected additional data that either support or refute it.

After a hypothesis has been formed, scientists conduct experiments to test its validity. Experiments are systematic observations or measurements, preferably made under controlled conditions—that is, under conditions in which a single variable changes. For example, in the dinosaur extinction scenario, iridium concentrations were measured worldwide and compared. A properly designed and executed experiment enables a scientist to determine whether the original hypothesis is valid. Experiments often demonstrate that the hypothesis is incorrect or that it must be modified. More experimental data are then collected and analyzed, at which point a scientist may begin to think that the results are sufficiently reproducible (i.e., dependable) to merit being summarized in a law , a verbal or mathematical description of a phenomenon that allows for general predictions. A law simply says what happens; it does not address the question of why.

One example of a law, the Law of Definite Proportions , which was discovered by the French scientist Joseph Proust (1754–1826), states that a chemical substance always contains the same proportions of elements by mass. Thus sodium chloride (table salt) always contains the same proportion by mass of sodium to chlorine, in this case 39.34% sodium and 60.66% chlorine by mass, and sucrose (table sugar) is always 42.11% carbon, 6.48% hydrogen, and 51.41% oxygen by mass. Some solid compounds do not strictly obey the law of definite proportions. The law of definite proportions should seem obvious—we would expect the composition of sodium chloride to be consistent—but the head of the US Patent Office did not accept it as a fact until the early 20th century.

Whereas a law states only what happens, a theory attempts to explain why nature behaves as it does. Laws are unlikely to change greatly over time unless a major experimental error is discovered. In contrast, a theory, by definition, is incomplete and imperfect, evolving with time to explain new facts as they are discovered. The theory developed to explain the extinction of the dinosaurs, for example, is that Earth occasionally encounters small- to medium-sized asteroids, and these encounters may have unfortunate implications for the continued existence of most species. This theory is by no means proven, but it is consistent with the bulk of evidence amassed to date. Figure \(\PageIndex{2}\) summarizes the application of the scientific method in this case.

imageedit_8_3393569312.jpg

Example \(\PageIndex{1}\)

Classify each statement as a law, a theory, an experiment, a hypothesis, a qualitative observation, or a quantitative observation.

  • Ice always floats on liquid water.
  • Birds evolved from dinosaurs.
  • Hot air is less dense than cold air, probably because the components of hot air are moving more rapidly.
  • When 10 g of ice were added to 100 mL of water at 25 °C, the temperature of the water decreased to 15.5 °C after the ice melted.
  • The ingredients of Ivory soap were analyzed to see whether it really is 99.44% pure, as advertised.

Given : components of the scientific method

Asked for : statement classification

Strategy: Refer to the definitions in this section to determine which category best describes each statement.

  • This is a general statement of a relationship between the properties of liquid and solid water, so it is a law.
  • This is a possible explanation for the origin of birds, so it is a hypothesis.
  • This is a statement that tries to explain the relationship between the temperature and the density of air based on fundamental principles, so it is a theory.
  • The temperature is measured before and after a change is made in a system, so these are quantitative observations.
  • This is an analysis designed to test a hypothesis (in this case, the manufacturer’s claim of purity), so it is an experiment.

Exercise \(\PageIndex{1}\)

  • Measured amounts of acid were added to a Rolaids tablet to see whether it really “consumes 47 times its weight in excess stomach acid.”
  • Heat always flows from hot objects to cooler ones, not in the opposite direction.
  • The universe was formed by a massive explosion that propelled matter into a vacuum.
  • Michael Jordan is the greatest pure shooter ever to play professional basketball.
  • Limestone is relatively insoluble in water but dissolves readily in dilute acid with the evolution of a gas.
  • Gas mixtures that contain more than 4% hydrogen in air are potentially explosive.

qualitative observation

quantitative observation

Because scientists can enter the cycle shown in Figure \(\PageIndex{1}\) at any point, the actual application of the scientific method to different topics can take many different forms. For example, a scientist may start with a hypothesis formed by reading about work done by others in the field, rather than by making direct observations.

It is important to remember that scientists have a tendency to formulate hypotheses in familiar terms simply because it is difficult to propose something that has never been encountered or imagined before. As a result, scientists sometimes discount or overlook unexpected findings that disagree with the basic assumptions behind the hypothesis or theory being tested. Fortunately, truly important findings are immediately subject to independent verification by scientists in other laboratories, so science is a self-correcting discipline. When the Alvarezes originally suggested that an extraterrestrial impact caused the extinction of the dinosaurs, the response was almost universal skepticism and scorn. In only 20 years, however, the persuasive nature of the evidence overcame the skepticism of many scientists, and their initial hypothesis has now evolved into a theory that has revolutionized paleontology and geology.

Chemists expand their knowledge by making observations, carrying out experiments, and testing hypotheses to develop laws to summarize their results and theories to explain them. In doing so, they are using the scientific method.

Learn by doing

Guided interactive problem solving that’s effective and fun. master concepts in 15 minutes a day., data analysis, computer science, programming & ai, science & engineering, join over 10 million people learning on brilliant, over 50,000 5-star reviews on ios app store and google play.

App of the day award

Master concepts in 15 minutes a day

Whether you’re a complete beginner or ready to dive into machine learning and beyond, Brilliant makes it easy to level up fast with fun, bite-sized lessons.

Effective, hands-on learning

Visual, interactive lessons make concepts feel intuitive — so even complex ideas just click. Our real-time feedback and simple explanations make learning efficient.

Learn at your level

Students and professionals alike can hone dormant skills or learn new ones. Progress through lessons and challenges tailored to your level. Designed for ages 13 to 113.

Guided bite-sized lessons

Guided bite-sized lessons

We make it easy to stay on track, see your progress, and build your problem solving skills one concept at a time.

Stay motivated

Form a real learning habit with fun content that’s always well-paced, game-like progress tracking, and friendly reminders.

Guided courses for every journey

All of our courses are crafted by award-winning teachers, researchers, and professionals from MIT, Caltech, Duke, Microsoft, Google, and more.

  • Foundational Math
  • Software Development
  • Foundational Logic
  • Data Science
  • High School Math
  • Engineering
  • Statistics and Finance

Courses in Foundational Math

  • Solving Equations
  • Measurement
  • Mathematical Fundamentals
  • Reasoning with Algebra
  • Functions and Quadratics

iOS

10k+ Ratings

android

60k+ Ratings

We use cookies to improve your experience on Brilliant. Learn more about our cookie policy and settings .

loading

How it works

For Business

Join Mind Tools

Article • 5 min read

Using the Scientific Method to Solve Problems

How the scientific method and reasoning can help simplify processes and solve problems.

By the Mind Tools Content Team

The processes of problem-solving and decision-making can be complicated and drawn out. In this article we look at how the scientific method, along with deductive and inductive reasoning can help simplify these processes.

problem solving in science

‘It is a capital mistake to theorize before one has information. Insensibly one begins to twist facts to suit our theories, instead of theories to suit facts.’ Sherlock Holmes

The Scientific Method

The scientific method is a process used to explore observations and answer questions. Originally used by scientists looking to prove new theories, its use has spread into many other areas, including that of problem-solving and decision-making.

The scientific method is designed to eliminate the influences of bias, prejudice and personal beliefs when testing a hypothesis or theory. It has developed alongside science itself, with origins going back to the 13th century. The scientific method is generally described as a series of steps.

  • observations/theory
  • explanation/conclusion

The first step is to develop a theory about the particular area of interest. A theory, in the context of logic or problem-solving, is a conjecture or speculation about something that is not necessarily fact, often based on a series of observations.

Once a theory has been devised, it can be questioned and refined into more specific hypotheses that can be tested. The hypotheses are potential explanations for the theory.

The testing, and subsequent analysis, of these hypotheses will eventually lead to a conclus ion which can prove or disprove the original theory.

Applying the Scientific Method to Problem-Solving

How can the scientific method be used to solve a problem, such as the color printer is not working?

1. Use observations to develop a theory.

In order to solve the problem, it must first be clear what the problem is. Observations made about the problem should be used to develop a theory. In this particular problem the theory might be that the color printer has run out of ink. This theory is developed as the result of observing the increasingly faded output from the printer.

2. Form a hypothesis.

Note down all the possible reasons for the problem. In this situation they might include:

  • The printer is set up as the default printer for all 40 people in the department and so is used more frequently than necessary.
  • There has been increased usage of the printer due to non-work related printing.
  • In an attempt to reduce costs, poor quality ink cartridges with limited amounts of ink in them have been purchased.
  • The printer is faulty.

All these possible reasons are hypotheses.

3. Test the hypothesis.

Once as many hypotheses (or reasons) as possible have been thought of, then each one can be tested to discern if it is the cause of the problem. An appropriate test needs to be devised for each hypothesis. For example, it is fairly quick to ask everyone to check the default settings of the printer on each PC, or to check if the cartridge supplier has changed.

4. Analyze the test results.

Once all the hypotheses have been tested, the results can be analyzed. The type and depth of analysis will be dependant on each individual problem, and the tests appropriate to it. In many cases the analysis will be a very quick thought process. In others, where considerable information has been collated, a more structured approach, such as the use of graphs, tables or spreadsheets, may be required.

5. Draw a conclusion.

Based on the results of the tests, a conclusion can then be drawn about exactly what is causing the problem. The appropriate remedial action can then be taken, such as asking everyone to amend their default print settings, or changing the cartridge supplier.

Inductive and Deductive Reasoning

The scientific method involves the use of two basic types of reasoning, inductive and deductive.

Inductive reasoning makes a conclusion based on a set of empirical results. Empirical results are the product of the collection of evidence from observations. For example:

‘Every time it rains the pavement gets wet, therefore rain must be water’.

There has been no scientific determination in the hypothesis that rain is water, it is purely based on observation. The formation of a hypothesis in this manner is sometimes referred to as an educated guess. An educated guess, whilst not based on hard facts, must still be plausible, and consistent with what we already know, in order to present a reasonable argument.

Deductive reasoning can be thought of most simply in terms of ‘If A and B, then C’. For example:

  • if the window is above the desk, and
  • the desk is above the floor, then
  • the window must be above the floor

It works by building on a series of conclusions, which results in one final answer.

Social Sciences and the Scientific Method

The scientific method can be used to address any situation or problem where a theory can be developed. Although more often associated with natural sciences, it can also be used to develop theories in social sciences (such as psychology, sociology and linguistics), using both quantitative and qualitative methods.

Quantitative information is information that can be measured, and tends to focus on numbers and frequencies. Typically quantitative information might be gathered by experiments, questionnaires or psychometric tests. Qualitative information, on the other hand, is based on information describing meaning, such as human behavior, and the reasons behind it. Qualitative information is gathered by way of interviews and case studies, which are possibly not as statistically accurate as quantitative methods, but provide a more in-depth and rich description.

The resultant information can then be used to prove, or disprove, a hypothesis. Using a mix of quantitative and qualitative information is more likely to produce a rounded result based on the factual, quantitative information enriched and backed up by actual experience and qualitative information.

In terms of problem-solving or decision-making, for example, the qualitative information is that gained by looking at the ‘how’ and ‘why’ , whereas quantitative information would come from the ‘where’, ‘what’ and ‘when’.

It may seem easy to come up with a brilliant idea, or to suspect what the cause of a problem may be. However things can get more complicated when the idea needs to be evaluated, or when there may be more than one potential cause of a problem. In these situations, the use of the scientific method, and its associated reasoning, can help the user come to a decision, or reach a solution, secure in the knowledge that all options have been considered.

Join Mind Tools and get access to exclusive content.

This resource is only available to Mind Tools members.

Already a member? Please Login here

problem solving in science

Try Mind Tools for FREE

Get unlimited access to all our career-boosting content and member benefits with our 7-day free trial.

Sign-up to our newsletter

Subscribing to the Mind Tools newsletter will keep you up-to-date with our latest updates and newest resources.

Subscribe now

Business Skills

Personal Development

Leadership and Management

Member Extras

Most Popular

Newest Releases

Article acd2ru2

Team Briefings

Article a4vbznx

Onboarding With STEPS

Mind Tools Store

About Mind Tools Content

Discover something new today

New pain points podcast - perfectionism.

Why Am I Such a Perfectionist?

Pain Points Podcast - Building Trust

Developing and Strengthening Trust at Work

How Emotionally Intelligent Are You?

Boosting Your People Skills

Self-Assessment

What's Your Leadership Style?

Learn About the Strengths and Weaknesses of the Way You Like to Lead

Recommended for you

The lean startup: how today's entrepreneurs use continuous innovation to create radically successful businesses.

Book Insights

Business Operations and Process Management

Strategy Tools

Customer Service

Business Ethics and Values

Handling Information and Data

Project Management

Knowledge Management

Self-Development and Goal Setting

Time Management

Presentation Skills

Learning Skills

Career Skills

Communication Skills

Negotiation, Persuasion and Influence

Working With Others

Difficult Conversations

Creativity Tools

Self-Management

Work-Life Balance

Stress Management and Wellbeing

Coaching and Mentoring

Change Management

Team Management

Managing Conflict

Delegation and Empowerment

Performance Management

Leadership Skills

Developing Your Team

Talent Management

Problem Solving

Decision Making

Member Podcast

  • Publications
  • Conferences & Events
  • Professional Learning
  • Science Standards
  • Awards & Competitions
  • Daily Do Lesson Plans
  • Free Resources
  • American Rescue Plan
  • For Preservice Teachers
  • NCCSTS Case Collection
  • Partner Jobs in Education
  • Interactive eBooks+
  • Digital Catalog
  • Regional Product Representatives
  • e-Newsletters
  • Bestselling Books
  • Latest Books
  • Popular Book Series
  • Prospective Authors
  • Web Seminars
  • Exhibits & Sponsorship
  • Conference Reviewers
  • National Conference • Denver 24
  • Leaders Institute 2024
  • National Conference • New Orleans 24
  • Submit a Proposal
  • Latest Resources
  • Professional Learning Units & Courses
  • For Districts
  • Online Course Providers
  • Schools & Districts
  • College Professors & Students
  • The Standards
  • Teachers and Admin
  • eCYBERMISSION
  • Toshiba/NSTA ExploraVision
  • Junior Science & Humanities Symposium
  • Teaching Awards
  • Climate Change
  • Earth & Space Science
  • New Science Teachers
  • Early Childhood
  • Middle School
  • High School
  • Postsecondary
  • Informal Education
  • Journal Articles
  • Lesson Plans
  • e-newsletters
  • Science & Children
  • Science Scope
  • The Science Teacher
  • Journal of College Sci. Teaching
  • Connected Science Learning
  • NSTA Reports
  • Next-Gen Navigator
  • Science Update
  • Teacher Tip Tuesday
  • Trans. Sci. Learning

MyNSTA Community

  • My Collections

A Problem-Solving Experiment

Using Beer’s Law to Find the Concentration of Tartrazine

The Science Teacher—January/February 2022 (Volume 89, Issue 3)

By Kevin Mason, Steve Schieffer, Tara Rose, and Greg Matthias

Share Start a Discussion

A Problem-Solving Experiment

A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research ( Akinoglu and Tandogan 2007 ; Areepattamannil 2012 ; Furtak, Seidel, and Iverson 2012 ; Inel and Balim 2010 ; Merritt et al. 2017 ; Panasan and Nuangchalerm 2010 ; Wilson, Taylor, and Kowalski 2010 ).

Floyd James Rutherford, the founder of the American Association for the Advancement of Science (AAAS) Project 2061 once stated, “To separate conceptually scientific content from scientific inquiry,” he underscored, “is to make it highly probable that the student will properly understand neither” (1964, p. 84). A more recent study using randomized control trials showed that teachers that used an inquiry and problem-based pedagogy for seven months improved student performance in math and science ( Bando, Nashlund-Hadley, and Gertler 2019 ). A problem-solving experiment uses problem-based learning by posing an authentic or meaningful problem for students to solve and inquiry-based learning by requiring students to design an experiment to collect and analyze data to solve the problem.

In the problem-solving experiment described in this article, students used Beer’s Law to collect and analyze data to determine if a person consumed a hazardous amount of tartrazine (Yellow Dye #5) for their body weight. The students used their knowledge of solutions, molarity, dilutions, and Beer’s Law to design their own experiment and calculate the amount of tartrazine in a yellow sports drink (or citrus-flavored soda).

According to the Next Generation Science Standards, energy is defined as “a quantitative property of a system that depends on the motion and interactions of matter and radiation with that system” ( NGSS Lead States 2013 ). Interactions of matter and radiation can be some of the most challenging for students to observe, investigate, and conceptually understand. As a result, students need opportunities to observe and investigate the interactions of matter and radiation. Light is one example of radiation that interacts with matter.

Light is electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle. When light interacts with matter, light can be reflected at the surface, absorbed by the matter, or transmitted through the matter ( Figure 1 ). When a single beam of light enters a substance at a perpendicularly (at a 90 ° angle to the surface), the amount of reflection is minimal. Therefore, the light will either be absorbed by the substance or be transmitted through the substance. When a given wavelength of light shines into a solution, the amount of light that is absorbed will depend on the identity of the substance, the thickness of the container, and the concentration of the solution.

Light interacting with matter.  (Retrieved from https://etorgerson.files.wordpress.com/2011/05/light-reflect-refract-absorb-label.jpg).

Light interacting with matter.

(Retrieved from https://etorgerson.files.wordpress.com/2011/05/light-reflect-refract-absorb-label.jpg ).

Beer’s Law states the amount of light absorbed is directly proportional to the thickness and concentration of a solution. Beer’s Law is also sometimes known as the Beer-Lambert Law. A solution of a higher concentration will absorb more light and transmit less light ( Figure 2 ). Similarly, if the solution is placed in a thicker container that requires the light to pass through a greater distance, then the solution will absorb more light and transmit less light.

Figure 2 Light transmitted through a solution.  (Retrieved from https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-319-57330-4_13/MediaObjects/432946_1_En_13_Fig4_HTML.jpg).

Light transmitted through a solution.

(Retrieved from https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-319-57330-4_13/MediaObjects/432946_1_En_13_Fig4_HTML.jpg ).

Definitions of key terms.

Absorbance (A) – the process of light energy being captured by a substance

Beer’s Law (Beer-Lambert Law) – the absorbance (A) of light is directly proportional to the molar absorptivity (ε), thickness (b), and concentration (C) of the solution (A = εbC)

Concentration (C) – the amount of solute dissolved per amount of solution

Cuvette – a container used to hold a sample to be tested in a spectrophotometer

Energy (E) – a quantitative property of a system that depends on motion and interactions of matter and radiation with that system (NGSS Lead States 2013).

Intensity (I) – the amount or brightness of light

Light – electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle

Molar Absorptivity (ε) – a property that represents the amount of light absorbed by a given substance per molarity of the solution and per centimeter of thickness (M-1 cm-1)

Molarity (M) – the number of moles of solute per liters of solution (Mol/L)

Reflection – the process of light energy bouncing off the surface of a substance

Spectrophotometer – a device used to measure the absorbance of light by a substance

Tartrazine – widely used food and liquid dye

Transmittance (T) – the process of light energy passing through a substance

The amount of light absorbed by a solution can be measured using a spectrophotometer. The solution of a given concentration is placed in a small container called a cuvette. The cuvette has a known thickness that can be held constant during the experiment. It is also possible to obtain cuvettes of different thicknesses to study the effect of thickness on the absorption of light. The key definitions of the terms related to Beer’s Law and the learning activity presented in this article are provided in Figure 3 .

Overview of the problem-solving experiment

In the problem presented to students, a 140-pound athlete drinks two bottles of yellow sports drink every day ( Figure 4 ; see Online Connections). When she starts to notice a rash on her skin, she reads the label of the sports drink and notices that it contains a yellow dye known as tartrazine. While tartrazine is safe to drink, it may produce some potential side effects in large amounts, including rashes, hives, or swelling. The students must design an experiment to determine the concentration of tartrazine in the yellow sports drink and the number of milligrams of tartrazine in two bottles of the sports drink.

While a sports drink may have many ingredients, the vast majority of ingredients—such as sugar or electrolytes—are colorless when dissolved in water solution. The dyes added to the sports drink are responsible for the color of the sports drink. Food manufacturers may use different dyes to color sports drinks to the desired color. Red dye #40 (allura red), blue dye #1 (brilliant blue), yellow dye #5 (tartrazine), and yellow dye #6 (sunset yellow) are the four most common dyes or colorants in sports drinks and many other commercial food products ( Stevens et al. 2015 ). The concentration of the dye in the sports drink affects the amount of light absorbed.

In this problem-solving experiment, the students used the previously studied concept of Beer’s Law—using serial dilutions and absorbance—to find the concentration (molarity) of tartrazine in the sports drink. Based on the evidence, the students then determined if the person had exceeded the maximum recommended daily allowance of tartrazine, given in mg/kg of body mass. The learning targets for this problem-solving experiment are shown in Figure 5 (see Online Connections).

Pre-laboratory experiences

A problem-solving experiment is a form of guided inquiry, which will generally require some prerequisite knowledge and experience. In this activity, the students needed prior knowledge and experience with Beer’s Law and the techniques in using Beer’s Law to determine an unknown concentration. Prior to the activity, students learned how Beer’s Law is used to relate absorbance to concentration as well as how to use the equation M 1 V 1 = M 2 V 2 to determine concentrations of dilutions. The students had a general understanding of molarity and using dimensional analysis to change units in measurements.

The techniques for using Beer’s Law were introduced in part through a laboratory experiment using various concentrations of copper sulfate. A known concentration of copper sulfate was provided and the students followed a procedure to prepare dilutions. Students learned the technique for choosing the wavelength that provided the maximum absorbance for the solution to be tested ( λ max ), which is important for Beer’s Law to create a linear relationship between absorbance and solution concentration. Students graphed the absorbance of each concentration in a spreadsheet as a scatterplot and added a linear trend line. Through class discussion, the teacher checked for understanding in using the equation of the line to determine the concentration of an unknown copper sulfate solution.

After the students graphed the data, they discussed how the R2 value related to the data set used to construct the graph. After completing this experiment, the students were comfortable making dilutions from a stock solution, calculating concentrations, and using the spectrophotometer to use Beer’s Law to determine an unknown concentration.

Introducing the problem

After the initial experiment on Beer’s Law, the problem-solving experiment was introduced. The problem presented to students is shown in Figure 4 (see Online Connections). A problem-solving experiment provides students with a valuable opportunity to collaborate with other students in designing an experiment and solving a problem. For this activity, the students were assigned to heterogeneous or mixed-ability laboratory groups. Groups should be diversified based on gender; research has shown that gender diversity among groups improves academic performance, while racial diversity has no significant effect ( Hansen, Owan, and Pan 2015 ). It is also important to support students with special needs when assigning groups. The mixed-ability groups were assigned intentionally to place students with special needs with a peer who has the academic ability and disposition to provide support. In addition, some students may need additional accommodations or modifications for this learning activity, such as an outlined lab report, a shortened lab report format, or extended time to complete the analysis. All students were required to wear chemical-splash goggles and gloves, and use caution when handling solutions and glass apparatuses.

Designing the experiment

During this activity, students worked in lab groups to design their own experiment to solve a problem. The teacher used small-group and whole-class discussions to help students understand the problem. Students discussed what information was provided and what they need to know and do to solve the problem. In planning the experiment, the teacher did not provide a procedure and intentionally provided only minimal support to the students as needed. The students designed their own experimental procedure, which encouraged critical thinking and problem solving. The students needed to be allowed to struggle to some extent. The teacher provided some direction and guidance by posing questions for students to consider and answer for themselves. Students were also frequently reminded to review their notes and the previous experiment on Beer’s Law to help them better use their resources to solve the problem. The use of heterogeneous or mixed-ability groups also helped each group be more self-sufficient and successful in designing and conducting the experiment.

Students created a procedure for their experiment with the teacher providing suggestions or posing questions to enhance the experimental design, if needed. Safety was addressed during this consultation to correct safety concerns in the experimental design or provide safety precautions for the experiment. Students needed to wear splash-proof goggles and gloves throughout the experiment. In a few cases, students realized some opportunities to improve their experimental design during the experiment. This was allowed with the teacher’s approval, and the changes to the procedure were documented for the final lab report.

Conducting the experiment

A sample of the sports drink and a stock solution of 0.01 M stock solution of tartrazine were provided to the students. There are many choices of sports drinks available, but it is recommended that the ingredients are checked to verify that tartrazine (yellow dye #5) is the only colorant added. This will prevent other colorants from affecting the spectroscopy results in the experiment. A citrus-flavored soda could also be used as an alternative because many sodas have tartrazine added as well. It is important to note that tartrazine is considered safe to drink, but it may produce some potential side effects in large amounts, including rashes, hives, or swelling. A list of the materials needed for this problem-solving experiment is shown in Figure 6 (see Online Connections).

This problem-solving experiment required students to create dilutions of known concentrations of tartrazine as a reference to determine the unknown concentration of tartrazine in a sports drink. To create the dilutions, the students were provided with a 0.01 M stock solution of tartrazine. The teacher purchased powdered tartrazine, available from numerous vendors, to create the stock solution. The 0.01 M stock solution was prepared by weighing 0.534 g of tartrazine and dissolving it in enough distilled water to make a 100 ml solution. Yellow food coloring could be used as an alternative, but it would take some research to determine its concentration. Since students have previously explored the experimental techniques, they should know to prepare dilutions that are somewhat darker and somewhat lighter in color than the yellow sports drink sample. Students should use five dilutions for best results.

Typically, a good range for the yellow sports drink is standard dilutions ranging from 1 × 10-3 M to 1 × 10-5 M. The teacher may need to caution the students that if a dilution is too dark, it will not yield good results and lower the R2 value. Students that used very dark dilutions often realized that eliminating that data point created a better linear trendline, as long as it didn’t reduce the number of data points to fewer than four data points. Some students even tried to use the 0.01 M stock solution without any dilution. This was much too dark. The students needed to do substantial dilutions to get the solutions in the range of the sports drink.

After the dilutions are created, the absorbance of each dilution was measured using a spectrophotometer. A Vernier SpectroVis (~$400) spectrophotometer was used to measure the absorbance of the prepared dilutions with known concentrations. The students adjusted the spectrophotometer to use different wavelengths of light and selected the wavelength with the highest absorbance reading. The same wavelength was then used for each measurement of absorbance. A wavelength of 650 nanometers (nm) provided an accurate measurement and good linear relationship. After measuring the absorbance of the dilutions of known concentrations, the students measured the absorbance of the sports drink with an unknown concentration of tartrazine using the spectrophotometer at the same wavelength. If a spectrophotometer is not available, a color comparison can be used as a low-cost alternative for completing this problem-solving experiment ( Figure 7 ; see Online Connections).

Analyzing the results

After completing the experiment, the students graphed the absorbance and known tartrazine concentrations of the dilutions on a scatter-plot to create a linear trendline. In this experiment, absorbance was the dependent variable, which should be graphed on the y -axis. Some students mistakenly reversed the axes on the scatter-plot. Next, the students used the graph to find the equation for the line. Then, the students solve for the unknown concentration (molarity) of tartrazine in the sports drink given the linear equation and the absorbance of the sports drink measured experimentally.

To answer the question posed in the problem, the students also calculated the maximum amount of tartrazine that could be safely consumed by a 140 lb. person, using the information given in the problem. A common error in solving the problem was not converting the units of volume given in the problem from ounces to liters. With the molarity and volume in liters, the students then calculated the mass of tartrazine consumed per day in milligrams. A sample of the graph and calculations from one student group are shown in Figure 8 . Finally, based on their calculations, the students answered the question posed in the original problem and determined if the person’s daily consumption of tartrazine exceeded the threshold for safe consumption. In this case, the students concluded that the person did NOT consume more than the allowable daily limit of tartrazine.

Sample graph and calculations from a student group.

Sample graph and calculations from a student group.

Communicating the results

After conducting the experiment, students reported their results in a written laboratory report that included the following sections: title, purpose, introduction, hypothesis, materials and methods, data and calculations, conclusion, and discussion. The laboratory report was assessed using the scoring rubric shown in Figure 9 (see Online Connections). In general, the students did very well on this problem-solving experiment. Students typically scored a three or higher on each criteria of the rubric. Throughout the activity, the students successfully demonstrated their ability to design an experiment, collect data, perform calculations, solve a problem, and effectively communicate those results.

This activity is authentic problem-based learning in science as the true concentration of tartrazine in the sports drink was not provided by the teacher or known by the students. The students were generally somewhat biased as they assumed the experiment would result in exceeding the recommended maximum consumption of tartrazine. Some students struggled with reporting that the recommended limit was far higher than the two sports drinks consumed by the person each day. This allows for a great discussion about the use of scientific methods and evidence to provide unbiased answers to meaningful questions and problems.

The most common errors in this problem-solving experiment were calculation errors, with the most common being calculating the concentrations of the dilutions (perhaps due to the use of very small concentrations). There were also several common errors in communicating the results in the laboratory report. In some cases, students did not provide enough background information in the introduction of the report. When the students communicated the results, some students also failed to reference specific data from the experiment. Finally, in the discussion section, some students expressed concern or doubts in the results, not because there was an obvious error, but because they did not believe the level consumed could be so much less than the recommended consumption limit of tartrazine.

The scientific study and investigation of energy and matter are salient topics addressed in the Next Generation Science Standards ( Figure 10 ; see Online Connections). In a chemistry classroom, students should have multiple opportunities to observe and investigate the interaction of energy and matter. In this problem-solving experiment students used Beer’s Law to collect and analyze data to determine if a person consumed an amount of tartrazine that exceeded the maximum recommended daily allowance. The students correctly concluded that the person in the problem did not consume more than the recommended daily amount of tartrazine for their body weight.

In this activity students learned to work collaboratively to design an experiment, collect and analyze data, and solve a problem. These skills extend beyond any one science subject or class. Through this activity, students had the opportunity to do real-world science to solve a problem without a previously known result. The process of designing an experiment may be difficult for some students that are often accustomed to being given an experimental procedure in their previous science classroom experiences. However, because students sometimes struggled to design their own experiment and perform the calculations, students also learned to persevere in collecting and analyzing data to solve a problem, which is a valuable life lesson for all students. ■

Online Connections

The Beer-Lambert Law at Chemistry LibreTexts: https://bit.ly/3lNpPEi

Beer’s Law – Theoretical Principles: https://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/beers1.htm

Beer’s Law at Illustrated Glossary of Organic Chemistry: http://www.chem.ucla.edu/~harding/IGOC/B/beers_law.html

Beer Lambert Law at Edinburgh Instruments: https://www.edinst.com/blog/the-beer-lambert-law/

Beer’s Law Lab at PhET Interactive Simulations: https://phet.colorado.edu/en/simulation/beers-law-lab

Figure 4. Problem-solving experiment problem statement: https://bit.ly/3pAYHtj

Figure 5. Learning targets: https://bit.ly/307BHtb

Figure 6. Materials list: https://bit.ly/308a57h

Figure 7. The use of color comparison as a low-cost alternative: https://bit.ly/3du1uyO

Figure 9. Summative performance-based assessment rubric: https://bit.ly/31KoZRj

Figure 10. Connecting to the Next Generation Science Standards : https://bit.ly/3GlJnY0

Kevin Mason ( [email protected] ) is Professor of Education at the University of Wisconsin–Stout, Menomonie, WI; Steve Schieffer is a chemistry teacher at Amery High School, Amery, WI; Tara Rose is a chemistry teacher at Amery High School, Amery, WI; and Greg Matthias is Assistant Professor of Education at the University of Wisconsin–Stout, Menomonie, WI.

Akinoglu, O., and R. Tandogan. 2007. The effects of problem-based active learning in science education on students’ academic achievement, attitude and concept learning. Eurasia Journal of Mathematics, Science, and Technology Education 3 (1): 77–81.

Areepattamannil, S. 2012. Effects of inquiry-based science instruction on science achievement and interest in science: Evidence from Qatar. The Journal of Educational Research 105 (2): 134–146.

Bando R., E. Nashlund-Hadley, and P. Gertler. 2019. Effect of inquiry and problem-based pedagogy on learning: Evidence from 10 field experiments in four countries. The National Bureau of Economic Research 26280.

Furtak, E., T. Seidel, and H. Iverson. 2012. Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research 82 (3): 300–329.

Hansen, Z., H. Owan, and J. Pan. 2015. The impact of group diversity on class performance. Education Economics 23 (2): 238–258.

Inel, D., and A. Balim. 2010. The effects of using problem-based learning in science and technology teaching upon students’ academic achievement and levels of structuring concepts. Pacific Forum on Science Learning and Teaching 11 (2): 1–23.

Merritt, J., M. Lee, P. Rillero, and B. Kinach. 2017. Problem-based learning in K–8 mathematics and science education: A literature review. The Interdisciplinary Journal of Problem-based Learning 11 (2).

NGSS Lead States. 2013. Next Generation Science Standards: For states, by states. Washington, DC: National Academies Press.

Panasan, M., and P. Nuangchalerm. 2010. Learning outcomes of project-based and inquiry-based learning activities. Journal of Social Sciences 6 (2): 252–255.

Rutherford, F.J. 1964. The role of inquiry in science teaching. Journal of Research in Science Teaching 2 (2): 80–84.

Stevens, L.J., J.R. Burgess, M.A. Stochelski, and T. Kuczek. 2015. Amounts of artificial food dyes and added sugars in foods and sweets commonly consumed by children. Clinical Pediatrics 54 (4): 309–321.

Wilson, C., J. Taylor, and S. Kowalski. 2010. The relative effects and equity of inquiry-based and commonplace science teaching on students’ knowledge, reasoning, and argumentation. Journal of Research in Science Teaching 47 (3): 276–301.

Chemistry Crosscutting Concepts Curriculum Disciplinary Core Ideas General Science Inquiry Instructional Materials Labs Lesson Plans Mathematics NGSS Pedagogy Science and Engineering Practices STEM Teaching Strategies Technology Three-Dimensional Learning High School

You may also like

Reports Article

Journal Article

Evaluation Insights is a new column about program evaluation that will help readers build their capacity as program evaluators....

Mobile instructional spaces such as retrofit buses, customized trucks or trailers, and repurposed shipping containers are innovative, unique venues fo...

Change Password

Your password must have 8 characters or more and contain 3 of the following:.

  • a lower case character, 
  • an upper case character, 
  • a special character 

Password Changed Successfully

Your password has been changed

  • Sign in / Register

Request Username

Can't sign in? Forgot your username?

Enter your email address below and we will send you your username

If the address matches an existing account you will receive an email with instructions to retrieve your username

A Detailed Characterization of the Expert Problem-Solving Process in Science and Engineering: Guidance for Teaching and Assessment

  • Argenta M. Price
  • Candice J. Kim
  • Eric W. Burkholder
  • Amy V. Fritz
  • Carl E. Wieman

*Address correspondence to: Argenta M. Price ( E-mail Address: [email protected] ).

Department of Physics, Stanford University, Stanford, CA 94305

Search for more papers by this author

Graduate School of Education, Stanford University, Stanford, CA 94305

School of Medicine, Stanford University, Stanford, CA 94305

Department of Electrical Engineering, Stanford University, Stanford, CA 94305

A primary goal of science and engineering (S&E) education is to produce good problem solvers, but how to best teach and measure the quality of problem solving remains unclear. The process is complex, multifaceted, and not fully characterized. Here, we present a detailed characterization of the S&E problem-solving process as a set of specific interlinked decisions. This framework of decisions is empirically grounded and describes the entire process. To develop this, we interviewed 52 successful scientists and engineers (“experts”) spanning different disciplines, including biology and medicine. They described how they solved a typical but important problem in their work, and we analyzed the interviews in terms of decisions made. Surprisingly, we found that across all experts and fields, the solution process was framed around making a set of just 29 specific decisions. We also found that the process of making those discipline-general decisions (selecting between alternative actions) relied heavily on domain-specific predictive models that embodied the relevant disciplinary knowledge. This set of decisions provides a guide for the detailed measurement and teaching of S&E problem solving. This decision framework also provides a more specific, complete, and empirically based description of the “practices” of science.

INTRODUCTION

Many faculty members with new graduate students and many managers with employees who are recent college graduates have had similar experiences. Their advisees/employees have just completed a program of rigorous course work, often with distinction, but they seem unable to solve the real-world problems they encounter. The supervisor struggles to figure out exactly what the problem is and how they can guide the person in overcoming it. This paper is providing a way to answer those questions in the context of science and engineering (S&E). By characterizing the problem-solving process of experts, this paper investigates the “mastery” performance level and specifies an overarching learning goal for S&E students, which can be taught and measured to improve teaching.

The importance of problem solving as an educational outcome has long been recognized, but too often postsecondary S&E graduates have serious difficulties when confronted with real-world problems ( Quacquarelli Symonds, 2018 ). This reflects two long-standing educational problems with regard to problem solving: how to properly measure it, and how to effectively teach it. We theorize that the root of these difficulties is that good “problem solving” is a complex multifaceted process, and the details of that process have not been sufficiently characterized. Better characterization of the problem-solving process is necessary to allow problem solving, and more particularly, the complex set of skills and knowledge it entails, to be measured and taught more effectively. We sought to create an empirically grounded conceptual framework that would characterize the detailed structure of the full problem-solving process used by skilled practitioners when solving problems as part of their work. We also wanted a framework that would allow use and comparison across S&E disciplines. To create such a framework, we examined the operational decisions (choices among alternatives that result in subsequent actions) that these practitioners make when solving problems in their discipline.

Various aspects of problem solving have been studied across multiple domains, using a variety of methods (e.g., Newell and Simon, 1972 ; Dunbar, 2000 ; National Research Council [NRC], 2012b ; Lintern et al. , 2018 ). These ranged from expert self-reflections (e.g., Polya, 1945 ), to studies on knowledge lean tasks to discover general problem-solving heuristics (e.g., Egan and Greeno, 1974 ), to comparisons of expert and novice performances on simplified problems across a variety of disciplines (e.g., Chase and Simon, 1973 ; Chi et al. , 1981 ; Larkin and Reif, 1979 ; Ericsson et al. , 2006 , 2018 ). These studies revealed important novice–expert differences—notably, that experts are better at identifying important features and have knowledge structures that allow them to reduce demands on working memory. Studies that specifically gave the experts unfamiliar problems in their disciplines also found that, relative to novices, they had more deliberate and reflective strategies, including more extensive planning and managing of their own behavior, and they could use their knowledge base to better define the problem ( Schoenfeld, 1985 ; Wineburg, 1998 ; Singh, 2002 ). While these studies focused on discrete cognitive steps of the individual, an alternative framing of problem solving has been in terms of “ecological psychology” of “situativity,” looking at how the problem solver views and interacts with the environment in terms of affordances and constraints ( Greeno, 1994 ). “Naturalistic decision making” is a related framework that specifically examines how experts make decisions in complex, real-world, settings, with an emphasis on the importance of assessing the situation surrounding the problem at hand ( Klein, 2008 ; Mosier et al. , 2018 ).

While this work on expertise has provided important insights into the problem-solving process, its focus has been limited. Most has focused on looking for cognitive differences between experts and novices using limited and targeted tasks, such as remembering the pieces on a chessboard ( Chase and Simon, 1973 ) or identifying the important concepts represented in an introductory physics textbook problem ( Chi et al. , 1981 ). It did not attempt to explore the full process of solving, particularly for solving the type of complex problem that a scientist or engineer encounters as a member of the workforce (“authentic problems”).

There have also been many theoretical proposals as to expert problem-solving practices, but with little empirical evidence as to their completeness or accuracy (e.g., Polya, 1945 ; Heller and Reif, 1984 ; Organisation for Economic Cooperation and Development [OECD], 2019 ). The work of Dunbar (2000) is a notable exception to the lack of empirical work, as his group did examine how biologists solved problems in their work by analyzing lab meetings held by eight molecular biology research groups. His groundbreaking work focused on creativity and discovery in the research process, and he identified the importance of analogical reasoning and distributed reasoning by scientists in answering research questions and gaining new insights. Kozma et al. (2000) studied professional chemists solving problems, but their work focused only on the use of specialized representations.

The “cognitive systems engineering” approach ( Lintern et al. , 2018 ) takes a more empirically based approach looking at experts solving problems in their work, and as such tends to span aspects of both the purely cognitive and the ecological psychological theories. It uses both observations of experts in authentic work settings and retrospective interviews about how experts carried out particular work tasks. This theoretical framing and the experimental methods are similar to what we use, particularly in the “naturalistic decision making” area of research ( Mosier et al. , 2018 ). That work looks at how critical decisions are made in solving specific problems in their real-world setting. The decision process is studied primarily through retrospective interviews about challenging cases faced by experts. As described below, our methods are adapted from that work ( Crandall et al. , 2006 ), though there are some notable differences in focus and field. A particular difference is that we focused on identifying what are decisions to be made, which are more straight-forward to identify from retrospective interviews than how those decisions are made. We all have the same ultimate goal, however, to improve the training/teaching of the respective expertise.

Problem solving is central to the processes of science, engineering, and medicine, so research and educational standards about scientific thinking and the process and practices of science are also relevant to this discussion. Work by Osborne and colleagues describes six styles of scientific reasoning that can be used to explain how scientists and students approach different problems ( Kind and Osborne, 2016 ). There are also numerous educational standards and frameworks that, based on theory, lay out the skills or practices that science and engineering students are expected to master (e.g., American Association for the Advancement of Science [AAAS], 2011 ; Next Generation Science Standards Lead States, 2013 ; OECD, 2019 ; ABET, 2020 ). More specifically related to the training of problem solving, Priemer et al. (2020) synthesizes literature on problem solving and scientific reasoning to create a “STEM [science, technology, engineering, and mathematics] and computer science framework for problem solving” that lays out steps that could be involved in a students’ problem-solving efforts across STEM fields. These frameworks provide a rich groundwork, but they have several limitations: 1) They are based on theoretical ideas of the practice of science, not empirical evidence, so while each framework contains overlapping elements of the problem-solving process, it is unclear whether they capture the complete process. 2) They are focused on school science, rather than the actual problem solving that practitioners carry out and that students will need to carry out in future STEM careers. 3) They are typically underspecified, so that the steps or practices apply generally, but it is difficult to translate them into measurable learning goals for students to practice. Working to address that, Clemmons et al. (2020) recently sought to operationalize the core competencies from the Vision and Change report ( AAAS, 2011 ), establishing a set of skills that biology students should be able to master.

Our work seeks to augment this prior work by building a conceptual framework that is empirically based, grounded in how scientists and engineers solve problems in practice instead of in school. We base our framework on the decisions that need to be made during problem solving, which makes each item clearly defined for practice and assessment. In our analysis of expert problem solving, we empirically identified the entire problem-solving process. We found this includes deciding when and how to use the steps and skills defined in the work described previously but also includes additional elements. There are also questions in the literature about how generalizable across fields a particular set of practices may be. Here, we present the first empirical examination of the entire problem-solving process, and we compare that process across many different S&E disciplines.

A variety of instructional methods have been used to try and teach science and engineering problem solving, but there has been little evidence of their efficacy at improving problem solving (for a review, see NRC, 2012b ). Research explicitly on teaching problem solving has primarily focused on textbook-type exercises and utilized step-by-step strategies or heuristics. These studies have shown limited success, often getting students to follow specific procedural steps but with little gain in actually solving problems and showing some potential drawbacks ( Heller and Reif, 1984 ; Heller et al. , 1992 ; Huffman, 1997 ; Heckler, 2010 ; Kuo et al. , 2017 ). As discussed later, the framework presented here offers guidance for different and potentially more effective approaches to teaching problem solving.

These challenges can be illustrated by considering three different problems taken from courses in mechanical engineering, physics, and biology, respectively ( Figure 1 ). All of these problems are challenging, requiring considerable knowledge and effort by the student to solve correctly. Problems such as these are routinely used to both assess students’ problem-solving skills, and students are expected to learn such skills by practicing doing such problems. However, it is obvious to any expert in the respective fields, that, while these problems might be complicated and difficult to answer, they are vastly different from solving authentic problems in that field. They all have well-defined answers that can be reached by straightforward solution paths. More specifically, they do not involve needing to use judgment to make any decisions based on limited information (e.g., insufficient to specify a correct decision with certainty). The relevant concepts and information and assumptions are all stated or obvious. The failure of problems like these to capture the complexity of authentic problem solving underlies the failure of efforts to measure and teach problem solving. Recognizing this failure motivated our efforts to more completely characterize the problem-solving process of practicing scientists, engineers, and doctors.

FIGURE 1. Example problems from courses or textbooks in mechanical engineering, physics and biology. Problems from: Mechanical engineering: Wayne State mechanical engineering sample exam problems (Wayne State, n.d.), Physics: A standard physics problem in nearly every advanced quantum mechanics course, Biology: Molecular Biology of the Cell 6th edition, Chapter 7 end of chapter problems ( Alberts et al ., 2014 ).

We are building on the previous work studying expert–novice differences and problem solving but taking a different direction. We sought to create an empirically grounded framework that would characterize the detailed structure of the full problem-solving process by focusing on the operational decisions that skilled practitioners make when successfully solving authentic problems in their scientific, engineering, or medical work. We chose to identify the decisions that S&E practitioners made, because, unlike potentially nebulous skills or general problem-solving steps that might change with the discipline, decisions are sufficiently specified that they can be individually practiced by students and measured by instructors or departments. The authentic problems that we analyzed are typical problems practitioners encounter in “doing” the science or engineering entailed in their jobs. In the language of traditional problem-
solving and expertise research, such authentic problems are “ill-structured” ( Simon, 1973 ) and require “adaptive expertise” ( Hatano and Inagaki, 1986 ) to solve. However, our authentic problems are considerably more complex and unstructured than what is normally considered in those literatures, because not only do they lack a clear solution path, but in many cases, it is not clear a priori that they have any solution at all. Determining that, and whether the problem needs to be redefined to be soluble, is part of the successful expert solution process. Another way in which our set of decisions goes beyond the characterization of what is involved in adaptive expertise is the prominent role of making judgments with limited information.

A common reaction of scientists and engineers to seeing the list of decisions we obtain as our primary result is, “Oh, yes, these are things I always do in solving problems. There is nothing new here.” It is comforting that these decisions all look familiar; that supports their validity. However, what is new is not that experts are making such decisions, but rather that there is a relatively small but complete set of decisions that has now been explicitly identified and that applies so generally.

We have used a much larger and broader sample of experts in this work than used in prior expert–novice studies, and we used a more stringent selection criterion. Previous empirical work has typically involved just a few experts, almost always in a single domain, and included graduate students as “experts” in some cases. Our semistructured interview sample was 31 experienced practitioners from 10 different disciplines of science, engineering, and medicine, with demonstrated competence and accomplishments well beyond those of most graduate students. Also, approximately 25 additional experts from across science, engineering, and medicine served as consultants during the planning and execution of this work.

Our research question was: What are the decisions experts make in solving authentic problems, and to what extent is this set of decisions to be made consistent both within and across disciplines?

Our approach was designed to identify the level of consistency and unique differences across disciplines. Our hypothesis was that there would be a manageable number (20–50) of decisions to be made, with a large amount of overlap of decisions made between experts within each discipline and a substantial but smaller overlap across disciplines. We believed that if we had found that every expert and/or discipline used a large and completely unique set of decisions, it would have been an interesting research result but of little further use. If our hypothesis turned out to be correct, we expected that the set of decisions obtained would have useful applications in guiding teaching and assessment, as they would show how experts in the respective disciplines applied their content knowledge to solve problems and hence provide a model for what to teach. We were not expecting to find the nearly complete degree of overlap in the decisions made across all the experts.

We first conducted 22 relatively unstructured interviews with a range of S&E experts, in which we asked about problem-solving expertise in their fields. From these interviews, we developed an initial list of decisions to be made in S&E problem solving. To refine and validate the list, we then carried out a set of 31 semistructured interviews in which S&E experts chose a specific problem from their work and described the solution process in detail. The semistructured interviews were coded for the decisions represented, either explicitly stated or implied by a choice of action. This provided a framework of decisions that characterize the problem-solving process across S&E disciplines. The research was approved by the Stanford Institutional Review Board (IRB no. 48785), and informed consent was obtained from all the participants.

This work involved interviewing many experts across different fields. We defined experts as practicing scientists, engineers, or physicians with considerable experience working as faculty at highly rated universities or having several years of experience working in moderately high-level technical positions at successful companies. We also included a few longtime postdocs and research staff in biosciences to capture more details of experimental decisions from which faculty members in those fields often were more removed. This definition of expert allows us to identify the practices of skilled professionals; we are not studying what makes only the most exceptional experts unique.

Experts were volunteers recruited through direct contact via the research team's personal and professional networks and referrals from experts in our networks. This recruitment method likely biased our sample toward people who experienced relatively similar training (most were trained in STEM disciplines at U.S. universities within the last 15–50 years). Within this limitation, we attempted to get a large range of experts by field and experience. This included people from 10 different fields (including molecular biology/biochemistry, ecology, and medicine), 11 U.S. universities, and nine different companies or government labs, and the sample was 33% female (though our engineering sample only included one female). The medical experts were volunteers from a select group of medical school faculty chosen to serve as clinical reasoning mentors for medical students at a prestigious university. We only contacted people who met our criteria for being an “expert,” and everyone who volunteered was included in the study. Most of the people who were contacted volunteered, and the only reason given for not volunteering was insufficient time. Other than their disciplinary expertise, there was little to distinguish these experts beyond the fact they were acquaintances with members of the team or acquaintances of acquaintances of team or project advisory board members. The precise number from each field was determined largely by availability of suitable experts.

We defined an “authentic problem” to be one that these experts solve in their actual jobs. Generally, this meant research projects for the science and engineering faculty, design problems for the industry engineers, and patient diagnoses for the medical doctors. Such problems are characterized by complexity, with many factors involved and no obvious solution process, and involve substantial time, effort, and resources. Such problems involve far more complexity and many more decisions, particularly decisions with limited information, than the typical problems used in previous problem-solving research or used with students in instructional settings.

Creating an Initial List of Problem-Solving Decisions

We first interviewed 22 experts ( Table 1 ), most of whom were faculty at a prestigious university, in which we asked them to discuss expertise and problem solving in their fields as it related to their own experiences. This usually resulted in their discussing examples of one or more problems they had solved. Based on the first seven interviews, plus reflections on personal experience from the research team and review of the literature on expert problem solving and teaching of scientific practices ( Ericsson et al. , 2006 ; NRC, 2012a ; Wieman, 2015 ), we created a generic list of decisions that were made in S&E problem solving. In the rest of the unstructured interviews (15), we also provided the experts with our list and asked them to comment on any additions or deletions they would suggest. Faculty who had close supervision of graduate students and industry experts who had extensively supervised inexperienced staff were particularly informative. Their observations of the way inexperienced people could fail made them sensitive to the different elements of expertise and where incorrect decisions could be made. Although we initially expected to find substantial differences across disciplines, from early in the process, we noted a high degree of overlap across the interviews in the decisions that were described.

URM (under-represented minority) included 3 African American and 2 Hispanic/Latinx. One medical faculty member was interviewed twice – in both informal and structure interviews, for a total of 53 interviews with 52 experts.

Refinement and Validation of the List of Decisions

After creating the preliminary list of decisions from the informal interviews, we conducted a separate set of more structured interviews to test and refine the list. Semistructured interviews were conducted with 31 experts from across science, engineering, and medical fields ( Table 1 ). For these interviews, we recruited experts from a range of universities and companies, though the range of institutions is still limited, given the sample size. Interviews were conducted in person or over video chat and were transcribed for analysis. In the semistructured interviews, experts were asked to choose a problem or two from their work that they could recall the details of solving and then describe the process, including all the steps and decisions they made. So that we could get a full picture of the successful problem-solving process, we decided to focus the interviews on problems that they had eventually solved successfully, though their processes inherently involved paths that needed to be revised and reconsidered. Transcripts from interviewees who agreed to have their interview transcript published are available in the supplemental data set.

Our interview protocol (see Supplemental Text) was inspired in part by the critical decision method of cognitive task analysis ( Crandall et al. , 2006 ; Lintern et al. , 2018 ), which was created for research in cognitive systems engineering and naturalistic decision making. There are some notable differences between our work and theirs, both in research goal and method. First, their goal is to improve training in specific fields by focusing on how critical decisions are made in that field during an unusual or important event; the analysis seeks to identify factors involved in making those critical decisions. We are focusing on the overall problem solving and how it compares across many different fields, which quickly led to attention on what decisions are to be made, rather than how a limited set of those decisions are made. We asked experts to describe a specific, but not necessarily unusual, problem in their work, and focused our analysis on identifying all decisions made, not reasons for making them or identifying which were most critical. The specific order of problem-solving steps was also less important to us, in part because it was clear that there was no consistent order that was followed. Second, we are looking at different types of work. Cognitive systems engineering work has primarily focused on performance in professions like firefighters, power plant operators, military technicians, and nurses. These tend to require time-sensitive critical skills that are taught with modest amounts of formal training. We are studying scientists, engineers, and doctors solving problems that require much longer and less time-critical solutions and for which the formal training occupies many years.

Given our different focus, we made several adaptations to eliminate some of the more time-consuming steps from the interview protocol, allowing us to limit the interview time to approximately 1 hour. Both protocols seek to elicit an accurate and complete reporting of the steps taken and decisions made in the process of solving a problem. Our general strategy was: 1) Have the expert explain the problem and talk step by step through the decisions involved in solving it, with relatively few interruptions from the interviewer except to keep the discussion focused on the specific problem and occasionally to ask for clarifications. 2) Ask follow-up questions to probe for more detail about particular steps and aspects of the problem-solving process. 3) Occasionally ask for general thoughts on how a novice's process might differ.

While some have questioned the reliability of information from retrospective interviews ( Nisbett and Wilson, 1977 ), we believe we avoid these concerns, because we are only identifying a decision to be made, which in this case, means identifying a well-defined action that was chosen from alternatives. This is less subjective and much more likely to be accurately recalled than is the rationale behind such a decision. See Ericsson and Simon (1980) . However, the decisions identified may still be somewhat limited—the process of deciding among possible actions might involve additional decisions in the moment, when the solution is still unknown, that we are unable to capture in the retrospective context. For the decisions we can identify, we are able to check their accuracy and completeness by comparing them with the actions taken in the conduct of the research/design. For example, consider this quote from a physician who had to re-evaluate a diagnosis, “And, in my very subjective sense, he seemed like he was being forthcoming and honest. Granted people can fool you, but he seemed like he was being forthcoming. So we had to reevaluate.” The physician then considered alternative diagnoses that could explain a test result that at first had indicated an incorrect diagnosis. While this quote does describe the (retrospective) reasoning behind a decision, we do not need to know whether that reasoning is accurately recalled. We can simply code this as “decision 18, how believable is info?” The physician followed up by considering alternative diagnoses, which in this context was coded as “26, how good is solution?” and “8, potential solutions?” This was followed by the description of the literature and additional tests conducted. These indicated actions taken that confirm the physician made a decision about the reliability of the information given by the patient.

Interview Coding

We coded the semistructured interviews in terms of decisions made, through iterative rounds of coding ( Chi, 1997 ), following a “directed content analysis approach,” which involves coding according to predefined theoretical categories and updating the codes as needed based on the data ( Hsieh and Shannon, 2005 ). Our predefined categories were the list of decisions we had developed during the informal interviews. This approach means that we limited the focus of our qualitative analysis—we were able to test and refine the list of decisions, but we did not seek to identify all possible categories of approach to selecting and solving problems. The goals of each iterative round of coding are described in the next three paragraphs. To code for decisions in general, we matched decisions from the list to statements in each interview, based on the following criteria: 1) there was an explicit statement of a decision or choice made or needing to be made; 2) there was the description of the outcome of a decision, such as listing important features of the problem (that had been decided on) or conclusions arrived at; or 3) there was a statement of actions taken that indicated a decision about the appropriate action had been made, usually from a set of alternatives. Two examples illustrate the types of comments we identified as decisions: A molecular biologist explicitly stated the decisions required to decompose a problem into subproblems (decision 11), “Which cell do we use? The gene. Which gene do we edit? Which part of that gene do we edit? How do we build the enzyme that is going to do the cutting? … And how do we read out that it worked?” An ecologist made a statement that was also coded as a decomposition decision, because it described the action taken: “So I analyze the bird data first on its own, rather than trying to smash all the taxonomic groups together because they seem really apples and oranges. And just did two kinds of analysis, one was just sort of across all of these cases, around the world.” A single statement could be coded as multiple decisions if they were occurring simultaneously in the story being recalled or were intimately interconnected in the context of that interview, as with the ecology quote, in which the last sentence leads into deciding what data analysis is needed. Inherent in nearly every one of these decisions was that there was insufficient information to know the answer with certainty, so judgment was required.

Our primary goal for the first iterative round of coding was to check whether our list was complete by checking for any decisions that were missing, as indicated by either an action taken or a stated decision that was not clearly connected to a decision on our initial list. In this round, we also clarified wording and combined decisions that we were consistently unable to differentiate during the coding. A sample of three interviews (from biology, medicine, and electrical engineering) were first coded independently by four coders (AP, EB, CK, and AF), then discussed. The decision list was modified to add decisions and update wording based on that discussion. Then the interviews were recoded with the new list and rediscussed, leading to more refinements to the list. Two additional interviews (from physics and chemical engineering) were then coded by three coders (AP, EB, and CK) and further similar refinements were made. Throughout the subsequent rounds of coding, we continued to check for missing decisions, but after the additions and adjustments made based on these five interviews, we did not identify any more missing decisions.

In our next round of coding, we focused on condensing overlapping decisions and refining wording to improve the clarity of descriptions as they applied across different disciplinary contexts and to ensure consistent interpretation by different coders. Two or three coders independently coded an additional 11 interviews, iteratively meeting to discuss codes identified in the interviews, refining wording and condensing the list to improve agreement and combine overlapping codes, and then using the updated list to code subsequent interviews. We condensed the list by combining decisions that represented the same cognitive process taking place at different times, that were discipline-specific variations on the same decision, or that were substeps involved in making a larger decision. We noticed that some decisions were frequently co-coded with others, particularly in some disciplines. But if they were identified as distinct a reasonable fraction of the time in any discipline, we listed them as separate. This provided us with a list, condensed from 42 to 29 discrete decisions (plus five additional non-decision themes that were so prevalent that they are important to describe), that gave good consistency between coders.

Finally, we used the resulting codes to tabulate which decisions occurred in each interview, simplifying our coding process to focus on deciding whether or not each decision had occurred, with an example if it did occur to back up the “yes” code, but no longer attempting to capture every time each decision was mentioned. Individual coders identified decisions mentioned in the remaining 15 interviews. Interviews that had been coded with the early versions of the list were also recoded to ensure consistency. Coders flagged any decisions they were unsure about occurring in a particular interview, and two to four coders (AP, EB, CK, and CW) met to discuss those debated codes, with most uncertainties being resolved by explanations from a team member who had more technical expertise in the field of the interview. Minor wording changes were made during this process to ensure that each description of a decision captured all instantiations of the decision across disciplines, but no significant changes to the list were needed or made.

Coding an interview in terms of decisions made and actions taken in the research often required a high level of expertise in the discipline in question. The coder had to be familiar with the conduct of research in the field in order to recognize which actions corresponded to a decision between alternatives, but our team was assembled with this requirement in mind. It included high-level expertise across five different fields of science, engineering, and medicine and substantial familiarity with several other fields.

Supplemental Table S1 shows the final tabulation of decisions identified in each interview. In the tabulation, most decisions were marked as either “yes” or “no” for each interview, though 65 out of 1054 total were marked as “implied,” for one of the following reasons: 1) for 40/65, based on the coder's knowledge of the field, it was clear that a step must have been taken to achieve an outcome or action, even though that decision was not explicitly mentioned (e.g., interviewees describe collecting certain raw data and then coming to a specific conclusion, so they must have decided how to analyze the data, even if they did not mention the analysis explicitly); 2) for 15/65, the interview context was important, in that multiple statements from different parts of the interview taken together were sufficient to conclude that the decision must have happened, though no single statement described that decision explicitly; 3) 10/65 involved a decision that was explicitly discussed as an important step in problem solving, but they did not directly state how it was related to the problem at hand, or it was stated only in response to a direct prompt from the interviewer. The proportion of decisions identified in each interview, broken down by either explicit or explicit + implied, is presented in Supplemental Tables S1 and S2. Table 2 and Figure 2 of the main text show explicit + implied decision numbers.

a See supplementary text and Table S2 for full description and examples of each decision. A set of other non-decision knowledge and skill development themes were also frequently mentioned as important to professional success: Staying up to date in the field (84%), intuition and experience (77%), interpersonal and teamwork (100%), efficiency (32%), and attitude (68%).

b Percentage of interviews in which category or decision was mentioned.

c Numbering is for reference. In practice ordering is fluid – involves extensive iteration with other possible starting points.

d Chosen predictive framework(s) will inform all other decisions.

e Reflection occurs throughout process, and often leads to iteration. Reflection on solution occurs at the end as well.

FIGURE 2. Proportion of decisions coded in interviews by field. This tabulation includes decisions 1–29, not the additional themes. Error bars represent standard deviations. Number of interviews: total = 31; physical science = 9; biological science = 8; engineering = 8; medicine = 6. Compared with the sciences, slightly fewer decisions overall were identified in the coding of engineering and medicine interviews, largely for discipline-specific reasons. See Supplemental Table S2 and associated discussion.

Two of the interviews that had not been discussed during earlier rounds of coding (one physics [AP and EB], one medicine [AP and CK]) were independently coded by two coders to check interrater reliability using the final list of decisions. The goal of our final coding was to tabulate whether or not each expert described making each decision at any point in the problem-solving process, so the level of detail we chose for coding and interrater reliability was whether or not a decision was present in the entire interview. The decisions identified in each interview were compared for the two coders. For both interviews, the raters disagreed on whether or not only one of the 29 decisions occurred. Codes of “implied” were counted as agreement if the other coder selected either “yes” or “implied.” This equates to a percent agreement of 97% for each interview (28 agree/29 total decisions per interview = 97%). As a side note, there was also one disagreement per interview on the coding of the five other themes, but those themes were not a focus of this work nor the interviews.

We identified a total set of 29 decisions to be made (plus five other themes), all of which were identified in a large fraction of the interviews across all disciplines ( Table 2 and Figure 2 ). There was a surprising degree of overlap across the different fields with all the experts mentioning similar decisions to be made. All 29 were evident by the fifth semistructured interview, and on average, each interview revealed 85% of the 29 decisions. Many decisions occurred multiple times in an interview, with the number of times varying widely, depending on the length and complexity of the problem-solving process discussed.

We focused our analysis on what decisions needed to be made, not on the experts’ processes for making those decisions: noting that a choice happened, not how they selected and chose among different alternatives. This is because, while the decisions to be made were the same across disciplines, how the experts made those decisions varied greatly by discipline and individual. The process of making the decisions relied on specialized disciplinary knowledge and experience and may vary depending on demographics or other factors that our study design (both our sample and nature of retrospective interviews) did not allow us to investigate. However, while that knowledge was distinct and specialized, we could tell that it was consistently organized according to a common structure we call a “predictive framework,” as discussed in the “ Predictive Framework ” section below. Also, while every “decision” reflected a step in the problem solving involved in the work, and the expert being interviewed was involved in making or approving the decision, that does not mean the decision process was carried out only by that individual. In many cases, the experts described the decisions made in terms of ideas and results of their teams, and the importance of interpersonal skills and teamwork was an important non-decision theme raised in all interviews.

We were particularly concerned with the correctness and completeness of the set of decisions. Although the correctness was largely established by the statements in the interviews, we also showed the list of decisions to these experts at the end of the interviews as well as to about a dozen other experts. In all cases, they all agreed that these decisions were ones they and others in their field made when solving problems. The completeness of the list of decisions was confirmed by: 1) looking carefully at all specific actions taken in the described problem-solving process and checking that each action matched a corresponding decision from the list; and 2) the high degree of consistency in the set of decisions across all the interviews and disciplines. This implies that it is unlikely that there are important decisions that we are missing, because that would require any such missing decisions to be consistently unspoken by all 31 interviewees as well as consistently unrecognized by us from the actions that were taken in the problem-solving process.

In focusing on experts’ recollections of their successful solving of problems, our study design may have missed decisions that experts only made during failed problem-solving attempts. However, almost all interviews described solution paths that were not smooth and continuous, but rather involved going down numerous dead ends. There were approaches that were tried and failed, data that turned out to be ambiguous and worthless, and so on. Identifying the failed path involved reflection decisions (23–26). Often decision 9 (is problem solvable?) would be mentioned, because it described a path that was determined to be not solvable. For example, a biologist explained, “And then I ended up just switching to a different strain that did it [crawling off the plate] less. Because it was just … hard to really get them to behave themselves. I suppose if I really needed to rely on that very particular one, I probably would have exhausted the possibilities a bit more.” Thus, we expect unsuccessful problem solving would entail a smaller subset of decisions being made, particularly lack of reflection decisions, or poor choices on the decisions, rather than making a different set of decisions.

The set of decisions represent a remarkably consistent structure underlying S&E problem solving. For the purposes of presentation, we have categorized the decisions as shown in Figure 3 , roughly based on the purposes they achieve. However, the process is far less orderly and sequential than implied by this diagram, or in fact any characterization of an orderly “scientific method.” We were struck by how variable the sequence of decisions was in the descriptions provided. For example, experts who described how they began work on a problem sometimes discussed importance and goals (1–3, what is important in field?; opportunity fits solver’s expertise?; and goals, criteria, constraints?), but others mentioned a curious observation (20, any significant anomalies?), important features of their system that led them to questions (4, important features and info?, 6, how to narrow down problem?), or other starting points. We also saw that there were flexible connections between decisions and repeated iterations—jumping back to the same type of decision multiple times in the solution process, often prompted by reflection as new information and insights were developed. The sequence and number of iterations described varied dramatically by interview, and we cannot determine to what extent this was due to legitimate differences in the problem-solving process or to how the expert recalled and chose to describe the process. This lack of a consistent starting point, with jumping and iterating between decisions, has also been identified in the naturalistic decision-making literature ( Mosier et al. , 2018 ). Finally, the experts also often described considering multiple decisions simultaneously. In some interviews, a few decisions were always described together, while in others, they were clearly separate decisions. In summary, while the specific decisions themselves are fully grounded in expert practice, the categories and order shown here are artificial simplifications for presentation purposes.

FIGURE 3. Representation of problem-solving decisions by categories. The black arrows represent a hypothetical but unrealistic order of operations, the blue arrows represent more realistic iteration paths. The decisions are grouped into categories for presentation purposes; numbers indicate the number of decisions in each category. Knowledge and skill development were commonly mentioned themes but are not decisions.

The decisions contained in the seven categories are summarized here. See Supplemental Table S2 for specific examples of each decision across multiple disciplines.

Category A. Selection and Goals of the Problem

This category involves deciding on the importance of the problem, what criteria a solution must meet, and how well it matches the capabilities, resources, and priorities of the expert. As an example, an earth scientist described the goal of her project (decision 3, goals, criteria, constraints?) to map and date the earliest volcanic rocks associated with what is now Yellowstone and explained why the project was a good fit for her group (2, opportunity fits solver’s expertise?) and her decision to pursue the project in light of the significance of this type of eruption in major extinction events (1, what is important in field?). In many cases, decisions related to framing (see category B) were mentioned before decisions in this category or were an integral part of the process for developing goals.

1. What is important in the field?

What are important questions or problems? Where is the field heading? Are there advances in the field that open new possibilities?

2. Opportunity fits solver's expertise?

If and where are there gaps/opportunities to solve in field? Given experts’ unique perspectives and capabilities, are there opportunities particularly accessible to them? (This could involve challenging the status quo, questioning assumptions in the field.)

3. Goals, criteria, constraints?

a. What are the goals, design criteria, or requirements of the problem or its solution?

b. What is the scope of the problem?

c. What constraints are there on the solution?

d. What will be the criteria on which the solution is evaluated?

Category B. Frame Problem

These decisions lead to a more concrete formulation of the solution process and potential solutions. This involves identifying the key features of the problem and deciding on predictive frameworks to use (see “ Predictive Framework ” section below), as well as narrowing down the problem, often forming specific questions or hypotheses. Many of these decisions are guided by past problem solutions with which the expert is familiar and sees as relevant. The framing decisions of a physician can be seen in his discussion of a patient with liver failure who had previously been diagnosed with HIV but had features (4, important features and info?; 5, what predictive framework?) that made the physician question the HIV diagnosis (5, what predictive framework?; 26, how good is solution?). His team then searched for possible diagnoses that could explain liver failure and lead to a false-positive HIV test (7, related problems?; 8, potential solutions?), which led to their hypothesis the patient might have Q fever (6, how to narrow down problem?; 13, what info needed?; 15, specific plan for getting info?). While each individual decision is strongly supported by the data, the categories are groupings for presentation purposes. In particular, framing (category B) and planning (see category C) decisions often blended together in interviews.

a. Which available information is relevant to problem solving and why?

b. (When appropriate) Create/find a suitable abstract representation of core ideas and information Examples: physics, equation representing process involved; chemistry, bond diagrams/potential energy surfaces; biology, diagram of pathway steps.

5. What predictive framework?

Which potential predictive frameworks to use? (Decide among possible predictive frameworks or create framework.) This includes deciding on the appropriate level of mechanism and structure that the framework needs to embody to be most useful for the problem at hand.

6. How to narrow down the problem?

How to narrow down the problem? Often involves formulating specific questions and hypotheses.

7. Related problems?

What are related problems or work seen before, and what aspects of their problem-solving process and solutions might be useful in the present context? (This may involve reviewing literature and/or reflecting on experience.)

8. Potential solutions?

What are potential solutions? (This is based on experience and fitting some criteria for solution they have for a problem having general key features identified.)

9. Is problem solvable?

Is the problem plausibly solvable and is the solution worth pursuing given the difficulties, constraints, risks, and uncertainties?

Category C. Plan the Process for Solving

These decisions establish the specifics needed to solve the problem and include: how to simplify the problem and decompose it into pieces, what specific information is needed, how to obtain that information, and what are the resources needed and priorities? Planning by an ecologist can be seen in her extensive discussion of her process of simplifying (10, approximations/simplifications to make?) a meta-analysis project about changes in migration behavior, which included deciding what types of data she needed (13, what info needed?), planning how to conduct her literature search (15, specific plan for getting info?), difficulties in analyzing the data (12, most difficult/uncertain areas?; 16, which calculations and data analysis?), and deciding to analyze different taxonomic groups separately (11, how to decompose into subproblems?). In general, decomposition often resulted in multiple iterations through the problem-solving decisions, as subsets of decisions need to be made about each decomposed aspect of a problem. Framing (category B) and planning (category C) decisions occupied much of the interviews, indicating their importance.

10. Approximations and simplifications to make?

What approximations or simplifications are appropriate? How to simplify the problem to make it easier to solve? Test possible simplifications/approximations against established criteria.

11. How to decompose into subproblems?

How to decompose the problem into more tractable subproblems? (Subproblems are independently solvable pieces with their own subgoals.)

12. Most difficult or uncertain areas?

a. What are acceptable levels of uncertainty with which to proceed at various stages?

13. What info needed?

a. What will be sufficient to test and distinguish between potential solutions?

14. Priorities?

What to prioritize among many competing considerations? What to do first and how to obtain necessary resources?

Considerations could include: What's most important? Most difficult? Addressing uncertainties? Easiest? Constraints (time, materials, etc.)? Cost? Optimization and trade-offs? Availability of resources? (facilities/materials, funding sources, personnel)

15. Specific plan for getting information?

a. What are the general requirements of a problem-solving approach, and what general approach will they pursue? (These decisions are often made early in the problem-solving process as part of framing.)

b. How to obtain needed information? Then carry out those plans. (This could involve many discipline- and problem-specific investigation possibilities such as: designing and conducting experiments, making observations, talking to experts, consulting the literature, doing calculations, building models, or using simulations.)

c. What are achievable milestones, and what are metrics for evaluating progress?

d. What are possible alternative outcomes and paths that may arise during the problem-solving process, both consistent with predictive framework and not, and what would be paths to follow for the different outcomes?

Category D. Interpret Information and Choose Solution(s)

This category includes deciding how to analyze, organize, and draw conclusions from available information, reacting to unexpected information, and deciding upon a solution. A biologist studying aging in worms described how she analyzed results from her experiments, which included representing her results in survival curves and conducting statistical analyses (16, which calculations and data analysis?; 17, how to represent and organize info?), as well as setting up blind experiments (15, specific plan for getting info?) so that she could make unbiased interpretations (18, how believable is info?) of whether a worm was alive or dead. She also described comparing results with predictions to justify the conclusion that worm aging was related to fertility (19, how does info compare to predictions?; 21, appropriate conclusions?; 22, what is best solution?). Deciding how results compared with expectations based on a predictive framework was a key decision that often preceded several other decisions.

16. Which calculations and data analysis?

What calculations and data analysis are needed? Once determined, these must then be carried out.

17. How to represent and organize information?

What is the best way to represent and organize available information to provide clarity and insights? (Usually this will involve specialized and technical representations related to key features of predictive framework.)

18. How believable is the information?

Is information valid, reliable, and believable (includes recognizing potential biases)?

19. How does information compare to predictions?

As new information comes in, particularly from experiments or calculations, how does it compare with expected results (based on the predictive framework)?

20. Any significant anomalies?

a. Does potential anomaly fit within acceptable range of predictive framework(s) (given limitations of predictive framework and underlying assumptions and approximations)?

b. Is potential anomaly an unusual statistical variation or relevant data? Is it within acceptable levels of uncertainty?

21. Appropriate conclusions?

What are appropriate conclusions based on the data? (This involves making conclusions and deciding if they are justified.)

22. What is the best solution?

a. Which of multiple candidate solutions are consistent with all available information and which can be rejected? (This could be based on comparing data with predicted results.)

b. What refinements need to be made to candidate solutions?

Category E. Reflect

Reflection decisions occur throughout the process and include deciding whether assumptions are justified, whether additional knowledge or information is needed, how well the solution approach is working, and whether potential and then final solutions are adequate. These decisions match the categories of reflection identified by Salehi (2018) . A mechanical engineer described developing a model (to inform surgical decisions) of which muscles allow the thumb to function in the most useful manner (22, what is best solution?), including reflecting on how well engineering approximations applied in the biological context (23, assumptions and simplifications appropriate?). He also described reflecting on his approach, that is, why he chose to use cadaveric models instead of mathematical models (25, how well is solving approach working?), and the limitations of his findings in that the “best” muscle identified was difficult to access surgically (26, how good is solution?; 27, broader implications?). Reflection decisions are made throughout the problem-solving process, often lead to reconsidering other decisions, and are critical for success.

23. Assumptions and simplifications appropriate?

a. Do the assumptions and simplifications made previously still look appropriate considering new information?

b Does predictive framework need to be modified?

24. Additional knowledge needed?

a. Is solver's relevant knowledge sufficient?

b. Is more information needed and, if so, what?

c. Does some information need to be checked? (Is there a need to repeat experiment or check a different source?)

25. How well is the problem-solving approach working?

How well is the problem-solving approach working, and does it need to be modified? This includes possibly modifying the goals. (One needs to reflect on one's strategy by evaluating progress toward the solution.) and reflecting on one’s strategy by evaluating progress toward the solution.

26. How good is the solution?

a. Decide by exploring possible failure modes and limitations—“try to break” solution.

b. Does it “make sense” and pass discipline-specific tests for solutions of this type of problem?

c. Does it completely meet the goals/criteria?

Category F. Implications and Communication of Results

These are decisions about the broader implications of the work, and how to communicate results most effectively. For example, a theoretical physicist developing a method to calculate the magnetic moment of the muon decided on who would be interested in his work (28, audience for communication?) and what would be the best way to present it (29, best way to present work?). He also discussed the implications of preliminary work on a simplified aspect of the problem (10, approximations and simplifications to make?) in terms of evaluating its impact on the scientific community and deciding on next steps (27, broader implications?; 29, best way to present work?). Many interviewees described that making decisions in this category affected their decisions in other categories.

27. Broader implications?

What are the broader implications of the results, including over what range of contexts does the solution apply? What outstanding problems in the field might it solve? What novel predictions can it enable? How and why might this be seen as interesting to a broader community?

28. Audience for communication?

What is the audience for communication of work, and what are their important characteristics?

29. Best way to present work?

What is the best way to present the work to have it understood, and its correctness and importance appreciated? How to make a compelling story of the work?

Category G. Ongoing Skill and Knowledge Development

Although we focused on decisions in the problem-solving process, the experts volunteered general skills and knowledge they saw as important elements of problem-solving expertise in their fields. These included teamwork and interpersonal skills (strongly emphasized), acquiring experience and intuition, and keeping abreast of new developments in their fields.

30. Stay up to date in field

a. Reviewing literature, which does involve making decisions as to which is important.

b. Learning relevant new knowledge (ideas and technology from literature, conferences, colleagues, etc.)

31. Intuition and experience

Acquiring experience and associated intuition to improve problem solving.

32. Interpersonal, teamwork

Includes navigating collaborations, team management, patient interactions, communication skills, etc., particularly as how these apply in the context of the various types of problem-solving processes.

33. Efficiency

Time management including learning to complete certain common tasks efficiently and accurately.

34. Attitude

Motivation and attitude toward the task. Factors such as interest, perseverance, dealing with stress, and confidence in decisions.

Predictive Framework

How the decisions were made was highly dependent on the discipline and problem. However, there was one element that was fundamental and common across all interviews: the early adoption of a “predictive framework” that the experts used throughout the problem-solving process. We define this framework as “a mental model of key features of the problem and the relationships between the features.” All the predictive frameworks involved some degree of simplification and approximation and an underlying level of mechanism that established the relationships between key features. The frameworks provided a structure of knowledge and facilitated the application of that knowledge to the problem at hand, allowing experts to repeatedly run “mental simulations” to make predictions for dependencies and observables and to interpret new information.

As an example, an ecologist described her predictive framework for migration, which incorporated important features such as environmental conditions and genetic differences between species and the mechanisms by which these interacted to impact the migration patterns for a species. She used this framework to guide her meta-analysis of changes in migration patterns, affecting everything from her choice of data sets to include to her interpretation of why migration patterns changed for different species. In many interviews, the frameworks used evolved as additional information was obtained, with additional features being added or underlying assumptions modified. For some problems, the relevant framework was well established and used with confidence, while for other problems, there was considerable uncertainty as to a suitable framework, so developing and testing the framework was a substantial part of the solution process.

A predictive framework contains the expert knowledge organization that has been observed in previous studies of expertise ( Egan and Greeno, 1974 ) but goes further, as here it serves as an explicit tool that guides most decisions and actions during the solving of complex problems. Mental models and mental simulations that are described in the naturalistic decision-making literature are similar, in that they are used to understand the problem and guide decisions ( Klein, 2008 ; Mosier et al. , 2018 ), but they do not necessarily contain the same level of mechanistic understanding of relationships that underlies the predictive frameworks used in science and engineering problem solving. While the use of predictive frameworks was universal, the individual frameworks themselves explicitly reflected the relevant specialized knowledge, structure, and standards of the discipline, and arguably largely define a discipline ( Wieman, 2019 ).

Discipline-Specific Variation

While the set of decisions to be made was highly consistent across disciplines, there were extensive differences within and across disciplines and work contexts, which reflected the differences in perspectives and experiences. These differences were usually evident in how experts made each of the specific decisions, but not in the choice of which decisions needed to be made. In other words, the solution methods, which included following standard accepted procedures in each field, were very different. For example, planning in some experimental sciences may involve formulating a multiyear construction and data-collection effort, while in medicine it may be deciding on a simple blood test. Some decisions, notably in categories A, D, and F, were less likely to be mentioned in particular disciplines, because of the nature of the problems. Specifically, decisions 1 (what is important in field?), 2 (opportunity fits solver’s expertise?), 27 (broader implications?), 28 (audience for communication?), and 29 (best way to present work?) were dependent on the scope of the problem being described and the expert's specific role in it. These were mentioned less frequently in interviews where the problem was assigned to the expert (most often engineering or industry) or where the importance or audience was implicit (most often in medicine). Decisions 16 (which calculations and data analysis?) and 17 (how to represent and organize info?) were particularly unlikely to be mentioned in medicine, because test results are typically provided to doctors not in the form or raw data, but rather already analyzed by a lab or other medical technology professional, so the doctors we interviewed did not need to make decisions themselves about how to analyze or represent the data. Qualitatively, we also noticed some differences between disciplines in the patterns of connections between decisions. When the problem involved development of a tool or product, most commonly the case in engineering, the interview indicated relatively rapid cycles between goals (3), framing problem/potential solutions (8), and reflection on the potential solution (26), before going through the other decisions. Biology, the experimental science most represented in our interviews, had strong links between planning (15), deciding on appropriate conclusions (21), and reflection on the solution (26). This is likely because the respective problems involved complex systems with many unknowns, so careful planning was unusually important for achieving definitive conclusions. See Supplemental Text and Supplemental Table S2 for additional notes on decisions that were mentioned at lower frequency and decisions that were likely to be interconnected, regardless of field.

This work has created a framework of decisions to characterize problem solving in science and engineering. This framework is empirically based and captures the successful problem-solving process of all experts interviewed. We see that several dozen experts across many different fields all make a common set of decisions when solving authentic problems. There are flexible linkages between decisions that are guided by reflection in a continually evolving process. We have also identified the nature of the “predictive frameworks” that S&E experts consistently use in problem solving. These predictive frameworks reveal how these experts organize their disciplinary knowledge to facilitate making decisions. Many of the decisions we identified are reflected in previous work on expertise and scientific problem solving. This is particularly true for those listed in the planning and interpreting information categories ( Egan and Greeno, 1974 ). The priority experts give to framing and planning decisions over execution compared with novices has been noted repeatedly (e.g., Chi et al. , 1988 ). Expert reflection has been discussed, but less extensively ( Chase and Simon, 1973 ), and elements of the selection and implications and communication categories have been included in policy and standards reports (e.g., AAAS, 2011 ). Thus, our framework of decisions is consistent with previous work on scientific practices and expertise, but it is more complete, specific, empirically based, and generalizable across S&E disciplines.

A limitation of this study is the small number of experts we have in total, from each discipline, and from underrepresented groups (especially lack of female representation in engineering). The lack of randomized selection of participants may also bias the sample toward experts who experienced similar academic training (STEM disciplines at U.S. universities). This means we cannot prove that there are not some experts who follow other paths in problem solving. As with any scientific model, the framework described here should be subjected to further tests and modifications as necessary. However, to our knowledge, this is a far larger sample than used in any previous study of expert problem solving. Although we see a large amount of variation both within and across disciplines in the problem-solving process, this is reflected in how experts make decisions, not in what decisions they make. The very high degree of consistency in the decisions made across the entire sample strongly suggests that we are capturing elements that are common to all experts across science and engineering. A second limitation is that decisions often overlap and co-occur in an interview, so the division between decision items is often somewhat ambiguous and could be defined somewhat differently. As noted, a number of these decisions can be interconnected, and in some fields are nearly always interconnected.

The set of decisions we have observed provides a general framework for characterizing, analyzing, and teaching S&E problem solving. These decisions likely define much of the set of cognitive skills a student needs to practice and master to perform as a skilled practitioner in S&E. This framework of decisions provides a detailed and structured way to approach the teaching and measurement of problem solving at the undergraduate, graduate, and professional training levels. For teaching, we propose using the process of “deliberate practice” ( Ericsson, 2018 ) to help students learn problem solving. Deliberate practice of problem solving would involve effective scaffolding and concentrated practice, with feedback, at making the specific decisions identified here in relevant contexts. In a course, this would likely involve only an appropriately selected set of the decisions, but a good research mentor would ensure that trainees have opportunities to practice and receive feedback on their performance on each of these 29 decisions. Future work is needed to determine whether there are additional decisions that were not identified in experts but are productive components of student problem solving and should also be practiced. Measurements of individual problem-solving expertise based on our decision list and the associated discipline-specific predictive frameworks will allow a detailed measure of an individual's discipline-specific problem-solving strengths and weaknesses relative to an established expert. This can be used to provide targeted feedback to the learner, and when aggregated across students in a program, feedback on the educational quality of the program. We are currently working on the implementation of these ideas in a variety of instructional settings and will report on that work in future publications.

As discussed in the Introduction , typical science and engineering problems fail to engage students in the complete problem-solving process. By considering which of the 29 decisions are required to answer the problem, we can more clearly articulate why. The biology problem, for example, requires students to decide on a predictive framework and access the necessary content knowledge, and they need to decide which information they need to answer the problem. However, other decisions are not required or are already made for them, such as deciding on important features and identifying anomalies. We propose that different problems, designed specifically to require students to make sets of the problem-solving decisions from our framework, will provide more effective tools for measuring, practicing, and ultimately mastering the full S&E problem-solving process.

Our preliminary work with the use of such decision-based problems for assessing problem-solving expertise is showing great promise. For several different disciplines, we have given test subjects a relevant context, requiring content knowledge covered in courses they have taken, and asked them to make decisions from the list presented here. Skilled practitioners in the relevant discipline respond in very consistent ways, while students respond very differently and show large differences that typically correlate with their different educational experiences. What apparently matters is not what content they have seen, but rather what decisions they have had practice making. Our approach was to identify the decisions made by experts, this being the task that educators want students to master. Our data do not exclude the possibility that students engage in and/or should learn other decisions as a productive part of the problem-solving process while they are learning. Future work would seek to identify decisions made at intermediate levels during the development of expertise, to identify potential learning progressions that could be used to teach problem solving more efficiently. What we have seen is consistent with previous work identifying expert–novice differences but provides a much more extensive and detailed picture of a student's strengths and weaknesses and the impacts of particular educational experiences. We have also carried out preliminary development of courses that explicitly involve students making and justifying many of these decisions in relevant contexts, followed by feedback on their decisions. Preliminary results from these courses are also encouraging. Future work will involve the more extensive development and application of decision-based measurement and teaching of problem solving.

ACKNOWLEDGMENTS

We acknowledge the many experts who agreed to be interviewed for this work, M. Flynn for contributions on expertise in mechanical engineering, and Shima Salehi for useful discussions. This work was funded by the Howard Hughes Medical Institute through an HHMI Professor grant to C.E.W.

  • ABET . ( 2020 ). Criteria for accrediting engineering programs, 2020–2021 . Retrieved November 23, 2020, from www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2020-2021 Google Scholar
  • Alberts, B., Johnson, A., Lewis, J., Morgan, D., Raff, M., Roberts, K., & Walter, P. ( 2014 ). Control of gene expression . In Molecular Biology of the Cell (6th ed., pp. 436–437). New York: Garland Science. Retrieved November 12, 2020, from https://books.google.com/books?id=2xIwDwAAQBAJ Google Scholar
  • American Association for the Advancement of Science . ( 2011 ). Vision and change in undergraduate biology education: A call to action . Washington, DC. Retrieved February 12, 2021, from https://visionandchange.org/finalreport Google Scholar
  • Chi, M. T. H., Glaser, R., & Farr, M. J.( ( 1988 ). The nature of expertise . Hillsdale, NJ: Erlbaum. Google Scholar
  • Crandall, B., Klein, G. A., & Hoffman, R. R. ( 2006 ). Working minds: A practitioner's guide to cognitive task analysis . Cambridge, MA: MIT Press. Google Scholar
  • Egan, D. E., & Greeno, J. G. ( 1974 ). Theory of rule induction: Knowledge acquired in concept learning, serial pattern learning, and problem solving in L . In Gregg, W. (Ed.), Knowledge and cognition . Potomac, MD: Erlbaum. Google Scholar
  • Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. , (Eds.) ( 2006 ). The Cambridge handbook of expertise and expert performance . Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. A. , (Eds.) ( 2018 ). The Cambridge handbook of expertise and expert performance (2nd ed.). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Hatano, G., & Inagaki, K. ( 1986 ). Two courses of expertise . In Stevenson, H. W.Azuma, H.Hakuta, K. (Eds.), A series of books in psychology. Child development and education in Japan (pp. 262–272). New York: Freeman/Times Books/Henry Holt. Google Scholar
  • Klein, G. ( 2008 ). Naturalistic decision making . Human Factors , 50 (3), 456–460. Medline ,  Google Scholar
  • Kozma, R., Chin, E., Russell, J., & Marx, N. ( 2000 ). The roles of representations and tools in the chemistry laboratory and their implications for chemistry learning . Journal of the Learning Sciences , 9 (2), 105–143. Google Scholar
  • Lintern, G., Moon, B., Klein, G., & Hoffman, R. ( 2018 ). Eliciting and representing the knowledge of experts . In Ericcson, K. A.Hoffman, R. R.Kozbelt, A.Williams, A. M. (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed). (pp. 165–191). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Mosier, K., Fischer, U., Hoffman, R. R., & Klein, G. ( 2018 ). Expert professional judgments and “naturalistic decision making.” In Ericcson, K. A.Hoffman, R. R.Kozbelt, A.Williams, A. M. (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed). (pp. 453–475). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • National Research Council (NRC) . ( 2012a ). A framework for K–12 science education: Practices, crosscutting concepts, and core ideas . Washington, DC: National Academies Press. Google Scholar
  • Newell, A., & Simon, H. A. ( 1972 ). Human problem solving . Prentice-Hall. Google Scholar
  • Next Generation Science Standards Lead States . ( 2013 ). Next Generation Science Standards: For states, by states . Washington, DC: National Academies Press. Google Scholar
  • Polya, G. ( 1945 ). How to solve it: A new aspect of mathematical method . Princeton, NJ: Princeton University Press. Google Scholar
  • Quacquarelli Symonds . ( 2018 ). The global skills gap in the 21st century . Retrieved July 20, 2021, from www.qs.com/portfolio-items/the-global-skills-gap-in-the-21st-century/ Google Scholar
  • Salehi, S. ( 2018 ). Improving problem-solving through reflection (Doctoral dissertation) . Stanford Digital Repository, Stanford University. Retrieved February 18, 2021, from https://purl.stanford.edu/gc847wj5876 Google Scholar
  • Schoenfeld, A. H. ( 1985 ). Mathematical problem solving . Orlando, FL: Academic Press. Google Scholar
  • Wayne State University . ( n.d ). Mechanical engineering practice qualifying exams. Wayne State University Mechanical Engineering department . Retrieved February 23, 2021, from https://engineering.wayne.edu/me/exams/mechanics_of_materials_-_sample_pqe_problems_.pdf Google Scholar
  • Wineburg, S. ( 1998 ). Reading Abraham Lincoln: An expert/expert study in the interpretation of historical texts . Cognitive Science , 22 (3), 319–346. https://doi.org/10.1016/S0364-0213(99)80043-3 Google Scholar
  • Uncovering students’ problem-solving processes in game-based learning environments Computers & Education, Vol. 182
  • Student understanding of kinematics: a qualitative assessment 9 May 2022 | European Journal of Engineering Education, Vol. 5
  • What decisions do experts make when doing back-of-the-envelope calculations? 5 April 2022 | Physical Review Physics Education Research, Vol. 18, No. 1
  • Simulation led optical design assessments: Emphasizing practical and computational considerations in an upper division physics lecture course American Journal of Physics, Vol. 90, No. 4
  • Evidence-Based Principles for Worksheet Design The Physics Teacher, Vol. 59, No. 6

problem solving in science

Submitted: 2 December 2020 Revised: 11 June 2021 Accepted: 23 June 2021

© 2021 A. M. Price et al. CBE—Life Sciences Education © 2021 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

April 11, 2017

The Science of Problem-Solving

It turns out practices that might seem a little odd—like talking to yourself—can be pretty effective

By Ulrich Boser

problem solving in science

justgrimes Flickr   (CC BY-SA 2.0)   

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American

For Gurpreet Dhaliwal, just about every decision is a potential opportunity for effective problem solving. What route should he take into the office? Should Dhaliwal write his research paper today or next week? "We all do problem solving all day long," Dhaliwal told me.

An emergency medicine physician, Dhaliwal is one of the leaders in a field known as clinical reasoning , a type of applied problem solving. In recent years, Dhaliwal has mapped out a better way to solve thorny issues, and he believes that his problem solving approach can be applied to just about any field from knitting to chemistry.

For most of us, problem solving is one of those everyday activities that we do without much thought. But it turns out that many common approaches like brainstorming don’t have much research behind them. In contrast, practices that might seem a little odd—like talking to yourself —can be pretty effective.

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing . By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

I came across the new research on problem solving as part of my reporting on a book on the science of learning, and it was mathematician George Polya who first established the field, detailing a four-step approach to cracking enduring riddles.

None

Credit: Ulrich Boser

For Polya, the first phase of problem solving is “understanding.” In this phase, people should look to find the core idea behind a problem. “You have to understand the problem,” Polya argued. “What is the unknown? What are the data?”

The second phase is “devising a plan,” in which people map out how they’d address the problem. “Find the connection between the data and the unknown,” Polya counseled. 

The third phase of problem solving is “carrying out the plan.” This is a matter of doing—and vetting: “Can you prove that it is correct?”

The final phase for Polya is “looking back.” Or learning from the solution: People should "consolidate their knowledge.”

While Dhaliwal broadly follows this four-step method, he stresses that procedures are not enough. While a focused method is helpful, thorny issues don’t always fit nicely into categories.

This idea is clear in medicine. After all, symptoms rarely match up perfectly with an illness. Dizziness can be the signal of something serious—or a symptom of a lack of sleep. “What is tricky is to figure out what’s signal and what’s noise,” Dhaliwal told me.

In this regard, Dhaliwal argues that what’s at the heart of effective problem solving is making a robust connection between the problem and the solution. "Problem solving is part craft and part science, " Dhaliwal says, a type of "matching exercise. "

To get a sense of Dhaliwal’s approach, I once watched him solve a perplexing case. It was at a medical conference, and Dhaliwal stood at a dais as a fellow doctor explained the case: Basically, a man came into ER one day—let’s call him Andreas—and he spat up blood, could not breath very well, and had a slight fever.

At the start of the process, Dhaliwal recommends developing a one-sentence description of the problem. "It’s like a good Google search,” he said. “You want a concise summary,” and in this case, it was: Sixty-eight-year-old man with hemoptysis, or coughing up blood.

Dhaliwal also makes a few early generalizations, and he thought that Andreas might have a lung infection or an autoimmune problem. There wasn’t enough data to offer any sort of reliable conclusion, though, and really Dhaliwal was just gathering information.

Then came an x-ray, an HIV test, and as each bit of evidence rolled in, Dhaliwal detailed various scenarios, assembling the data in different ways. “To diagnosis, sometimes we are trying to lump, and sometimes trying to split,” he said.

Dhaliwal’s eyes flashed, for instance, when it became apparent that Andreas had worked in a fertilizer factory. It meant that Andreas was exposed to noxious chemicals, and for a while, it seemed like a toxic substance was at the root of Andreas’s illness.

Dhaliwal had a few strong pieces of evidence that supported the theory including some odd-looking red blood cells. But Dhaliwal wasn't comfortable with the level of proof. “I'm like an attorney presenting in a court of law,” Dhaliwal told me. “I want evidence.”

As the case progressed, Dhaliwal came across a new detail, and there was a growth in the heart. This shifted the diagnosis, knocking out the toxic chemical angle because it doesn't spark tumors.

Eventually, Dhaliwal uncovered a robust pattern, diagnosing Andreas with a cardiac angiosarcoma, or heart cancer. The pattern best explained the problem. “Diagnosing often comes down the ability to pull things together,” he said.

Dhaliwal doesn’t always get the right answer. But at the same time, it was clear that a more focused approach to problem solving can make a clear difference. If we’re more aware of how we approach an issue, we are better able to resolve the issue.

This idea explains why people who talk to themselves are more effective at problem solving. Self-queries—like is there enough evidence? —help us think through an issue.

As for Dhaliwal, he had yet another problem to solve after his diagnosis of Andreas: Should he take an Uber to the airport? Or should he grab a cab? After a little thought, Dhaliwal decided on an Uber. It was likely to be cheaper and equally comfortable. In other words, it was the solution that best matched the problem.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • CBE Life Sci Educ
  • v.8(3); Fall 2009

Teaching Creativity and Inventive Problem Solving in Science

Robert l. dehaan.

Division of Educational Studies, Emory University, Atlanta, GA 30322

Engaging learners in the excitement of science, helping them discover the value of evidence-based reasoning and higher-order cognitive skills, and teaching them to become creative problem solvers have long been goals of science education reformers. But the means to achieve these goals, especially methods to promote creative thinking in scientific problem solving, have not become widely known or used. In this essay, I review the evidence that creativity is not a single hard-to-measure property. The creative process can be explained by reference to increasingly well-understood cognitive skills such as cognitive flexibility and inhibitory control that are widely distributed in the population. I explore the relationship between creativity and the higher-order cognitive skills, review assessment methods, and describe several instructional strategies for enhancing creative problem solving in the college classroom. Evidence suggests that instruction to support the development of creativity requires inquiry-based teaching that includes explicit strategies to promote cognitive flexibility. Students need to be repeatedly reminded and shown how to be creative, to integrate material across subject areas, to question their own assumptions, and to imagine other viewpoints and possibilities. Further research is required to determine whether college students' learning will be enhanced by these measures.

INTRODUCTION

Dr. Dunne paces in front of his section of first-year college students, today not as their Bio 110 teacher but in the role of facilitator in their monthly “invention session.” For this meeting, the topic is stem cell therapy in heart disease. Members of each team of four students have primed themselves on the topic by reading selected articles from accessible sources such as Science, Nature, and Scientific American, and searching the World Wide Web, triangulating for up-to-date, accurate, background information. Each team knows that their first goal is to define a set of problems or limitations to overcome within the topic and to begin to think of possible solutions. Dr. Dunne starts the conversation by reminding the group of the few ground rules: one speaker at a time, listen carefully and have respect for others' ideas, question your own and others' assumptions, focus on alternative paths or solutions, maintain an atmosphere of collaboration and mutual support. He then sparks the discussion by asking one of the teams to describe a problem in need of solution.

Science in the United States is widely credited as a major source of discovery and economic development. According to the 2005 TAP Report produced by a prominent group of corporate leaders, “To maintain our country's competitiveness in the twenty-first century, we must cultivate the skilled scientists and engineers needed to create tomorrow's innovations.” ( www.tap2015.org/about/TAP_report2.pdf ). A panel of scientists, engineers, educators, and policy makers convened by the National Research Council (NRC) concurred with this view, reporting that the vitality of the nation “is derived in large part from the productivity of well-trained people and the steady stream of scientific and technical innovations they produce” ( NRC, 2007 ).

For many decades, science education reformers have promoted the idea that learners should be engaged in the excitement of science; they should be helped to discover the value of evidence-based reasoning and higher-order cognitive skills, and be taught to become innovative problem solvers (for reviews, see DeHaan, 2005 ; Hake, 2005 ; Nelson, 2008 ; Perkins and Wieman, 2008 ). But the means to achieve these goals, especially methods to promote creative thinking in scientific problem solving, are not widely known or used. An invention session such as that led by the fictional Dr. Dunne, described above, may seem fanciful as a means of teaching students to think about science as something more than a body of facts and terms to memorize. In recent years, however, models for promoting creative problem solving were developed for classroom use, as detailed by Treffinger and Isaksen (2005) , and such techniques are often used in the real world of high technology. To promote imaginative thinking, the advertising executive Alex F. Osborn invented brainstorming ( Osborn, 1948 , 1979 ), a technique that has since been successful in stimulating inventiveness among engineers and scientists. Could such strategies be transferred to a class for college students? Could they serve as a supplement to a high-quality, scientific teaching curriculum that helps students learn the facts and conceptual frameworks of science and make progress along the novice–expert continuum? Could brainstorming or other instructional strategies that are specifically designed to promote creativity teach students to be more adaptive in their growing expertise, more innovative in their problem-solving abilities? To begin to answer those questions, we first need to understand what is meant by “creativity.”

What Is Creativity? Big-C versus Mini-C Creativity

How to define creativity is an age-old question. Justice Potter Stewart's famous dictum regarding obscenity “I know it when I see it” has also long been an accepted test of creativity. But this is not an adequate criterion for developing an instructional approach. A scientist colleague of mine recently noted that “Many of us [in the scientific community] rarely give the creative process a second thought, imagining one either ‘has it’ or doesn't.” We often think of inventiveness or creativity in scientific fields as the kind of gift associated with a Michelangelo or Einstein. This is what Kaufman and Beghetto (2008) call big-C creativity, borrowing the term that earlier workers applied to the talents of experts in various fields who were identified as particularly creative by their expert colleagues ( MacKinnon, 1978 ). In this sense, creativity is seen as the ability of individuals to generate new ideas that contribute substantially to an intellectual domain. Howard Gardner defined such a creative person as one who “regularly solves problems, fashions products, or defines new questions in a domain in a way that is initially considered novel but that ultimately comes to be accepted in a particular cultural setting” ( Gardner, 1993 , p. 35).

But there is another level of inventiveness termed by various authors as “little-c” ( Craft, 2000 ) or “mini-c” ( Kaufman and Beghetto, 2008 ) creativity that is widespread among all populations. This would be consistent with the workplace definition of creativity offered by Amabile and her coworkers: “coming up with fresh ideas for changing products, services and processes so as to better achieve the organization's goals” ( Amabile et al. , 2005 ). Mini-c creativity is based on what Craft calls “possibility thinking” ( Craft, 2000 , pp. 3–4), as experienced when a worker suddenly has the insight to visualize a new, improved way to accomplish a task; it is represented by the “aha” moment when a student first sees two previously disparate concepts or facts in a new relationship, an example of what Arthur Koestler identified as bisociation: “perceiving a situation or event in two habitually incompatible associative contexts” ( Koestler, 1964 , p. 95).

In this essay, I maintain that mini-c creativity is not a mysterious, innate endowment of rare individuals. Instead, I argue that creative thinking is a multicomponent process, mediated through social interactions, that can be explained by reference to increasingly well-understood mental abilities such as cognitive flexibility and cognitive control that are widely distributed in the population. Moreover, I explore some of the recent research evidence (though with no effort at a comprehensive literature review) showing that these mental abilities are teachable; like other higher-order cognitive skills (HOCS), they can be enhanced by explicit instruction.

Creativity Is a Multicomponent Process

Efforts to define creativity in psychological terms go back to J. P. Guilford ( Guilford, 1950 ) and E. P. Torrance ( Torrance, 1974 ), both of whom recognized that underlying the construct were other cognitive variables such as ideational fluency, originality of ideas, and sensitivity to missing elements. Many authors since then have extended the argument that a creative act is not a singular event but a process, an interplay among several interactive cognitive and affective elements. In this view, the creative act has two phases, a generative and an exploratory or evaluative phase ( Finke et al. , 1996 ). During the generative process, the creative mind pictures a set of novel mental models as potential solutions to a problem. In the exploratory phase, we evaluate the multiple options and select the best one. Early scholars of creativity, such as J. P. Guilford, characterized the two phases as divergent thinking and convergent thinking ( Guilford, 1950 ). Guilford defined divergent thinking as the ability to produce a broad range of associations to a given stimulus or to arrive at many solutions to a problem (for overviews of the field from different perspectives, see Amabile, 1996 ; Banaji et al. , 2006 ; Sawyer, 2006 ). In neurocognitive terms, divergent thinking is referred to as associative richness ( Gabora, 2002 ; Simonton, 2004 ), which is often measured experimentally by comparing the number of words that an individual generates from memory in response to stimulus words on a word association test. In contrast, convergent thinking refers to the capacity to quickly focus on the one best solution to a problem.

The idea that there are two stages to the creative process is consistent with results from cognition research indicating that there are two distinct modes of thought, associative and analytical ( Neisser, 1963 ; Sloman, 1996 ). In the associative mode, thinking is defocused, suggestive, and intuitive, revealing remote or subtle connections between items that may be correlated, or may not, and are usually not causally related ( Burton, 2008 ). In the analytical mode, thought is focused and evaluative, more conducive to analyzing relationships of cause and effect (for a review of other cognitive aspects of creativity, see Runco, 2004 ). Science educators associate the analytical mode with the upper levels (analysis, synthesis, and evaluation) of Bloom's taxonomy (e.g., Crowe et al. , 2008 ), or with “critical thinking,” the process that underlies the “purposeful, self-regulatory judgment that drives problem-solving and decision-making” ( Quitadamo et al. , 2008 , p. 328). These modes of thinking are under cognitive control through the executive functions of the brain. The core executive functions, which are thought to underlie all planning, problem solving, and reasoning, are defined ( Blair and Razza, 2007 ) as working memory control (mentally holding and retrieving information), cognitive flexibility (considering multiple ideas and seeing different perspectives), and inhibitory control (resisting several thoughts or actions to focus on one). Readers wishing to delve further into the neuroscience of the creative process can refer to the cerebrocerebellar theory of creativity ( Vandervert et al. , 2007 ) in which these mental activities are described neurophysiologically as arising through interactions among different parts of the brain.

The main point from all of these works is that creativity is not some single hard-to-measure property or act. There is ample evidence that the creative process requires both divergent and convergent thinking and that it can be explained by reference to increasingly well-understood underlying mental abilities ( Haring-Smith, 2006 ; Kim, 2006 ; Sawyer, 2006 ; Kaufman and Sternberg, 2007 ) and cognitive processes ( Simonton, 2004 ; Diamond et al. , 2007 ; Vandervert et al. , 2007 ).

Creativity Is Widely Distributed and Occurs in a Social Context

Although it is understandable to speak of an aha moment as a creative act by the person who experiences it, authorities in the field have long recognized (e.g., Simonton, 1975 ) that creative thinking is not so much an individual trait but rather a social phenomenon involving interactions among people within their specific group or cultural settings. “Creativity isn't just a property of individuals, it is also a property of social groups” ( Sawyer, 2006 , p. 305). Indeed, Osborn introduced his brainstorming method because he was convinced that group creativity is always superior to individual creativity. He drew evidence for this conclusion from activities that demand collaborative output, for example, the improvisations of a jazz ensemble. Although each musician is individually creative during a performance, the novelty and inventiveness of each performer's playing is clearly influenced, and often enhanced, by “social and interactional processes” among the musicians ( Sawyer, 2006 , p. 120). Recently, Brophy (2006) offered evidence that for problem solving, the situation may be more nuanced. He confirmed that groups of interacting individuals were better at solving complex, multipart problems than single individuals. However, when dealing with certain kinds of single-issue problems, individual problem solvers produced a greater number of solutions than interacting groups, and those solutions were judged to be more original and useful.

Consistent with the findings of Brophy (2006) , many scholars acknowledge that creative discoveries in the real world such as solving the problems of cutting-edge science—which are usually complex and multipart—are influenced or even stimulated by social interaction among experts. The common image of the lone scientist in the laboratory experiencing a flash of creative inspiration is probably a myth from earlier days. As a case in point, the science historian Mara Beller analyzed the social processes that underlay some of the major discoveries of early twentieth-century quantum physics. Close examination of successive drafts of publications by members of the Copenhagen group revealed a remarkable degree of influence and collaboration among 10 or more colleagues, although many of these papers were published under the name of a single author ( Beller, 1999 ). Sociologists Bruno Latour and Steve Woolgar's study ( Latour and Woolgar, 1986 ) of a neuroendocrinology laboratory at the Salk Institute for Biological Studies make the related point that social interactions among the participating scientists determined to a remarkable degree what discoveries were made and how they were interpreted. In the laboratory, researchers studied the chemical structure of substances released by the brain. By analysis of the Salk scientists' verbalizations of concepts, theories, formulas, and results of their investigations, Latour and Woolgar showed that the structures and interpretations that were agreed upon, that is, the discoveries announced by the laboratory, were mediated by social interactions and power relationships among members of the laboratory group. By studying the discovery process in other fields of the natural sciences, sociologists and anthropologists have provided more cases that further illustrate how social and cultural dimensions affect scientific insights (for a thoughtful review, see Knorr Cetina, 1995 ).

In sum, when an individual experiences an aha moment that feels like a singular creative act, it may rather have resulted from a multicomponent process, under the influence of group interactions and social context. The process that led up to what may be sensed as a sudden insight will probably have included at least three diverse, but testable elements: 1) divergent thinking, including ideational fluency or cognitive flexibility, which is the cognitive executive function that underlies the ability to visualize and accept many ideas related to a problem; 2) convergent thinking or the application of inhibitory control to focus and mentally evaluate ideas; and 3) analogical thinking, the ability to understand a novel idea in terms of one that is already familiar.

LITERATURE REVIEW

What do we know about how to teach creativity.

The possibility of teaching for creative problem solving gained credence in the 1960s with the studies of Jerome Bruner, who argued that children should be encouraged to “treat a task as a problem for which one invents an answer, rather than finding one out there in a book or on the blackboard” ( Bruner, 1965 , pp. 1013–1014). Since that time, educators and psychologists have devised programs of instruction designed to promote creativity and inventiveness in virtually every student population: pre–K, elementary, high school, and college, as well as in disadvantaged students, athletes, and students in a variety of specific disciplines (for review, see Scott et al. , 2004 ). Smith (1998) identified 172 instructional approaches that have been applied at one time or another to develop divergent thinking skills.

Some of the most convincing evidence that elements of creativity can be enhanced by instruction comes from work with young children. Bodrova and Leong (2001) developed the Tools of the Mind (Tools) curriculum to improve all of the three core mental executive functions involved in creative problem solving: cognitive flexibility, working memory, and inhibitory control. In a year-long randomized study of 5-yr-olds from low-income families in 21 preschool classrooms, half of the teachers applied the districts' balanced literacy curriculum (literacy), whereas the experimenters trained the other half to teach the same academic content by using the Tools curriculum ( Diamond et al. , 2007 ). At the end of the year, when the children were tested with a battery of neurocognitive tests including a test for cognitive flexibility ( Durston et al. , 2003 ; Davidson et al. , 2006 ), those exposed to the Tools curriculum outperformed the literacy children by as much as 25% ( Diamond et al. , 2007 ). Although the Tools curriculum and literacy program were similar in academic content and in many other ways, they differed primarily in that Tools teachers spent 80% of their time explicitly reminding the children to think of alternative ways to solve a problem and building their executive function skills.

Teaching older students to be innovative also demands instruction that explicitly promotes creativity but is rigorously content-rich as well. A large body of research on the differences between novice and expert cognition indicates that creative thinking requires at least a minimal level of expertise and fluency within a knowledge domain ( Bransford et al. , 2000 ; Crawford and Brophy, 2006 ). What distinguishes experts from novices, in addition to their deeper knowledge of the subject, is their recognition of patterns in information, their ability to see relationships among disparate facts and concepts, and their capacity for organizing content into conceptual frameworks or schemata ( Bransford et al. , 2000 ; Sawyer, 2005 ).

Such expertise is often lacking in the traditional classroom. For students attempting to grapple with new subject matter, many kinds of problems that are presented in high school or college courses or that arise in the real world can be solved merely by applying newly learned algorithms or procedural knowledge. With practice, problem solving of this kind can become routine and is often considered to represent mastery of a subject, producing what Sternberg refers to as “pseudoexperts” ( Sternberg, 2003 ). But beyond such routine use of content knowledge the instructor's goal must be to produce students who have gained the HOCS needed to apply, analyze, synthesize, and evaluate knowledge ( Crowe et al. , 2008 ). The aim is to produce students who know enough about a field to grasp meaningful patterns of information, who can readily retrieve relevant knowledge from memory, and who can apply such knowledge effectively to novel problems. This condition is referred to as adaptive expertise ( Hatano and Ouro, 2003 ; Schwartz et al. , 2005 ). Instead of applying already mastered procedures, adaptive experts are able to draw on their knowledge to invent or adapt strategies for solving unique or novel problems within a knowledge domain. They are also able, ideally, to transfer conceptual frameworks and schemata from one domain to another (e.g., Schwartz et al. , 2005 ). Such flexible, innovative application of knowledge is what results in inventive or creative solutions to problems ( Crawford and Brophy, 2006 ; Crawford, 2007 ).

Promoting Creative Problem Solving in the College Classroom

In most college courses, instructors teach science primarily through lectures and textbooks that are dominated by facts and algorithmic processing rather than by concepts, principles, and evidence-based ways of thinking. This is despite ample evidence that many students gain little new knowledge from traditional lectures ( Hrepic et al. , 2007 ). Moreover, it is well documented that these methods engender passive learning rather than active engagement, boredom instead of intellectual excitement, and linear thinking rather than cognitive flexibility (e.g., Halpern and Hakel, 2003 ; Nelson, 2008 ; Perkins and Wieman, 2008 ). Cognitive flexibility, as noted, is one of the three core mental executive functions involved in creative problem solving ( Ausubel, 1963 , 2000 ). The capacity to apply ideas creatively in new contexts, referred to as the ability to “transfer” knowledge (see Mestre, 2005 ), requires that learners have opportunities to actively develop their own representations of information to convert it to a usable form. Especially when a knowledge domain is complex and fraught with ill-structured information, as in a typical introductory college biology course, instruction that emphasizes active-learning strategies is demonstrably more effective than traditional linear teaching in reducing failure rates and in promoting learning and transfer (e.g., Freeman et al. , 2007 ). Furthermore, there is already some evidence that inclusion of creativity training as part of a college curriculum can have positive effects. Hunsaker (2005) has reviewed a number of such studies. He cites work by McGregor (2001) , for example, showing that various creativity training programs including brainstorming and creative problem solving increase student scores on tests of creative-thinking abilities.

What explicit instructional strategies are available to promote creative problem solving? In addition to brainstorming, McFadzean (2002) discusses several “paradigm-stretching” techniques that can encourage creative ideas. One method, known as heuristic ideation, encourages participants to force together two unrelated concepts to discover novel relationships, a modern version of Koestler's bisociation ( Koestler, 1964 ). On the website of the Center for Development and Learning, Robert Sternberg and Wendy M. Williams offer 24 “tips” for teachers wishing to promote creativity in their students ( Sternberg and Williams, 1998 ). Among them, the following techniques might apply to a science classroom:

  • Model creativity—students develop creativity when instructors model creative thinking and inventiveness.
  • Repeatedly encourage idea generation—students need to be reminded to generate their own ideas and solutions in an environment free of criticism.
  • Cross-fertilize ideas—where possible, avoid teaching in subject-area boxes: a math box, a social studies box, etc; students' creative ideas and insights often result from learning to integrate material across subject areas.
  • Build self-efficacy—all students have the capacity to create and to experience the joy of having new ideas, but they must be helped to believe in their own capacity to be creative.
  • Constantly question assumptions—make questioning a part of the daily classroom exchange; it is more important for students to learn what questions to ask and how to ask them than to learn the answers.
  • Imagine other viewpoints—students broaden their perspectives by learning to reflect upon ideas and concepts from different points of view.

Although these strategies are all consistent with the knowledge about creativity that I have reviewed above, evidence from well-designed investigations to warrant the claim that they can enhance measurable indicators of creativity in college students is only recently beginning to materialize. If creativity most often occurs in “a mental state where attention is defocused, thought is associative, and a large number of mental representations are simultaneously activated” ( Martindale, 1999 , p. 149), the question arises whether instructional strategies designed to enhance the HOCS also foster such a mental state? Do valid tests exist to show that creative problem solving can be enhanced by such instruction?

How Is Creativity Related to Critical Thinking and the Higher-Order Cognitive Skills?

It is not uncommon to associate creativity and ingenuity with scientific reasoning ( Sawyer, 2005 ; 2006 ). When instructors apply scientific teaching strategies ( Handelsman et al. , 2004 ; DeHaan, 2005 ; Wood, 2009 ) by using instructional methods based on learning research, according to Ebert-May and Hodder ( 2008 ), “we see students actively engaged in the thinking, creativity, rigor, and experimentation we associate with the practice of science—in much the same way we see students learn in the field and in laboratories” (p. 2). Perkins and Wieman (2008) note that “To be successful innovators in science and engineering, students must develop a deep conceptual understanding of the underlying science ideas, an ability to apply these ideas and concepts broadly in different contexts, and a vision to see their relevance and usefulness in real-world applications … An innovator is able to perceive and realize potential connections and opportunities better than others” (pp. 181–182). The results of Scott et al. (2004) suggest that nontraditional courses in science that are based on constructivist principles and that use strategies of scientific teaching to promote the HOCS and enhance content mastery and dexterity in scientific thinking ( Handelsman et al. , 2007 ; Nelson, 2008 ) also should be effective in promoting creativity and cognitive flexibility if students are explicitly guided to learn these skills.

Creativity is an essential element of problem solving ( Mumford et al. , 1991 ; Runco, 2004 ) and of critical thinking ( Abrami et al. , 2008 ). As such, it is common to think of applications of creativity such as inventiveness and ingenuity among the HOCS as defined in Bloom's taxonomy ( Crowe et al. , 2008 ). Thus, it should come as no surprise that creativity, like other elements of the HOCS, can be taught most effectively through inquiry-based instruction, informed by constructivist theory ( Ausubel, 1963 , 2000 ; Duch et al. , 2001 ; Nelson, 2008 ). In a survey of 103 instructors who taught college courses that included creativity instruction, Bull et al. (1995) asked respondents to rate the importance of various course characteristics for enhancing student creativity. Items ranking high on the list were: providing a social climate in which students feels safe, an open classroom environment that promotes tolerance for ambiguity and independence, the use of humor, metaphorical thinking, and problem defining. Many of the responses emphasized the same strategies as those advanced to promote creative problem solving (e.g., Mumford et al. , 1991 ; McFadzean, 2002 ; Treffinger and Isaksen, 2005 ) and critical thinking ( Abrami et al. , 2008 ).

In a careful meta-analysis, Scott et al. (2004) examined 70 instructional interventions designed to enhance and measure creative performance. The results were striking. Courses that stressed techniques such as critical thinking, convergent thinking, and constraint identification produced the largest positive effect sizes. More open techniques that provided less guidance in strategic approaches had less impact on the instructional outcomes. A striking finding was the effectiveness of being explicit; approaches that clearly informed students about the nature of creativity and offered clear strategies for creative thinking were most effective. Approaches such as social modeling, cooperative learning, and case-based (project-based) techniques that required the application of newly acquired knowledge were found to be positively correlated to high effect sizes. The most clear-cut result to emerge from the Scott et al. (2004) study was simply to confirm that creativity instruction can be highly successful in enhancing divergent thinking, problem solving, and imaginative performance. Most importantly, of the various cognitive processes examined, those linked to the generation of new ideas such as problem finding, conceptual combination, and idea generation showed the greatest improvement. The success of creativity instruction, the authors concluded, can be attributed to “developing and providing guidance concerning the application of requisite cognitive capacities … [and] a set of heuristics or strategies for working with already available knowledge” (p. 382).

Many of the scientific teaching practices that have been shown by research to foster content mastery and HOCS, and that are coming more widely into use, also would be consistent with promoting creativity. Wood (2009) has recently reviewed examples of such practices and how to apply them. These include relatively small modifications of the traditional lecture to engender more active learning, such as the use of concept tests and peer instruction ( Mazur, 1996 ), Just-in-Time-Teaching techniques ( Novak et al. , 1999 ), and student response systems known as “clickers” ( Knight and Wood, 2005 ; Crossgrove and Curran, 2008 ), all designed to allow the instructor to frequently and effortlessly elicit and respond to student thinking. Other strategies can transform the lecture hall into a workshop or studio classroom ( Gaffney et al. , 2008 ) where the teaching curriculum may emphasize problem-based (also known as project-based or case-based) learning strategies ( Duch et al. , 2001 ; Ebert-May and Hodder, 2008 ) or “community-based inquiry” in which students engage in research that enhances their critical-thinking skills ( Quitadamo et al. , 2008 ).

Another important approach that could readily subserve explicit creativity instruction is the use of computer-based interactive simulations, or “sims” ( Perkins and Wieman, 2008 ) to facilitate inquiry learning and effective, easy self-assessment. An example in the biological sciences would be Neurons in Action ( http://neuronsinaction.com/home/main ). In such educational environments, students gain conceptual understanding of scientific ideas through interactive engagement with materials (real or virtual), with each other, and with instructors. Following the tenets of scientific teaching, students are encouraged to pose and answer their own questions, to make sense of the materials, and to construct their own understanding. The question I pose here is whether an additional focus—guiding students to meet these challenges in a context that explicitly promotes creativity—would enhance learning and advance students' progress toward adaptive expertise?

Assessment of Creativity

To teach creativity, there must be measurable indicators to judge how much students have gained from instruction. Educational programs intended to teach creativity became popular after the Torrance Tests of Creative Thinking (TTCT) was introduced in the 1960s ( Torrance, 1974 ). But it soon became apparent that there were major problems in devising tests for creativity, both because of the difficulty of defining the construct and because of the number and complexity of elements that underlie it. Tests of intelligence and other personality characteristics on creative individuals revealed a host of related traits such as verbal fluency, metaphorical thinking, flexible decision making, tolerance of ambiguity, willingness to take risks, autonomy, divergent thinking, self-confidence, problem finding, ideational fluency, and belief in oneself as being “creative” ( Barron and Harrington, 1981 ; Tardif and Sternberg, 1988 ; Runco and Nemiro, 1994 ; Snyder et al. , 2004 ). Many of these traits have been the focus of extensive research of recent decades, but, as noted above, creativity is not defined by any one trait; there is now reason to believe that it is the interplay among the cognitive and affective processes that underlie inventiveness and the ability to find novel solutions to a problem.

Although the early creativity researchers recognized that assessing divergent thinking as a measure of creativity required tests for other underlying capacities ( Guilford, 1950 ; Torrance, 1974 ), these workers and their colleagues nonetheless believed that a high score for divergent thinking alone would correlate with real creative output. Unfortunately, no such correlation was shown ( Barron and Harrington, 1981 ). Results produced by many of the instruments initially designed to measure various aspects of creative thinking proved to be highly dependent on the test itself. A review of several hundred early studies showed that an individual's creativity score could be affected by simple test variables, for example, how the verbal pretest instructions were worded ( Barron and Harrington, 1981 , pp. 442–443). Most scholars now agree that divergent thinking, as originally defined, was not an adequate measure of creativity. The process of creative thinking requires a complex combination of elements that include cognitive flexibility, memory control, inhibitory control, and analogical thinking, enabling the mind to free-range and analogize, as well as to focus and test.

More recently, numerous psychometric measures have been developed and empirically tested (see Plucker and Renzulli, 1999 ) that allow more reliable and valid assessment of specific aspects of creativity. For example, the creativity quotient devised by Snyder et al. (2004) tests the ability of individuals to link different ideas and different categories of ideas into a novel synthesis. The Wallach–Kogan creativity test ( Wallach and Kogan, 1965 ) explores the uniqueness of ideas associated with a stimulus. For a more complete list and discussion, see the Creativity Tests website ( www.indiana.edu/∼bobweb/Handout/cretv_6.html ).

The most widely used measure of creativity is the TTCT, which has been modified four times since its original version in 1966 to take into account subsequent research. The TTCT-Verbal and the TTCT-Figural are two versions ( Torrance, 1998 ; see http://ststesting.com/2005giftttct.html ). The TTCT-Verbal consists of five tasks; the “stimulus” for each task is a picture to which the test-taker responds briefly in writing. A sample task that can be viewed from the TTCT Demonstrator website asks, “Suppose that people could transport themselves from place to place with just a wink of the eye or a twitch of the nose. What might be some things that would happen as a result? You have 3 min.” ( www.indiana.edu/∼bobweb/Handout/d3.ttct.htm ).

In the TTCT-Figural, participants are asked to construct a picture from a stimulus in the form of a partial line drawing given on the test sheet (see example below; Figure 1 ). Specific instructions are to “Add lines to the incomplete figures below to make pictures out of them. Try to tell complete stories with your pictures. Give your pictures titles. You have 3 min.” In the introductory materials, test-takers are urged to “… think of a picture or object that no one else will think of. Try to make it tell as complete and as interesting a story as you can …” ( Torrance et al. , 2008 , p. 2).

An external file that holds a picture, illustration, etc.
Object name is cbe0030901980001.jpg

Sample figural test item from the TTCT Demonstrator website ( www.indiana.edu/∼bobweb/Handout/d3.ttct.htm ).

How would an instructor in a biology course judge the creativity of students' responses to such an item? To assist in this task, the TTCT has scoring and norming guides ( Torrance, 1998 ; Torrance et al. , 2008 ) with numerous samples and responses representing different levels of creativity. The guides show sample evaluations based upon specific indicators such as fluency, originality, elaboration (or complexity), unusual visualization, extending or breaking boundaries, humor, and imagery. These examples are easy to use and provide a high degree of validity and generalizability to the tests. The TTCT has been more intensively researched and analyzed than any other creativity instrument, and the norming samples have longitudinal validations and high predictive validity over a wide age range. In addition to global creativity scores, the TTCT is designed to provide outcome measures in various domains and thematic areas to allow for more insightful analysis ( Kaufman and Baer, 2006 ). Kim (2006) has examined the characteristics of the TTCT, including norms, reliability, and validity, and concludes that the test is an accurate measure of creativity. When properly used, it has been shown to be fair in terms of gender, race, community status, and language background. According to Kim (2006) and other authorities in the field ( McIntyre et al. , 2003 ; Scott et al. , 2004 ), Torrance's research and the development of the TTCT have provided groundwork for the idea that creative levels can be measured and then increased through instruction and practice.

SCIENTIFIC TEACHING TO PROMOTE CREATIVITY

How could creativity instruction be integrated into scientific teaching.

Guidelines for designing specific course units that emphasize HOCS by using strategies of scientific teaching are now available from the current literature. As an example, Karen Cloud-Hansen and colleagues ( Cloud-Hansen et al. , 2008 ) describe a course titled, “Ciprofloxacin Resistance in Neisseria gonorrhoeae .” They developed this undergraduate seminar to introduce college freshmen to important concepts in biology within a real-world context and to increase their content knowledge and critical-thinking skills. The centerpiece of the unit is a case study in which teams of students are challenged to take the role of a director of a local public health clinic. One of the county commissioners overseeing the clinic is an epidemiologist who wants to know “how you plan to address the emergence of ciprofloxacin resistance in Neisseria gonorrhoeae ” (p. 304). State budget cuts limit availability of expensive antibiotics and some laboratory tests to patients. Student teams are challenged to 1) develop a plan to address the medical, economic, and political questions such a clinic director would face in dealing with ciprofloxacin-resistant N. gonorrhoeae ; 2) provide scientific data to support their conclusions; and 3) describe their clinic plan in a one- to two-page referenced written report.

Throughout the 3-wk unit, in accordance with the principles of problem-based instruction ( Duch et al. , 2001 ), course instructors encourage students to seek, interpret, and synthesize their own information to the extent possible. Students have access to a variety of instructional formats, and active-learning experiences are incorporated throughout the unit. These activities are interspersed among minilectures and give the students opportunities to apply new information to their existing base of knowledge. The active-learning activities emphasize the key concepts of the minilectures and directly confront common misconceptions about antibiotic resistance, gene expression, and evolution. Weekly classes include question/answer/discussion sessions to address student misconceptions and 20-min minilectures on such topics as antibiotic resistance, evolution, and the central dogma of molecular biology. Students gather information about antibiotic resistance in N. gonorrhoeae , epidemiology of gonorrhea, and treatment options for the disease, and each team is expected to formulate a plan to address ciprofloxacin resistance in N. gonorrhoeae .

In this project, the authors assessed student gains in terms of content knowledge regarding topics covered such as the role of evolution in antibiotic resistance, mechanisms of gene expression, and the role of oncogenes in human disease. They also measured HOCS as gains in problem solving, according to a rubric that assessed self-reported abilities to communicate ideas logically, solve difficult problems about microbiology, propose hypotheses, analyze data, and draw conclusions. Comparing the pre- and posttests, students reported significant learning of scientific content. Among the thinking skill categories, students demonstrated measurable gains in their ability to solve problems about microbiology but the unit seemed to have little impact on their more general perceived problem-solving skills ( Cloud-Hansen et al. , 2008 ).

What would such a class look like with the addition of explicit creativity-promoting approaches? Would the gains in problem-solving abilities have been greater if during the minilectures and other activities, students had been introduced explicitly to elements of creative thinking from the Sternberg and Williams (1998) list described above? Would the students have reported greater gains if their instructors had encouraged idea generation with weekly brainstorming sessions; if they had reminded students to cross-fertilize ideas by integrating material across subject areas; built self-efficacy by helping students believe in their own capacity to be creative; helped students question their own assumptions; and encouraged students to imagine other viewpoints and possibilities? Of most relevance, could the authors have been more explicit in assessing the originality of the student plans? In an experiment that required college students to develop plans of a different, but comparable, type, Osborn and Mumford (2006) created an originality rubric ( Figure 2 ) that could apply equally to assist instructors in judging student plans in any course. With such modifications, would student gains in problem-solving abilities or other HOCS have been greater? Would their plans have been measurably more imaginative?

An external file that holds a picture, illustration, etc.
Object name is cbe0030901980002.jpg

Originality rubric (adapted from Osburn and Mumford, 2006 , p. 183).

Answers to these questions can only be obtained when a course like that described by Cloud-Hansen et al. (2008) is taught with explicit instruction in creativity of the type I described above. But, such answers could be based upon more than subjective impressions of the course instructors. For example, students could be pretested with items from the TTCT-Verbal or TTCT-Figural like those shown. If, during minilectures and at every contact with instructors, students were repeatedly reminded and shown how to be as creative as possible, to integrate material across subject areas, to question their own assumptions and imagine other viewpoints and possibilities, would their scores on TTCT posttest items improve? Would the plans they formulated to address ciprofloxacin resistance become more imaginative?

Recall that in their meta-analysis, Scott et al. (2004) found that explicitly informing students about the nature of creativity and offering strategies for creative thinking were the most effective components of instruction. From their careful examination of 70 experimental studies, they concluded that approaches such as social modeling, cooperative learning, and case-based (project-based) techniques that required the application of newly acquired knowledge were positively correlated with high effect sizes. The study was clear in confirming that explicit creativity instruction can be successful in enhancing divergent thinking and problem solving. Would the same strategies work for courses in ecology and environmental biology, as detailed by Ebert-May and Hodder (2008) , or for a unit elaborated by Knight and Wood (2005) that applies classroom response clickers?

Finally, I return to my opening question with the fictional Dr. Dunne. Could a weekly brainstorming “invention session” included in a course like those described here serve as the site where students are introduced to concepts and strategies of creative problem solving? As frequently applied in schools of engineering ( Paulus and Nijstad, 2003 ), brainstorming provides an opportunity for the instructor to pose a problem and to ask the students to suggest as many solutions as possible in a brief period, thus enhancing ideational fluency. Here, students can be encouraged explicitly to build on the ideas of others and to think flexibly. Would brainstorming enhance students' divergent thinking or creative abilities as measured by TTCT items or an originality rubric? Many studies have demonstrated that group interactions such as brainstorming, under the right conditions, can indeed enhance creativity ( Paulus and Nijstad, 2003 ; Scott et al. , 2004 ), but there is little information from an undergraduate science classroom setting. Intellectual Ventures, a firm founded by Nathan Myhrvold, the creator of Microsoft's Research Division, has gathered groups of engineers and scientists around a table for day-long sessions to brainstorm about a prearranged topic. Here, the method seems to work. Since it was founded in 2000, Intellectual Ventures has filed hundreds of patent applications in more than 30 technology areas, applying the “invention session” strategy ( Gladwell, 2008 ). Currently, the company ranks among the top 50 worldwide in number of patent applications filed annually. Whether such a technique could be applied successfully in a college science course will only be revealed by future research.

  • Abrami P. C., Bernard R. M., Borokhovski E., Wadem A., Surkes M. A., Tamim R., Zhang D. Instructional interventions affecting critical thinking skills and dispositions: a stage 1 meta-analysis. Rev. Educ. Res. 2008; 78 :1102–1134. [ Google Scholar ]
  • Amabile T. M. Creativity in Context. Boulder, CO: Westview Press; 1996. [ Google Scholar ]
  • Amabile T. M., Barsade S. G., Mueller J. S., Staw B. M. Affect and creativity at work. Admin. Sci. Q. 2005; 50 :367–403. [ Google Scholar ]
  • Ausubel D. The Psychology of Meaningful Verbal Learning. New York: Grune and Stratton; 1963. [ Google Scholar ]
  • Ausubel B. The Acquisition and Retention of Knowledge: A Cognitive View. Boston, MA: Kluwer Academic Publishers; 2000. [ Google Scholar ]
  • Banaji S., Burn A., Buckingham D. The Rhetorics of Creativity: A Review of the Literature. London: Centre for the Study of Children, Youth and Media; 2006. [accessed 29 December 2008]. www.creativepartnerships.com/data/files/rhetorics-of-creativity-12.pdf . [ Google Scholar ]
  • Barron F., Harrington D. M. Creativity, intelligence and personality. Ann. Rev. Psychol. 1981; 32 :439–476. [ Google Scholar ]
  • Beller M. Quantum Dialogue: The Making of a Revolution. Chicago, IL: University of Chicago Press; 1999. [ Google Scholar ]
  • Blair C., Razza R. P. Relating effortful control, executive function, and false belief understanding to emerging math and literacy ability in kindergarten. Child Dev. 2007; 78 :647–663. [ PubMed ] [ Google Scholar ]
  • Bodrova E., Leong D. J. American Early Childhood and Primary Classrooms. Geneva, Switzerland: UNESCO International Bureau of Education; 2001. The Tool of the Mind: a case study of implementing the Vygotskian approach. [ Google Scholar ]
  • Bransford J. D., Brown A. L., Cocking R. R., editors. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academies Press; 2000. [ Google Scholar ]
  • Brophy D. R. A comparison of individual and group efforts to creatively solve contrasting types of problems. Creativity Res. J. 2006; 18 :293–315. [ Google Scholar ]
  • Bruner J. The growth of mind. Am. Psychol. 1965; 20 :1007–1017. [ PubMed ] [ Google Scholar ]
  • Bull K. S., Montgomery D., Baloche L. Teaching creativity at the college level: a synthesis of curricular components perceived as important by instructors. Creativity Res. J. 1995; 8 :83–90. [ Google Scholar ]
  • Burton R. On Being Certain: Believing You Are Right Even When You're Not. New York: St. Martin's Press; 2008. [ Google Scholar ]
  • Cloud-Hanson K. A., Kuehner J. N., Tong L., Miller S., Handelsman J. Money, sex and drugs: a case study to teach the genetics of antibiotic resistance. CBE Life Sci. Educ. 2008; 7 :302–309. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Craft A. Teaching Creativity: Philosophy and Practice. New York: Routledge; 2000. [ Google Scholar ]
  • Crawford V. M. Adaptive expertise as knowledge building in science teachers' problem solving. Proceedings of the Second European Cognitive Science Conference; Delphi, Greece. 2007. [accessed 1 July 2008]. http://ctl.sri.com/publications/downloads/Crawford_EuroCogSci07Proceedings.pdf . [ Google Scholar ]
  • Crawford V. M., Brophy S. Menlo Park, CA: SRI International; 2006. [accessed 1 July 2008]. Adaptive Expertise: Theory, Methods, Findings, and Emerging Issues; September 2006. http://ctl.sri.com/publications/downloads/AESymposiumReportOct06.pdf . [ Google Scholar ]
  • Crossgrove K., Curran K. L. Using clickers in nonmajors- and majors-level biology courses: student opinion, learning, and long-term retention of course material. CBE Life Sci. Educ. 2008; 7 :146–154. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Crowe A., Dirks C., Wenderoth M. P. Biology in bloom: implementing Bloom's taxonomy to enhance student learning in biology. CBE Life Sci. Educ. 2008; 7 :368–381. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Davidson M. C., Amso D., Anderson L. C., Diamond A. Development of cognitive control and executive functions from 4–13 years: evidence from manipulations of memory, inhibition, and task switching. Neuropsychologia. 2006; 44 :2037–2078. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • DeHaan R. L. The impending revolution in undergraduate science education. J. Sci. Educ. Technol. 2005; 14 :253–270. [ Google Scholar ]
  • Diamond A., Barnett W. S., Thomas J., Munro S. Preschool program improves cognitive control. Science. 2007; 318 :1387–1388. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Duch B. J., Groh S. E., Allen D. E. The Power of Problem-based Learning. Sterling, VA: Stylus Publishers; 2001. [ Google Scholar ]
  • Durston S., Davidson M. C., Thomas K. M., Worden M. S., Tottenham N., Martinez A., Watts R., Ulug A. M., Caseya B. J. Parametric manipulation of conflict and response competition using rapid mixed-trial event-related fMRI. Neuroimage. 2003; 20 :2135–2141. [ PubMed ] [ Google Scholar ]
  • Ebert-May D., Hodder J. Pathways to Scientific Teaching. Sunderland, MA: Sinauer; 2008. [ Google Scholar ]
  • Finke R. A., Ward T. B., Smith S. M. Creative Cognition: Theory, Research and Applications. Boston, MA: MIT Press; 1996. [ Google Scholar ]
  • Freeman S., O'Connor E., Parks J. W., Cunningham M., Hurley D., Haak D., Dirks C., Wenderoth M. P. Prescribed active learning increases performance in introductory biology. CBE Life Sci. Educ. 2007; 6 :132–139. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Gabora L. Cognitive mechanisms underlying the creative process. In: Hewett T., Kavanagh E., editors. Proceedings of the Fourth International Conference on Creativity and Cognition; 2002 October 13–16; Loughborough University, United Kingdom. 2002. pp. 126–133. [ Google Scholar ]
  • Gaffney J.D.H., Richards E., Kustusch M. B., Ding L., Beichner R. Scaling up education reform. J. Coll. Sci. Teach. 2008; 37 :48–53. [ Google Scholar ]
  • Gardner H. New York: Harper Collins; 1993. Creating Minds: An Anatomy of Creativity Seen through the Lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Ghandi. [ Google Scholar ]
  • Gladwell M. In the air; who says big ideas are rare? The New Yorker. 2008. [accessed 19 May 2008]. www.newyorker.com/reporting/2008/05/12/080512fa_fact_gladwell .
  • Guilford J. P. Creativity. Am. Psychol. 1950; 5 :444–454. [ PubMed ] [ Google Scholar ]
  • Hake R. The physics education reform effort: a possible model for higher education. Natl. Teach. Learn. Forum. 2005; 15 :1–6. [ Google Scholar ]
  • Halpern D. E., Hakel M. D. Applying the science of learning to the university and beyond. Change. 2003; 35 :36–42. [ Google Scholar ]
  • Handelsman J. Scientific teaching. Science. 2004; 304 :521–522. [ PubMed ] [ Google Scholar ]
  • Handelsman J, Miller S., Pfund C. Scientific Teaching. New York: W. H. Freeman and Co; 2007. [ PubMed ] [ Google Scholar ]
  • Haring-Smith T. Creativity research review: some lessons for higher education. Association of American Colleges and Universities. Peer Rev. 2006; 8 :23–27. [ Google Scholar ]
  • Hatano G., Ouro Y. Commentary: reconceptualizing school learning using insight from expertise research. Educ. Res. 2003; 32 :26–29. [ Google Scholar ]
  • Hrepic Z., Zollman D. A., Rebello N. S. Comparing students' and experts' understanding of the content of a lecture. J. Sci. Educ. Technol. 2007; 16 :213–224. [ Google Scholar ]
  • Hunsaker S. L. Outcomes of creativity training programs. Gifted Child Q. 2005; 49 :292–298. [ Google Scholar ]
  • Kaufman J. C., Baer J. Intelligent testing with Torrance. Creativity Res. J. 2006; 18 :99–102. [ Google Scholar ]
  • Kaufman J. C., Beghetto R. A. Exploring mini-C: creativity across cultures. In: DeHaan R. L., Narayan K.M.V., editors. Education for Innovation: Implications for India, China and America. Rotterdam, The Netherlands: Sense Publishers; 2008. pp. 165–180. [ Google Scholar ]
  • Kaufman J. C., Sternberg R. J. Creativity. Change. 2007; 39 :55–58. [ Google Scholar ]
  • Kim K. H. Can we trust creativity tests: a review of the Torrance Tests of Creative Thinking (TTCT) Creativity Res. J. 2006; 18 :3–14. [ Google Scholar ]
  • Knight J. K., Wood W. B. Teaching more by lecturing less. Cell Biol. Educ. 2005; 4 :298–310. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Cetina Knorr K. Laboratory studies: the cultural approach to the study of science. In: Jasanoff S., Markle G., Petersen J., Pinch T., editors. Handbook of Science and Technology Studies. Thousand Oaks, CA: Sage Publications; 1995. pp. 140–166. [ Google Scholar ]
  • Koestler A. The Act of Creation. New York: Macmillan; 1964. [ Google Scholar ]
  • Latour B., Woolgar S. Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press; 1986. [ Google Scholar ]
  • MacKinnon D. W. What makes a person creative? In: MacKinnon D. W., editor. In Search of Human Effectiveness. New York: Universe Books; 1978. pp. 178–186. [ Google Scholar ]
  • Martindale C. Biological basis of creativity. In: Sternberg R. J., editor. Handbook of Creativity. Cambridge, United Kingdom: Cambridge University Press; 1999. pp. 137–152. [ Google Scholar ]
  • Mazur E. Peer Instruction: A User's Manual. Upper Saddle River, NJ: Prentice Hall; 1996. [ Google Scholar ]
  • McFadzean E. Developing and supporting creative problem-solving teams: Part 1—a conceptual model. Manage. Decis. 2002; 40 :463–475. [ Google Scholar ]
  • McGregor G. D., Jr Creative thinking instruction for a college study skills program: a case study. Dissert Abstr. Intl. 2001; 62 :3293A. UMI No. AAT 3027933. [ Google Scholar ]
  • McIntyre F. S., Hite R. E., Rickard M. K. Individual characteristics and creativity in the marketing classroom: exploratory insights. J. Mark. Educ. 2003; 25 :143–149. [ Google Scholar ]
  • Mestre J. P., editor. Transfer of Learning: From a Modern Multidisciplinary Perspective. Greenwich, CT: Information Age Publishing; 2005. [ Google Scholar ]
  • Mumford M. D., Mobley M. I., Uhlman C. E., Reiter-Palmon R., Doares L. M. Process analytic models of creative capacities. Creativity Res. J. 1991; 4 :91–122. [ Google Scholar ]
  • National Research Council. Washington, DC: National Academies Press; 2007. Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future, Committee on Science, Engineering and Public Policy. [ Google Scholar ]
  • Neisser U. The multiplicity of thought. Br. J. Psychol. 1963; 54 :1–14. [ PubMed ] [ Google Scholar ]
  • Nelson C. E. Teaching evolution (and all of biology) more effectively: strategies for engagement, critical reasoning, and confronting misconceptions. Integrative and Comparative Biology Advance Access. 2008. [accessed 15 September 2008]. http://icb.oxfordjournals.org/cgi/reprint/icn027v1.pdf . [ PubMed ]
  • Novak G, Gavrin A., Christian W, Patterson E. Just-in-Time Teaching: Blending Active Learning with Web Technology. San Francisco, CA: Pearson Benjamin Cummings; 1999. [ Google Scholar ]
  • Osborn A. F. Your Creative Power. New York: Scribner; 1948. [ Google Scholar ]
  • Osborn A. F. Applied Imagination. New York: Scribner; 1979. [ Google Scholar ]
  • Osburn H. K., Mumford M. D. Creativity and planning: training interventions to develop creative problem-solving skills. Creativity Res. J. 2006; 18 :173–190. [ Google Scholar ]
  • Paulus P. B., Nijstad B. A. Group Creativity: Innovation through Collaboration. New York: Oxford University Press; 2003. [ Google Scholar ]
  • Perkins K. K., Wieman C. E. Innovative teaching to promote innovative thinking. In: DeHaan R. L., Narayan K.M.V., editors. Education for Innovation: Implications for India, China and America. Rotterdam, The Netherlands: Sense Publishers; 2008. pp. 181–210. [ Google Scholar ]
  • Plucker J. A., Renzulli J. S. Psychometric approaches to the study of human creativity. In: Sternberg R. J., editor. Handbook of Creativity. Cambridge, United Kingdom: Cambridge University Press; 1999. pp. 35–61. [ Google Scholar ]
  • Quitadamo I. J., Faiola C. L., Johnson J. E., Kurtz M. J. Community-based inquiry improves critical thinking in general education biology. CBE Life Sci. Educ. 2008; 7 :327–337. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Runco M. A. Creativity. Annu. Rev. Psychol. 2004; 55 :657–687. [ PubMed ] [ Google Scholar ]
  • Runco M. A., Nemiro J. Problem finding, creativity, and giftedness. Roeper Rev. 1994; 16 :235–241. [ Google Scholar ]
  • Sawyer R. K. Educating for Innovation. [accessed 13 August 2008]; Thinking Skills Creativity. 2005 1 :41–48. www.artsci.wustl.edu/∼ksawyer/PDFs/Thinkjournal.pdf . [ Google Scholar ]
  • Sawyer R. K. Explaining Creativity: The Science of Human Innovation. New York: Oxford University Press; 2006. [ Google Scholar ]
  • Schwartz D. L., Bransford J. D., Sears D. Efficiency and innovation in transfer. In: Mestre J. P., editor. Transfer of Learning from a Modern Multidisciplinary Perspective. Greenwich, CT: Information Age Publishing; 2005. pp. 1–51. [ Google Scholar ]
  • Scott G., Leritz L. E., Mumford M. D. The effectiveness of creativity training: a quantitative review. Creativity Res. J. 2004; 16 :361–388. [ Google Scholar ]
  • Simonton D. K. Sociocultural context of individual creativity: a transhistorical time-series analysis. J. Pers. Soc. Psychol. 1975; 32 :1119–1133. [ PubMed ] [ Google Scholar ]
  • Simonton D. K. Creativity in Science: Chance, Logic, Genius, and Zeitgeist. Oxford, United Kingdom: Cambridge University Press; 2004. [ Google Scholar ]
  • Sloman S. The empirical case for two systems of reasoning. Psychol. Bull. 1996; 9 :3–22. [ Google Scholar ]
  • Smith G. F. Idea generation techniques: a formulary of active ingredients. J. Creative Behav. 1998; 32 :107–134. [ Google Scholar ]
  • Snyder A., Mitchell J., Bossomaier T., Pallier G. The creativity quotient: an objective scoring of ideational fluency. Creativity Res. J. 2004; 16 :415–420. [ Google Scholar ]
  • Sternberg R. J. What is an “expert student?” Educ. Res. 2003; 32 :5–9. [ Google Scholar ]
  • Sternberg R., Williams W. M. Teaching for creativity: two dozen tips. 1998. [accessed 25 March 2008]. www.cdl.org/resource-library/articles/teaching_creativity.php .
  • Tardif T. Z., Sternberg R. J. What do we know about creativity? In: Sternberg R. J., editor. The Nature of Creativity. New York: Cambridge University Press; 1988. pp. 429–440. [ Google Scholar ]
  • Torrance E. P. Norms and Technical Manual for the Torrance Tests of Creative Thinking. Bensenville, IL: Scholastic Testing Service; 1974. [ Google Scholar ]
  • Torrance E. P. The Torrance Tests of Creative Thinking Norms—Technical Manual Figural (Streamlined) Forms A and B. Bensenville, IL: Scholastic Testing Service; 1998. [ Google Scholar ]
  • Torrance E. P., Ball O. E., Safter H. T. Torrance Tests of Creative Thinking: Streamlined Scoring Guide for Figural Forms A and B. Bensenville, IL: Scholastic Testing Service; 2008. [ Google Scholar ]
  • Treffinger D. J., Isaksen S. G. Creative problem solving: the history, development, and implications for gifted education and talent development. Gifted Child Q. 2005; 49 :342–357. [ Google Scholar ]
  • Vandervert L. R., Schimpf P. H., Liu H. How working memory and the cerebellum collaborate to produce creativity and innovation. Creativity Res. J. 2007; 9 :1–18. [ Google Scholar ]
  • Wallach M. A., Kogan N. Modes of Thinking in Young Children: A Study of the Creativity-Intelligence Distinction. New York: Holt, Rinehart and Winston; 1965. [ Google Scholar ]
  • Wood W. B. Innovations in undergraduate biology teaching and why we need them. Annu. Rev. Cell Dev. Biol. 2009 in press. [ PubMed ] [ Google Scholar ]

Help | Advanced Search

Computer Science > Artificial Intelligence

Title: mm-phyrlhf: reinforcement learning framework for multimodal physics question-answering.

Abstract: Recent advancements in LLMs have shown their significant potential in tasks like text summarization and generation. Yet, they often encounter difficulty while solving complex physics problems that require arithmetic calculation and a good understanding of concepts. Moreover, many physics problems include images that contain important details required to understand the problem's context. We propose an LMM-based chatbot to answer multimodal physics MCQs. For domain adaptation, we utilize the MM-PhyQA dataset comprising Indian high school-level multimodal physics problems. To improve the LMM's performance, we experiment with two techniques, RLHF (Reinforcement Learning from Human Feedback) and Image Captioning. In image captioning, we add a detailed explanation of the diagram in each image, minimizing hallucinations and image processing errors. We further explore the integration of Reinforcement Learning from Human Feedback (RLHF) methodology inspired by the ranking approach in RLHF to enhance the human-like problem-solving abilities of the models. The RLHF approach incorporates human feedback into the learning process of LLMs, improving the model's problem-solving skills, truthfulness, and reasoning capabilities, minimizing the hallucinations in the answers, and improving the quality instead of using vanilla-supervised fine-tuned models. We employ the LLaVA open-source model to answer multimodal physics MCQs and compare the performance with and without using RLHF.

Submission history

Access paper:.

  • HTML (experimental)
  • Other Formats

license icon

References & Citations

  • Google Scholar
  • Semantic Scholar

BibTeX formatted citation

BibSonomy logo

Bibliographic and Citation Tools

Code, data and media associated with this article, recommenders and search tools.

  • Institution

arXivLabs: experimental projects with community collaborators

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs .

Social science takes the stage in a live storytelling event at the Cantor Arts Center

Stanford researchers shared stories of psychotic breaks, economic disparities, and criminal justice reform at an event Tuesday hosted by Stanford Impact Labs in collaboration with The Story Collider.

problem solving in science

Dr. Rania Awaad retells the event that encouraged her to pursue a career in psychiatry. (Image credit: Christine Baker)

Late one night years ago, Rania Awaad and her husband were at home when they heard a loud and sudden knock at their front door. When they opened it, Awaad saw a young woman she’d met before at their local Muslim community center.

“Before I could say anything, she runs right past me into the apartment,” Awaad, now a clinical professor of psychiatry and behavioral sciences at Stanford’s School of Medicine, recalled on Tuesday evening at a show titled Testing Ground Live! Social Science on Stage , held at the Cantor Arts Center.

Awaad shared how she and her husband found the woman ducked behind their couch, her eyes wide and terrified. “I need to speak to the imam, my religious leader!” the woman said. Awaad told her that the imam was not in their apartment.

Moments later the woman ran out of the apartment to the community center across the street, still searching for the imam. After deliberating on how to help the woman, some members at the center began to pray for her. Meanwhile, Awaad’s husband contacted a community elder.

“This is a psychotic episode,” the elder said. “She needs to go to the emergency room.”

The woman eventually got the help she needed, but the event left a lasting impact on Awaad, who was struck that no one at the center recognized the woman’s psychiatric emergency and her need for medical attention.

It made Awaad, who was a fourth-year medical student studying to become an obstetrician, realize the importance of mental health, and led her to switch her studies to psychiatry.

Tuesday’s event was hosted by Stanford Impact Labs (SIL) in collaboration with The Story Collider . It featured Stanford researchers like Awaad, a former SIL design fellow , sharing stories of pivotal moments in their lives that changed how they approached their work in mental health, digital literacy, and criminal justice reform, among other societal issues.

SIL is a cross-university initiative that launched in the 2019-20 academic year as part of the university’s  Long-range Vision  to train and support researchers to serve the public good by using data-driven, social science research to develop actionable ways to address pernicious and pervasive social problems.

problem solving in science

Hannah Melville-Rea, a PhD student from Australia, shares what she’s learned about America’s home insurance system and the impact it has on various communities. (Image credit: Christine Baker)

Wild wild west

Another presenter was Hannah Melville-Rea, a PhD student from Australia. At Stanford, she’s studying environment and resources at the Stanford Doerr School of Sustainability, is a Knight-Hennessy Scholar, and was a 2023 SIL Summer PhD Fellow . Taking the stage, she shared that to better understand America’s home insurance system and how (or whether) it serves communities impacted by significant flood risk, she attended two local events. The first was a workshop in Menlo Park for residents to learn how to get the most out of their insurance. She recalled expensive cars parked outside the event. Inside were tables with Tiffany-blue tablecloths and appetizers.

“I look around the room at the other attendees. Everyone is white. Everyone is over the age of 65. I think everyone knows each other because they only asked me to introduce myself,” Melville-Rea recalled.

“I realize this [event] is only applicable if you’re a homeowner with insurance,” she said.

A couple of weeks later, Melville-Rea attended a crowded community meeting in East Palo Alto where residents shared their frustration with flooding and a lack of support from FEMA, the Federal Emergency Management Agency, tasked with responding to natural disasters.

“I cannot get over how different these two community meetings were [and] only three miles apart,” she said. “Up the hill, a bunch of homeowners with good insurance, who honestly, probably could weather a storm without it. Down the road, a bunch of renters who we now know had no insurance, who are really at the frontlines of these climate impacts, and now they’re being ghosted by FEMA.”

She said that as an Australian, she assumed the government would always step in to provide quality security for residents, regardless of their economic status. But she was surprised to learn the opposite was true in the United States.

“We live in the wild west. It is up to the individual. Everyone needs their own safety net,” she said. “And we urgently need to get everyone insurance.”

problem solving in science

Alex Chohlas-Wood speaks at Testing Ground Live! Social Science on Stage at the Cantor Arts Center. (Image credit: Christine Baker)

High stakes decisions

Alex Chohlas-Wood is the executive director of the Computational Policy Lab (CPL), which has twice been funded by SIL and where he uses technology and data science to support criminal justice reform. He spoke about a pilot project he worked on for the San Francisco District Attorney’s office focused on “race-blind charging.” The idea, he explained, was to develop an artificial intelligence tool that could automatically redact race-related information from police reports that prosecutors review when deciding whether to charge or dismiss a crime.

His team got to work developing an algorithm for redacting potential indicators of race in police reports, including names and addresses. By the summer of 2019, they had a reliable system, but the project was cut short due to the COVID-19 pandemic. Officials in Yolo County, California expressed interest in the system, so Chohlas-Wood’s team developed one for their district attorney.

“He was so excited about this idea, that he got his own legislator to write a bill mandating that all prosecutors across the state of California use race blind charging by the beginning of 2025,” Chohlas-Wood said.

In 2022, the bill passed unanimously in both state houses and Governor Newsom signed it into law. Chohlas-Wood said he was excited to see his work lead to such meaningful policy changes.

“At the same time, I felt a real sense of responsibility to make sure this thing was done right, and to make sure that we could actually evaluate its impacts and charging decisions – really high stakes decisions – that prosecutors make, that can have profound impacts on people’s lives,” he said.

The next steps CPL is taking to evaluate and scale race-blind charging across California have been funded by a Stanford Impact Labs Stage 2 investment .

In total, six storytellers shared five stories on stage at Tuesday’s event. Recordings of each will soon be available on SIL’s website .

Your browser is not supported. Please upgrade your browser to one of our supported browsers . You can try viewing the page, but expect functionality to be broken.

Computer Science Fundamentals

Free set of elementary curricula that introduces students to the foundational concepts of computer science and challenges them to explore how computing and technology can impact the world.

problem solving in science

Free, and fun, elementary courses for each grade

  • Six courses, one for each elementary grade
  • Equitable introductory CS courses
  • Use the same course for all students in the same grade, regardless of their experience
  • All courses make suitable entry points for students

Curricula at a glance

Grades: K-5

Level: Beginner

Duration: Month or Quarter

Devices: Laptop, Chromebook, Tablet

Topics: Programming, Internet, Games and Animation, Art and Design, App Design

Programming Tools: Sprite Lab, Play Lab

Professional Learning: Facilitator-led Workshops, Self-paced Modules

Accessibility: Text-to-speech, Closed captioning, Immersive reader

Languages Supported: Arabic, Bahasa Indonesian, Catalán, Chinese Simplified, Chinese Traditional, Czech, French, German, Hindi, Italian, Japanese, Korean, Kannada, Malay, Marathi, Mongolian, Polish, Portuguese-BR, Romanian, Russian, Slovak, Tagalog, Tamil, Thai, Turkish, Ukrainian, Spanish Latam, Urdu, Spanish-ES, Uzbek, Vietnamese

I've been teaching the course since the Monday after the workshop. The students and I LOVE it (and so do their classroom teachers!!!)

CS Fundamentals Teacher

Picking the right CS Fundamentals course for your classroom

With the diverse set of options offered for CS Fundamentals, there is a course for all different needs.

How will your students engage with the content?

Courses specifically designed for your elementary classroom.

Find the course for the grade you teach. Each course is approximately a month long.

Kindergarten

problem solving in science

Program using commands like loops and events. Teach students to collaborate with others, investigate different problem-solving techniques, persist in the face of challenging tasks, and learn about internet safety.

problem solving in science

Through unplugged activities and a variety of puzzles, students will learn the basics of programming, collaboration techniques, investigation and critical thinking skills, persistence in the face of difficulty, and internet safety.

problem solving in science

Create programs with sequencing, loops, and events. Investigate problem-solving techniques and develop strategies for building positive communities both online and offline. Create interactive games that students can share.

problem solving in science

Review of the concepts found in earlier courses, including loops and events. Afterward, students will develop their understanding of algorithms, nested loops, while loops, conditionals, and more.

problem solving in science

Make fun, interactive projects that reinforce learning about online safety. Engage in more complex coding such as nested loops, functions, and conditionals.

problem solving in science

Look at how users make choices in the apps they use. Make a variety of Sprite Lab apps that also offer choices for the user. Learn more advanced concepts, including variables and “for” loops.

Self-paced elementary curriculums

Teachers play a critical role in student learning by teaching our unplugged activities and leading whole class discussions, however, we recognize that CS Fundamentals isn't always taught in a traditional classroom setting. We provide two self-paced express courses alongside Courses A-F. These express courses are designed for situations where teachers allow each student to work at their own pace independently.

Grades: K-1

Pre-Reader Express

problem solving in science

Learn the basics of drag-and-drop block coding by solving puzzles and creating animated scenes. Make art and simple games to share with friends, family, and teachers.

Grades: 2-5

problem solving in science

Learn to create computer programs, develop problem-solving skills, and work through fun challenges! Make games and creative projects to share with friends, family, and teachers.

No devices? We have you covered

problem solving in science

Go ahead, cut the cord (for a while)!

CS education does not always need to be in front of a screen and device access shouldn't be a barrier to learning computer science concepts.

Resources that support you every step of the way

Sign up for a Code.org account to get access to materials that will help you teach computer science with confidence. Code.org has extensive resources designed to support educators, even those without prior CS teaching experience.

Lesson Plans

Get step-by-step guidance, learning objectives, and assessment strategies for effective teaching.

Helpful resources include slide decks, activity guides, rubrics, and more — all organized in one place. Each lesson plan is accompanied by tips for classroom implementation, differentiation ideas, and extension activities to cater to students of all abilities.

Instructional Videos

Watch easy-to-understand overviews of computer science and programming concepts.

Code.org video series are designed specifically to support your classroom and are engaging and fun to watch.

Slide Decks

We offer educators an organized, visually engaging, and pedagogically sound framework to deliver computer science lessons.

Code.org slide decks provide step-by-step instructions, examples, and interactive activities that align with curricular objectives.

problem solving in science

Assessments

Our curricula includes a comprehensive system of formative and summative assessment resources.

These include rubrics, checklists, mini-projects, end-of-chapter projects, student-facing rubrics, sample projects, and post-project tests — all designed to support teachers in measuring student growth, providing feedback, and evaluating student understanding.

problem solving in science

Programming Tools

Code.org's integrated development environments (IDEs) cater to students of all skill levels.

We offer a versatile and user-friendly platform that supports a variety of programming paradigms. This enables learners to seamlessly transition from block-based coding to text-based languages, and fosters creativity and innovation.

Professional learning that meets your needs

Get the support you need as you prepare to teach. Teachers love it, with over 90% ranking it the best professional development ever!

Facilitator-led Workshops

problem solving in science

Join local teachers for inspiring and hands-on support to implement computer science in your classroom. Our Regional Partners offer high-quality, one-day Code.org workshops for individual teachers or for schoolwide PD. Sign up for a professional development workshop near you!

Self-Paced Online Modules

problem solving in science

Through reading, viewing videos, completing interactive puzzles, and reflecting on your learning, you will develop your own understanding while preparing to teach computer science in your classroom.

Frequently asked questions

CS Fundamentals was written using both the K-12 Framework for Computer Science and the CSTA standards as guidance. Currently, every lesson in CS Fundamentals contains mappings to the relevant CSTA standards. The summary of all CSTA mappings for each course can be found at:

  • Course A Standards
  • Course B Standards
  • Course C Standards
  • Course D Standards
  • Course E Standards
  • Course F Standards

A Google Sheets version of the standards can be found at CSF Standards .

The leading K-12 CS curriculum in the United States, our elementary program has been proven effective in major urban school districts like Dallas, as well as small rural districts in Iowa. There is no need to hire specialists to teach CS. Our program is uniquely designed to support teachers new to CS while offering the flexibility to evolve lessons to fit student needs. Share this brochure with your school and district administrators, or suggest they take a look at our administrators page specially designed to answer administrators' most common questions.

Our curriculum and platform are available at no cost for anyone, anywhere, to teach!

New to teaching computer science? No worries! Most of our teachers have never taught computer science before. Join local teachers for inspiring and hands-on support to implement computer science in your classroom. Our Regional Partners offer high-quality, one-day Code.org workshops for individual teachers or for schoolwide PD. Sign up for a professional development workshop near you !

Join over 100,000 teachers who have participated in our workshops. The majority of our workshop attendees say, 'It's the best professional development I've ever attended.' In fact, 90% of attendees would recommend our program to other teachers !

Each CSF course includes 13-17 lessons designed for 45-minute periods. We recommend all students move from lesson to lesson at a pace set by the teacher. There are many teacher-led project levels designed to be experienced in unison while the skill-building lessons can be completed by students at their own pace.

Many lessons have handouts that guide students through activities. These resources can be printed or assigned digitally. Some lessons call for typical classroom supplies and manipulatives. Visit the CSF Syllabus to learn more .

Support and questions

problem solving in science

Still have questions? Reach out to us! We are here to help.

Our support team is here to answer any questions you may have about starting teaching with Code.org. You can also ask other teachers about their experience on our teacher forums.

Subscribe for updates

Sign up to receive monthly emails about Code.org's Computer Science Fundamentals and get helpful reminders, tips, and updates sent right to your inbox.

You can unsubscribe at any time.

problem solving in science

Please select your language

Why couples have problems communicating with each other

Communication failures often cause problems in personal relationships. It is the No. 1 reason people seek marital therapy . It hurts parent-child relationships . And it leads to rifts in close friendships .

My experiences in treating couples in the therapy room, and relationship science, reveal that these problems often result from a mismatch in two specific types of conversations.

Most conversations can be categorized as either understanding or problem-solving, according to researchers who study cognitive behavioral couples therapy (CBCT) and other mental health treatments. Understanding or empathy conversations aim to better understand thoughts, feelings, opinions, hopes, values and experiences. Problem-solving, decision-making or strategizing conversations aim to solve a problem or make a decision.

Despite evidence for gender differences in language , everyone is likely to want, at various times, to have one kind of conversation or the other. We want to share emotions to relieve stress, connect and be deeply understood. We also want to be able to discuss what isn’t going well in a relationship and what can be done to make it go better.

Get Well+Being tips straight to your inbox

problem solving in science

But people often don’t agree on an optimal conversation type or know what the other person wants. Studies show that people are overconfident in their knowledge of what others want or intend in their conversation, particularly in close relationships.

I teach my patients that if they can have clarity on what they want and convey that to their partner, they can ease many of the communication problems that arise in their relationships.

Repetitive conversation patterns

In therapy sessions, I have observed instances when one partner wants the other to understand them better only to be met with a problem-solving response (or vice versa). And when this pattern becomes repetitive , partners grow frustrated, increasingly disconnected and even bitterly resentful.

For instance, in one couple’s therapy session, a husband wanted to discuss how hurtful and scary it was that his wife had hidden purchases from him. His wife, perceiving his frustration as a problem to be solved, said, “fine, I’ll give you the passcode to my account — you can check as often as you need.” Rather than being mollified, the husband grew angrier because his wife missed his concern by jumping into problem-solving mode. He wanted his wife to understand that he felt their relationship was lacking in shared goals around both spending and proactive transparency.

A different couple struggled with a conversation mismatch about prioritizing intimacy. One woman expressed frustration that both weren’t making time to connect, which made it difficult to cultivate emotional and physical intimacy. Assuming that this was a conversation to promote better understanding, her wife validated her experience, then began to share her own. But instead of appreciating the efforts to understand, the first woman became irritated and said, “we never exit insight mode and do the conversation where we actually figure out how to find time for one another.”

Mismatch can lead to disconnection

Charles Duhigg , a Pulitzer Prize-winning journalist and author of “Supercommunicators,” told me he was years into his marriage and had just become a manager, when his wife and direct reports began to express that they did not feel heard by him.

This feedback troubled him; he cared about and prioritized listening to those close to him, he said. These were also skills he relied on for his work as a writer and manager — and as a partner. “It felt like this plague that I could not get rid of,” he said.

Not being able to match conversation types with others can contribute to a sense of profound disconnection, as well as communication ineffectiveness. It’s like rowing a boat with someone else when, unwittingly, you’ve each decided on a different destination. The effort expended by each person isn’t aligning with the progress being made.

That dissatisfaction is more than behavioral; it’s also neurological. One study showed that in conversations with better connectedness (measured as consensus), people’s brains become synchronized.

But different types of conversations activate different parts of our brains, making it hard to synchronize. While sharing emotions, core structures of the brain such as the amygdala fire, whereas in problem-solving mode, prefrontal cortex activity heightens. Different conversations place us on different wavelengths, neurologically speaking.

Without brain alignment, we may not be truly heard. But we can use some strategies to better match our conversation type, helping us get back into alignment with one another.

Understand what you want

Sometimes we don’t know what we want out of a conversation, particularly around more challenging conversations . Get curious and ask yourself, “do I want to vent or get support, or do I want a thought partner to help me problem-solve?”

Share what you want

It is natural for one person to incorrectly guess what the other wants. So be explicit. You can say, “I had a hard day, I’d really love to vent, not problem-solve, okay?” Or, “Ugh, I’m really struggling with my health issue. I don’t just want sympathetic support; I want a thought partner to help me figure out next steps. Would you be willing?”

Look out for clashing communication types

We will miss opportunities to be clear, we will assume we know what others want even when we don’t, and we are sure to clash in communication desires. Recognizing these inevitable blunders can help us remain open to a course correction.

When you notice yourself getting frustrated that your partner is not hearing you — even though they seem engaged — pause and ask, “What kind of conversation are we having, or should we try to have: understanding or problem-solving?”

Duhigg said he now frequently checks in with his wife about whether they are having a brainstorming session or an emotional understanding conversation. Some might think it would annoy others to be asked, but it helps people clarify their conversation desires for themselves, as well as for the other person. And, he added, “When someone asks you what do you want, it’s kind of nice. It shows that they care about what you want.”

Yael Schonbrun, PhD, is a clinical psychologist and assistant professor at Brown University. Her work, including her weekly newsletter Relational Riffs , focuses on the science of growing a happier, healthier relational life.

We welcome your comments on this column at [email protected] .

Read more from Well+Being

Well+Being shares news and advice for living well every day. Sign up for our newsletter to get tips directly in your inbox.

Are you taking your meds wrong ? Many patients make these common mistakes.

Centenarians give their advice about everything.

The wall sit is a simple exercise that can lower your blood pressure.

Tart cherries — more specifically, tart cherry juice — may help with inflammation and pain.

Do you self-sabotage ? Here’s how to stop.

  • Some 11 percent of U.S. children have been diagnosed with ADHD 1 hour ago Some 11 percent of U.S. children have been diagnosed with ADHD 1 hour ago
  • Want a mood boost? Get out there and try something new. April 8, 2024 Want a mood boost? Get out there and try something new. April 8, 2024
  • How flotation therapy may help your mental health April 5, 2024 How flotation therapy may help your mental health April 5, 2024

problem solving in science

  • Research & Faculty
  • Offices & Services
  • Information for:
  • Faculty & Staff
  • News & Events
  • Contact & Visit
  • About the Department
  • Message from the Chair
  • Computer Science Major (BS/BA)
  • Computer Science Minor
  • Data Science and Engineering Minor
  • Combined BS (or BA)/MS Degree Program
  • Intro Courses
  • Special Programs & Opportunities
  • Student Groups & Organizations
  • Undergraduate Programs
  • Undergraduate Research
  • Senior Thesis
  • Peer Mentors
  • Curriculum & Requirements
  • MS in Computer Science
  • PhD in Computer Science
  • Admissions FAQ
  • Financial Aid
  • Graduate Programs
  • Courses Collapse Courses Submenu
  • Research Overview
  • Research Areas
  • Systems and Networking
  • Security and Privacy
  • Programming Languages
  • Artificial Intelligence
  • Human-Computer Interaction
  • Vision and Graphics
  • Groups & Labs
  • Affiliated Centers & Institutes
  • Industry Partnerships
  • Adobe Research Partnership
  • Center for Advancing Safety of Machine Intelligence
  • Submit a Tech Report
  • Tech Reports
  • Tenure-Track Faculty
  • Faculty of Instruction
  • Affiliated Faculty
  • Adjunct Faculty
  • Postdoctoral Fellows
  • PhD Students
  • Outgoing PhDs and Postdocs
  • Visiting Scholars
  • News Archive
  • Weekly Bulletin
  • Monthly Student Newsletter
  • All Public Events
  • Seminars, Workshops, & Talks
  • Distinguished Lecture Series
  • CS Colloquium Series
  • CS + X Events
  • Tech Talk Series
  • Honors & Awards
  • External Faculty Awards
  • University Awards
  • Department Awards
  • Student Resources
  • Undergraduate Student Resources
  • MS Student Resources
  • PhD Student Resources
  • Student Organization Resources
  • Faculty Resources
  • Postdoc Resources
  • Staff Resources
  • Purchasing, Procurement and Vendor Payment
  • Expense Reimbursements
  • Department Operations and Facilities
  • Initiatives
  • Student Groups
  • CS Faculty Diversity Committee
  • Broadening Participation in Computing (BPC) Plan
  • Northwestern Engineering

WildHacks 2024 Showcases Students’ Dynamic Problem-solving Skills

Students from 14 universities completed 43 innovative software projects april 5 – 7 during the wildhacks 2024 event.

WildHacks 2024 Opening Ceremony | Photos by Alessandra Esquivel

Collaborative, creative, and fast-paced, WildHacks is Northwestern’s largest hackathon. The annual coding-based competition is designed for all students to learn and broaden their programming skills. Teams solve problems and innovate in a convivial, inclusive, and supportive atmosphere.

During WildHacks 2024 , held April 5 - 7, approximately 300 participants from 14 universities dedicated the weekend to building functional and compelling software.

Dilan Nair

University students across the country were invited to participate. Alongside more than 200 Northwestern participants, hackers joined the in-person event from Colgate University, DePaul University, Emory University, Illinois Institute of Technology, Loyola University Chicago, Rice University, Stanford University, University of Illinois Chicago, University of Illinois Urbana-Champaign, University of Iowa, University of Pennsylvania, University of Toronto, and the University of Wisconsin–Madison.

WildHacks 2024 judges Sruti Bhagavatula and Joseph Hummel

The WildHacks organizing team also included:

  • Ebube Okonji , director of logistics and programming
  • Juwon Park , director of marketing and design
  • Kris Yun , director of sponsorship
  • Riva Lakkadi , logistics and programming support
  • Defne Deda and Hunter Tran, marketing and design team
  • Vivian Chen and Dahyun Kang , sponsorship team
  • Annabel Edwards , Andrew Li , and Angela Zheng , technology team

To support participants with varying degrees of experience, the WildHacks team shared resources and beginner-friendly tutorials. Student organizations including Develop + Innovate for Social Change (DISC NU), Emerging Coders , Northwestern University's Student Chapter of the Institute of Electrical and Electronics Engineers (NU IEEE), and Responsible AI Student Organization (RAISO) hosted workshops. A Discord-based help channel was also available during the event to aid hackers.

WildHacks submissions

A total of 43 software projects were submitted.

In the first round of judging, all team submissions were evaluated on the criteria of technical complexity, utility, originality, design, and presentation within three tracks: urban planning, productivity, and wellness. The 10 top-rated teams were then invited to present an in-depth demo of their projects’ functionality in an interactive session with the panel of judges.

Music Notes , developed by Northwestern students Alex Feng, Shubhanshi Gaudani, and Carolyn Zou, along with Anthony Xie (Stanford University), won first place overall. The program is a study aid that leverages a language model to extract the core ideas of course readings and turn them into the titles and lyrics for catchy songs.

Northwestern team members Samarth Arul, Jeffrey Ryan, and Jonathan Zhang earned second place for FYND , an intuitive application that builds community by matching the hobbies and interests of new college graduates who are relocating to unfamiliar cities.

Angel Shot , built by Northwestern students Jack Burkhardt, Alexis Diaz-Waterman, Lev Rosenberg, and Finn Wintz, earned third place. The program helps people in uneasy or potentially unsafe situations, such as a rideshare driver taking an unfamiliar route, make a realistic-sounding phone call using an OpenAI text-to-speech system.

"I was really impressed with how far our students came in one day and the quality of their projects,” said Sruti Bhagavatula , a WildHacks 2024 judge and assistant professor of instruction at Northwestern Engineering. “It was great to hear how much students learned in the process even if they couldn't get a working project scraped together by the end. Hackathons are a chance for students to be thrown into something new and get their hands dirty and it's always successful in that respect whether they get a working product or not."

Additional winning projects included:

  • The project Remap.City , by Northwestern students Ryan Newkirk, Andrew Pulver, and Franklin Zhao, earned the urban planning track award.
  • Northwestern team Darin Cheng and Eliseu Kloster Filho won the productivity track award for their MathTyper tool.
  • Northwestern’s Timothy Fu and James Kim earned the wellness track prize for their Fitlial fitness app.
  • Best design was awarded to the web app Heartland , developed by Northwestern students Iris Ely, Miya Liu, Elysia Lopez, and Chelsey Yiran Tao.
  • The iOS app MoodGenie , by Northwestern students Irena Liu, Michelle Sheen, Michelle Wang, Ziye Wang, won the best technology award.
  • Netpet , a virtual internet pet, won the crowd favorite award. The screen time companion was built by Northwestern students Alison Bai, Jasmine Meyer, and Hannah Zhao.

Additional WildHacks 2024 project judges included Brylan Donaldson , associate director at The Garage at Northwestern , and Lydia Tse , software engineer at Google and adjunct lecturer in computer science at Northwestern Engineering.

Representatives from industry and the event sponsors also volunteered their time to evaluate project submissions, including Ronit Basu (Domino Data Lab), Andrew Seto, Sahar Siddiqui, and Ian Wallace (Accenture).

Hackers also had the opportunity to enter their projects into extra challenges presented and judged by Major League Hacking (MLH), a student hackathon league platform that provided support for WildHacks 2024. The MLH challenge winners included Version Control Add-on (most creative Adobe Express add-on), Bearly (best domain name from GoDaddy registry), Urban Pulse (best DEI hack sponsored by Fidelity), PocketCats.Co (best use of Kintone), and Scholarly (best use of AI in education). The project FundsFly earned both the MLH ‘best use of Starknet’ award and the Capital One challenge for best financial hack.

Sponsors of the event included Northwestern Computer Science, The Garage at Northwestern, Accenture, Call for Code, Capital One, CodeCrafters, Deloitte, and Major League Hacking. Event partners included Coca-Cola, Insomnia Cookies, and StandOut Stickers.

McCormick News Article

  • Engineering Home
  • CS Department
  • News Article

Recent Stories

problem solving in science

Semiconductors at scale: New processor achieves remarkable speedup in problem solving

A nnealing processors are designed specifically for addressing combinatorial optimization problems, where the task is to find the best solution from a finite set of possibilities. This holds implications for practical applications in logistics, resource allocation, and the discovery of drugs and materials.

In the context of CMOS (a type of semiconductor technology), it is necessary for the components of annealing processors to be fully "coupled." However, the complexity of this coupling directly affects the scalability of the processors.

In a new IEEE Access study led by Professor Takayuki Kawahara from Tokyo University of Science, researchers have developed and successfully tested a scalable processor that divides the calculation into multiple LSI chips. The innovation was also presented in IEEE 22nd World Symposium on Applied Machine Intelligence and Informatics (SAMI 2024) on 25 January 2024.

According to Prof. Kawahara, "We want to achieve advanced information processing directly at the edge, rather than in the cloud or performing preprocessing at the edge for the cloud. Using the unique processing architecture announced by the Tokyo University of Science in 2020, we have realized a fully coupled LSI (Large Scale Integration) on one chip using 28nm CMOS technology. Furthermore, we devised a scalable method with parallel-operating chips and demonstrated its feasibility using FPGAs (Field-Programmable Gate Arrays) in 2022."

The team created a scalable annealing processor. It used 36 22nm CMOS calculation LSI (Large Scale Integration) chips and one control FPGA. This technology enables the construction of large-scale, fully coupled semiconductor systems following the Ising model (a mathematical model of magnetic systems) with 4096 spins.

The processor incorporates two distinct technologies developed at the Tokyo University of Science. This includes a "spin thread method" that enables 8 parallel solution searches, coupled with a technique that reduces chip requirements by about half compared to conventional methods. Its power needs are also modest, operating at 10MHz with a power consumption of 2.9W (1.3W for the core part). This was practically confirmed using a vertex cover problem with 4096 vertices.

In terms of power performance ratio, the processor outperformed simulating a fully coupled Ising system on a PC (i7, 3.6GHz) using annealing emulation by 2,306 times. Additionally, it surpassed the core CPU and arithmetic chip by 2,186 times.

The successful machine verification of this processor suggests the possibility of enhanced capacity. According to Prof. Kawahara, who holds a vision for the social implementation of this technology (such as initiating a business, joint research, and technology transfer), "In the future, we will develop this technology for a joint research effort targeting an LSI system with the computing power of a 2050-level quantum computer for solving combinatorial optimization problems."

"The goal is to achieve this without the need for air conditioning, large equipment, or cloud infrastructure using current semiconductor processes. Specifically, we would like to achieve 2M (million) spins by 2030 and explore the creation of new digital industries using this."

In summary, researchers have developed a scalable, fully coupled annealing processor incorporating 4096 spins on a single board with 36 CMOS chips. Key innovations, including chip reduction and parallel operations for simultaneous solution searches, played a crucial role in this development.

More information: Taichi Megumi et al, Scalable Fully-Coupled Annealing Processing System Implementing 4096 Spins Using 22nm CMOS LSI, IEEE Access (2024). DOI: 10.1109/ACCESS.2024.3360034

Provided by Tokyo University of Science

(a) The die photo of a 22nm fully-coupled Ising LSI chip; (b) the front and back views of the board of a 4096-spin scalable full- coupled Ising LSI system. Credit: Takayuki Kawahara from TUS

  • Trending Now
  • Foundational Courses
  • Data Science
  • Practice Problem
  • Machine Learning
  • System Design
  • DevOps Tutorial

Welcome to the daily solving of our PROBLEM OF THE DAY with Siddharth Hazra . We will discuss the entire problem step-by-step and work towards developing an optimized solution. This will not only help you brush up on your concepts of Arrays but also build up problem-solving skills. In this problem, we are given, an array of size n and a range [a, b]. The task is to partition the array around the range such that the array is divided into three parts. 1) All elements smaller than a come first. 2) All elements in range a to b come next. 3) All elements greater than b appear in the end. The individual elements of three sets can appear in any order. You are required to return the modified array. Note: The generated output is 1 if you modify the given array successfully. Geeky Challenge: Solve this problem in O(n) time complexity. Example : Input:  n = 5 array[] = {1, 2, 3, 3, 4} [a, b] = [1, 2] Output:  1 Explanation:  One possible arrangement is: {1, 2, 3, 3, 4}. If you return a valid arrangement, output will be 1.

Give the problem a try before going through the video. All the best!!! Problem Link:   https://www.geeksforgeeks.org/problems/three-way-partitioning/1

Video Thumbnail

IMAGES

  1. PPT

    problem solving in science

  2. science problem solving examples

    problem solving in science

  3. Introduction

    problem solving in science

  4. Problem solving infographic 10 steps concept Vector Image

    problem solving in science

  5. 5 step problem solving method

    problem solving in science

  6. How to Be a Better Physics Problem Solver: Tips for High School and

    problem solving in science

VIDEO

  1. What does PRAN supawadee channel learning?

  2. One simple tool to help your science students problem solve #scienceteacher #teacher #shorts

  3. Programming for beginners

  4. Problem Solving and Reasoning: Polya's Steps and Problem Solving Strategies

  5. Science Quiz // teaser IQ test // brain game // puzzle IQ level // riddles new question and answer

  6. 🧠 Decipher the Puzzle: Dive Deep into Biology Riddles! 🌱

COMMENTS

  1. Problem Solving in STEM

    Problem Solving in STEM. Solving problems is a key component of many science, math, and engineering classes. If a goal of a class is for students to emerge with the ability to solve new kinds of problems or to use new problem-solving techniques, then students need numerous opportunities to develop the skills necessary to approach and answer ...

  2. The scientific method (article)

    The scientific method. At the core of biology and other sciences lies a problem-solving approach called the scientific method. The scientific method has five basic steps, plus one feedback step: Make an observation. Ask a question. Form a hypothesis, or testable explanation. Make a prediction based on the hypothesis.

  3. Problem-Solving in Science and Technology Education

    Abstract. This chapter focuses on problem-solving, which involves describing a problem, figuring out its root cause, locating, ranking and choosing potential solutions, as well as putting those solutions into action in science and technology education. This chapter covers (1) what problem-solving means for science and technology education; (2 ...

  4. Teaching Creativity and Inventive Problem Solving in Science

    Engaging learners in the excitement of science, helping them discover the value of evidence-based reasoning and higher-order cognitive skills, and teaching them to become creative problem solvers have long been goals of science education reformers. But the means to achieve these goals, especially methods to promote creative thinking in scientific problem solving, have not become widely known ...

  5. The Problem Solving Approach in Science Education

    Details the learner-centered problem-solving approach, outlining steps from problem definition to hypothesis testing and conclusion formulation. It exemplifies how this approach engages learners in active problem solving, enhancing their scientific skills and higher-order thinking through practical experiments and discussions.

  6. Problem Solving in Science Learning

    The traditional teaching of science problem solving involves a considerable amount of drill and practice. Research suggests that these practices do not lead to the development of expert-like problem-solving strategies and that there is little correlation between the number of problems solved (exceeding 1,000 problems in one specific study) and the development of a conceptual understanding.

  7. 1.12: Scientific Problem Solving

    Ask a question - identify the problem to be considered. Make observations - gather data that pertains to the question. Propose an explanation (a hypothesis) for the observations. Make new observations to test the hypothesis further. Figure 1.12.2 1.12. 2: Sir Francis Bacon.

  8. STEM Problem Solving: Inquiry, Concepts, and Reasoning

    Balancing disciplinary knowledge and practical reasoning in problem solving is needed for meaningful learning. In STEM problem solving, science subject matter with associated practices often appears distant to learners due to its abstract nature. Consequently, learners experience difficulties making meaningful connections between science and their daily experiences. Applying Dewey's idea of ...

  9. Problem solving (video)

    Problem-solving skills are essential in our daily lives. The video explains different problem-solving methods, including trial and error, algorithm strategy, and heuristics. It also discusses concepts like means-end analysis, working backwards, fixation, and insight. These techniques help us tackle both well-defined and ill-defined problems ...

  10. Teaching and learning problem solving in science. Part I: A general

    A systematic approach to solving problems and on designing instruction where students learn this approach. ... Linking factual and procedural knowledge in solving science problems: A case study in a thermodynamics course. Instructional Science 1981, 10 (4) , ...

  11. 1.2: Scientific Approach for Solving Problems

    In doing so, they are using the scientific method. 1.2: Scientific Approach for Solving Problems is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts. Chemists expand their knowledge by making observations, carrying out experiments, and testing hypotheses to develop laws to summarize their results and ...

  12. Brilliant

    Brilliant - Build quantitative skills in math, science, and computer science with hands-on, interactive lessons. Log in. Learn by doing Guided interactive problem solving that's effective ... and build your problem solving skills one concept at a time. Stay motivated. Form a real learning habit with fun content that's always well-paced ...

  13. Using the Scientific Method to Solve Problems

    The Scientific Method. The scientific method is a process used to explore observations and answer questions. Originally used by scientists looking to prove new theories, its use has spread into many other areas, including that of problem-solving and decision-making. The scientific method is designed to eliminate the influences of bias ...

  14. Science Changes Your Perspective

    Bill Nye. Lesson time 04:07 min. Discover how science gives you a new perspective, helps solve problems, and offers a more optimistic outlook on the future. Bill shares a story from his high school physics class and uses an augmented reality hologram to illustrate his point. Students give MasterClass an average rating of 4.7 out of 5 stars.

  15. A Problem-Solving Experiment

    A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research ...

  16. Problem solving

    Mathematics LibreTexts - Problem solving; Verywell Mind - Overview of the Problem-Solving Mental Process; The University of Hawaiʻi Pressbooks - Problem Solving; Massachusetts Institute of Technology - CCMIT - Introduction to Problem Solving Skills; The Balance - What are problem-solving skills?

  17. A Detailed Characterization of the Expert Problem-Solving Process in

    A primary goal of science and engineering (S&E) education is to produce good problem solvers, but how to best teach and measure the quality of problem solving remains unclear. The process is complex, multifaceted, and not fully characterized. Here, we present a detailed characterization of the S&E problem-solving process as a set of specific interlinked decisions. This framework of decisions ...

  18. Teaching Critical Thinking and Problem-Solving in the Science Classroom

    Use Real-World Problems to Teach Critical Thinking and Problem-Solving in the Science Classroom. Published On: September 6, 2023. Say goodbye to the "sage on a stage" in the front of the science classroom and welcome educators who encourage students to ask questions and discover the answers independently. The inquiry-based educational model ...

  19. Solving Complex Problems

    Regardless of topic, the students in a section of Solving Complex Problems all work together in the first few class sessions to predict what challenges will arise and to parse the overall problem into a series of 5 to 10 themes. For example, themes might include the environmental context of the problem, engineering challenges, public relations ...

  20. Identifying problems and solutions in scientific text

    Introduction. Problem solving is generally regarded as the most important cognitive activity in everyday and professional contexts (Jonassen 2000).Many studies on formalising the cognitive process behind problem-solving exist, for instance (Chandrasekaran 1983).Jordan argues that we all share knowledge of the thought/action problem-solution process involved in real life, and so our writings ...

  21. The Science of Problem-Solving

    An emergency medicine physician, Dhaliwal is one of the leaders in a field known as clinical reasoning, a type of applied problem solving. In recent years, Dhaliwal has mapped out a better way to ...

  22. Art of Problem Solving

    Science. Science refers to the formulation of causal theories and mathematical models based on statistical methods and disciplined experimentation. Disciplined experimentation refers to experimentation that incorporates the three Rs: repetition, randomisation, and regulation (i.e. control .) If data is obtained from some source that does not ...

  23. Teaching Creativity and Inventive Problem Solving in Science

    Abstract. Engaging learners in the excitement of science, helping them discover the value of evidence-based reasoning and higher-order cognitive skills, and teaching them to become creative problem solvers have long been goals of science education reformers. But the means to achieve these goals, especially methods to promote creative thinking ...

  24. [2404.12926] MM-PhyRLHF: Reinforcement Learning Framework for

    Recent advancements in LLMs have shown their significant potential in tasks like text summarization and generation. Yet, they often encounter difficulty while solving complex physics problems that require arithmetic calculation and a good understanding of concepts. Moreover, many physics problems include images that contain important details required to understand the problem's context. We ...

  25. Social science takes the stage

    Social Science on Stage, held at the Cantor Arts Center. Awaad shared how she and her husband found the woman ducked behind their couch, her eyes wide and terrified. "I need to speak to the imam ...

  26. Computer Science Fundamentals

    Computer Science Fundamentals ... Investigate problem-solving techniques and develop strategies for building positive communities both online and offline. Create interactive games that students can share. See course details Grade 3. Course D. Review of the concepts found in earlier courses, including loops and events. Afterward, students will ...

  27. Why couples have problems communicating with each other

    Communication failures often cause problems in personal relationships. It is the No. 1 reason people seek marital therapy. It hurts parent-child relationships. And it leads to rifts in close ...

  28. WildHacks 2024 Showcases Students' Dynamic Problem-solving Skills

    By bringing the power of computer science to fields such as journalism, education, robotics, and art, Northwestern University computer scientists are exponentially accelerating research and innovation. ... WildHacks 2024 Showcases Students' Dynamic Problem-solving Skills Students from 14 universities completed 43 innovative software projects ...

  29. Semiconductors at scale: New processor achieves remarkable ...

    Semiconductors at scale: New processor achieves remarkable speedup in problem solving. (a) The die photo of a 22nm fully-coupled Ising LSI chip; (b) the front and back views of the board of a 4096 ...

  30. PROBLEM OF THE DAY : 21/04/2024

    In this problem, we are given, an array of size n and a range [a, b]. The task is to partition the array around the range such that the array is divided into three parts. 1) All elements smaller than a come first. 2) All elements in range a to b come next. 3) All elements greater than b appear in the end.