Our Research Methods

Research across many fields of psychology demonstrates that individuals’ mindsets – their attitudes, beliefs, and perceptions about themselves and the world – play a key role in their ability to fulfill their potential. Psychological theory can help us understand the types of mindsets most likely to be relevant in a given setting, but psychological theory alone is not enough to customize effective solutions for specific individuals and varied contexts. To understand how to best leverage the power of mindsets in a particular setting, we use a variety of key research methods.


Randomized Controlled Trials

Often referred to as the “gold standard” of experimental design, a randomized controlled trial randomly assigns participants to either a treatment condition (i.e., the intervention) or a control condition (i.e., the condition the treatment is compared to). The purpose of this method is to test whether a particular intervention or program causes differences between people on a particular outcome. For example, to examine whether a growth mindset intervention improves test scores for 7th grade students, researchers could randomly assign some students to participate in the growth mindset intervention (the treatment condition) and the rest to a condition that did not include the intervention (the control condition).

Randomization weeds out all systematic differences between the treatment and control groups except one: the condition they are assigned to. In this way, randomized controlled trials give researchers more confidence that the outcomes they observe are, in fact, attributable to the intervention. In our growth mindset example, if we randomize students to the growth mindset or control condition in the first week of the semester and then compare their grades at the end of the semester, we can determine whether the intervention caused any differences in those grades. Because of random assignment, we would expect the academic ability and performance of the two groups to be equivalent before the intervention, so any differences that exist afterward can be attributed to the intervention.
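The logic of random assignment can be illustrated with a small simulation. In this sketch (all numbers are hypothetical, including the assumed 3-point treatment effect), randomly splitting students into two groups yields similar baseline averages, so a post-intervention difference in group means can be attributed to the intervention:

```python
import random
import statistics

random.seed(42)  # fixed seed for reproducibility

# Hypothetical baseline ability scores for 200 students.
students = [random.gauss(75, 10) for _ in range(200)]

# Random assignment: shuffle, then split in half.
random.shuffle(students)
treatment, control = students[:100], students[100:]

# Hypothetically assume the intervention adds ~3 points on average.
treatment_scores = [s + random.gauss(3, 2) for s in treatment]
control_scores = [s + random.gauss(0, 2) for s in control]

# Because assignment was random, baseline means should be similar...
print(round(statistics.mean(treatment), 1), round(statistics.mean(control), 1))

# ...so a post-intervention difference in means reflects the treatment.
diff = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(round(diff, 1))
```

With many students, the pre-intervention averages of the two groups land close together by chance alone, which is exactly what randomization buys us.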


Conceptual overviews:

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Empirical examples:

Growth Mindset

Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., Lee, H. Y., O’Brien, J., Flint, K., Roberts, A., Trott, J., Walton, G., & Dweck, C. (2016). Designing social-psychological interventions for full-scale implementation: The case of growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374-391.

Purpose and Value

Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high school science classes. Science, 326, 1410–1412. doi:10.1126/science.1177067.

Tibbetts, Y., Priniski, S.J., Hecht, C.A., Borman, G.D., & Harackiewicz, J.M. (in press). Different institutions and different values: Exploring student fit at two-year colleges. Frontiers in Psychology.


Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331, 1447–1451. doi:10.1126/science.1198364



Pragmatic Measurement

Because researchers can’t directly observe students’ thoughts and feelings, they measure them by asking many different questions in surveys. The problem is that completing long surveys takes up valuable time that could otherwise be spent on instruction or other types of enrichment. Although teachers may be interested in their students’ motivation, they may not be able or willing to give up 30 minutes of class time for students to fill out a survey. On the other hand, well-designed surveys that ask more questions – and take longer to complete – usually provide more accurate information.

We face the same challenge in health care. If your doctor wants to know whether you have a healthy heart, they could take your blood pressure and pulse. Although these measures are less expensive and time-consuming than an EKG or ultrasound, they are not as good at detecting serious issues. The more costly measures take much more time but produce more focused and precise information about your heart.

The pragmatic measurement approach tries to strike a balance between the brevity of short measures and the precision of longer ones. To accomplish this balance, pragmatic measurement systematically identifies the items that best represent an overall scale. Items are often selected because they are most strongly related to outcomes or sensitive to change. For instance, we created a 10-item scale of students’ expectancy, value, and cost that middle school students can complete in under seven minutes. Pragmatic measurement can make surveys more realistic to implement in real-world settings while still allowing us to draw valid conclusions.
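One simple way to shorten a scale in this spirit is to keep the items most strongly related to the total score. The sketch below uses hypothetical data (real pragmatic measurement work would also weigh relations to outcomes and sensitivity to change): it ranks simulated Likert items by their item-total correlation and keeps the top five for a short form:

```python
import random
import statistics

random.seed(7)

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical responses: 150 students x 20 items (1-5 Likert scale).
# Items share a common "motivation" signal plus item-specific noise.
n_students, n_items = 150, 20
signal = [random.gauss(0, 1) for _ in range(n_students)]
responses = [
    [max(1, min(5, round(3 + signal[s] + random.gauss(0, 1))))
     for s in range(n_students)]
    for _ in range(n_items)
]

# Total scale score per student.
totals = [sum(responses[i][s] for i in range(n_items))
          for s in range(n_students)]

# Rank items by item-total correlation; keep the top 5 for the short form.
item_r = [(i, pearson(responses[i], totals)) for i in range(n_items)]
short_form = sorted(item_r, key=lambda t: -t[1])[:5]
print([i for i, r in short_form])
```

The retained items are the ones that carry the most information about the overall scale, which is how a long instrument can shrink without losing much precision.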


Conceptual overviews:

Yeager, D., Bryk, A., Muhich, J., Hausman, H., & Morales, L. (2013). Practical measurement. Palo Alto, CA: Carnegie Foundation for the Advancement of Teaching.

Kosovich, J. J., Hulleman, C. S., & Barron, K. E. (2017). Measuring motivation in educational settings: A case for pragmatic measurement. In K. A. Renninger & S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning (pp. 39-60). New York, NY: Routledge.

Empirical examples:

Kosovich, J. J., Hulleman, C. S., Barron, K. E., & Getty, S. (2015). A practical measure of student motivation: Establishing validity evidence for the expectancy-value-cost scale in middle school. Journal of Early Adolescence, 35(5-6), 790-816.

Krumm, A. E., Beattie, R., Takahashi, S., D'Angelo, C., Feng, M., & Cheng, B. (2016). Practical measurement and productive persistence: Strategies for using digital learning system data to drive improvement. Journal of Learning Analytics, 3(2), 116-138.



Person-Oriented Approaches

One goal of educational research is to understand which factors are most strongly related to student learning in real-life settings, like a middle school classroom. Decades of research suggest that student learning is a combination of personal factors (e.g., motivation, gender, background knowledge) and situational factors (e.g., teaching practices, peer engagement, school safety). To understand the motivational dynamics of classrooms, researchers often use a variable-oriented approach, which focuses on how each individual factor relates to an outcome (e.g., Will I get better grades if I feel like I belong in math class?).

In reality, however, student learning is likely a complex combination of many factors working together. A person-oriented approach tries to account for this complexity by considering how a number of factors work together to influence an outcome. Rather than focusing on how each factor relates to an outcome on its own, person-oriented approaches compare people with a common combination of factors, or profile, on an outcome of interest (e.g., Will I get better grades if I feel like I belong in math class, think I can succeed, and have a low level of anxiety? Or is it more important for me to feel like I belong in math class, see value in what I’m learning, and have friends who also like math?). Because person-oriented approaches try to explain how multiple factors work together, they can be a useful tool for understanding the complexities of the real world.
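The contrast between the two approaches can be sketched in a few lines. In this toy example (hypothetical students, factor names, and grades), each student is assigned a profile based on whether they are high or low on several factors at once, and whole profiles are then compared on the outcome:

```python
import statistics

# Hypothetical survey data: belonging, expectancy, and value ratings
# (1-5 scale) plus a course grade (0-100) for each student.
students = [
    {"belonging": 5, "expectancy": 4, "value": 5, "grade": 91},
    {"belonging": 4, "expectancy": 5, "value": 4, "grade": 88},
    {"belonging": 2, "expectancy": 4, "value": 2, "grade": 74},
    {"belonging": 1, "expectancy": 2, "value": 2, "grade": 63},
    {"belonging": 5, "expectancy": 2, "value": 5, "grade": 79},
    {"belonging": 2, "expectancy": 2, "value": 1, "grade": 60},
]

def profile(s, cutoff=3):
    # A crude profile: high/low on each factor relative to the scale midpoint.
    return tuple("high" if s[k] > cutoff else "low"
                 for k in ("belonging", "expectancy", "value"))

# A variable-oriented analysis would relate one factor at a time to grades;
# a person-oriented analysis compares whole profiles on the outcome.
groups = {}
for s in students:
    groups.setdefault(profile(s), []).append(s["grade"])

for p, grades in sorted(groups.items()):
    print(p, round(statistics.mean(grades), 1))
```

In practice, person-oriented researchers use techniques like latent profile analysis or cluster analysis rather than simple midpoint splits, but the underlying idea is the same: the unit of comparison is the combination of factors, not any single factor.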

Conceptual overviews:

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). An integrative perspective for studying motivation and engagement. Chapter to appear in K. A. Renninger and S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning.

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). Key challenges and potential solutions for studying the complexity of motivation in school: A person-oriented integrative motivational perspective. British Journal of Educational Psychology.

Empirical examples:

Wormington, S. V. & Linnenbrink-Garcia, L. (2016). A new look at multiple goal pursuit: The promise of a person-centered approach. Educational Psychology Review. doi: 10.1007/s10648-016-9358-2

Corpus, J. H., Wormington, S. V., & Haimovitz, K. (2016). Creating rich portraits: A mixed-methods approach to understanding profiles of intrinsic and extrinsic motivations. The Elementary School Journal, 116, 365-390.  

Wormington, S. V., Corpus, J. H., & Anderson, K. A. (2012). A person-centered investigation of academic motivation and school-related correlates in high school. Learning and Individual Differences, 22, 429-438. doi: 10.1016/j.lindif.2012.03.004

Intervention Fidelity

Often, researchers assume that their programs or interventions will be implemented and received the same way each time. However, that is rarely the case; the way an intervention is implemented usually differs from the way the intervention was designed. Computers stop working, facilitators go off script, and participants draw different conclusions from the same experience. Any one of these factors has the potential to undermine the effectiveness of an intervention.

The extent to which an intervention or program is implemented and received as intended is known as intervention fidelity (a.k.a. program integrity, implementation fidelity). For example, we designed an intervention to encourage high school students to enroll in more math and science courses by educating parents about the value of taking math and science courses. We intended for parents to receive a brochure about the topic, access a companion website with more information, and have conversations with their children about the relevance of math and science to their current and future lives.

When an intervention or program is implemented, it will be effective to the extent that it instigates two types of processes (see Figure 1). Intervention processes are the core components of the intervention theorized to drive changes in participants (e.g., materials distributed to parents explaining why math and science courses are valuable for high school students). These intervention processes impact key psychological processes within the participants – changes such as depth of knowledge and attitudes (e.g., parents perceiving more value in math and science courses and therefore having more conversations with their teen about their importance). Psychological processes, in turn, lead to the desired outcome (e.g., students taking more math and science courses in high school).

For a successful intervention, assessing intervention fidelity can help program developers, researchers, and practitioners know which aspects of a program are necessary for successful outcomes (e.g., whether providing a brochure, website, or both resources to parents is effective). For an unsuccessful intervention, assessing intervention fidelity can help researchers understand why an intervention or program did not work as intended (e.g., lack of specific information in brochures, broken link to website, parents not being able to effectively influence their high school students’ decisions). Without examining intervention fidelity, it is difficult to determine whether unfavorable intervention results are due to an ineffective intervention design or incomplete implementation of the program.
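One common, simple way to quantify fidelity is the proportion of core components delivered as intended. The sketch below (hypothetical sites, with component names loosely drawn from the parent-intervention example above) computes such an index per implementation site:

```python
# Hypothetical fidelity checklist: which core components each site delivered.
core_components = ["brochure_sent", "website_visited", "parent_teen_conversation"]

sites = {
    "site_A": {"brochure_sent": True, "website_visited": True,
               "parent_teen_conversation": True},
    "site_B": {"brochure_sent": True, "website_visited": False,
               "parent_teen_conversation": True},
    "site_C": {"brochure_sent": True, "website_visited": False,
               "parent_teen_conversation": False},
}

def fidelity_index(delivered):
    # Fidelity as the proportion of core components implemented as intended.
    return sum(delivered[c] for c in core_components) / len(core_components)

for site, delivered in sites.items():
    print(site, round(fidelity_index(delivered), 2))
```

An index like this can then be related to outcomes across sites, which is one way to separate an ineffective design from an incompletely implemented one. Real fidelity frameworks typically also weight components and assess dosage and quality, not just delivery.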


Conceptual overviews:

Murrah, H. M., Kosovich, J., & Hulleman, C. S. (2017). A framework for incorporating intervention fidelity in educational evaluation studies. In G. Roberts, S. Vaughn, T. Beretvas, & V. Wong (Eds.), Treatment fidelity in studies of educational intervention (pp. 39-60). New York: Routledge.

Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational and behavioral interventions. Journal of Behavioral Health Services and Research, 39(4), 374-396. doi: 10.1007/s11414-012-9295-x

Hulleman, C. S., Rimm-Kaufman, S. E., & Abry, T. D. S. (2013). Whole-part-whole: Construct validity, measurement, and analytical issues for fidelity assessment in education research. In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. 65-93). Baltimore, MD: Paul H. Brookes Publishing Co.

O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78(1), 33-84.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3-4), 327-350.

Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.

Empirical examples:

Abry, T., Hulleman, C. S., & Rimm-Kaufman, S. E. (2015). Using indices of fidelity to intervention core components to identify program active ingredients. American Journal of Evaluation, 36(3), 320-338.

Nagengast, B., Brisson, B. M., Hulleman, C. S., Gaspard, H., Häfner, I., & Trautwein, U. (2018). Learning more from educational intervention studies: Estimating complier average causal effects in a relevance intervention. Journal of Experimental Education, 86, 105-123.



Mixed Methods

To better understand complex issues in education, Motivate Lab employs mixed methods research. This involves collecting and analyzing both qualitative data (e.g., focus group responses) and quantitative data (e.g., numeric survey responses). By integrating both types of data, mixed methods research provides more comprehensive insights than either could provide alone. Because there is no one standard way to conduct mixed methods research, below is an example of how Motivate Lab integrates quantitative and qualitative data to better understand belonging uncertainty at a Tennessee community college.

Belonging uncertainty refers to the extent to which students feel unsure of whether they belong in their learning environments, and higher levels of uncertainty undermine student learning outcomes. We can use quantitative data gathered from student surveys to better understand average levels of students’ belonging uncertainty, how it changes over time and connects to academic outcomes, and whether certain groups of students experience more belonging uncertainty than others. These are important questions to answer, but they don’t give us the full picture. To learn what is causing students’ belonging uncertainty and how we can help, asking students more open-ended questions provides richer data about the sources of belonging uncertainty for different groups. Qualitative data gives a voice to our quantitative data by capturing students’ own perspectives. For example, conducting focus groups can help us better understand students’ individual experiences and how those might contribute to belonging uncertainty. From structured conversations with individual students, we might find that one student feels uncertain because he feels academically underprepared for college, while another thinks she is over-prepared for community college and wishes she were at a four-year institution.

In this approach to mixed methods research, quantitative data allows us to establish the problem and understand patterns of belonging uncertainty, whereas qualitative data gives us deeper insight into its causes and potential solutions. If we exclusively focus on qualitative or quantitative data, we may miss part of the larger picture. By integrating them, we can develop more nuanced understandings of issues in education and how to address them.


Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26.

Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs. Quality & Quantity, 43(2), 265-275. doi:10.1007/s11135-007-9105-3


Corpus, J. H., Wormington, S. V., & Haimovitz, K. (2016). Creating rich portraits: A mixed-methods approach to understanding profiles of intrinsic and extrinsic motivations. The Elementary School Journal, 116, 365-390.

Qualitative Coding

Qualitative coding is the process of identifying themes in student essay responses, interviews, and other qualitative data sources. While qualitative data on its own may give a researcher more in-depth insights into why a particular phenomenon is happening (see Mixed Methods section), these insights rely on the individual researcher’s interpretation of the data. In the same way that two people can walk away from the same conversation with very different understandings of what was said, two researchers can read the same student essay and interpret it in vastly different ways. Although there are a multitude of approaches to qualitative coding, the purpose is to make the interpretation of text and other non-numeric data systematic, transparent, and replicable.

To put it more simply, qualitative coding pushes researchers to come up with rules for interpreting their data. At Motivate Lab, we accomplish this through a rigorous, iterative process of developing a “coding scheme” to categorize qualitative data using both psychological theory and practically informed insights. The work is time-intensive: researchers collaboratively develop the coding scheme, pilot test it to establish how consistently different coders apply it, apply it to the full set of qualitative data, and then review how each coder applied it once all the data have been coded. In return, qualitative coding allows us not only to interpret our data with greater confidence, but also to use conclusions from the coding to inform future work.

For instance, if we ask students to write about how math is related to their real lives, qualitative coding would allow us to more objectively interpret their responses. We might have coding rules for what constitutes a connection between math and real life, what criteria an essay needs to meet to be considered a good connection, and what type of connection students made. Through the consistent application of these rules, we would be better equipped to understand how students think that math is related to their lives. By coding qualitative data in a systematic and rigorous way, we can be confident that the information we glean from it is more objective and actionable.
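Pilot testing a coding scheme usually involves checking how often two coders agree beyond what chance alone would produce. A standard statistic for this is Cohen's kappa; the sketch below (hypothetical codes for ten essays, with made-up connection categories) computes it for two raters applying a three-category scheme:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    # Observed agreement: proportion of essays both raters coded the same.
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[code] * c2[code] for code in set(rater1) | set(rater2)) / n ** 2
    # Kappa: agreement achieved beyond chance, scaled by the maximum possible.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two researchers applying the same scheme to
# ten student essays ("career", "daily", or "none" connection to math).
rater1 = ["career", "daily", "none", "career", "daily",
          "career", "none", "daily", "career", "daily"]
rater2 = ["career", "daily", "none", "career", "none",
          "career", "none", "daily", "daily", "daily"]

print(round(cohens_kappa(rater1, rater2), 2))
```

A low kappa during piloting signals that the coding rules are ambiguous and need revision before the full dataset is coded; conventions vary, but values above roughly .60 are often treated as substantial agreement.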