Our Research Methods

Research across many fields of psychology demonstrates that individuals’ mindsets – their attitudes, beliefs, and perceptions about themselves and the world – play a key role in their ability to fulfill their potential. Psychological theory can help us understand the types of mindsets most likely to be relevant in a given setting, but psychological theory alone is not enough to customize effective solutions for specific individuals and varied contexts. To understand how best to leverage the power of mindsets in a particular setting, we use several key research methods.

Randomized Controlled Trials

Often referred to as the “gold standard” of experimental design, randomized controlled trials are a research method in which participants are randomly assigned to either a treatment condition (i.e., an intervention) or a control condition (i.e., the comparison condition). The purpose of this method is to test whether a particular intervention or program causes differences between people on a particular outcome. For example, to examine whether a growth mindset intervention improves test scores for 7th grade students, researchers could randomly assign some students to participate in the growth mindset intervention (i.e., the treatment condition) and the other students to a condition without the growth mindset intervention (i.e., the control condition). Randomization removes systematic differences between the treatment and control groups, leaving only one: the experimental condition to which participants are assigned. In this way, randomized controlled trials give researchers greater confidence that the outcomes they observe are, in fact, attributable to the intervention. In our growth mindset example, if we randomize students to the growth mindset or control condition in the first week of the semester and then compare their grades at the end of the semester, we can determine whether the growth mindset intervention caused differences in those grades. Because we used random assignment, we would expect the academic ability and performance of students in the growth mindset and control groups to be equivalent prior to the intervention, meaning that any differences that emerged afterward could be attributed to the intervention.
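
To make this logic concrete, here is a minimal Python sketch of a simulated trial. The cohort size, grade scale, and three-point treatment effect are hypothetical values chosen only for illustration, not results from any study; the point is that random assignment lets a simple difference in group means estimate the intervention’s causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cohort of 200 seventh-grade students.
n_students = 200

# Random assignment: half the students receive the growth mindset
# intervention (treatment = 1), half do not (treatment = 0).
treatment = rng.permutation(np.repeat([0, 1], n_students // 2))

# Simulated end-of-semester grades on a 0-100 scale. Baseline ability is
# comparable across groups because assignment was random; the 3-point
# treatment effect is a made-up value used only for illustration.
baseline = rng.normal(loc=75, scale=10, size=n_students)
grades = baseline + 3 * treatment + rng.normal(scale=5, size=n_students)

# With randomization, a simple difference in group means estimates the
# causal effect of the intervention on grades.
effect = grades[treatment == 1].mean() - grades[treatment == 0].mean()
print(f"Estimated treatment effect: {effect:.2f} grade points")
```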

 

Conceptual overviews:

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Empirical examples:

Growth Mindset

Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., Lee, H. Y., O’Brien, J., Flint, K., Roberts, A., Trott, J., Walton, G., & Dweck, C. (2016). Designing social-psychological interventions for full-scale implementation: The case of growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374-391.

Purpose and Value

Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high school science classes. Science, 326, 1410–1412. doi:10.1126/science.1177067.

Tibbetts, Y., Priniski, S.J., Hecht, C.A., Borman, G.D., & Harackiewicz, J.M. (in press). Different institutions and different values: Exploring student fit at two-year colleges. Frontiers in Psychology.

Belonging

Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331, 1447–1451. doi:10.1126/science.1198364

 



Pragmatic Measurement

Because researchers can’t directly observe students’ thoughts and feelings, they measure them by asking many different questions in surveys. The problem is that completing long surveys takes up valuable time that could otherwise be spent on instruction or other types of enrichment. Although teachers may be interested in their students’ motivation, they may not be able or willing to give up 30 minutes of class time to have students fill out a survey. On the other hand, well-designed surveys that ask more questions – and take longer to complete – usually provide more accurate information. We face the same challenge in health care. If your doctor wants to know whether you have a healthy heart, they could take your blood pressure and pulse. Although these measures are less expensive and time-consuming than an electrocardiogram or an ultrasound, they are not as good at detecting serious issues. The more costly measures take much more time but produce more focused and precise information about your heart. The pragmatic measurement approach tries to strike a balance between the brevity of short measures and the precision of longer ones. To accomplish this balance, pragmatic measurement systematically identifies the items that best represent an overall scale. Items are often selected because they are most strongly related to outcomes or most sensitive to change. For instance, we created a 10-item scale of students’ expectancy, value, and cost that could be administered to middle school students in under seven minutes. Pragmatic measurement can make surveys more feasible to implement in real-world settings while still allowing us to draw valid conclusions.
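
The sketch below illustrates, in Python, one way this kind of item selection might work: keep the items whose responses are most strongly related to an outcome such as course grades. It is a simplified stand-in, not the procedure used to develop the expectancy-value-cost scale; the item names, sample size, and data are all hypothetical, and in practice item selection also weighs content coverage, reliability, and sensitivity to change.

```python
import numpy as np
import pandas as pd


def select_pragmatic_items(items: pd.DataFrame, outcome: pd.Series, k: int = 10) -> list[str]:
    """Return the k items whose responses correlate most strongly
    (in absolute value) with the outcome of interest.

    This mirrors one pragmatic-measurement strategy: retain the items
    that carry the most information about the outcome so the resulting
    short form can be completed in a few minutes of class time.
    """
    correlations = items.apply(lambda col: col.corr(outcome)).abs()
    return correlations.sort_values(ascending=False).head(k).index.tolist()


# Hypothetical example: 30 Likert-type items and grades for 500 students.
rng = np.random.default_rng(0)
long_survey = pd.DataFrame(
    rng.integers(1, 6, size=(500, 30)),                 # 1-5 responses
    columns=[f"item_{i:02d}" for i in range(1, 31)],
)
# Made-up outcome that happens to depend on a few of the items.
grades = long_survey[["item_03", "item_07", "item_12"]].mean(axis=1) + rng.normal(0, 0.5, 500)

short_form = select_pragmatic_items(long_survey, grades, k=10)
print("Items retained for the short form:", short_form)
```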

 

Conceptual overviews:

Yeager, D., Bryk, A., Muhich, J., Hausman, H., & Morales, L. (2013). Practical measurement. Palo Alto, CA: Carnegie Foundation for the Advancement of Teaching.

Kosovich, J. J., Hulleman, C. S., & Barron, K. E. (2017). Measuring motivation in educational settings: A case for pragmatic measurement. To appear in K. A. Renninger and S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning (pp. 39-60). New York, NY: Routledge.

Empirical examples:

Kosovich, J. J., Hulleman, C. S., Barron, K. E., & Getty, S. (2015). A practical measure of student motivation: Establishing validity evidence for the expectancy-value-cost scale in middle school. Journal of Early Adolescence, 35(5-6), 790-816.

Krumm, A. E., Beattie, R., Takahashi, S., D'Angelo, C., Feng, M., & Cheng, B. (2016). Practical measurement and productive persistence: Strategies for using digital learning system data to drive improvement. Journal of Learning Analytics, 3(2), 116-138.

 



Person-Oriented Approaches

One goal of educational research is to understand which factors are most strongly related to student learning in real-life settings, like a middle school classroom. Decades of research suggest that student learning results from a combination of personal factors (e.g., motivation, gender, background knowledge) and situational factors (e.g., teaching practices, peer engagement, school safety). To understand the motivational dynamics of classrooms, researchers often use a variable-oriented approach, which focuses on how each individual factor relates to an outcome (e.g., Will I get better grades if I feel like I belong in math class?). In reality, however, student learning likely reflects a complex combination of many factors working together. A person-oriented approach tries to account for this complexity by considering how a number of factors work together to influence an outcome. Rather than focusing on how each factor relates to an outcome on its own, person-oriented approaches compare people who share a common combination of factors, or profile, on an outcome of interest (e.g., Will I get better grades if I feel like I belong in math class, think I can succeed, and have a low level of anxiety? Or is it more important for me to feel like I belong in math class, see value in what I’m learning, and have friends who also like math?). Because person-oriented approaches try to explain how multiple factors work together, they can be a useful tool for understanding the complexities of the real world.
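
As a rough illustration of the contrast, the Python sketch below first asks the variable-oriented question (how does each factor, on its own, correlate with grades?) and then a person-oriented one (what profiles of factors occur together, and how do students in each profile differ on grades?). It uses k-means clustering as a simple stand-in for the latent profile analyses researchers more commonly use, and all variable names, coefficients, and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical data: three motivation measures for 300 students.
students = pd.DataFrame({
    "belonging": rng.normal(3.5, 0.8, 300),
    "expectancy": rng.normal(3.8, 0.7, 300),
    "anxiety": rng.normal(2.5, 0.9, 300),
})
# Made-up grade model used only so the example has structure to recover.
students["grade"] = (
    70 + 2 * students["belonging"] + 3 * students["expectancy"]
    - 2 * students["anxiety"] + rng.normal(0, 5, 300)
)

# Variable-oriented question: how does each factor relate to grades on its own?
print(students[["belonging", "expectancy", "anxiety"]].corrwith(students["grade"]))

# Person-oriented question: which combinations of factors (profiles) occur
# together, and how do students in each profile differ on grades?
motivation = StandardScaler().fit_transform(students[["belonging", "expectancy", "anxiety"]])
students["profile"] = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(motivation)

print(students.groupby("profile")[["belonging", "expectancy", "anxiety", "grade"]].mean())
```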

Conceptual overviews:

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). An integrative perspective for studying motivation and engagement. Chapter to appear in K. A. Renninger and S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning.

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). Key challenges and potential solutions for studying the complexity of motivation in school: A person-oriented integrative motivational perspective. British Journal of Educational Psychology.

Empirical examples:

Wormington, S. V., & Linnenbrink-Garcia, L. (2016). A new look at multiple goal pursuit: The promise of a person-centered approach. Educational Psychology Review. doi: 10.1007/s10648-016-9358-2

Corpus, J. H., Wormington, S. V., & Haimovitz, K. (2016). Creating rich portraits: A mixed-methods approach to understanding profiles of intrinsic and extrinsic motivations. The Elementary School Journal, 116, 365-390.  

Wormington, S. V., Corpus, J. H., & Anderson, K. A. (2012). A person-centered investigation of academic motivation and school-related correlates in high school. Learning and Individual Differences, 22, 429-438. doi: 10.1016/j.lindif.2012.03.004


Intervention Fidelity

Often, researchers assume that their programs or interventions will be implemented and received the same way each time. However, that is rarely the case; the way an intervention is implemented usually differs from the way the intervention was designed. Computers stop working, facilitators go off script, and participants draw different conclusions from the same experience. Any one of these factors has the potential to undermine the effectiveness of an intervention.

The extent to which an intervention or program is implemented and received as intended is known as intervention fidelity (also called program integrity or implementation fidelity). For example, we designed an intervention to encourage high school students to enroll in more math and science courses by educating parents about the value of those courses. We intended for parents to receive a brochure about the topic, access a companion website with more information, and have conversations with their children about the relevance of math and science to their current and future lives.

When an intervention or program is implemented, it will be effective to the extent that it sets in motion two types of processes (see Figure 1). Intervention processes are the core components of the intervention theorized to drive changes in participants (e.g., materials distributed to parents explaining why math and science courses are valuable for high school students). These intervention processes impact key psychological processes within participants: changes in participants’ knowledge and attitudes (e.g., parents perceiving more value in math and science courses and, as a result, having more conversations with their teen about the importance of math and science). Psychological processes, in turn, lead to the desired outcome (e.g., students taking more math and science courses in high school).

For a successful intervention, assessing intervention fidelity can help program developers, researchers, and practitioners know which aspects of a program are necessary for successful outcomes (e.g., whether providing a brochure, website, or both resources to parents is effective). For an unsuccessful intervention, assessing intervention fidelity can help researchers understand why an intervention or program did not work as intended (e.g., lack of specific information in brochures, broken link to website, parents not being able to effectively influence their high school students’ decisions). Without examining intervention fidelity, it is difficult to determine whether unfavorable intervention results are due to an ineffective intervention design or incomplete implementation of the program.
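
As a simple illustration, the sketch below computes one possible fidelity index, the proportion of core components each family actually received, and compares outcomes for complete versus partial implementation. The component names, sample, and outcome model are hypothetical, and real fidelity assessments typically also capture the quality of delivery and participants’ responsiveness, not just exposure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical implementation log for the parent-focused math/science
# intervention: whether each of 150 families received each core component.
components = ["brochure_received", "website_visited", "parent_teen_conversation"]
log = pd.DataFrame(rng.integers(0, 2, size=(150, 3)), columns=components)

# A simple fidelity index: the proportion of core components each family
# actually received (1.0 = implemented exactly as designed).
log["fidelity"] = log[components].mean(axis=1)

# Made-up outcome: additional math/science courses enrolled in, generated so
# that it depends on fidelity purely for illustration.
log["extra_courses"] = rng.poisson(1 + log["fidelity"])

# Compare outcomes for families with complete vs. partial implementation to
# help separate weak program design from incomplete delivery.
print(log.groupby(log["fidelity"] == 1.0)["extra_courses"].mean())
```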

 

Conceptual overviews:

Murrah, H. M., Kosovich, J., & Hulleman, C. S. (2017). A framework for incorporating intervention fidelity in educational evaluation studies. In G. Roberts, S. Vaughn, T. Beretvas, & V. Wong (Eds.), Treatment fidelity in studies of educational intervention (pp. 39-60). New York: Routledge.

Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational and behavioral interventions. Journal of Behavioral Health Services and Research, 39(4), 374-396. DOI: 10.1007/s11414-012-9295-x

Hulleman, C. S., Rimm-Kaufman, S. E., & Abry, T. D. S. (2013). Whole-part-whole: Construct validity, measurement, and analytical issues for fidelity assessment in education research. In T. Halle, A. Metz, and I. Martinez-Beck (Eds.), Applying implementation science in early childhood programs and systems (pp. 65-93). Baltimore, MD: Paul H. Brookes Publishing Co.

O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78(1), 33-84.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3-4), 327-350.

Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.

Empirical examples:

Abry, T., Hulleman, C. S., & Rimm-Kaufman, S. E. (2015). Using indices of fidelity to intervention core components to identify program active ingredients. American Journal of Evaluation, 36(3), 320-338.

Nagengast, B., Brisson, B. M., Hulleman, C. S., Gaspard, H., Häfner, I., & Trautwein, U. (2018). Learning more from educational intervention studies: Estimating complier average causal effects in a relevance intervention. Journal of Experimental Education, 86, 105-123.

 
