Our Research Process


Randomized controlled trials

Often referred to as the “gold standard” of experimental design, a randomized controlled trial is a research method in which participants are randomly assigned to receive either the treatment condition (i.e., an intervention) or the control condition (i.e., the condition the treatment is compared to).

The purpose of this method is to test whether a particular intervention or program causes differences between people on a particular outcome. For example, to examine whether a growth mindset intervention improves test scores for 7th grade students, researchers could randomly assign some students to participate in the growth mindset intervention (i.e., the treatment condition) and the other students to a condition that did not include a growth mindset intervention (i.e., the control condition). Randomization eliminates systematic differences between the treatment and control groups, leaving only one: the condition to which participants are assigned. In this way, randomized controlled trials give researchers more confidence that the outcomes they observe are, in fact, attributable to their intervention.

In the case of our growth mindset example, if we randomize students to the growth mindset or control conditions in the first week of the semester and then compare their grades at the end of the semester, we can determine whether the growth mindset intervention caused differences in their grades. Because we used random assignment, we would expect the academic ability and performance of students in both the growth mindset and control groups to be equivalent prior to the intervention, meaning that any differences that existed after the intervention could be attributed to the intervention.
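As an illustrative sketch (not part of any study cited here), the logic of random assignment can be simulated in a few lines of Python: shuffling a pool of hypothetical students and splitting it in half should yield treatment and control groups with similar average prior GPA, which is exactly the baseline equivalence randomization is meant to buy.

```python
import random
import statistics

def randomly_assign(participants, seed=None):
    """Shuffle the participant pool and split it into two equal groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical prior GPAs for 200 students (made-up data for illustration)
rng = random.Random(42)
gpas = [round(rng.uniform(1.0, 4.0), 2) for _ in range(200)]

treatment, control = randomly_assign(gpas, seed=7)

# With random assignment, the groups' average prior GPA should be close,
# so any post-intervention difference can be attributed to the treatment.
print(round(statistics.mean(treatment), 2), round(statistics.mean(control), 2))
```

With groups of this size, the two averages land within a few hundredths of a grade point of each other; that pre-treatment balance is what licenses the causal interpretation described above.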

Conceptual overviews:

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Empirical examples:

Growth Mindset

Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., Lee, H. Y., O’Brien, J., Flint, K., Roberts, A., Trott, J., Walton, G., & Dweck, C. (2016). Designing social-psychological interventions for full-scale implementation: The case of growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374-391.

Purpose and Value

Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high school science classes. Science, 326, 1410–1412. doi:10.1126/science.1177067.

Tibbetts, Y., Priniski, S.J., Hecht, C.A., Borman, G.D., & Harackiewicz, J.M. (in press). Different institutions and different values: Exploring student fit at two-year colleges. Frontiers in Psychology.

Belonging

Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331, 1447–1451. doi:10.1126/science.1198364


Person-oriented approach

One goal of educational research is to understand which factors are most strongly related to student learning in real-life settings, like a middle school classroom. Decades of research suggest that student learning is a combination of personal factors (e.g., motivation, gender, background knowledge) and situational factors (e.g., teaching practices, peer engagement, school safety). To understand the motivational dynamics of classrooms, researchers often use a variable-oriented approach, which focuses on how each individual factor relates to an outcome (e.g., Will I get better grades if I feel like I belong in math class?). In reality, however, student learning is likely a complex combination of many factors working together.

A person-oriented approach tries to account for this complexity by considering how a number of factors work together to influence an outcome. Rather than focusing on how each factor relates to an outcome on its own, person-oriented approaches compare people who share a common combination of factors, or profile, on an outcome of interest (e.g., Will I get better grades if I feel like I belong in math class, think I can succeed, and have a low level of anxiety? Or is it more important for me to feel like I belong in math class, see value in what I’m learning, and have friends who also like math?). Because person-oriented approaches try to explain how multiple factors work together, they can be a useful tool for understanding the complexities of the real world.
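The papers below use more sophisticated techniques (e.g., latent profile analysis), but the core idea can be sketched with a simple median-split in Python: instead of correlating each variable with grades separately, we label each hypothetical student by their combination of high/low scores and compare grades across those profiles.

```python
from statistics import mean, median

# Hypothetical survey scores (1-5 scales) and grades; made-up data for illustration
students = [
    {"belonging": 4.5, "expectancy": 4.0, "anxiety": 1.5, "grade": 3.8},
    {"belonging": 4.0, "expectancy": 4.5, "anxiety": 2.0, "grade": 3.6},
    {"belonging": 2.0, "expectancy": 2.5, "anxiety": 4.0, "grade": 2.4},
    {"belonging": 1.5, "expectancy": 2.0, "anxiety": 4.5, "grade": 2.1},
    {"belonging": 4.0, "expectancy": 2.0, "anxiety": 4.0, "grade": 2.9},
]

VARIABLES = ("belonging", "expectancy", "anxiety")

def profile(student, cutoffs):
    """Label a student by their combination of high/low scores on all variables."""
    return tuple("high" if student[v] >= cutoffs[v] else "low" for v in VARIABLES)

# Median split on each variable defines "high" vs. "low"
cutoffs = {v: median(s[v] for s in students) for v in VARIABLES}

# Group students who share a profile, then compare the outcome across profiles
groups = {}
for s in students:
    groups.setdefault(profile(s, cutoffs), []).append(s["grade"])

for prof, grades in groups.items():
    print(prof, round(mean(grades), 2))
```

The comparison of interest is between whole profiles (e.g., high belonging + high expectancy + low anxiety) rather than between single variables, which is what distinguishes the person-oriented from the variable-oriented approach.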

Conceptual overviews:

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). An integrative perspective for studying motivation and engagement. Chapter to appear in K. A. Renninger and S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning.

Linnenbrink-Garcia, L., & Wormington, S. V. (in press). Key challenges and potential solutions for studying the complexity of motivation in school: A person-oriented integrative motivational perspective. British Journal of Educational Psychology.

Empirical examples:

Wormington, S. V., & Linnenbrink-Garcia, L. (2016). A new look at multiple goal pursuit: The promise of a person-centered approach. Educational Psychology Review. doi: 10.1007/s10648-016-9358-2

Corpus, J. H., Wormington, S. V., & Haimovitz, K. (2016). Creating rich portraits: A mixed-methods approach to understanding profiles of intrinsic and extrinsic motivations. The Elementary School Journal, 116, 365-390.  

Wormington, S. V., Corpus, J. H., & Anderson, K. A. (2012). A person-centered investigation of academic motivation and school-related correlates in high school. Learning and Individual Differences, 22, 429-438. doi: 10.1016/j.lindif.2012.03.004


Pragmatic measurement

Because researchers can’t directly observe students’ thoughts and feelings, they measure them by asking many different questions in surveys. The problem is that completing long surveys can take up valuable time that could otherwise be spent on instruction or other types of enrichment. Although teachers may be interested in their students’ motivation, they may not be able or willing to give up 30 minutes of class time to have students fill out a survey. On the other hand, well-designed surveys that ask more questions, and thus take longer to complete, usually provide more accurate information.

We face the same challenge in health care. If your doctor wants to know if you have a healthy heart, they could take your blood pressure and pulse. Although these measures are less expensive and time-consuming than an electrocardiogram or ultrasound, they are not as good at detecting serious issues. The more costly measures take much more time but produce more focused and precise information about your heart.

The pragmatic measurement approach tries to strike a balance between the brevity of short measures and the precision of longer ones. To accomplish this balance, pragmatic measurement involves systematically identifying the items that best represent an overall scale. Items are often selected because they are most strongly related to outcomes or sensitive to change. For instance, we created a 10-item scale of students’ expectancy, value, and cost that could be administered to middle school students in under seven minutes. Pragmatic measurement can help make surveys more feasible to implement in real-world settings while still allowing us to draw valid conclusions.
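One common way to shorten a scale is to keep the items most strongly related to the scale as a whole. The Python sketch below (a hypothetical four-item example, not the actual expectancy-value-cost scale) ranks candidate items by their corrected item-total correlation, i.e., each item’s correlation with the sum of the remaining items, and keeps the top two.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical survey responses: rows are students, columns are 4 candidate items (1-5)
responses = [
    [5, 4, 2, 5],
    [4, 4, 3, 4],
    [2, 1, 3, 2],
    [1, 2, 2, 1],
    [3, 3, 3, 3],
]

# Corrected item-total correlation: each item vs. the total of the *other* items
n_items = len(responses[0])
scores = []
for i in range(n_items):
    item = [row[i] for row in responses]
    rest = [sum(row) - row[i] for row in responses]
    scores.append((pearson(item, rest), i))

# Keep the 2 items that best represent the overall scale
best = [i for _, i in sorted(scores, reverse=True)[:2]]
print(sorted(best))
```

In this toy data, the third item barely tracks the rest of the scale and is dropped, while the two items that move most consistently with the total survive. Selecting items by sensitivity to change or by their relation to outcomes, as described above, follows the same ranking logic with a different scoring criterion.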

Conceptual Overviews:

Yeager, D., Bryk, A., Muhich, J., Hausman, H., & Morales, L. (2013). Practical measurement. Palo Alto, CA: Carnegie Foundation for the Advancement of Teaching.

Kosovich, J. J., Hulleman, C. S., & Barron, K. E. (2017). Measuring motivation in educational settings: A case for pragmatic measurement. In K. A. Renninger and S. E. Hidi (Eds.), The Cambridge Handbook on Motivation and Learning (pp. 39-60). New York, NY: Routledge.

Empirical Examples:

Kosovich, J. J., Hulleman, C. S., Barron, K. E., & Getty, S. (2015). A practical measure of student motivation: Establishing validity evidence for the expectancy-value-cost scale in middle school. Journal of Early Adolescence, 35(5-6), 790-816.

Krumm, A. E., Beattie, R., Takahashi, S., D'Angelo, C., Feng, M., & Cheng, B. (2016). Practical measurement and productive persistence: Strategies for using digital learning system data to drive improvement. Journal of Learning Analytics, 3(2), 116-138.