Imperial College London

Dr Michael Yeomans

Business School

Assistant Professor in Strategy and Organisational Behaviour

Contact

 

m.yeomans


Location

 

Business School Building, South Kensington Campus



Publications


7 results found

Cho JY, Tao Y, Yeomans M, Tingley D, Kizilcec RF et al., 2023, Which planning tactics predict online course completion?, 14th Learning Analytics and Knowledge Conference (LAK 2024), Publisher: ACM

Planning is a self-regulated learning strategy and widely used behavior change technique that can help learners achieve academic goals (e.g., pass an exam, apply to college, or complete an online course). Numerous studies have tested the effects of planning interventions, but few have examined the content of learners’ plans and how it relates to their academic outcomes. Building on a large-scale intervention study, we conducted a qualitative content analysis of 650 learner plans sampled from 15 massive open online courses (MOOCs). We identified a number of planning tactics, compared their prevalence, and examined which ones significantly predict course progress and completion using regression analyses. We found that learners whose plans specify a time of day (e.g., morning, afternoon, night) are significantly more likely to complete a MOOC, but only 25% of the learners in our sample used this tactic. The high degree of variation in the effectiveness of planning tactics may contribute to mixed intervention findings in scale-up studies. Models of plan effectiveness can be used to provide feedback on the quality of learners’ plans and encourage them to use effective tactics to achieve their learning goals.

Conference paper

Yeomans M, Wood Brooks A, Boland K, Collins H, Abi-Esber N et al., 2023, A practical guide to conversation research: how to study what people say to each other, Advances in Methods and Practices in Psychological Science, Vol: 6, ISSN: 2515-2459

Conversation—a verbal interaction between two or more people—is a complex, pervasive, and consequential human behavior. Conversations have been studied across many academic disciplines. However, advances in recording and analysis techniques over the last decade have allowed researchers to more directly and precisely examine conversations, in natural contexts and at a larger scale than ever before, and these advances open new paths to understand humanity and the social world. Existing reviews of text analysis and conversation research have focused on text generated by a single author (e.g., product reviews, news articles, and public speeches), and thus leave open questions about the unique challenges presented by interactive conversation data (i.e., dialogue). In this article, we suggest approaches to overcome common challenges in the workflow of conversation science, including recording and transcribing conversations, structuring data (to merge turn-level and speaker-level datasets), extracting and aggregating linguistic features, estimating effects, and sharing data. This practical guide is meant to shed light on current best practices and empower more researchers to study conversations more directly—to expand the community of conversation scholars and contribute to a greater cumulative scientific understanding of the social world.

Journal article

Yeomans M, 2022, The straw man effect: partisan misrepresentation in natural language, Group Processes and Intergroup Relations, Vol: 25, Pages: 1905-1924, ISSN: 1368-4302

Political discourse often seems divided not just by different preferences, but by entirely different representations of the debate. Are partisans able to accurately describe their opponents’ position, or do they instead generate unrepresentative “straw man” arguments? In this research we examined an (incentivized) political imitation game, by asking partisans on both sides of the US health care debate to describe the most common arguments for and against ObamaCare. We used natural language processing algorithms to benchmark the biases and blind spots of our participants. Overall, partisans showed a limited ability to simulate their opponents’ perspective, or to distinguish genuine from imitation arguments. In general, imitations were less extreme than their genuine counterparts. Individual difference analyses suggest that political sophistication only improves the representations of one's own side, but not of an opponent's side, exacerbating the straw man effect. Our findings suggest that false beliefs about partisan opponents may be pervasive.

Journal article

Yeomans M, Schweitzer ME, Brooks AW, 2021, The Conversational Circumplex: Identifying, prioritizing, and pursuing informational and relational motives in conversation., Current Opinion in Psychology, Vol: 44, Pages: 293-302, ISSN: 2352-250X

The meaning of success in conversation depends on people's goals. Often, individuals pursue multiple goals simultaneously, such as establishing shared understanding, making a favorable impression, having fun, or persuading a conversation partner. In this article, we introduce a novel theoretical framework, the Conversational Circumplex, to classify conversational motives along two key dimensions: 1) informational: the extent to which a speaker's motive focuses on giving and/or receiving accurate information; and 2) relational: the extent to which a speaker's motive focuses on building the relationship. We use the Conversational Circumplex to underscore the multiplicity of conversational goals that people hold and highlight the potential for individuals to have conflicting conversational goals (both intrapersonally and interpersonally) that make successful conversation a difficult challenge.

Journal article

Yeomans M, 2021, A concrete example of construct construction in natural language, Organizational Behavior and Human Decision Processes, Vol: 162, Pages: 81-94, ISSN: 0749-5978

Concreteness is central to theories of learning in psychology and organizational behavior. However, the literature provides many competing measures of concreteness in natural language. Indeed, researcher degrees of freedom are often large in text analysis. Here, we use concreteness as an example case for how language measures can be systematically evaluated across many studies. We compare many existing measures across datasets from several domains, including written advice, and plan-making (total N = 9,780). We find that many previous measures have surprisingly little measurement validity in our domains of interest. We also show that domain-specific machine learning models consistently outperform domain-general measures. Text analysis is increasingly common, and our work demonstrates how reproducibility and open data can improve measurement validity for high-dimensional data. We conclude with robust guidelines for measuring concreteness, along with a corresponding R package, doc2concrete, as an open-source toolkit for future research.

Journal article

Yeomans M, Minson J, Collins H, Chen F, Gino F et al., 2020, Conversational receptiveness: Improving engagement with opposing views, Organizational Behavior and Human Decision Processes, Vol: 160, Pages: 131-148, ISSN: 0749-5978

We examine “conversational receptiveness” – the use of language to communicate one’s willingness to thoughtfully engage with opposing views. We develop an interpretable machine-learning algorithm to identify the linguistic profile of receptiveness (Studies 1A-B). We then show that in contentious policy discussions, government executives who were rated as more receptive - according to our algorithm and their partners, but not their own self-evaluations - were considered better teammates, advisors, and workplace representatives (Study 2). Furthermore, using field data from a setting where conflict management is endemic to productivity, we show that conversational receptiveness at the beginning of a conversation forestalls conflict escalation at the end. Specifically, Wikipedia editors who write more receptive posts are less prone to receiving personal attacks from disagreeing editors (Study 3). We develop a “receptiveness recipe” intervention based on our algorithm. We find that writers who follow the recipe are seen as more desirable partners for future collaboration and their messages are seen as more persuasive (Study 4). Overall, we find that conversational receptiveness is reliably measurable, has meaningful relational consequences, and can be substantially improved using our intervention.

Journal article

Kizilcec RF, Reich J, Yeomans M, Dann C, Brunskill E, Lopez G, Turkay S, Williams JJ, Tingley D et al., 2020, Scaling up behavioral science interventions in online education, Proceedings of the National Academy of Sciences, Vol: 117, Pages: 14900-14905, ISSN: 0027-8424

Online education is rapidly expanding in response to rising demand for higher and continuing education, but many online students struggle to achieve their educational goals. Several behavioral science interventions have shown promise in raising student persistence and completion rates in a handful of courses, but evidence of their effectiveness across diverse educational contexts is limited. In this study, we test a set of established interventions over 2.5 y, with one-quarter million students, from nearly every country, across 247 online courses offered by Harvard, the Massachusetts Institute of Technology, and Stanford. We hypothesized that the interventions would produce medium-to-large effects as in prior studies, but this is not supported by our results. Instead, using an iterative scientific process of cyclically preregistering new hypotheses in between waves of data collection, we identified individual, contextual, and temporal conditions under which the interventions benefit students. Self-regulation interventions raised student engagement in the first few weeks but not final completion rates. Value-relevance interventions raised completion rates in developing countries to close the global achievement gap, but only in courses with a global gap. We found minimal evidence that state-of-the-art machine learning methods can forecast the occurrence of a global gap or learn effective individualized intervention policies. Scaling behavioral science interventions across various online learning contexts can reduce their average effectiveness by an order-of-magnitude. However, iterative scientific investigations can uncover what works where for whom.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
