A blogpost by Emma Holliday, student on MSc HCI 2019-2021
Getting into the video games industry was originally my inspiration for studying Computer Science. As I learned about the often toxic work environments (and sexism) in the gaming industry, I decided to leave this dream behind and went into software development instead. That’s why I’m so amazed and thrilled to have actually designed and built my own game and, even better, to have won an award for it.
The game itself was built as part of my coursework for the Serious and Persuasive Games module (led by Prof Anna Cox) on the Human-Computer Interaction course at UCL. (Even though I had somewhat given up on a career in the gaming industry, I still really loved games and would take any excuse to learn about them and play them.) Something that really stood out to me during the lectures was the concept of the “Magic Circle”: the idea that games exist within set boundaries that are separate from the real world. Different rules apply within the game (e.g. violence is allowed), actions in the game don’t have consequences in the real world, and actions in the real world don’t apply to the game. I immediately challenged this – I’m sure we’ve all had a gaming session that left friendships a little sour even after the game had finished, taught us something new, or even strengthened our relationships. I personally believe that narrative-based games in particular are very unlikely to leave no lasting impression on the real world long after they have been played. As such, I designed a game that would deliberately break the “Magic Circle” by including people’s real-world actions in gameplay. My hope was that breaking the “Magic Circle” upfront would weaken the boundaries between the game and the real world, encouraging players to take experiences from the game back into real life.
My time studying this module was during the COVID-19 pandemic, which also provided a lot of inspiration, largely because it was inescapable and incredibly topical. Other games we studied during the module were also influential, particularly those on blame culture in nursing (such as Nurse’s Dilemma and Patient Panic). All of this culminated in my game, COVID Ward, where players take on the role of a nurse working in the intensive care unit during the COVID-19 pandemic. The gameplay is quite simple: players use the arrow keys to control the nurse and the spacebar to administer aid to patients in the ward. Patient health deteriorates over time, and patients will eventually die if their health becomes too low, while fully healed patients are able to leave the ward. The game plays out as levels representing 12-hour shifts, so time is limited and the nurse character can only do so much in a day. In between levels is where the “Magic Circle” is broken: the player is asked a question relating to their real-world actions during the pandemic, and their answer affects the number of patients admitted to the ward during the next level. For example, the question may ask if the player always wore a mask on public transport. If the player answers “no”, there will be more patients in the ward the next day than if they had answered “yes”. Through this, players see their actions directly associated with the effects on other people and healthcare staff.
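The between-level mechanic is easy to sketch in code. The actual game was built in GameSalad (a no-code tool), so the Python below is purely illustrative: the function names and all the numbers are invented for this example, not taken from the game.

```python
BASE_ADMISSIONS = 4     # patients admitted after a "safe" answer (made-up number)
PENALTY_ADMISSIONS = 3  # extra patients when the player admits unsafe behaviour

def admissions_for_next_shift(answered_safely: bool) -> int:
    """Between levels, the player's real-world answer sets the next ward load."""
    return BASE_ADMISSIONS + (0 if answered_safely else PENALTY_ADMISSIONS)

def tick_patient(health: float, decay: float = 1.0) -> float:
    """Patient health deteriorates every tick unless the nurse administers aid."""
    return max(0.0, health - decay)

# Answering "no" to "did you always wear a mask?" fills the ward faster:
assert admissions_for_next_shift(True) < admissions_for_next_shift(False)
```

The point of the design is visible in the comparison at the end: the player’s real-world answer, not anything they did in-game, determines the difficulty of the next shift.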
The game was refined iteratively thanks to playtesting with my friends, fellow students, and staff on the Serious and Persuasive Games course. Finally, a user study was completed to test whether the game had the desired effects of making people feel more responsible for their actions, encouraging them to follow COVID-19 safety guidance more closely, and increasing empathy for nursing staff. Though it was a small-scale study, initial results were very positive and suggested that the game had achieved its goal in terms of attitude change. Qualitative responses from participants repeatedly mentioned the use of their real-world actions and suggested the game was both engaging and encouraged them to reflect more deeply on their actions. As such, breaking the “Magic Circle” is a promising technique for persuasive games which warrants further research.
In terms of development, the game was built using GameSalad (a no-code game authoring tool). Though I’m familiar with code (and there were times I wished I was using it!), GameSalad did provide a really quick way to get started and put ideas together without too much learning overhead. Having since tried to pick up Unity, I can say GameSalad is definitely a lot quicker for getting your ideas into something working, which is crucial for iterative development. GameSalad also made it very easy to publish my game so that others could play it online, which helped enormously when getting feedback and conducting the user study, as no installation or downloads were required.
In an early version of the game, everything was represented by incredibly simple shapes and numbers. While I support the concept of primarily using the mechanic to deliver the message, as demonstrated remarkably in Brenda Romero’s games, I felt my game would benefit from graphics and sound effects to help immerse the player and reinforce the narrative that these were real people affected by their actions, encouraging stronger empathy. Given the short time frame (and that I am only one person who is not talented enough to do everything from scratch!) I was exceedingly grateful for assets created by other artists, which allowed me to create something much more complete than I could have achieved on my own. I found the music by Bio_Unit on Free Music Archive and got sound effects from Kenney.nl (a fantastic source for free assets!) I paid for some pixel-art assets from Malibu Darby through Humble Bundle which, very fortunately, had a game dev bundle just as I was building the game. I also dabbled in editing the pixel art myself to customise it to my game. If you’d like to give that a go, I recommend Aseprite.
Overall, I was really happy with my game and the results of the user study. Prof. Anna Cox had suggested that the class submit their work to the CHI PLAY student game competition. Earlier in my time at UCL, Prof. Catherine Holloway had encouraged us to always try and share our work with the academic community and introduced me to Student Research Competitions. These are often held across different ACM conferences (such as SIGCHI, SIGACCESS, CHIPLAY) and felt much more approachable to me as the level of work expected was closer to what I had already done for my coursework. As such, I decided to go for it and submit my game and a paper detailing the user study to CHI PLAY 2021’s Student Game Design Competition. I didn’t really expect to win anything, but I was just happy to share my game in the hopes it might have a life beyond my university coursework grades.
So I was very excited when I heard back to find out my game had been accepted! It turns out, 8 of the 19 submissions were accepted to the conference as finalists. This did mean, however, that I had a bit more work to do! I updated my paper to respond to the reviewers’ comments, battled with the proper formatting and TAPS submission system (though the support staff are really helpful), and created a reaction video to be played at the conference. I’d be lying if I said it wasn’t stressful (largely because it fell right across the deadline for my final MSc dissertation!!) but it was definitely worth it to see my work presented at the conference and all the interest it generated. Even more so when I was announced as receiving an honourable mention for the competition! I also got to attend the entire conference – though you don’t have to be an author to do so – and joined lots of interesting talks and presentations across all aspects of games.
It was really rewarding finding myself back in the games industry, in a sense, but from a completely different direction than I had ever imagined and one that was much better suited to me and my passions. It was even more rewarding to know that I had contributed to it and that, maybe one day, my research will have an impact on the games I end up playing on my sofa.
Me this week: why is this guy chasing me for a reply today when he only emailed me on Thursday?
Also me this week: how come that person I emailed on Wednesday last week STILL hasn’t replied?
We’ve all spent time hitting the refresh button on an inbox, waiting for a reply to an email we’ve sent and wondering what’s holding things up. The truth is, people answer urgent emails before others, and if they think a reply to your email is not urgent and is going to take them ages to write, you might be waiting a really long time!
Our frustration waiting for others to reply to our emails led us to investigate which factors influence how quickly people respond to emails, and whether there’s anything we can do as senders to get them to reply to us before answering someone else’s message! The results of our study of 45 people responding to 16,200 e-mails sent over a 3-week period show that when e-mail replies are not urgent, people wait until later to send replies rather than responding immediately. However, when they do respond, they are more likely to tackle the messages that are easier to respond to (e.g. needing a short reply) and those that carry the greatest importance (e.g. when there’s something in it for the sender). In contrast, when presented with e-mails that need an urgent reply, people prioritize these and disregard factors such as length of reply.
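The ordering the study describes can be pictured as a sort key: urgent messages first regardless of effort, then easy replies, then the most important ones. This is an illustrative reading of the findings, not code or a model from the paper, and all the field names are invented.

```python
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    urgent: bool
    expected_reply_effort: int  # e.g. minutes needed to write a reply (invented field)
    importance: int             # higher = more in it for the sender (invented field)

def reply_order(inbox: list) -> list:
    """Sort an inbox the way the study suggests people actually reply."""
    return sorted(
        inbox,
        key=lambda e: (
            not e.urgent,             # urgent messages first, ignoring other factors
            e.expected_reply_effort,  # then short/easy replies
            -e.importance,            # then the most important
        ),
    )
```

Because Python’s tuple comparison checks elements left to right, urgency dominates everything else, which matches the finding that people disregard reply length for urgent messages.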
Our results are important for all of us who use e-mail and want timely responses. Composing e-mails that clearly signal that an urgent response is needed is the best way to ensure that the receiver will deal with it promptly. If it’s not urgent, making clear that you just need a short response will mean your email gets replied to before others.
Anna L. Cox, Jon Bird, Duncan P. Brumby, Marta E. Cecchinato & Sandy J. J. Gould (2021) Prioritizing unread e-mails: people send urgent responses before important or short ones, Human–Computer Interaction, 36:5-6, 511-534, DOI: 10.1080/07370024.2020.1835481
Our paper “The new normals of work: a framework for understanding responses to disruptions created by new futures of work” has just come out in Human-Computer Interaction Journal.
Open access to the paper is available here: https://www.tandfonline.com/doi/full/10.1080/07370024.2021.1982391
In the paper, we explore how people adapted to work during the pandemic and how we might understand people’s responses to disruption in the new future of work. We highlight a number of issues, tools, and strategies that people used to support themselves while working remotely: for example, virtual commutes, having a dedicated workspace, new scheduling techniques, or staying connected with colleagues through virtual and asynchronous chats.
Exploring these with Genuis and Bronstein’s model of the “new normal”, we show three kinds of responses:
- waiting to return to old normal,
- finding a new normal and
- anticipating a new future of work.
These new normals of work help us to understand how we can help workers going forward.
We’d like to thank our reviewers for their feedback and our participants for helping develop our work within eworklife.co.uk and a special shoutout to @DilishaBP whose work with the new normal model inspired this work 😀 You can find their paper on Finding a “New Normal” for Men Experiencing Fertility Issues here: dl.acm.org/doi/abs/10.114…
If you find this interesting you might also like our other papers on work during the pandemic:
- Disengaged From Planning During the Lockdown? An Interview Study in an Academic Setting Yoana Ahmetoglu; Duncan P. Brumby; Anna L. Cox (2021) IEEE Pervasive Computing
- Staying Active While Staying Home: The Use of Physical Activity Technologies During Life Disruptions Joseph W. Newbold, Anna Rudnicka and Anna Cox (2021) Frontiers in Digital Health
- Eworklife: Developing effective strategies for remote working during the COVID-19 pandemic A Rudnicka, JW Newbold, D Cook, ME Cecchinato, S Gould, AL Cox (2020) The New Future of Work Online Symposium
The first draft of this blogpost was written as a twitter thread by Joe Newbold and unrolled using ThreadReader
When the Social Becomes Non-Human
Have you ever interacted with non-humans? A team of researchers from the University of Oslo and SINTEF examined how young people perceive various types of social support provided by chatbots. Their results indicate that chatbots can be a daily source of social support for young people, helping them think about themselves more constructively and stimulating self-disclosure without social judgment by offering a safe and anonymous space for conversation.
Read the research paper here: https://dl.acm.org/doi/10.1145/3411764.3445318
Young people are increasingly suffering from mental health issues, but they tend not to seek out professional help. Despite needing social support, young people often struggle to reach out to others. This problem has become more acute during the COVID-19 pandemic. Unexpected changes to professional and personal lives have placed a burden on people’s mental health. Pandemic-related restrictions such as lockdowns and social distancing have made it more difficult to receive in-person support from friends and professionals. As a result of all this, we now urgently need effective online tools that can provide people with the social support they need.
Chatbots, especially those designed for social and mental health support (social chatbots, or emotionally aware chatbots), can help meet these demands. As artificial agents, chatbots interact with users through natural language dialogue (text, speech, or both). Social chatbots such as Replika, Woebot, and Mitsuku imitate humanlike conversations with friends, partners, therapists, or family members, with the potential to perceive, integrate, understand, and express emotions.
Other online channels (e.g., Instagram, Facebook, online groups, and health forums) can also provide social support to young people; however, they carry certain limitations, such as the risk of receiving inaccurate guidance or the possibility of not receiving help from others despite reaching out.
The researchers conducted in-depth interviews with sixteen young people aged 16-25. They found that after using Woebot for two weeks, most participants reported that the chatbot provided appraisal support and informational support; around half of them received emotional support; and some perceived instrumental support.
- Appraisal support: Support offered in the form of feedback, social comparison, and affirmation.
- Emotional support: Expressions of empathy, love, trust, and caring.
- Informational support: Advice, suggestions, and information given by the chatbot to help the user solve a problem.
- Instrumental support: Tangible aid, characterized by the provision of resources, help, or assistance in a tangible and/or physical way, such as giving money or hands-on help.
So, more specifically, what did the sixteen young people find was good about these chatbots? First of all, as a non-human agent, a conversational chatbot can make people feel like they are writing a diary entry or talking to themselves, thereby facilitating self-reflection and making self-disclosure easier, safer, and more honest. Using a chatbot for social and emotional support is also thought to be more reliable than talking to a human, as well as being a good choice for discussing worries that are more personal or private.
As artificial agents, chatbots can easily provide users with lots of relevant, immediate, and efficient information, without being constrained by time or space. This may be useful when our worries extend beyond the scope of our friends’ knowledge or expertise. Moreover, different sources of support can act collaboratively, as the chatbot Woebot was reported to motivate users to contact others for help as well as guiding people in their search for information, which indicates that a chatbot may have the potential to help solve practical problems.
However, despite the many positive comments from users, using current chatbots for social support is not a perfect solution. Current chatbots may produce biased, inadequate, or failed responses, affecting the quality of the user experience. Psychologically, getting support from others makes some people feel ‘cared for and loved’, which is not the case when receiving a response from a chatbot, as some may see chatbots as merely robots without emotions. People who turn to chatbots as a source of support may need time to develop relationships with them, becoming familiar with and building trust towards chatbots. Moreover, through conversations about personal stories, users’ private data is collected and stored by chatbots; ensuring users’ privacy and maintaining a relationship of trust is another challenge.
Indeed, chatbots provide us with a new way to get connected and supported beyond the traditional human-human context. This approach could be further evaluated in larger samples and different user groups. It also raises further questions about how chatbots for social support may influence the future of human communication. Imagine the future – one day, when chatbots can provide social support like real people, what will human-human relationships look like? When we have another place (chatbots) to talk about our distress or happiness, how will it affect our interpersonal relationships?
How AI is helping us connect in digital spaces
With the COVID-19 pandemic resulting in stringent social restrictions around the world, I was fascinated by how individuals (including myself) adapted to our newfound situation. Living in a new reality where I could not meet up with friends and family, I, like others, turned to social technologies to satiate my need for connection. However, with all other social affordances stripped away, I was left unsatisfied with the current capabilities of online socialising. Online messaging and video-calling did not seem to satisfy my social cravings. At times, it was dry, awkward, and overwhelming. It truly took a pandemic to realise the deficiencies of digital devices in emulating the deep, rich, exciting (and often messy) offline interactions we took for granted in our pre-pandemic life.
Therefore, as the CHI 2021 conference came around, I was excited to see what new research was being undertaken to make the online socialising experience more meaningful. As I scoured the programme, favouriting talks relating to online communication, social media, etc., I noticed that two studies incorporated artificial intelligence (AI) to help make online interactions more affect sensitive through ‘AI-mediated communication’. ‘Affect’ refers to the psychological common-denominator of our emotional lives, underpinning emotions, moods, feelings, etc. (Russell, 2003). So how were researchers leveraging AI to help make our online interactions more affective, and should they be?
The first study, by Murali et al. (2021), titled AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations, developed and investigated the efficacy of an affect-sensitive AI bot embedded in Microsoft Teams (named AffectiveSpotlight). The study aimed to address the problem of limited audience-presenter interaction during online presentations: AffectiveSpotlight captures the affective responses of the audience and communicates them to the presenter. The bot analyses the emotive responses (valued by presenters) of each audience member in real time and spotlights the most expressive member to the presenter without labelling the emotion, allowing the presenter to interpret it themselves. The study found that using AffectiveSpotlight improved the presenter’s experience: it made them feel more aware of their audience, speak for longer periods (implying reduced speaker anxiety), and rate their own presentation quality closer to audience members’ ratings. Whilst these were promising results, participants were solely from the tech sector, limiting the generalisability of the findings to other groups who also give online presentations, e.g., teacher-student interactions.
The second study, by Liu et al. (2021), titled Significant Otter: Understanding the Role of Biosignals in Communication, explored a smartwatch-based AI application (named Significant Otter) that analysed users’ biosignals (heart rate) to generate a set of possible emotional states that the user could choose from and send to their romantic partner via animated otters. The qualitative study followed romantic couples for over a month to investigate the role of Significant Otter’s biosignal-sharing capability in communication. Couples reported that sharing biosignals in this manner supported easier, more authentic communication and nurtured a greater sense of social connection. These were exciting results; however, the paper did mention that some participants questioned themselves when the presented emotional states did not match what they felt internally, and others would blindly accept Significant Otter’s suggestions, leading them to reflect less on their actual state.
These papers fascinated me; I had never thought that AI could help close the emotional gap between individuals in digital space, and facilitate richer online interactions in doing so. Research into affect-sensitive AI is integral to the HCI field of affective computing. There have been past calls in this field to frame affect not as discrete units of information to be processed by a computer (affect as information), but as dynamic, socio-culturally embedded outcomes experienced through interaction (affect as interaction) (Boehner et al., 2005). It was interesting to see how the new technologies introduced at CHI 2021 were veering towards the latter framing, where participants in Liu et al. (2021) and Murali et al. (2021) were left free to interpret the emotions presented by the technologies and ascribe meaning themselves. It is exciting to see current affect-sensitive technologies taking this perspective as, from a wider lens, it signals a shift from technologies being representational tools to being participatory tools. I cannot wait to see what is in store next!
Our new research paper, coauthored with Dr Diego Garaialde and Dr Ben Cowan from UCD, identifies the best location to place rewards when using gamification to motivate users. The paper, published in the International Journal of Human-Computer Studies and available online on ScienceDirect (sciencedirect.com), highlights that placing rewards early in the user interaction is more effective and encourages users to use the app more.
Gamification has become a common technique for incentivising users to engage with an application; however, the placement of rewards and how it impacts users is often not considered during the design process. We found that the value a user places on a reward diminishes the further into the future it is, and that placing rewards early in the interaction sequence leads to an improvement in the perceived value of that reward.
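One way to picture this finding is with the textbook hyperbolic discounting model from behavioural economics, in which perceived value falls with delay. To be clear, the formula and the discount rate below are standard illustrations, not the model or parameters used in the paper.

```python
def perceived_value(reward: float, delay: float, k: float = 0.5) -> float:
    """Hyperbolic discounting: value shrinks the further away the reward is.

    `k` is an arbitrary discount rate chosen for illustration.
    """
    return reward / (1 + k * delay)

# The same badge feels far more valuable when it arrives early in the
# interaction sequence than when it arrives ten steps later:
early = perceived_value(10, delay=1)
late = perceived_value(10, delay=10)
assert early > late
```

Under this framing, rewarding users for simply opening the app (delay near zero) maximises the subjective value of the reward, which is consistent with the paper’s recommendation to reward users early.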
Speaking about the findings Diego Garaialde said: “Rather than rewarding after longer interactions with the app, which is common in current gamified applications, designers should consider rewarding users early for deciding to interact with the app in the first place.”
In November 2020, we conducted focus groups with 38 incoming undergraduates. Here, we present the findings from this research – which may be helpful for lecturers seeking to support university students in remote learning.
Camera on or off?
Students missed seeing and being seen by peers and educators. Yet they found keeping a camera on during lectures intrusive. And seeing into the rooms of students who weren’t paying attention was demotivating.
Positive social media use during lectures
Social isolation led to social media use during lectures. Scrolling through feeds provided the stimulation needed to keep students at least partly engaged in the lectures they didn’t enjoy.
Online procrastination got out of hand
Pre-recorded lectures allowed a far more disruptive use of social media: bored students would pause the lecture and stream video content instead.
Technology doesn’t guarantee interaction
Students appreciated chat facilities and online breakout rooms. But when lecturers didn’t respond to questions in chat, or if starting a conversation in a breakout room felt awkward, students would lose motivation to engage in this mode of learning.
What can university lecturers do?
Use online quizzes and live polls! For the students we interviewed, interactivity was useful for helping to avoid distractions, and even more so for maintaining focus on lecture content. They wished that polls and quizzes were more often incorporated into online learning.
This research was supported by funding from the Medical Research Council (MR/T046864/1). It was conducted by Year 3 UCL Psychology students Selina He, Eloise May, Simran Suden, and Ella Verrells, supervised by Professor Anna Cox, Dr Anna Rudnicka, Elahi Hossain, and Professor Yvonne Rogers from UCL Interaction Centre.
Applications are invited for a PhD studentship at the UCL Interaction Centre (UCLIC), funded by an EPSRC/Microsoft iCASE studentship, for up to 4 years, from October 2021. Minimum enhanced stipend of £22,109 per annum, plus fees.
Spreadsheet applications, such as Excel, are deep and feature-rich software. We want users to learn and understand spreadsheet applications to take full advantage and be empowered by them. An important technique for doing so is ‘in-app teaching’, where we introduce new features and suggest tutorials through pop-up dialogs. However, we do not fully understand the optimal timing and level of information to provide in these dialogs. Nor do we understand how these dialogs participate in the wider learning experience of the user, which may involve consulting documentation, video tutorials, training courses, and help from colleagues.
If we do it right – we create an empowering moment for the user, who learns something new and useful. If we do it wrong (e.g., wrong timing, or wrong level of detail), we create a frustrating and irrelevant distraction that results in decreased user trust and satisfaction.
This PhD would develop a theory of interruptibility and spreadsheet mastery from observational studies and experimentally test one or more design interventions that improve the timing/design of in-app teaching dialogs. Beyond the immediate application of helping us better teach users our newest and best features in spreadsheet applications, like Excel, the results may have profound implications for how we design trustworthy tutorials for all feature-rich software. See more information about the project.
Applicants should be interested in Human-Computer Interaction (HCI), and must possess a strong Bachelor’s (1st or 2:1) or Master’s degree in a related discipline (e.g., Computer Science, HCI, Psychology). The ideal candidate for this project will be a deep analytical thinker who is also equipped with the necessary technical skills to conduct research using one or more empirical methods (i.e., quantitative experiments conducted in the lab or the field, or qualitative observational studies). Good programming skills, experience of software development of interactive applications or analytical models, and relevant previous research experience are also desirable.
To be considered for this scholarship, applicants need to meet the eligibility requirements defined by UK Research and Innovation (please see the linked document). In particular, any applicant classed as a home student would be eligible for funding; applicants classed as international students could be eligible for funding in exceptional circumstances (for example, if a candidate has an outstanding track record of very relevant research, including publications in top-tier venues). Please refer to the linked document for definitions of “home” and “international” students.
Applications should include:
- Personal statement (1 – 2 pages).
- Research proposal (1 – 4 pages): a summary of relevant literature to motivate a research question and a description of the type of research to be conducted (including ideas about the methodology and data analysis that could be used).
- Name and email contact details of two referees.
- Academic transcripts.
Help with your proposal
Know what kind of contribution you want to make:
Use Seven Research Contributions in HCI by Jacob O. Wobbrock to help you think about what you want to do.
How should you structure your proposal?
The following advice is based on Andrew Derrington’s PIPPIN magic formula for structuring a research proposal.
- Briefly state the PROMISE. What will your programme of research deliver?
- Say why it is IMPORTANT. What gap in the literature does it address? Or which applied problem does it aim to solve?
- State up to 3 sub-PROBLEMS. What are the things you need to find the answer to in order to deliver on your promise?
- Introduce your PROJECT. Briefly say what sort of approach you will take.
- Next, describe how you intend to IMPLEMENT your programme of research. Which methods will you use to find the answers to your 3 sub-problems?
- And finally, say what will happen NEXT. What is the potential impact of your project?
Interviews will take place around 21 June 2021.
We’re part of a group (Sandy J.J. Gould, Lewis L. Chuang, Ioanna Iacovides, Diego Garaialde, Marta E. Cecchinato, Benjamin R. Cowan, Anna L. Cox) running a special interest group meeting at CHI2021 on the idea of adding ‘friction’ to interactions. Most of the time designers and engineers try to make interactions with technology less effortful. Frictions are about doing the opposite in order to change the way people interact with something.
Human-computer interactions are typically designed to be smooth and efficient. The implicit objective is to enhance performance, improve safety, and promote satisfaction of use. Few designers would intentionally create systems that induce frustration or are inefficient or even dangerous. Nonetheless, optimizing usability can lead to automatic and thoughtless behaviour. In other words, an over-optimization of performance and satisfaction could imply or encourage behaviours that compromise individual users and their communities.
Frictions – changes to an interaction that make it more taxing in some way – are one potential solution to the risks of over-optimisation and over-proceduralisation. The content warnings placed on social media posts on platforms like Facebook and Twitter are an example of a friction. These frictions have been added in response to particularly ‘risky’ scenarios where, for instance, widespread misinformation may significantly influence democratic processes. Twitter, for instance, added friction to the process of ‘retweeting’ (i.e., relaying a message to other users) for certain messages. If a user tried to retweet a message containing a link without having opened the link, Twitter would produce an interstitial dialog asking if they wanted to read the link before retweeting (Andrew Hutchinson, 2020).
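The shape of that retweet friction is simple to sketch. This is not Twitter’s actual code; the function names and fields below are invented for illustration, with the dialog abstracted into an injectable `confirm` callback.

```python
from typing import Callable

def retweet(tweet: dict, user_opened_link: bool,
            confirm: Callable[[str], bool]) -> bool:
    """Relay a tweet, inserting friction when it contains an unread link.

    `confirm` stands in for the interstitial dialog; it returns True if the
    user chooses to go ahead anyway.
    """
    if tweet.get("has_link") and not user_opened_link:
        # The friction: interrupt the otherwise-automatic action with a prompt.
        if not confirm("Want to read the article before retweeting?"):
            return False  # the user backed out – the friction did its job
    # ... relay the message to followers ...
    return True

# A user who hasn't opened the link is prompted first; one who has is not.
assert retweet({"has_link": True}, False, confirm=lambda _: False) is False
assert retweet({"has_link": True}, True, confirm=lambda _: False) is True
```

Note that the friction only appears in the ‘risky’ case; the common path stays smooth, which is exactly the calibration question the SIG raises below about giving people space to think without degrading the experience.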
In the short proposal we submitted, we consider different academic disciplines’ accounts (and usages) of the tension between automatic and deliberate behaviour. We explore the limits of the theoretical frameworks that can plausibly describe the mechanism of designed frictions. Following this, we enumerate some effective designs for intentional frictions in human-computer interactions, identify abstract principles from their real-world use, and expand on how they could be generalized for innovations in designed frictions. Finally, we hope to address how current practices for evaluating usability can be modified to consider the potential costs of automatic behaviour and how they could be mitigated with designed frictions.
There are a number of open questions about the use of frictions. One of the goals of the SIG is to determine which are most pressing. As we see it, the most important questions about frictions are:
- What kinds of interactional contexts are frictions most suited to?
- What are the most effective ways to get people to switch to a slower, more deliberative way of thinking?
- How quickly do people become habituated to frictions, and how do we manage and/or mitigate the effects of friction habituation?
- Should we be focusing on changing people’s behaviour instead of steering them with frictions?
- How do we calibrate frictions so that they give people space to think, but are not excessively frustrating or negative to user experience?
To find out more go to https://www.sjjg.uk/frictions-sig/