Stockholm University

Donald McMillan

About me

Donald McMillan is an Assistant Professor in Human-Computer Interaction. His research interests centre on the adoption, adaptation, and use of novel technology in everyday settings. His current focus is on expanding the interactional repertoire of conversational interfaces, examining how fluid and fluent human-human communication can inform the development of better human-computer interaction paradigms through the application of machine learning. Currently, Donald is a PI on the projects Advanced Adaptive Intelligent Systems and Designing New Speech Interfaces with Ambient Audio. He is also involved in the Implicit Interaction project.

 

Website: https://mcmillan.it
Email: donald.mcmillan@dsv.su.se
 

Publications

A selection from the Stockholm University publication database

  • Conversational User Interfaces on Mobile Devices

    2020. Razan Jaber, Donald McMillan. Proceedings of the 2nd Conference on Conversational User Interfaces (CUI 2020)

    Conference

    Conversational User Interfaces (CUI) on mobile devices are the most accessible and widespread examples of voice-based interaction in the wild. This paper presents a survey of mobile conversational user interface research since the commercial deployment of Apple's Siri, the first readily available consumer CUI. We present and discuss Text Entry & Typing, Application Control, Speech Analysis, Conversational Agents, Spoken Output, & Probes as the prevalent themes of research in this area. We also discuss this body of work in relation to the domains of Health & Well-being, Education, Games, and Transportation. We conclude this paper with a discussion of Multi-modal CUIs, Conversational Repair, and the implications for CUIs of greater access to the context of use.
  • Against Ethical AI

    2019. Donald McMillan, Barry Brown. Proceedings of the Halfway to the Future Symposium 2019

    Conference

    In this paper we use the EU guidelines on ethical AI, and the responses to it, as a starting point to discuss the problems with our community's focus on such manifestos, principles, and sets of guidelines. We cover how industry and academia are at times complicit in 'Ethics Washing', and how developing guidelines carries the risk of diluting our rights in practice and of downplaying the role of our own self-interest. We conclude by discussing briefly the role of technical practice in ethics.
  • Patterns of gaze in speech agent interaction

    2019. Razan Jaber (et al.). Proceedings of the 1st International Conference on Conversational User Interfaces

    Conference

    While gaze is an important part of human-to-human interaction, it has been neglected in the design of conversational agents. In this paper, we report on our experiments with adding gaze to a conventional speech agent system. Tama is a speech agent that makes use of users' gaze to initiate a query, rather than a wake word or phrase. In this paper, we analyse the patterns of detected gaze when interacting with the device. We use k-means clustering of the log data from ten users tested in a dual-participant discussion task. These patterns are verified and explained through close analysis of the video data of the trials. We present similarities of patterns between conditions both when querying the agent and listening to the answers. We also present the analysis of patterns detected in the gaze-only condition. Users can take advantage of their understanding of gaze in conversation to interact with a gaze-enabled agent but are also able to fluently adjust their use of gaze to interact with the technology successfully. Our results point to some patterns of interaction which can be used as a starting point to build gaze-awareness into voice-user interfaces.
  • Text in Talk

    2018. Barry Brown (et al.). ACM Transactions on Computer-Human Interaction 24 (6)

    Article

    While lightweight text messaging applications have been researched extensively, new messaging applications such as iMessage, WhatsApp, and Snapchat offer some new functionality and potential uses. Moreover, the role messaging plays in interaction and talk with those who are co-present has been neglected. In this article, we draw upon a corpus of naturalistic recordings of text message reading and composition to document the face-to-face life of text messages. Messages, both sent and received, share similarities with reported speech in conversation; they can become a topical resource for local conversation, supporting verbatim reading aloud or adaptive summaries. Yet with text messages, their verifiability creates a distinctive resource. Similarly, in message composition, what to write may be discussed with collocated others. We conclude with a discussion of designs for messaging in both face-to-face and remote communication.
  • Bio-Sensed and Embodied Participation in Interactive Performance

    2017. Asreen Rostami (et al.). Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, 197-208

    Conference

    Designing for interactive performances is challenging both in terms of technology design, and of understanding the interplay between technology, narration, and audience interactions. Bio-sensors and bodily tracking technologies afford new ways for artists to engage with audiences, and for audiences to become part of the artwork. Their deployment raises a number of issues for designers of interactive performances. This paper explores such issues by presenting five design ideas for interactive performance afforded by bio-sensing and bodily tracking (i.e. Microsoft Kinect) developed during two design workshops. We use these ideas and the related scenarios to discuss three emerging issues, namely temporality of input, autonomy and control, and visibility of input, in relation to the deployment of bio-sensors and bodily tracking technologies in the context of interactive performances.
  • Situating Wearables

    2017. Donald McMillan (et al.). Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3582-3594

    Conference

    Drawing on 168 hours of video recordings of smartwatch use, this paper studies how context influences smartwatch use. We explore the effects of the presence of others, activity, location and time of day on 1,009 instances of use. Watch interaction is significantly shorter when in conversation than when alone. Activity also influences watch use with significantly longer use while eating than when socialising or performing domestic tasks. One surprising finding is that length of use is similar at home and work. We note that usage peaks around lunchtime, with an average of 5.3 watch uses per hour throughout a day. We supplement these findings with qualitative analysis of the videos, focusing on how use is modified by the presence of others, and the lack of impact of watch glances on conversation. Watch use is clearly a context-sensitive activity and in discussion we explore how smartwatches could be designed taking this into consideration.
  • Five Provocations for Ethical HCI Research

    2016. Barry Brown (et al.). Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 852-863

    Conference

    We present five provocations for ethics, and ethical research, in HCI. We discuss, in turn, informed consent, the researcher-participant power differential, presentation of data in publications, the role of ethical review boards, and, lastly, corporate-facilitated projects. By pointing to unintended consequences of regulation and oversimplifications of unresolvable moral conflicts, we propose these provocations not as guidelines or recommendations but as instruments for challenging our views on what it means to do ethical research in HCI. We then suggest an alternative grounded in the sensitivities of those being studied and based on everyday practice and judgement, rather than one driven by bureaucratic, legal, or philosophical concerns. In conclusion, we call for a wider and more practical discussion on ethics within the community, and suggest that we should be more supportive of low-risk ethical experimentation to further the field.
  • Smartwatch in vivo

    2016. Stefania Pizza (et al.). Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5456-5469

    Conference

    In recent years, the smartwatch has returned as a form factor for mobile computing with some success. Yet it is not clear how smartwatches are used and integrated into everyday life differently from mobile phones. For this paper, we used wearable cameras to record twelve participants' daily use of smartwatches, collecting and analysing incidents where watches were used from over 34 days of user recording. This allows us to analyse in detail 1,009 watch uses. Using the watch as a timepiece was the most common use, making up 50% of interactions, but only 14% of total watch usage time. The videos also let us examine why and how smartwatches are used for activity tracking, notifications, and in combination with smartphones. In discussion, we return to a key question in the study of mobile devices: how are smartwatches integrated into everyday life, in both the actions that we take and the social interactions we are part of?
  • Repurposing Conversation

    2015. Donald McMillan, Antoine Loriette, Barry Brown. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 3953-3962

    Conference

    Voice interaction with mobile devices has been focused on hands-free interaction or situations where visual interfaces are not applicable. In this paper we explore a subtler means of interaction: speech recognition from continual, in-the-background audio recording of conversations. We call this the 'continuous speech stream' and explore how it could be repurposed as user input. We analyse ten days of recorded audio from our participants, alongside corresponding interviews, to explore how systems might make use of extracts from this stream. Rather than containing directly actionable items, our data suggests that the continuous speech stream is a rich resource for identifying users' next actions, along with the interests and dispositions of those being recorded. Through design workshops we explored new interactions using the speech stream, and describe concepts for individual, shared and distributed use.
  • 100 days of iPhone use

    2014. Moira McGregor, Barry Brown, Donald McMillan. CHI '14 Extended Abstracts on Human Factors in Computing Systems, 2335-2340

    Conference

    This report presents preliminary results from an unobtrusive video study of iPhone use, totalling over 100 days of everyday device usage. The data gives us a uniquely detailed view on how messages, social media and internet use are integrated and threaded into daily life, our interaction with others, and everyday events such as transport, communication and entertainment. These initial results seek to address the when, who and what of situated mobile phone use, beginning with understanding the impact of context.
