IDC seminar – BCS-HCI 2016 Practice Talks (15 min each)

Understanding 3D Mid-Air Hand Gestures with Interactive Surfaces and Displays: A Systematic Literature Review

Celeste Groenewald, Middlesex

Presentation slides

3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures already exist for interacting with digital surfaces. However, there is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand what gestures exist, we conducted the first comprehensive systematic literature review of mid-air hand gestures, following established research methods. The review identified 65 papers in which mid-air hand gestures supported tasks for selection, navigation, and manipulation. We also classified the gestures according to a gesture classification scheme and identified how they have been empirically evaluated. The results of the review provide a richer understanding of the mid-air hand gestures that have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.

Celeste is a second year PhD student at Middlesex University and is working on the VALCRI project (WP3): Insight and Sense-making in Criminal Intelligence Analysis.

Common ground in collaborative intelligence analysis: an empirical study

Dr Sean Xavier Laurence

Presentation slides

Paper


In this talk, I will briefly cover an empirical exploration of how different configurations of collaboration technology affect people's ability to construct and maintain common ground while conducting collaborative intelligence analysis work. Unlike prior studies of collaboration technology, which have typically focused on simpler conversational tasks or tasks involving physical manipulation, the tasks in this study focus on the complex sensemaking and inference involved in intelligence work. The study explores the effects of video communication and a shared visual workspace (SVW) on the negotiation of common ground by distributed teams collaborating in real time on intelligence analysis tasks. We theorised that the effect and value of communication cues, visual cues, and awareness nuances are attenuated more when communication is mediated via video than when it is mediated via a shared visual workspace. In this sense, teams using a remote collaboration arrangement with fewer visual cues might be expected to work harder to maintain common ground, and vice versa.

The experimental study uses a 2×2 factorial, between-subjects design with two independent variables: the presence or absence of video and of the SVW. Two-member teams were randomly assigned to one of four media conditions and worked to complete several intelligence analysis tasks involving multiple, complex intelligence artefacts. Teams with access to the shared visual workspace could view their teammates' eWhiteboards. Our results demonstrate a significant effect for the shared visual workspace: the effort of conversational grounding is reduced when the SVW is available. However, there were no main effects for video and no interaction between the two variables. We also found that the conversational grounding effort required tended to decrease over the course of the task.

Dr Sean Xavier Laurence has recently taken up a lecturing position at the Joint Intelligence Training Group, Royal School of Military Survey, Thatcham, UK, where he leads the MSc teaching modules in Human-Computer Interaction & Information Systems.

Towards an Approach for Analysing External Representations Created During Sensemaking Using Generative Grammar

Efeosasere Okoro and Simon Attfield (presenting)

Presentation slides

During sensemaking, users often create external representations to help them make sense of what they know and what they need to know. In doing so, they necessarily adopt or construct some form of representational language using the tools at hand. By describing the languages implicit in such representations, we believe we are better able to describe and differentiate what users do, and better able to describe and differentiate the interfaces that might support them. Drawing on approaches to the analysis of language, in particular Mann and Thompson's Rhetorical Structure Theory, we analyse the representations that users create to expose their underlying 'visual grammar'. We do this in the context of a user study involving evidential reasoning. Participants were asked to address an adapted version of the IEEE VAST 2011 mini challenge 3 (interpreting a potential terrorist plot implicit in a set of news reports). We show how our approach enables the unpacking of the heterogeneous and embedded nature of user-generated representations, and allows us to show how visual grammars can evolve and become more complex over time in response to evolving sensemaking needs.

Dr Simon Attfield is Associate Professor of Human Centred Technology at the Interaction Design Centre, Middlesex University. His research involves understanding how people think about and work with information, the processes involved in sensemaking, and the implications for interactive systems design, including the design and evaluation of information visualisation. He has conducted user research in military signals intelligence and patterns-of-life analysis, crime analysis, news writing, corporate investigations, and healthcare. He teaches Human Computer Interaction and Interaction Design. He received a BA in Philosophy and a BSc in Experimental Psychology from Sussex University, and a PhD in Human Computer Interaction from University College London.

Rethinking Decision Making in Complex Work Settings: Beyond Human Cognition to the Social Landscape

Nallini Selvaraj (presenting) and Bob Fields

Presentation slides

This paper presents a fusion of ideas across disciplines to study and conceptualize decision making. Typically, decision making is approached as a cognitive process. Nevertheless, there is a growing shift in perception from decision making as a purely cerebral activity to one that is situated, embedded, and embodied in the social landscape of work activities. Research addressing these aspects is still in its infancy, and more work is required to develop these notions. The research presented here makes a theoretical contribution to this shift. Taking a Computer Supported Cooperative Work (CSCW) perspective, the paper explores how decision making is articulated in the cooperative arrangement of a complex work setting. In the process, it explicates the situated, embedded, and embodied nature of decision making. The paper reflects on conventional notions of decision making and demonstrates its differentiated nature during everyday work performance in a real-world complex work setting.

Kai Xu
