IDC Seminar (CG04) – Visualisation for Facilitating Human-Machine Cooperation in Data Science

Dr Cagatay Turkay

giCentre, Department of Computer Science, City, University of London



The unprecedented increase in the amount, variety and value of data has been significantly transforming the way that scientific research is carried out and businesses operate. As data sources become increasingly diverse and complex, analysis approaches in which the human and the computer operate in collaboration have proven effective for deriving actionable observations. Interactive visual methods offer novel means to facilitate such cooperation. This talk will discuss the scope, strengths, and limitations of such methods, and walk you through a number of approaches using applied examples.


Cagatay Turkay is a Lecturer in Applied Data Science at the giCentre in the Department of Computer Science at City, University of London. He has a PhD in visualisation from the University of Bergen, and served as a visiting research fellow in the Visual Computing group at Harvard University in 2013. His research mainly focuses on designing visualisations, interactions and computational methods to enable an effective combination of human and machine capabilities for data-intensive problem solving. He works with experts in a variety of domains, such as biomedicine, transport, intelligence, cyber security and social science. He actively contributes in various roles to journals and conferences within visualisation and computer graphics, and leads and contributes to a number of national, international, and industry-funded research projects.

IDC Seminar (V104) – Computational stories and arguments: an A.I. approach to sense making

Room: V104

Floris Bex

Intelligent Systems group, Department of Information and Computing Sciences, Utrecht University (the Netherlands)

Sense making in intelligence analysis often involves stories or scenarios (e.g. the alternative hypotheses in ACH) and arguments (e.g. the pro and con reasons in IBIS). Many of the existing sense making techniques treat stories and arguments in a structured but informal way, that is, they do not provide any formal mathematical semantics. This lack of formal underpinning means that it is not possible to use powerful techniques – such as sensitivity analysis and process verification – to improve the intelligence analyses. However, care must be taken when introducing more complex mathematical models, as they can seriously impede the sense making process.


In this talk, I discuss a hybrid theory of stories and arguments, and specifically how this theory provides a logical or probabilistic semantics for stories and arguments. I show how ideas from Artificial Intelligence can be used to improve and (partly) automate the sense making process, whilst at the same time sticking close to natural and familiar concepts in existing sense making techniques. I illustrate the theory with a case study from the Dutch National Police, with whom we are working together to improve the intake and investigation processes surrounding cyber- and e-crime.
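The formal-semantics point can be illustrated with a minimal sketch of Dung-style abstract argumentation, one standard way of giving arguments a mathematical semantics (this is an illustrative simplification, not the hybrid story-and-argument theory the talk presents; the three-argument framework below is hypothetical). The grounded extension, i.e. the set of arguments that are ultimately defensible, is computed by iterating the characteristic function from the empty set:

```python
def grounded_extension(args, attacks):
    """Grounded semantics of an abstract argumentation framework.

    args:    a set of argument labels
    attacks: a set of (attacker, target) pairs
    Iterates Dung's characteristic function F(S) = "the arguments
    defended by S" from the empty set until a fixpoint is reached.
    """
    attackers = {a: {x for (x, t) in attacks if t == a} for a in args}
    s = set()
    while True:
        # An argument is defended if every attacker is counter-attacked by s.
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# Hypothetical mini-case: A attacks B, B attacks C.
# A is unattacked, so it is in; A defeats B, which reinstates C.
print(sorted(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})))
# -> ['A', 'C']
```

Because the characteristic function is monotone, this iteration converges; it is exactly this kind of well-defined fixpoint computation that informal treatments of arguments do not support.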


Floris Bex is a lecturer at the Intelligent Systems group of the Department of Information and Computing Sciences, Utrecht University (the Netherlands). He is interested in how people reason, how this reasoning can be captured in formal models and how it can be supported and improved using smart technologies. A core aim of his is to develop tools that can be used to disseminate and analyse complex reasoning involving large amounts of data, such as legal & forensic reasoning, reasoning in the design of complex systems and opinions on the Web.


IDC seminar – Cultural Differences in the acceptance of inclusive ICT

Dr. Elke Duncker-Gassen

Department of Computer Science, Middlesex University

The aim of the research is to show whether culture influences the acceptance of inclusive technology, either directly or indirectly – through differing attitudes towards people with disabilities, towards traditional assistive technologies (for instance, hearing aids), and towards ICT in general. A pilot study has been carried out; its findings will be presented, along with plans for the full study.

IDC seminar – SenseMap: Supporting Browser-based Sensemaking through Analytic Provenance

Phong Nguyen, PhD student with IDC

Sensemaking is described as the process of comprehension, finding meaning and gaining insight from information, producing new knowledge and informing further action. Very often, users get lost when solving a complicated task using a big dataset over a long period of exploration and analysis. They may forget what they have done, be unaware of where they are in the context of the overall task, and not know where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted semi-structured interviews with nine participants to explore how they search, manage, and synthesise online information for their daily work activities. This was followed by a series of design workshops to walk through the user scenarios, generate design questions, and formulate solutions relating to user interactions, tool features and manifestation. A simplified model based on Pirolli and Card’s sensemaking model is derived to better represent the browser behaviours we found and to guide the development of design requirements: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate the findings to others. SenseMap automatically captures a user’s sensemaking actions, i.e., analytic provenance, and provides multi-linked views to visualise and curate the collected information and communicate the findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. Most of the participants found the tool intuitive to use. It helped them organise information sources, quickly navigate to the sources they wanted, and effectively communicate their findings. A process model is also derived based on both quantitative and qualitative data analysis.

IDC seminar – BCS-HCI 2016 Practice Talks (15 min each)

Understanding 3D Mid-Air Hand Gestures with Interactive Surfaces and Displays: A Systematic Literature Review

Celeste Groenewald, Middlesex

Presentation slides

3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures exist for interacting with digital surfaces. However, there is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand what gestures exist, we conducted the first comprehensive systematic literature review on mid-air hand gestures, following established research methods. The review identified 65 papers in which mid-air hand gestures supported tasks for selection, navigation, and manipulation. We also classified the gestures according to a gesture classification scheme and identified how these gestures have been empirically evaluated. The results of the review provide a richer understanding of which mid-air hand gestures have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.

Celeste is a second year PhD student at Middlesex University and is working on the VALCRI project (WP3): Insight and Sense-making in Criminal Intelligence Analysis.

Common ground in collaborative intelligence analysis: an empirical study

Dr Sean Xavier Laurence

Presentation slides



In this talk, I will briefly cover an empirical exploration of how different configurations of collaboration technology affect people’s ability to construct and maintain common ground while conducting collaborative intelligence analysis work. Unlike prior studies of collaboration technology, which have typically focused on simpler conversational tasks or on tasks involving physical manipulation, the tasks presented in this study focus on the complex sensemaking and inference involved in intelligence work. The study explores the effects of video communication and a shared visual workspace (SVW) on the negotiation of common ground by distributed teams collaborating in real time on intelligence analysis tasks. We theorised that the effect and value of communication cues, visual cues and awareness nuances are attenuated more when communication is mediated via video than when it is mediated via a shared visual workspace. In this sense, teams using a remote collaborative framework with fewer visual cues might be expected to work harder to maintain common ground, and vice versa.

The experimental study uses a 2×2 factorial, between-subjects design involving two independent variables: presence or absence of Video and SVW. Two-member teams were randomly assigned to one of four experimental media conditions and worked to complete several intelligence analysis tasks involving multiple, complex intelligence artefacts. Teams with access to the shared visual workspace could view their teammates’ eWhiteboards. Our results demonstrate a significant effect for the shared visual workspace: the effort of conversational grounding is reduced in the cases where SVW is available. However, there were no main effects for video and no interaction between the two variables. Also, we found that the “conversational grounding effort” required tended to decrease over the course of the task.
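The analysis logic of such a 2×2 between-subjects design can be sketched briefly. Using entirely hypothetical grounding-effort scores (the numbers below are illustrative only, not the study’s data), each main effect is simply a difference of cell means averaged over the other factor:

```python
# Hypothetical grounding-effort scores (e.g. clarification turns per task)
# for a 2x2 between-subjects design: Video (yes/no) x SVW (yes/no).
cells = {
    ("video", "svw"):       [12, 10, 11],
    ("video", "no_svw"):    [18, 20, 19],
    ("no_video", "svw"):    [11, 12, 10],
    ("no_video", "no_svw"): [19, 21, 20],
}

def mean(xs):
    return sum(xs) / len(xs)

# Cell means for each of the four conditions.
m = {condition: mean(scores) for condition, scores in cells.items()}

# Main effect of SVW: average over the video factor, then take the difference.
svw_mean = (m[("video", "svw")] + m[("no_video", "svw")]) / 2
no_svw_mean = (m[("video", "no_svw")] + m[("no_video", "no_svw")]) / 2
svw_effect = no_svw_mean - svw_mean   # positive => SVW reduces effort

# Main effect of Video, computed the same way over the SVW factor.
video_mean = (m[("video", "svw")] + m[("video", "no_svw")]) / 2
no_video_mean = (m[("no_video", "svw")] + m[("no_video", "no_svw")]) / 2
video_effect = no_video_mean - video_mean

print(svw_effect, video_effect)
# -> 8.5 0.5
```

In these made-up numbers, the SVW effect is large while the video effect is near zero, mirroring the pattern of results reported above (a significance test such as a two-way ANOVA would of course be needed to support such a claim statistically).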

Dr Sean Xavier Laurence has recently taken up a lecturing position at the Joint Intelligence Training Group, Royal School of Military Survey, THATCHAM, UK, where he leads the MSc teaching modules in Human-Computer Interaction & Information Systems.

Towards an Approach for Analysing External Representations Created During Sensemaking Using Generative Grammar

Efeosasere Okoro and Simon Attfield (presenting)

Presentation slides

During sensemaking, users often create external representations to help them make sense of what they know and what they need to know. In doing so, they necessarily adopt or construct some form of representational language using the tools at hand. By describing the languages implicit in such representations, we believe we are better able to describe and differentiate what users do, and better able to describe and differentiate interfaces that might support them. Drawing on approaches to the analysis of language, and in particular Mann and Thompson’s Rhetorical Structure Theory, we analyse the representations that users create to expose their underlying ‘visual grammar’. We do this in the context of a user study involving evidential reasoning. Participants were asked to address an adapted version of the IEEE VAST 2011 mini challenge 3 (interpret a potential terrorist plot implicit in a set of news reports). We show how our approach enables the unpacking of the heterogeneous and embedded nature of user-generated representations, and allows us to show how visual grammars can evolve and become more complex over time in response to evolving sensemaking needs.

Dr Simon Attfield is Associate Professor of Human Centred Technology at the Interaction Design Centre, Middlesex University. His research involves understanding how people think about and work with information, processes involved in sensemaking, and implications for interactive systems design, including the design and evaluation of information visualisation. He has conducted user research in military signals intelligence and patterns-of-life analysis, crime analysis, news writing, corporate investigations and healthcare. He teaches Human Computer Interaction and Interaction Design. He received a BA in Philosophy and a BSc in Experimental Psychology from Sussex University, and a PhD in Human Computer Interaction from University College London.

Rethinking Decision Making in Complex Work Settings: Beyond Human Cognition to the Social Landscape

Nallini Selvaraj (presenting) and Bob Fields

Presentation slides

This paper presents a fusion of ideas across disciplines to study and conceptualise decision making. Typically, decision making is approached as a cognitive process. There is, however, a growing shift towards viewing decision making as more than a cerebral activity: as situated, embedded and embodied in the social landscape of work activities. Research addressing these aspects is still in its infancy, and more work is required to develop these notions. The research presented here makes a theoretical contribution to this shift. Taking a Computer Supported Cooperative Work (CSCW) perspective, this paper explores how decision making is articulated in the cooperative arrangement of a complex work setting. In the process, it explicates the situated, embedded and embodied nature of decision making. The paper reflects on conventional notions of decision making and demonstrates its differentiated nature during everyday work performance in a real-world complex work setting.