Dr Cagatay Turkay
giCentre, Department of Computer Science, City, University of London
The unprecedented increase in the amount, variety and value of data is transforming the way that scientific research is carried out and businesses operate. As data sources become increasingly diverse and complex, approaches in which the human and the computer operate in collaboration have proven effective in deriving actionable observations. Interactive visual methods offer novel means to facilitate such cooperation. This talk will discuss the scope, strengths, and limitations of these methods, and walk you through a number of approaches using applied examples.
Cagatay Turkay is a Lecturer in Applied Data Science at the giCentre in the Computer Science Department at City, University of London. He has a PhD in visualisation from the University of Bergen, and served as a visiting research fellow at the Visual Computing group at Harvard University in 2013. His research mainly focuses on designing visualisations, interactions and computational methods that combine human and machine capabilities effectively to facilitate data-intensive problem solving. He works with experts in various domains such as biomedicine, transport, intelligence, cyber security and social science, to name a few. He actively contributes in various roles to journals and conferences within visualisation and computer graphics, and leads and contributes to a number of national, international, and industry-funded research projects.
Dr Floris Bex
Intelligent Systems group, Department of Information and Computing Sciences, Utrecht University (the Netherlands)
Sense making in intelligence analysis often involves stories or scenarios (e.g. the alternative hypotheses in ACH) and arguments (e.g. the pro and con reasons in IBIS). Many existing sense making techniques treat stories and arguments in a structured but informal way; that is, they do not provide any formal mathematical semantics. This lack of formal underpinning means that it is not possible to use powerful techniques – such as sensitivity analysis and process verification – to improve intelligence analyses. However, care must be taken when introducing more complex mathematical models, as they can seriously impede the sense making process.
In this talk, I discuss a hybrid theory of stories and arguments, and specifically how this theory provides a logical or probabilistic semantics for stories and arguments. I show how ideas from Artificial Intelligence can be used to improve and (partly) automate the sense making process, whilst at the same time staying close to the natural and familiar concepts of existing sense making techniques. I illustrate the theory with a case study from the Dutch National Police, with whom we are working to improve the intake and investigation processes surrounding cyber- and e-crime.
Floris Bex is a lecturer in the Intelligent Systems group of the Department of Information and Computing Sciences, Utrecht University (the Netherlands). He is interested in how people reason, how this reasoning can be captured in formal models, and how it can be supported and improved using smart technologies. A core aim of his work is to develop tools for disseminating and analysing complex reasoning involving large amounts of data, such as legal and forensic reasoning, reasoning in the design of complex systems, and opinions on the Web.
Dr. Elke Duncker-Gassen
Department of Computer Science, Middlesex University
The aim of the research is to show whether or not culture influences the acceptance of inclusive technology, either directly or indirectly – through different attitudes towards people with disabilities, towards traditional assistive technologies (for instance, hearing aids) and towards ICT in general. A pilot study has been carried out, the findings of which will be presented, along with plans for the full study.
There will be an IDC seminar next Tuesday (26th) given by Ian Kruger.
Title: Self-service Analytics: A Social Perspective
Abstract: Visual Analytics is coming of age as a tool for aiding business users, helping them exploit data as an asset and make informed decisions. There has of late been a general shift towards self-service in Business Intelligence analytics, supported by vendors providing increasingly user-friendly, visually driven tools and by end-users' demand for timely and flexible access to transactional data. What remains less well understood are the dynamics surrounding the design of information dashboards under these conditions. This presentation explores the findings of an ethnographically inspired study at a University in the process of adopting Tableau as a management reporting tool, during a period of significant change in the HE environment.
In the large hierarchical bureaucracies that typify the modern knowledge-intensive organisation, such as a University, responding to complex and dynamic environments, there are two important social dynamics in the creation and sharing of organisational knowledge. On the one hand, there is a need for communities or business units to develop a deeper understanding of their own problem domain (perspective making) through a rich interplay of narrative and analytic forms of cognition. This results in increasingly specialised but rich domain-specific knowledge and practices that are not easily accessible to other communities within the organisation. And so there emerges a second social process (perspective taking) to communicate and use this knowledge across functional and other organisational boundaries as a basis for co-operation and collaboration. Allowing communities of knowing to be intimately involved in building their own dashboards (through the concept of self-service) allows the dashboards to support the social sense-making roles of perspective making and perspective taking. Understanding the uses to which information visualisations (in this case dashboards) are put in management and operational functions is important for gaining insights that will guide their design, adoption and adaptation in these organisations.
Bio: Ian Kruger is currently a research assistant at Middlesex University, having submitted his MSc dissertation in Visual Analytics at the University.
Phong Nguyen, PhD student with IDC
Sensemaking is described as the process of comprehension, finding meaning and gaining insight from information, producing new knowledge and informing further action. Very often, users get lost when solving a complicated task using a big dataset over a long period of exploration and analysis. They may forget what they have done, be unaware of where they are in the context of the overall task, and not know where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted semi-structured interviews with nine participants to explore how they search, manage, and synthesise online information for their daily work activities. This was followed by a series of design workshops to walk through user scenarios, generate design questions, and formulate solutions relating to user interactions, tool features and manifestation. A simplified model based on Pirolli and Card's sensemaking model is derived to better represent the browser behaviours we found and to guide the development of design requirements: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate the findings to others. SenseMap automatically captures a user's sensemaking actions, i.e., analytic provenance, and provides multi-linked views to visualise and curate the collected information and to communicate the findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. Most of the participants found the tool intuitive to use. It helped them to organise information sources, to quickly navigate to the sources they wanted, and enabled them to effectively communicate their findings. A process model is also derived based on both quantitative and qualitative data analysis.