Submissions Phase II (statements, sketches, future paper abstracts)
As brain–computer interfaces became available, it was clear that people did not want more information gushing toward their brains like a fire hose aimed at a teacup (see the Dilbert comic strip). Instead, a piece of electronics implanted near the back of the brain found a much cleverer way to improve our capabilities: increasing the number of pre-attentive features we are able to recognize! The processor taps the information flow right at the start of the optic nerve and performs all kinds of computer vision processing in parallel. The results are then added to the normal flow of information our brain receives from our eyes. Though this technology was first created to restore sight to blind people, it now allows us to distinguish an unhappy face among 200 happy ones as easily as a red square among blue ones. Fortunately, the pre-attentive processor can be switched off, or tweaked to reveal boolean combinations of features (with no interference between features, obviously!). Anyone can now look at a very messy scatterplot, or any hairball of a graph, and the patterns just pop out to our eyes 2.0.
We humbly attach here the results of an Exquisite Corpse experiment conducted with our students, envisioning a grey-sky future of small people & large computers, nasal and inflatable and null future visualisation strategies. We note also an increasing convergence with the practice of ‘visualisation’ as practiced by witches and druids in the west of England.
We establish the positive effects of visualizing information about current performance and location on sports garments using TechStyle, a densely woven, highly wicking fabric that permits visualization. Our design uses broad single-colour bands near the cuffs of long-sleeved garments to aid navigation: lighter colours on the left and right sleeves direct wearers left or right, respectively. Additional bands on the forearm show heart rate and power output as detected by the fabric.
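As a thought experiment only, the forearm heart-rate band described above might be driven by a mapping like the following sketch. The function name, the zone thresholds and the colour choices are our own illustrative assumptions, not part of the TechStyle design:

```python
# Hypothetical sketch of a heart-rate band encoding for a TechStyle sleeve.
# Zone thresholds and colour names are illustrative assumptions only.

def heart_rate_band_colour(bpm, max_hr=190):
    """Map a heart-rate reading (beats per minute) to a band colour."""
    fraction = bpm / max_hr
    if fraction < 0.6:
        return "blue"    # easy effort
    elif fraction < 0.75:
        return "green"   # aerobic zone
    elif fraction < 0.9:
        return "orange"  # threshold zone
    else:
        return "red"     # maximal effort
```

A garment controller could re-evaluate such a mapping each time the fabric sensors report a new reading, changing the band colour in place rather than displaying a number.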
These subtle means of informing wearers of their location and performance were tested ‘in the wild’ in four contexts involving three sports and both professional and recreational levels of performance: in orienteering at a training day, in triathlon at a regional event and in cycling with the UK Youth Development Squad and on a recreational ride during Le Tour de VIS 2023.
In the first case (orienteering, performance level), all athletes wore TechStyle, with a visualization group outperforming a control group in terms of the extent to which actual race times improved upon times predicted from historical performance data. Analysis of split times suggests that the navigation visualization was beneficial. In the triathlon event (recreational), ten athletes wore TechStyle, and our qualitative analysis showed no effect in terms of navigation and no benefit from the performance visualization. Qualitative analysis in the performance cycling case indicated that the feedback on heart rate was useful to riders and an improvement upon the existing ways of accessing this information. Cyclists suggested alternative means of providing this information, both numerically and graphically, and requested a time series. However, the navigation aids were not deemed to be beneficial. Qualitative analysis in the recreational cycling case found the opposite: cyclists considered the navigational aids to be effective and useful, but not the performance indicators. The riders nevertheless liked the jerseys and the fabric, and suggested the possibility of a placebo effect on performance.
We conclude that TechStyle can provide an advantage in some sporting contexts, even with relatively simple graphical depictions of data. The extent to which this is the case will depend upon the event, the level of performance and the priorities of the participants. This suggests the need for designs that are more context- and perhaps athlete-specific, or more flexible than those used here, if performance is to be improved more widely.
In just five short years, visualization, HPC and data analysis users will need to constantly remind themselves which resource they’re working on as they run dozens of virtualization clients on devices from tablets and laptops to wristwatches and headsets. The desktop will not be truly dead — just lonely — living either in an empty office or in a machine room. But it will be busy. New CPU and manycore architectures, cheap memory, and the demands of scalable ray tracing and database algorithms will bring about a return of mid-scale shared-memory computing. Visualization clusters and cloud resources will increasingly consist of “fat nodes”, operated interactively as opposed to in batch. For the foreseeable future, the occasional need for root access, large displays and a real mouse and keyboard to interact and code will bring researchers back to the office to work on their personal machines, before heading off to the next conference to actually do work on them.
In a popular dystopian future where life is tough for young adults, there is still street crime, and still a need for crime analysts. There was once a time when analysts used screens no bigger than 17 inches, and visual analytic systems simply supported decisions rather than making their own. Let’s jump to 2029 and spend a day with police crime analyst Alex Murphy, who doesn’t always get on with modern technology, to see how things have changed.
What happens when data are ubiquitous in our lives, our homes are completely networked, and information pertinent to the decisions we make in daily life is virtually at our fingertips? The approach that hallmarked the first decades of the digital home has been to supply residents with an assortment of standalone apps, websites and social media for retrieving, monitoring and analysing data from a plethora of sources, and to design specialised views that are particular to devices like tablets and phones. But this doesn’t work in an increasingly fluid and dynamic information landscape where information-driven decisions happen throughout the home in the course of daily life. New light technologies let us paint displays onto the surfaces and materials in our homes, embedding visualization capabilities into the very fabric of our living spaces, and extending the affordances of a display to the actual building envelope, appliances and furniture that comprise that home. Visualization has now become both a functional and an aesthetic consideration in how we design and use our living spaces: instead of the quaint, over-automated “smart home” of the 1990s, we now have “informative homes” capable of receiving, capturing and communicating data right at the points, times and activities when we need it. Instead of getting regular but infrequent reports of our data (an energy bill, a school report card, the financial records of our building council), the informative home subscribes to data feeds and visualization services that can mash up and slice the data into meaningful forms based on use and constraints specified by the resident. Architects are working with visualization researchers, municipal planners, and social scientists to explore how well-known principles of automated visualization design can transfer to this broader space, extending the notion of a display to include surface properties (e.g. a stainless steel fridge door), contexts of use (e.g. a kitchen backsplash, a social table) and aesthetic and affective constraints (the “persuasive meter”).
The movies, it turns out, are a terrible model for data visualization.
Let me step back a moment. If I want to know what the future of, say, intelligent agents might look like, I have a lot of choices. KITT from Knight Rider, the Star Trek computer, HAL from 2001, or any of a thousand other films and television shows will give me examples of how speech recognition and intelligent agents might look. A designer of current systems can push back, or pick points on the spectrum: “I’d think it can be more mechanical, less humanoid.”
What about computer graphics? The Holodeck. R2-D2 projecting Princess Leia. Infinite zooming in Blade Runner. 3D worlds in Jurassic Park and a million other movies.
And so it goes for lots of developing technologies. Flying cars and self-driving cars. Robots and tablet computers. Movies have shown us visions of the future for power plants, and long-distance transport, and food preparation, and even for how doors might work. Film directors, screenwriters, and effects teams have done a wonderful job of portraying a computerized, high-technology future.
Now, since the beginning, we’ve all understood that computers are very good at presenting and storing information. Or, at least, we’ve believed that we understand that. Sadly, we have only the poorest of examples to work from.
In this electronic document, we present our latest results on the development of the dynamic data tattoo. The dynamic data tattoo is a semi-permanent body modification made out of permanent data ink. The data ink can be placed on any part of the body and take on any color. It is connected through micro-particles to the typical internal and external body sensors people carry nowadays. Based on the data collected by the sensors and the representation information they send, the ink changes color and can thus display any type of data and visualizations thereof. If no longer needed, the data tattoo can take on the person’s regular skin color and thus completely vanish. We show how one can program data tattoos through various body sensors and how to interact with them to modify the display, and we detail perceptual algorithms that ensure maximum perception quality no matter where on the body the data tattoo is placed. Further experimental results show how immediate access to one’s body sensor data (without the use of external devices) can result in a dramatic increase in life satisfaction, and we conclude with a report on success stories in the medical domain.
Paper submitted to VIS 2139. Full text pending science-crowd-assessment.