On Data (working title)

Sensor-driven data canvas

Developed with the participation of Creative BC and the British Columbia Arts Council.
"we are prototyping a next-generation interactive screen that integrates computer vision, head tracking and sentiment analysis to create real-time data-driven artworks"
Welcome

OnData is a prototype for a next-generation interactive screen that integrates emerging technologies like computer vision, head tracking and sentiment analysis to create real-time data-driven artworks that know their viewer and respond to each individual.

Data-Driven Artwork
Our goal is to create artwork that lets users seamlessly interact with data on the surface of a high-resolution display. User interaction is captured through an array of embedded sensors, allowing for natural interaction through movement, gesture and expression.

Toolset
To support this process we are building tools and plugins that streamline the workflow of making data available to real-time game engines like Unreal and Unity. User data captured by the sensors is transmitted over the OSC protocol.
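
As a rough sketch of this pipeline, the Python example below sends head-position and viewer-count samples from a sensor process to a game engine over OSC. It assumes the python-osc package; the /ondata/* addresses, host and port are placeholders, not the project's actual schema.

```python
# Minimal sketch: forwarding sensor data to a game engine over OSC.
# Assumes the python-osc package; the /ondata/* addresses, host and port
# are illustrative placeholders.
from pythonosc.udp_client import SimpleUDPClient

ENGINE_HOST = "127.0.0.1"   # machine running Unreal or Unity
ENGINE_PORT = 9000          # port the engine-side OSC plugin listens on

client = SimpleUDPClient(ENGINE_HOST, ENGINE_PORT)

def send_head_position(x: float, y: float, z: float) -> None:
    """Send one head-position sample (metres, screen-relative) to the engine."""
    client.send_message("/ondata/head", [x, y, z])

def send_viewer_count(count: int) -> None:
    """Send the number of viewers currently detected by the camera."""
    client.send_message("/ondata/viewers", count)

# Example: one tracked frame
send_head_position(0.12, -0.05, 1.40)
send_viewer_count(1)
```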

Community / Collaboration
In 2019 we will invite artists of various disciplines to work with us to create real-time generative works that form unexpected, surreal, and unforgettable links between data and imagery. We hope to create work that gives an emotional and human dimension to data sets.

Education and Resources
At its core, the goal of the project is to facilitate the conversation between data scientists, artists, and our increasingly data-driven world. We intend to do this by creating a robust display framework and a suite of resources that allow makers of all skill levels to create compelling artworks with otherwise difficult technologies.


Research Goals

- Simplify the process of calibrating sensors and screens
- Simplify the process of networking displays
- Create tools that make working with live data streams in Unreal and Unity easier for artists
- Connect and leverage data from multiple platforms
 
Head-Tracked Camera Projection
By tracking the user's location at 60 fps, the screen renders the 3D scene from the perspective of the viewer. This allows the viewer to explore the scene and look around corners, an effect sometimes called the "magic window" or fish-tank VR.
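
At the heart of the effect is an off-axis (asymmetric-frustum) projection recomputed each frame from the tracked head position relative to the physical screen. The NumPy sketch below shows one common way to derive it; the screen size, clip planes and coordinate conventions are illustrative assumptions rather than the project's engine-side implementation.

```python
# Minimal sketch of head-tracked ("fish-tank VR") off-axis projection.
# Screen dimensions, clip planes and coordinate conventions are assumptions.
import numpy as np

SCREEN_W, SCREEN_H = 1.20, 0.68   # physical display size in metres
NEAR, FAR = 0.05, 50.0            # clip planes in metres

def off_axis_projection(eye: np.ndarray) -> np.ndarray:
    """Asymmetric-frustum projection for an eye position (x, y, z) given in
    metres relative to the screen centre, with the screen in the z = 0 plane
    and the viewer at positive z looking toward it."""
    ex, ey, ez = eye
    # Frustum extents on the near plane, shifted by the eye's offset.
    left   = (-SCREEN_W / 2 - ex) * NEAR / ez
    right  = ( SCREEN_W / 2 - ex) * NEAR / ez
    bottom = (-SCREEN_H / 2 - ey) * NEAR / ez
    top    = ( SCREEN_H / 2 - ey) * NEAR / ez
    # Standard OpenGL-style asymmetric perspective matrix.
    return np.array([
        [2 * NEAR / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * NEAR / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(FAR + NEAR) / (FAR - NEAR), -2 * FAR * NEAR / (FAR - NEAR)],
        [0, 0, -1, 0],
    ])

# Recomputed every tracked frame (e.g. 60 times per second), paired with a
# view matrix that translates the scene by -eye.
proj = off_axis_projection(np.array([0.10, -0.03, 0.75]))
```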

The accompanying video shows the effect using the Kinect v2. Our third prototype uses PoseNet and TensorFlow to track users with a fisheye USB camera.

Event-Based Interactions
We're using machine learning to give the screen an awareness of its surroundings: it can tell whether it is being viewed and by how many people. Using these states, we are developing a suite of interactions built around user arrival, departure and active engagement.

These 'states' can be used to trigger animation, deliver messaging, wake the screen or put the system to sleep when needed. States can also be used for narrative conditions. For example, a digital work might "unlock" when viewed by a user with a height under 3' (a child).
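
A minimal sketch of how these presence states might be managed is shown below, assuming the vision system reports a viewer count and an interaction flag each frame; the state names, timeout and callbacks are hypothetical.

```python
# Minimal sketch of presence-driven screen states. State names, timeout
# and callbacks are illustrative assumptions, not the project's actual API.
import time

class PresenceStateMachine:
    """Tracks whether the screen is asleep, being viewed, or actively used."""

    def __init__(self, sleep_after: float = 60.0):
        self.state = "asleep"
        self.sleep_after = sleep_after      # seconds with no viewers
        self.last_seen = 0.0

    def update(self, viewer_count: int, interacting: bool) -> str:
        now = time.monotonic()
        if viewer_count > 0:
            self.last_seen = now
            if self.state == "asleep":
                self.on_arrival()
            self.state = "interacting" if interacting else "viewed"
        elif now - self.last_seen > self.sleep_after:
            if self.state != "asleep":
                self.on_departure()
            self.state = "asleep"
        return self.state

    def on_arrival(self):
        print("wake screen, play arrival animation")

    def on_departure(self):
        print("fade out, put system to sleep")
```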


Approximate Gaze Tracking
We calculate the approximate location of the viewer's gaze by drawing a vector perpendicular to the viewer's facial plane and finding the point of intersection with the display.
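
The calculation itself is a ray-plane intersection. The sketch below assumes the display lies in the z = 0 plane and that the face tracker supplies a head position and a facial-normal direction; units and conventions are illustrative.

```python
# Minimal sketch of the approximate gaze point: intersect a ray along the
# facial-plane normal with the display plane (assumed here to be z = 0).
import numpy as np

def gaze_point(head_pos: np.ndarray, face_normal: np.ndarray):
    """Return the (x, y) point where the facial-normal ray hits the screen,
    or None if the viewer is facing away from it."""
    n = face_normal / np.linalg.norm(face_normal)
    if abs(n[2]) < 1e-6:          # ray parallel to the screen plane
        return None
    t = -head_pos[2] / n[2]       # solve head_pos.z + t * n.z == 0
    if t <= 0:                    # intersection behind the viewer
        return None
    hit = head_pos + t * n
    return hit[0], hit[1]

# Example: viewer 0.8 m from the screen, head turned slightly to the right.
print(gaze_point(np.array([0.0, 0.1, 0.8]), np.array([0.15, -0.05, -1.0])))
```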

This 'gaze point' acts as a cursor that can be used as an interaction point between the viewer and the 3D scene. In this example a user's blink triggers the placement of a white circle.
 
Sentiment Analysis
Sentiment analysis is the process of analyzing input from the viewer to estimate their emotional state.

By combining the approximate gaze location with emotional state, we hope to create a language of narrative interaction driven by the user's focus point and their emotional response to stimuli.
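
One hedged sketch of how that coupling might look: per-frame gaze points and emotion estimates merged into simple narrative triggers. The emotion labels, screen regions and event names below are invented for illustration only.

```python
# Hypothetical sketch: turning (gaze point, emotion estimate) pairs into
# narrative triggers. Labels, regions and thresholds are illustrative.
REGIONS = {
    "portrait": (0.0, 0.5, 0.0, 1.0),    # left half of a normalized screen
    "data_panel": (0.5, 1.0, 0.0, 1.0),  # right half
}

def region_of(gaze_xy):
    gx, gy = gaze_xy
    for name, (x0, x1, y0, y1) in REGIONS.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return name
    return None

def narrative_event(gaze_xy, emotions: dict):
    """emotions: label -> confidence from a sentiment-analysis model."""
    label = max(emotions, key=emotions.get)
    region = region_of(gaze_xy)
    if region == "portrait" and label == "surprise" and emotions[label] > 0.6:
        return "portrait_reacts"
    if region == "data_panel" and label == "neutral":
        return "reveal_next_dataset"
    return None

print(narrative_event((0.3, 0.4), {"neutral": 0.2, "surprise": 0.7}))
```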

This example demonstrates how emotional information could be applied to a facial model. The character's gaze alternates between tracking the user and random movement, mimicking human behavior.

Additional Interaction Models
 
This example shows an interaction model based around the Leap Motion sensor. A user can insert splines into the scene by pinching their fingers together.

Our goal is to make a modular platform to which additional sensing devices can be added. Complex interactions can be achieved by combining data streams from multiple sensors.
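
As a sketch of how such streams could be merged, the example below uses python-osc's dispatcher to route several hypothetical device addresses into one shared state object that a rendering layer or game-engine bridge could read each frame.

```python
# Minimal sketch: merging several sensor streams arriving over OSC into one
# shared state. Assumes python-osc; the /head, /gaze and /leap/pinch
# addresses are hypothetical placeholders for whatever each device sends.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

state = {"head": None, "gaze": None, "pinch": 0.0}

def on_head(address, x, y, z):
    state["head"] = (x, y, z)

def on_gaze(address, x, y):
    state["gaze"] = (x, y)

def on_pinch(address, strength):
    state["pinch"] = strength   # e.g. 0.0-1.0 from a Leap-style sensor

dispatcher = Dispatcher()
dispatcher.map("/head", on_head)
dispatcher.map("/gaze", on_gaze)
dispatcher.map("/leap/pinch", on_pinch)

# The render loop (or a game-engine bridge) reads `state` each frame.
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```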





Context

This project is inspired by an awareness of the increasing presence of 3D sensors, ubiquitous cameras and machine learning in our everyday lives.

Our digital lives are captured in every click: the videos we watch, our likes and dislikes, our contacts and GPS coordinates. Increasingly, tracking is moving from inside the computer outwards by way of mass surveillance.

Every day, cameras capture our whereabouts, our faces and their expressions. With the help of machine learning, computer systems understand what they are seeing to a greater degree than ever before.

Computer vision gives computers the ability to draw their own conclusions, and we see real-world implications of this technology every day (see, for example, Sesame Credit: https://en.wikipedia.org/wiki/Sesame_Credit).

Implications

It’s not difficult to imagine how data collected on us might be applied. We can see how our online habits translate into advertisements targeted to our tastes. We understand how our still-invisible search bubbles reinforce our biases. What we do not yet understand is how emerging technologies such as ubiquitous sentiment analysis, gaze tracking and machine learning will be applied, and how information gathered in the real world will be used to study us, construct images of us, engineer our behavior and more.

It is our goal to explore the applied potential of these technologies for art-making, and in the process to expose the ways some of these technologies can be used, for or against us. We hope to develop a dialogue around the emergent language of digital sensors and real-time reactive artwork.

Thank you