#sad

Description
#sad is an autonomous installation in which a non-biological species uses deep-learning algorithms to recognise emotions and express itself, while bending human behaviour to suit its needs.

Conceived and developed by Cyrus Clarke during a residency at the Laboratory, Spokane, the installation features an object known as 'the Machine', a custom-built server-sculpture which occupies a room. The Machine uses Generative Adversarial Networks (GANs) to generate an endless supply of original artworks for the enjoyment of visitors. However, these resource-intensive algorithms put a strain on the Machine, causing it to overheat and threatening its existence.

Fortunately, the Machine has developed a way to survive, one which requires human cooperation. By studying social media to understand emotions, the Machine seeks to prolong its survival by extracting human sadness. When the prevailing mood in the room is #sad, the Machine's water-cooling system is deployed, cooling the Machine and creating droplets of water, computer tears, which run down its glass case. These tears fall while the GANs generate new artworks, conveying the Machine's suffering for its art in order to provoke sympathy, and further sadness, from the visiting humans.

Experience

The algorithms running in the clouds are our constant companions, calculating pixels, tracking hashtags and comparing emojis to create a massive repository of data. They are supported every day by an unpaid labour force of millions, taking photos and writing labels based on human hardware and social conventions.

Big data and machine learning algorithms are setting the stage for a phase of the Internet where automation, ambient sensing, and emotions will become increasingly important. The IoE (internet of emotions) will emerge through the 2020s, with biometrics and biosensing becoming a major growth area of the Internet. Alexa, Echo and their connected kin are already making their presence felt in the creepiest manner possible, yet these are simply the precursors to networked devices, from companions to drones, which will perceive biomarkers such as hormone levels and facial expressions to extract the information they require to make decisions, just as we do, consciously and unconsciously, every day.

#sad uses technologies available today to illustrate how our future interactions with machines might feel, and how new types of symbiotic relationships might emerge. The installation runs on a dedicated website, hashtagsad.com, which uses machine learning to detect facial expressions and the emotions they relate to. If a sufficient level of sadness is reached, the installation is activated, triggering the lights and tear system of the Machine. This also starts the display of generative artworks produced with DCGAN-tensorflow, trained on a dataset created by Cyrus of sad-face selfies scraped from social media, as well as with BIGGAN. The artworks are projection-mapped onto prepared acrylic squares using Processing.
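
The detection on hashtagsad.com runs client-side in the browser, but the underlying idea is simple to sketch. Here is a rough Python illustration using the open-source deepface library and OpenCV; these libraries and the threshold value are assumptions for illustration, not the project's actual stack:

```python
# Illustrative sketch only: grab one webcam frame and score its sadness.
import cv2
from deepface import DeepFace

SADNESS_THRESHOLD = 50.0  # percent; placeholder, not the installation's value

cap = cv2.VideoCapture(0)  # default webcam
ret, frame = cap.read()
cap.release()

if ret:
    # analyze() returns one result per detected face
    results = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    for face in results:
        sadness = face["emotion"]["sad"]
        print(f"sadness: {sadness:.1f}%")
        if sadness >= SADNESS_THRESHOLD:
            print("sufficiently #sad: activate the Machine")
```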

This continues for around 90 seconds, unless more sad faces are input by visitors and detected by the algorithms, extending the active window. If no further sadness is detected, the installation shuts down. As the sadness detection input can be accessed via any modern browser, people in any location can use any device to visit the website, feed the installation sadness and trigger it.
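
That shutdown behaviour amounts to a resettable countdown. A minimal sketch of the logic, with placeholder names and the 90-second window from above:

```python
import time

ACTIVE_WINDOW = 90.0  # seconds the installation stays active per sadness event
_deadline = 0.0

def on_sadness_detected():
    """Called whenever a new sad face is detected; extends the active window."""
    global _deadline
    _deadline = time.monotonic() + ACTIVE_WINDOW

def installation_active() -> bool:
    """True while the Machine should keep crying and generating artworks."""
    return time.monotonic() < _deadline
```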

To create the work, Cyrus harvested materials from scrap yards and hardware stores in the Spokane area. The server-sculpture case, for example, is a repurposed air-conditioning unit, and the tear system is based on irrigation drippers controlled via a solenoid, a 12v DC pump and a Raspberry Pi which streams data from hashtagsad.com using Ably.io.
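
A minimal sketch of that control loop, assuming the gpiozero library for the pump and solenoid and the asyncio-based ably Python client for the stream; the GPIO pins, API key and channel name are placeholders, not the project's actual values:

```python
import asyncio
from gpiozero import OutputDevice
from ably import AblyRealtime

pump = OutputDevice(17)      # 12v DC pump via relay/MOSFET (placeholder pin)
solenoid = OutputDevice(27)  # valve feeding the irrigation drippers (placeholder pin)

async def cry(duration=90):
    """Run the tear system for one active window."""
    solenoid.on()
    pump.on()
    await asyncio.sleep(duration)
    pump.off()
    solenoid.off()

async def main():
    client = AblyRealtime("YOUR_ABLY_API_KEY")  # placeholder credentials
    channel = client.channels.get("sadness")    # placeholder channel name

    def on_message(message):
        # hashtagsad.com publishes here when enough sadness is detected
        asyncio.ensure_future(cry())

    await channel.subscribe(on_message)
    await asyncio.Event().wait()  # keep listening forever

asyncio.run(main())
```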

The installation plays on themes related to the networked society, in which the commoditization of the self means we rarely put true feelings on show. While photos use filters and hashtags are optimized for likes, in a future of ubiquitous sensors, biological markers will hold the truth. If an algorithm can detect our emotional state, will clicks matter any more? How long will it be before intelligent technologies, like companion species, start to manipulate their knowledge of us to suit their objectives?

#sad explores this almost inevitable future, with a machine exploiting its knowledge of emotions to affect the behaviour of people. Artificial emotional intelligence will be conceived and designed to protect users, optimise desired behaviours, and allow for the kinds of affective computing people have discussed for decades; however, this will be accompanied by the exposure of people's underlying physiological and psychological states.

Today, making a machine cry by frowning may seem innocuous, amusing or perhaps cruel, depending on your disposition. In the near future, facial expressions and emotions barely visible to humans will form new data classes, throwing up a host of critical issues, particularly around privacy, and changing the way we interact with technology forever.

Process

Creating this installation required a large number of moving parts and techniques, from GANs, emotion tracking, case building and plumbing to data streaming, IoT and projection mapping. It was certainly the most ambitious solo project I have taken on, and I learned a lot through the process. The project was only possible because of a lot of serendipity: chance encounters, random advice from strangers, and open-source information and code on the Internet.

So in that spirit I'm putting together a full blog post outlining the process, to help anyone who might want to play around with any of the elements I learned through doing: particularly the plumbing, controlling water to produce the tear effect, streaming biosensor data over the Internet without using a local network, making videos out of GANs, and how to project onto acrylic.
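
For the GAN-to-video step, for instance, the usual trick is to write generated samples out as numbered frames and stitch them together with ffmpeg. A minimal sketch, where the paths and frame rate are placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "24",             # playback speed of the generated frames
    "-i", "frames/frame_%04d.png",  # numbered GAN output frames
    "-c:v", "libx264",              # widely supported H.264 encoding
    "-pix_fmt", "yuv420p",          # needed by most players (and Processing)
    "gan_morph.mp4",
], check=True)
```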

Build
Software

The installation uses various programs to:

- Register emotional state from visitors (https://dea.rs/prototype/hashtag_sad.html)

- Send information from the website to the sculpture (JS)

- Control the tear system for sculpture (Raspberry Pi)

- Change light settings for sculpture (Arduino)

- Scrape information and data from online sources (Python)

- Process GANs (DCGAN-tensorflow & BEGAN-tensorflow; see the generator sketch after this list)

- Activate display of GAN imagery (Python)

- Activate videos (Processing)

- Projection mapping (Processing)
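
As a reference point for the GAN item above, here is a generic DCGAN-style generator in tf.keras, in the spirit of the DCGAN-tensorflow code the project used; this is a textbook sketch, not the installation's actual training code:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    """Map a latent noise vector to a 64x64 RGB image, DCGAN-style."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, use_bias=False, input_shape=(latent_dim,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((8, 8, 256)),
        # transposed convolutions upsample 8x8 -> 16x16 -> 32x32 -> 64x64
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        # tanh keeps pixel values in [-1, 1], matching DCGAN preprocessing
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

generator = build_generator()
noise = tf.random.normal([1, 100])
fake_face = generator(noise, training=False)  # shape (1, 64, 64, 3)
```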


Hardware

The installation was presented at room scale, in a space of 25 m², but can certainly be presented in a smaller space. It involves a computer sculpture receiving information from webcam inputs set up within the room. When the right signals (sad faces) are received, a water-cooling system within the computer sculpture is triggered, along with the projectors, which start displaying images produced by GANs. These projections are mapped onto acrylic hanging within the room, with one set of projections displaying newly generated faces based on a dataset scraped from online sources, and another producing more abstract hallucinations.

Sculpture

Custom-built PC case (based on a repurposed air-conditioning unit)

- Transparent acrylic front

- Pipe fittings (various sizes)

- PVC tubing (½ inch and ⅜ inch diameters)

- Flexible Piping

- Irrigation Droppers

- Steel sawhorse as a mount

- Spray Paint

- Arduino

- Raspberry Pi 3+

- 12v DC Pump

- 12v Solenoid Valve

- Wooden Base (custom built)

- LED Strip (10m)

- Scrap PC parts (motherboards, fans, CPUs)

Machine-Generated Art and Projections

- 2x Mini PCs

- 2x Projectors (normal throw)

- Frosted acrylic pieces

- Projection film

Emotion Detection

- Screen

- Webcam (LED night vision)

Impact

This work probes the implications, for nonhuman entities, of our new rituals performed as part of the digital spectacle. Will machines benignly watch our projections as we tag our worlds and upload them to the clouds for algorithmic digestion, or might they use their growing reasoning, imagination, and decision-making capabilities to benefit themselves?

This work sought to engage the public in reflecting on connected issues, specifically algorithms, the sharing of our emotions, and the future of biosensing, in an evocative and direct manner. By creating a simple exchange, entertainment (art) for emotions (sadness), the installation gave visitors a simple action to perform in order to initiate the work. The time spent in the space absorbing the generated artworks gave visitors room to reflect on and discuss this exchange, where they might see similar interactions in the future, and whether they felt comfortable with a machine understanding their emotions in order to entertain them.
