EQUITONO


Equitono, the project selected in the 3rd Cesar-Etopia Labs Videocreation Contest, is an experimental audio-visual device that shows a visual representation of sound in real time. The aim of the project is to illustrate the sound of the human voice - phonetic recordings, intonations, and expressions - from a subjective artistic perspective, drawing on experimental psychology research and on archetypes of the collective unconscious.
Professor, designer, and typographer Martin Solomon (New York, 1932-2006) used the word "equitone" to name visual sound: the idea that visual forms and sound forms can be defined in the same terms, such as rhythm, texture, noise, balance, color, contrast, accent, harmony, and density.
Long before, the painter Wassily Kandinsky (Moscow 1866 - Paris 1944) claimed to paint music and sound; he said: "Colour is the keyboard, the eyes are the hammer, the soul is the strings. The artist is the hand that plays, touching one key or another to cause vibrations in the human soul."
Nowadays, neuroscientists such as David Eagleman study the brain's connections between visual shapes and the sounds of language in the phenomenon known as synesthesia.


INVESTIGATION AND PROCESS
Most often, sound and image have been connected through physical-scientific representations: through the object that produces the sound, through graphic representations of waves, or through the vibration of materials. The challenge of Equitono is to connect sound and image at their primal origins, without such intermediaries. It is like recreating - and compressing - the process by which the letters or ideograms that represent the sounds of language - the phonemes - were formed, and with which we later came to designate objects and things.
Sounds carry no concrete meaning, yet they can be represented with more or less concrete shapes. This was investigated years ago, and a connection between the sounds of language and the shapes of objects was demonstrated; it is called "phonetic symbolism". The experiment has been repeated at different times and in different places: participants had to pair two meaningless words with two meaningless shapes - for example, the word "bouba" or the word "kiki" with a rounded shape or a sharp one. They made the same associations in 95% of the tests.
I conducted a participatory experiment to record and collect samples of human voices, in order to gather a diversity of expressions and intonations as well as visual interpretations of those sounds. The experiment took place in the Sound Lab of Etopia during the Citizen Science Week. Each participant recorded a sound using their voice, then listened to another, randomly chosen sound recorded earlier, and finally made a drawing on a colored card to represent that sound.
It is difficult to avoid relating a sound to an object, as if the object were the sound or could not exist without it. This was the first observation of the experiment: participants often tried to guess which object a sound belonged to, as if every sound were an imitation - even though the objective of the project is to obtain visual representations without meaning. Like typography with letters, Equitono writes sounds, but neither words nor meanings. Even so, the experiment revealed interesting connections between sound perception and visual perception.

PRODUCTION AND TEST
Based on the participatory experiment and on the research into phonetic symbolism and synesthesia, I designed a series of looping animated visual shapes to represent different groups of sounds according to their characteristics.
I programmed the sound-image interaction - the visual response to sound - with the free software Pure Data. The sound is analyzed by frequency (hertz), intensity (decibels), and musical note (tone), and translated into animated visuals and colors previously classified according to the established criteria. For example, a low sound produces bluish colors and rounded shapes, while a high sound produces reddish colors and broken shapes.
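The actual device is a Pure Data patch, but the classification idea - low frequencies mapping to bluish, rounded visuals and high frequencies to reddish, angular ones - can be sketched in Python. The frequency range and hue endpoints below are illustrative assumptions, not the project's real settings:

```python
import colorsys

def frequency_to_visual(freq_hz, low_hz=80.0, high_hz=1000.0):
    """Map a pitch estimate to a colour and a shape descriptor.

    Low frequencies give bluish hues and rounded shapes; high
    frequencies give reddish hues and angular ("broken") shapes.
    The range [low_hz, high_hz] is an assumed working range for
    the human voice.
    """
    # Clamp the frequency and normalise it into [0, 1].
    t = (min(max(freq_hz, low_hz), high_hz) - low_hz) / (high_hz - low_hz)
    # Hue: 0.66 (blue) for low sounds, sliding down to 0.0 (red) for high.
    hue = 0.66 * (1.0 - t)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    # Roundness of the shape: 1.0 = fully rounded, 0.0 = sharp, broken edges.
    roundness = 1.0 - t
    return {"rgb": (r, g, b), "roundness": roundness}

print(frequency_to_visual(100.0))   # low voice: bluish, rounded
print(frequency_to_visual(900.0))   # high voice: reddish, angular
```

In the real patch this mapping also weighs intensity and musical note; the sketch isolates only the frequency-to-colour-and-shape criterion described above.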
Ángel Lalinde helped me during production to test the device and tune its settings. He composed several sound pieces from the hundreds of phonetic recordings we had collected, and also made a series of animations that propose a more figurative symbolism.

CONCLUSION
The result is an interface that reacts to incoming sound (microphone) and responds with visual shapes and colors (screen) corresponding to the frequencies, notes, and intensities of the audio, creating a real-time audio-visual dialogue. We could call it a data visualizer with subjective intervention, because the visuals are driven not by data alone but by more complex perceptions and considerations - those always present in works of graphics and illustration, which do not arise from automated or unidirectional systems. The Equitono device can also be adjusted for aesthetics, scientific research, or experimentation.
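The analysis stage of such an interface - extracting a dominant frequency and an intensity level from each microphone frame before mapping them to visuals - can be sketched in Python with an FFT. This is a generic illustration, not the Pure Data objects the project actually uses; the 100 ms frame and the simulated 220 Hz tone are assumptions for the demo:

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Estimate the dominant frequency (Hz) of an audio frame via FFT."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def intensity_db(samples):
    """Root-mean-square level of the frame, in decibels (relative)."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))

# Simulated microphone frame: a 220 Hz tone at half amplitude.
sr = 44100
t = np.arange(sr // 10) / sr               # 100 ms frame
frame = 0.5 * np.sin(2 * np.pi * 220.0 * t)

print(dominant_frequency(frame, sr))       # 220.0
print(round(intensity_db(frame), 1))       # -9.0
```

In a live setup these two numbers, recomputed every frame, would feed a mapping like the one described in the production section, closing the loop between voice and screen.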
During the tests we also noticed something surprising and fun: in front of the screen, voice at the microphone, we found ourselves playing with the images and colors as in a chase. In that game, without realizing it, we lost our shyness toward the microphone and distorted our voices to the grotesque. Perhaps that is the true sound translation of the images?