As a classical composer, my first intention was to bring a conventional opera into a VR environment. With my experience of presenting live videos inside a VR environment in mind, I imagined a lockdown performance in which artists around the world would stream their live videos simultaneously into our VR scene to perform the opera.

With time and experience, I noticed that making live acoustic music online has many disadvantages, such as poor sound quality and the impossibility of staying in time together. It is challenging to communicate musically this way. And from my experience as a digital artist on the internet, I know that many variables can fail, such as a poor internet connection or a misconfigured streaming setting that ruins the performance. Audio routing and streaming software are not tools that every musician, dancer, or actor can quickly master.

The idea of presenting the opera at a specific time was also not very appealing to me. It would have meant coordinating all the artists live, with the audience present, as in a physical theater. And even though people were eager to attend events for social interaction and to experience art, replicating the form of a physical opera would not have been suitable for CITIZEN 4 VR.

This is why I started to play more and more with the idea of “installing” the opera: placing the artists’ videos and recordings there, repeating in non-stop loops that fall out of phase with one another and thus change constantly. This decision allows the freedom to visit the opera at any time, to stay as long as you want in a specific act or scene, or to see it as many times as you wish.

Of course, every time the opera is presented at a festival, we will have an opening night to enjoy the social interaction that premiere nights offer.

The “live” aspect we associate with opera will come not from performers acting live, but from the fact that every person, embodied in an avatar, will traverse the opera at their own pace and in their own direction, making it a unique live performance shaped by their decisions. The viewer is the ultimate creator of the opera, the one who gives it its final shape, structure, and form.

Mozilla Hubs offers very few possibilities for interaction. Only a couple of triggers are available, such as starting a model’s movement or playing a sound when an avatar comes into proximity. The opera uses these to interfere with the patterns of visuals and sounds, creating a distinctly individual experience for every visitor. In addition, the different lengths of the constantly looping animations and sounds will generate ever-shifting patterns, bringing personal changes in music and choreography for every spectator.
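The phasing effect of unequal loop lengths can be sketched numerically. In this illustrative Python snippet (the loop durations are invented, not the opera’s actual timings), two loops only realign after the least common multiple of their lengths, and in between the sound loop sits at a different phase each time the animation restarts:

```python
from math import lcm

# Hypothetical loop lengths in seconds for two looping assets in a scene:
# a dance animation and a musical phrase. Purely illustrative numbers.
animation_loop = 7
sound_loop = 5

# The combined pattern only repeats when both loops realign,
# i.e. after the least common multiple of their lengths.
combined_period = lcm(animation_loop, sound_loop)
print(combined_period)  # 35

# Relative phase of the sound loop each time the animation restarts:
# a different alignment at every repetition until the cycle closes.
phases = [(k * animation_loop) % sound_loop
          for k in range(combined_period // animation_loop)]
print(phases)  # [0, 2, 4, 1, 3]
```

With co-prime loop lengths like these, every possible alignment occurs before the pattern repeats, which is one way such an installation can keep changing for each visitor without any live intervention.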

All the acts of the opera, and their scenes, can be visited in any order. This structural modularity and non-linearity are fundamental in my compositions, not only in a piece’s general structure but also in its minimal components. 

A chamber ensemble consisting of accordion, bass trombone, male voice, female voice, and electronics will be the orchestra that performs the opera’s music. Dancers and actors will tell parts of the story.

As a live coder who performs and composes using the patterning language TidalCycles, I prefer writing music with its cycling, patterned nature constantly present. Or perhaps it is better to say that, because the cyclical nature of music is so important to me, TidalCycles is my favorite language for creating music.
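For readers unfamiliar with TidalCycles, its core idea can be caricatured in a few lines of Python: a pattern is a function from a cycle number to timed events. This is only a toy model with invented names, not TidalCycles’ actual (Haskell) API:

```python
# Toy model of TidalCycles' central abstraction: a pattern is a function
# of cycle time. All names here are illustrative, not the real API.

def pure(value):
    """A pattern that plays one event per cycle."""
    def query(cycle):
        # Each event is a (start, end, value) triple in cycle time.
        return [(cycle, cycle + 1, value)]
    return query

def fastcat(*patterns):
    """Squeeze several patterns into one cycle, like "bd sn" in Tidal."""
    n = len(patterns)
    def query(cycle):
        events = []
        for i, p in enumerate(patterns):
            for start, end, value in p(cycle):
                width = (end - start) / n
                events.append((start + i * width,
                               start + (i + 1) * width,
                               value))
        return events
    return query

drums = fastcat(pure("bd"), pure("sn"))
print(drums(0))  # [(0.0, 0.5, 'bd'), (0.5, 1.0, 'sn')]
```

Because patterns are just functions of the cycle, they repeat forever by construction, which is the “cycling nature” referred to above; composing transformations of such functions is what live coding in this style consists of.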

When I decided to make the opera in a VR environment, I considered including live-coded performances. But given the difficulties of group live performances online, it is more efficient, and more suitable for this opera, to have it installed. For this reason, I limit live coding this time to a purely compositional technique.

What survived from my original idea of streamed performances is the general intention of the musical material. I always knew I wanted the musicians to improvise following specific recommendations I would give them. I didn’t want to create an overly complex written piece that would complicate the online performance.

Instead, I give every musician, actor, and dancer specific musical and emotional instructions for every scene. They send me a video and a good-quality sound recording of the resulting improvisation, and this material is then composed into the opera.

The electronic sounds I bring into the opera consist mainly of processed versions of the material I receive from the musicians and of the actors’ voices, combined with a vast library of synthesized and pre-recorded sounds and effects. The opera can range from a monodrama up to a Grand Opera displaying an enormous orchestra of sounds.

If the Virtual Reality stage is a score, each avatar will read the musical piece through its movements, timings, and actions. 

As a composer, I give life to a vast set of possibilities that takes a specific shape once the audience observes it.

Alexandra Cárdenas

composer/programmer/improviser/live coder/algoraver