Visual Music Systems

Learn about the VMS Performer

You can watch visual music or you can create your own, turning audio compositions into alternate realities. Our instrument, the VMS Performer, is available for free to all. As with any instrument, mastering it takes time and patience; unlike other instruments, it lets you create beauty as well as chaos on day one. To learn more about becoming a visual musician, email info@visualmusicsystems.com. Before you do, check out the overview of the Performer below.

The VMS Performer is the visual synthesizer developed by a team of software engineers at Visual Music Systems between 2011 and 2020. It provides a graphical programming environment, called Developer mode, for building visual instruments that let an artist “play” immersive visual environments with the same kind of spontaneous control an audio musician has over sound. It also includes a simpler interface, called Performer mode, that allows artists to “play” and configure these instruments without diving into the complex inner workings of the development environment. In addition, the VMS Performer offers multitrack recording and editing features that allow compositions to be edited and overdubbed. The VMS Performer is used in conjunction with the VMS Player, our 3D video viewer. The following sections provide screen shots and brief explanations of some of these tools.

The main screen of the VMS Performer shows the library of available instruments on the right-hand side and the set of currently selected instruments on the left. You drag items from the library into slots in the active banks. Multiple instruments are loaded into the active slots even though only one lead instrument is usually “played” at a time; keeping several lead instruments active allows fast switching during a performance. Other instruments are environmental background effects that run on sequencers and need only periodic adjustment, so dropping different environmental instruments into the slots sets the background for the composition.
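
The slot mechanics can be pictured as a small data model. The sketch below is illustrative only; the class and method names (Instrument, ActiveBank, and so on) are hypothetical, not part of the VMS Performer.

    # Hypothetical sketch of the library / active-bank model described above.
    class Instrument:
        def __init__(self, name, kind):
            self.name = name
            self.kind = kind   # "lead" or "environmental"

    class ActiveBank:
        def __init__(self, num_slots=8):
            self.slots = [None] * num_slots
            self.lead_index = None   # only one lead instrument plays at a time

        def load(self, slot, instrument):
            """Drag an instrument from the library into an active slot."""
            self.slots[slot] = instrument

        def select_lead(self, slot):
            """Fast-switch which loaded lead instrument is being played."""
            if self.slots[slot] and self.slots[slot].kind == "lead":
                self.lead_index = slot

    bank = ActiveBank()
    bank.load(0, Instrument("RibbonPainter", "lead"))
    bank.load(1, Instrument("Starfield", "environmental"))   # sequencer-driven
    bank.select_lead(0)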

Once an instrument is loaded into an active slot, it is under dynamic control. Lead instruments are “played” like musical instruments, with their shape, position, color, pattern, texture, and other qualities simultaneously under instantaneous control. The artist uses motion controllers in both hands as well as foot pedals. The two hands work cooperatively, much as they do on a guitar. The right hand is like a pick hand in that it triggers the events: moving the objects and sending out streams of color. The left hand is like a fret hand in that it sets what happens in response to those actions: the colors that will be used, the geometric rules that control the patterns, the textures that will be painted, and many other qualities of the instrument. The left-hand controls are highly modal: buttons select which parameters are adjusted and which instruments, or elements within instruments, are affected.
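
In software terms, this two-handed scheme resembles a modal dispatcher: the left hand selects what the controls mean, and the right hand fires events that are interpreted under that mode. The following sketch is a loose illustration with invented names, not actual VMS control code.

    # Hypothetical sketch of the pick-hand / fret-hand division of labor.
    class ModalControls:
        def __init__(self):
            self.mode = "color"   # left-hand buttons select the active mode
            self.settings = {"color": (1.0, 0.2, 0.2), "pattern": "spiral"}

        def left_hand(self, mode, value):
            """Fret hand: choose what subsequent right-hand gestures will do."""
            self.mode = mode
            self.settings[mode] = value

        def right_hand(self, position, velocity):
            """Pick hand: trigger an event, interpreted under the current mode."""
            return {"event": "emit", "at": position, "speed": velocity,
                    self.mode: self.settings[self.mode]}

    controls = ModalControls()
    controls.left_hand("color", (0.1, 0.6, 1.0))   # set the palette
    print(controls.right_hand((0, 1, 2), 0.8))     # send out a stream of color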

In addition to using the instruments from our library, the basic Performer mode lets users reconfigure instruments using tuning pages, creating new personalized instruments that can be saved and reused.
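
Conceptually, saving a personalized instrument is just serializing the tuned parameter values as a named preset. A minimal sketch, assuming a hypothetical JSON layout (the real VMS file format is not described here):

    import json

    # Hypothetical preset save/load; the actual VMS format may differ entirely.
    def save_instrument(path, base_instrument, tuned_params):
        with open(path, "w") as f:
            json.dump({"base": base_instrument, "params": tuned_params}, f, indent=2)

    def load_instrument(path):
        with open(path) as f:
            return json.load(f)

    save_instrument("my_ribbon.json", "RibbonPainter",
                    {"size": 0.7, "palette": "ocean", "trail_length": 120})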

The Tuning pages group parameters by functionality, such as adjustments for size, color, and patterns. Some parameters affect aspects of the instrument that are not under real-time control. Others adjust the responses of the real-time controls, changing their responsiveness or ranges. While these kinds of adjustments are not typically made during a performance, the control panels are viewable while wearing the VR headset and adjustable using the VR hardware, so they can be changed while interacting with the scenes.
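
The distinction between the two kinds of parameters can be sketched as follows; the names (RealTimeControl, curve, and so on) are illustrative assumptions, not VMS identifiers.

    # Hypothetical sketch: a tuning page either sets a value directly or
    # reshapes how a real-time control responds to the performer's gestures.
    class RealTimeControl:
        def __init__(self, lo=0.0, hi=1.0, curve=1.0):
            self.lo, self.hi = lo, hi   # tuning adjusts the output range...
            self.curve = curve          # ...and the responsiveness (gamma-style)

        def respond(self, raw):
            """Map a raw 0..1 hardware value through the tuned response."""
            return self.lo + (raw ** self.curve) * (self.hi - self.lo)

    size = RealTimeControl(lo=0.2, hi=3.0, curve=2.0)   # slow start, fast finish
    print(size.respond(0.5))   # halfway gesture -> ~0.9, a quarter of the range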

The Performer mode allows users to load, modify, and save instruments, but not to modify the selection, patching, and configuration of the low-level nodes that define each instrument. In the advanced mode, developers can dive down into these details using a graphical programming environment to create their own visual instruments. The low-level architecture is similar to that of many audio synthesizer development systems: a library provides an assortment of nodes, which are patched together, configured using parameters, and interfaced to the hardware input devices.
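
For a feel of what that looks like in code, here is a toy node-and-patch model; every class and node name is a hypothetical stand-in, not the actual VMS node library.

    # Hypothetical node/patch sketch in the style of audio synth toolkits.
    class Node:
        def __init__(self, name, **params):
            self.name = name
            self.params = params   # configured via the node's parameter panel
            self.inputs = {}       # input port -> (upstream node, output port)

        def patch(self, port, source, source_port="out"):
            """Connect an upstream node's output into one of our input ports."""
            self.inputs[port] = (source, source_port)

    lfo = Node("LFO", rate_hz=0.5)
    spiral = Node("SpiralGenerator", arms=6)
    ribbon = Node("RibbonPainter", width=0.3)
    spiral.patch("phase", lfo)       # the LFO animates the spiral
    ribbon.patch("path", spiral)     # the spiral drives the ribbon's path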

The primary work view contains several windows. The top-right PatchView shows the basic functional building blocks that make up an instrument “voice”; each voice typically consists of 10 to 150 nodes. The nodes are selected from a library, patched together, and then configured using the two windows at the bottom. The bottom-left window shows all of the parameters controlling the node currently selected in the PatchView. If a parameter needs to be controlled by hardware, it is dragged to the bottom-right ControlNode view, where it is connected to hardware via “DataSources” at the top of the panel and then modified by various transforms in the subsequent nodes of the panel.
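
In effect, dragging a parameter into the ControlNode view builds a small chain from a DataSource through transforms to that parameter. The sketch below illustrates the idea; the helper names (scale, clamp, ControlChain) are invented, not VMS APIs.

    # Hypothetical ControlNode chain: DataSource -> transforms -> parameter.
    def scale(factor):
        return lambda x: x * factor

    def clamp(lo, hi):
        return lambda x: max(lo, min(hi, x))

    class ControlChain:
        def __init__(self, data_source, transforms, params, name):
            self.data_source = data_source   # e.g. controller height, 0..1
            self.transforms = transforms     # shaping nodes added in the panel
            self.params = params             # the selected node's parameter table
            self.name = name                 # the parameter that was dragged in

        def update(self):
            value = self.data_source()
            for t in self.transforms:
                value = t(value)
            self.params[self.name] = value

    params = {"width": 0.3}
    ControlChain(lambda: 0.9, [scale(2.0), clamp(0.0, 1.5)], params, "width").update()
    print(params)   # {'width': 1.5}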

The DataSources used to control parameters in the ControlNode view provide an abstraction of the hardware, so that instruments can be developed independently of the hardware the artist will use to play them. The DataSources are items like a 6D position control or a color matrix controller. The DeviceManager window is where different physical devices are mapped onto the DataSources.
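
One way to picture the abstraction: instruments read from stable logical sources, and the DeviceManager decides which physical device feeds each one. A minimal sketch, with all names invented for illustration:

    # Hypothetical DataSource abstraction: the instrument sees a logical 6D
    # source; the DeviceManager maps whatever hardware is present onto it.
    class SixDofSource:
        def __init__(self, read_fn):
            self._read = read_fn          # supplied by the DeviceManager mapping

        def pose(self):
            return self._read()           # (x, y, z, yaw, pitch, roll)

    def fake_motion_controller():
        """Stand-in for a real device driver's read function."""
        return (0.0, 1.2, -0.4, 0.0, 15.0, 0.0)

    right_hand = SixDofSource(fake_motion_controller)
    print(right_hand.pose())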

The screen capture of the DeviceManager shows the set of DataSources currently in use. Clicking on a node opens the window on the right, where the interface is constructed. Hardware devices are selected in the bottom left, and their signal sources are dragged to the top of the construction panel. A node library provides the various transforms needed to shape each hardware signal, and the final results at the bottom are the ports the DataSource exposes in the ControlNode view.
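
In code terms, the construction panel wires a raw device signal through shaping transforms to the named ports the DataSource exposes. A hypothetical sketch:

    # Hypothetical DeviceManager wiring: raw signal -> transforms -> exposed port.
    def deadzone(threshold):
        return lambda x: 0.0 if abs(x) < threshold else x

    class DataSourceBuilder:
        def __init__(self):
            self.ports = {}   # what the ControlNode view will see

        def wire(self, port, read_fn, transforms=()):
            def read():
                value = read_fn()
                for t in transforms:
                    value = t(value)      # shape the hardware signal
                return value
            self.ports[port] = read

    pedal = DataSourceBuilder()
    pedal.wire("intensity", lambda: 0.03, [deadzone(0.05)])  # noisy pedal at rest
    print(pedal.ports["intensity"]())  # -> 0.0 once the deadzone is applied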

In addition to supporting live performances, the VMS Performer's output can be recorded, edited, and overdubbed in ways similar to the operation of an audio recording studio. Recording uses most of the same windows as the Node View.

The recorder is controlled by the Timeline in the top center, which shows the instruments that are active at any given time as well as the nodes for the voices within those instruments. Clicking on a node opens the PatchView for that voice, exposing its internal nodes, their parameters, and the hardware interfaces. The RecordState tree at the top left selects what is being recorded or played at any given time. It is hierarchical, so everything can be in record mode, or just individually selected nodes or parameters within nodes. The RecorderControl panel at the bottom left has the expected commands such as play and record.
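
The hierarchical record selection can be modeled as a tree in which each node's record flag, if unset, defers to its parent. A sketch with invented names:

    # Hypothetical RecordState tree: record/play state is inherited down the
    # hierarchy unless overridden at a node or an individual parameter.
    class RecordState:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self._armed = None   # None means: inherit the parent's state

        def arm(self, recording):
            self._armed = recording

        def is_recording(self):
            if self._armed is not None:
                return self._armed
            return self.parent.is_recording() if self.parent else False

    root = RecordState("session")
    root.arm(True)                                   # put everything in record mode
    voice = RecordState("voice1", parent=root)
    color = RecordState("voice1.color", parent=voice)
    color.arm(False)                                 # ...except this one parameter
    print(voice.is_recording(), color.is_recording())   # True False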

In addition to the primary windows previously described, the system has various sub-windows used to perform specific tasks. An example is the TextureProcessor, which reprocesses two-dimensional artwork so that it can serve as texture maps within the synthesizer.

The tool allows the color ranges and alpha channels to be set as needed by the synthesizer, and the edges to be cleaned up so that texture boundaries are invisible.
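
A simplified version of this kind of texture preparation is sketched below, using the Pillow and NumPy libraries: a keyed color range is turned into transparency, and the border alpha is faded so tiled edges do not show. This is only an approximation of what the TextureProcessor does, and every name in it is hypothetical.

    from PIL import Image
    import numpy as np

    # Simplified, hypothetical texture prep: key a color range to alpha,
    # then fade the border pixels so texture boundaries disappear.
    def prepare_texture(path, key_rgb=(255, 0, 255), tol=30, fade=8):
        img = np.array(Image.open(path).convert("RGBA")).astype(np.float32)
        # Pixels whose color is close to the key color become transparent.
        dist = np.abs(img[..., :3] - np.array(key_rgb)).sum(axis=-1)
        img[..., 3] = np.where(dist < tol, 0.0, img[..., 3])
        # Ramp the alpha down over the outermost rows and columns.
        h, w = img.shape[:2]
        for i in range(fade):
            a = (i + 1) / (fade + 1)
            img[i, :, 3] *= a
            img[h - 1 - i, :, 3] *= a
            img[:, i, 3] *= a
            img[:, w - 1 - i, 3] *= a
        return Image.fromarray(img.astype(np.uint8))

    # Usage: prepare_texture("artwork.png").save("texture.png")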