Arts & Culture

The Musician in Us: What the Future Will Sound Like

At the 2012 Grammys, the awards for Best Dance Recording, Best Dance/Electronica Album, and Best Remixed Recording went to a DJ who calls himself Skrillex.

Electronic Musician described his music as “bass riffs that sound like fire-breathing dragons, vocal melodies that closely resemble Central African Mbenga Mbuti Pygmy music, and deftly placed vocal samples that typically propel huge rave crowds into a frenzy.”

What made Skrillex win, though, was his unique ability to produce “future music,” or “what we think the future will sound like,” says Yale Fox—himself a DJ, nightlife psychologist, and a 2011 TED Fellow.

Indeed, Skrillex’s complex use of harsh growls, deep wobbles, and powerful drops, which fall under the genre of dubstep, captivated listeners through its unfamiliarity.

How did we arrive at this point in musical history? And where could we be headed?

*          *          *          *          *

On December 24, 1877, Thomas Edison filed a patent for a cylindrical device on which a stylus engraved etchings into an impressionable material. Slight changes in air pressure induced by sound waves caused the stylus to create grooves of varying depth. The original sound could thus be replayed when a needle ran through the grooves, as the vibrations were amplified mechanically. This device was called the phonograph, and until its invention, all music ever created was exclusively heard and enjoyed live. Today, we live in a world where well over 99 percent of the music we listen to is recorded digitally. Even ‘live music’ is typically an artificial replication and amplification of frequencies digitized by machines—milliseconds after lyrics are whispered into a microphone.

Technology for recording and altering sound has even made it harder to identify music. My mother, born in 1970, argues that Skrillex’s award-winning work isn’t real music. Musicologist Jean-Jacques Nattiez has written that he finds such uncertainty predictable: “There is no single and intercultural universal concept defining what music might be.”

*          *          *          *          *

During a jam session in the 1950s, rock and roll star Link Wray stabbed a pencil into his speaker cones, creating a harsh, dissonant backlash when he played his guitar. This distortion effect soon became popular, and artists began dislocating their amplifiers’ vacuum tubes to unleash an alluring fuzz. The effect could also be triggered by pushing the speaker volume beyond its designated limit, damaging the hardware. This combination of technology and experimentation helped fuel the most popular musical genre in world history: rock and roll. The 20th century saw an outpouring of innovative musical genres as sound recording and distribution technologies advanced. No longer did people need to travel physically to concerts, clubs, and opera houses to listen to music; no longer did only aficionados get to witness a virtuoso performance; and no longer did it take a professional to produce music. An explosion of experimental music challenged the traditions of music making. Bigger and better sound systems made huge concerts possible, where even those with cheap tickets could actually hear the music. In 1969, 400,000 people gathered for the Woodstock Festival in New York. Two decades later, the New York Philharmonic attracted twice that audience for its tribute to the Statue of Liberty, while even more tuned in through radio.

That was nothing compared to what the 21st century is bringing. In March 2011, 101 musicians from 33 countries were selected via YouTube auditions to form the YouTube Symphony Orchestra (YTSO). They were flown to Australia for a week of performances, and their concert was streamed live from the Sydney Opera House, bringing together a total of 33.5 million viewers around the world—the largest live stream YouTube had ever hosted. Thanks to the Internet, such concerts can be streamed as they take place—often for free—and those watching from home can get involved as a virtual audience through tools such as Livestream or Mixify. Producer Richie Hawtin recently released Twitter DJ, software that allows a performer to publicize the tracks playing in real time during live sets. Increasingly affordable technologies make music accessible to everyone.

Today, music production employs machines, studios, and high-definition headphones. Even genres outside of electronica—modern pop, rock, indie, dance, and hip-hop—use numerous artificial elements. Computers help adjust the levels of each recorded stream for mastering, helping to enhance a track’s appeal. Music producers can do this manually, or use automatic equalizing programs that identify song segments unappealing to the average human ear. Vocals can be tweaked with an array of effects—such as reverb, echoes, or filters—to blend seamlessly with the rest of the track. Programs can change the ambiance: for example, a recording made in a dorm room can be transformed to sound as if the singer were crooning from a carpeted hallway or a grand cathedral choir. A producer can handpick any part of a recording to edit or rearrange with a few mouse clicks. Recordings can be assembled word by word from many takes. No longer does a singer even need to be able to sing: Auto-Tune can keep them on pitch no matter what sound they emit. Modern synthesizers can replicate every instrument known to man, not to mention create new sounds never heard before.
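The core idea behind pitch correction can be illustrated with a few lines of arithmetic: a detected frequency is snapped to the nearest note of the equal-tempered scale, in which each semitone is a factor of 2^(1/12) apart. This is a minimal sketch of that principle only—the function name is illustrative, and real pitch-correction software does far more (pitch detection, smoothing, formant preservation).

```python
import math

A4 = 440.0  # standard reference pitch, in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Distance from A4 in semitones; off-pitch notes give a fractional value
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to a frequency
    return A4 * 2 ** (round(semitones) / 12)

# A singer aiming for A4 (440 Hz) but landing slightly flat at 432 Hz
corrected = snap_to_semitone(432.0)  # snaps back to 440.0 Hz
```

The same rounding step, applied continuously across a vocal take, is what keeps an out-of-tune performance locked to the scale.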

A kid with a laptop, some free time, and creativity has the potential to compose—and perform—entire symphonies for virtual orchestra. Cheap, downloadable remix packages with samples, plug-ins, and presets can conveniently be added to music production software. Beginners and pros alike now have access to easy-to-use building blocks to create and layer tracks. Software allows musicians to divide songs into perfectly paced products, syncing them to a specific number of beats per minute (BPM). This has fueled a recent outbreak of ‘mash-ups,’ which combine parts from different songs into one—usually tweaked to a consistent BPM and pitch. This easy experimentation has produced a raft of entirely new genres: dubstep, moombahton, and glitch hop are just a few examples of viral electronic genres that emerged this way.
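The tempo-matching step behind a mash-up reduces to a simple ratio: each track is stretched by the target BPM divided by its own BPM. The sketch below shows only that arithmetic (the function names and values are illustrative); real software applies the ratio with time-stretching algorithms that preserve pitch.

```python
def stretch_ratio(source_bpm: float, target_bpm: float) -> float:
    """Playback-rate multiplier that brings source_bpm up or down to target_bpm."""
    return target_bpm / source_bpm

def stretched_duration(duration_s: float, source_bpm: float, target_bpm: float) -> float:
    """New length of a clip after time-stretching it to the target tempo."""
    return duration_s / stretch_ratio(source_bpm, target_bpm)

# Layering a 100 BPM vocal over a 128 BPM instrumental:
ratio = stretch_ratio(100, 128)               # 1.28, so the vocal plays 28% faster
new_len = stretched_duration(30.0, 100, 128)  # a 30-second phrase becomes 23.4375 s
```

Once both clips share a BPM, their beats stay aligned for the whole song, which is why beat-matched mash-ups can be assembled with a few clicks.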

Technology has helped artists to conceive ‘perfected’ music—perfect equalization and mastering; perfect vocals that are refined and refined again; perfectly timed complex melodies; perfectly spaced beats per minute—all using machines that can do what humans cannot. Even turntables can now shift pitch, loop or replay assigned measures, automatically beat match, and provide all kinds of groovy effects with a simple twist of a knob. And through the Internet, all these new forms of musical creativity can be shared with billions of other human beings around the world. So—what’s in store for future music?

*          *          *          *          *

Concerts are increasingly live-streamed for those who can’t make it to the venue in person, and viewers can interact in real time through the Web. Soon better video and audio recording devices at one end, and better speaker systems and high-resolution 3D screens at home, will allow people to enjoy a richer and fuller concert experience. Massive events such as Sensation, Electric Daisy Carnival, or Ultra Music Festival will become further integrated with audiovisual technology, creating otherworldly environments for participants. DJ Richie Hawtin’s newest iOS app, Smudge, allows the crowd to control the sound and lighting in the club by creating original sequences on their touchscreens.

In the near future, music, samples, how-to sites, and software will be so accessible and affordable that almost anyone will be able to create original pieces, remixes, and mash-ups. Advancements in convenient, responsive, and robust mixing programs will allow musicians to express their thoughts and emotions more accurately. The processes through which music develops—starting with neurons firing in our brains and ending up as audible sound waves—will get reverse-engineered. Today’s technology already decodes brainwaves to control remote devices. Eventually musicians will merely imagine a melody and rhythm, and technology will bring it to life. If there is a soundtrack to creativity in its purest form, we will be able to listen to it.

*          *          *          *          *

What is music, aside from the medium of waves that cause oscillations of pressure through matter? What is music, aside from horizontally organized melodies and vertically organized harmonies? What is music, aside from stubbornly unique frequencies that induce human emotion and nostalgia? Michael Tilson Thomas, the artistic director and conductor of the YTSO, describes it as the singular feeling of “what it is like to be alive.”

Music matters. As musical technology continues to advance, anyone will be able to express the music inside of them, regardless of skill, technique, or experience. Says Thomas: “If you’re curious, if you have a capacity for wonder, if you’re alive, you know all that you need to know about music.” Today his words are figuratively true. In the future, their truth will be literal.
