ET39 - Digital Audio Design for Games - 2015 Edition
03 - History of Recording & the Psychology of Sound - Intro to Multitracking
1877 - Edison makes the first recording of a human voice ("Mary had a little lamb") on the first tinfoil cylinder phonograph on Dec. 6 (the word "Halloo" may have been recorded in July on an early paper model derived from his 1876 telegraph repeater, but the paper has not survived) and files for an American patent on Dec. 24.
1881 - Charles Tainter at the Volta Lab made the first lateral-cut records, but without any practical machine to play them back.
1890 - The first "jukebox" was a coin-operated cylinder phonograph with 4 listening tubes, installed the previous Nov. 23 in San Francisco's Palais Royal Saloon. It earned over $1000 in its first 6 months of operation, setting off a boom in popularity for commercial nickel phonographs that kept the industry alive during the depression of the 1890s.
Early 1900's - Phonographs were popular household items and for the first time people could listen to recordings of famous musicians.
1908 - The first film scores, music composed specifically for a picture, began to appear. In large cities, movies were accompanied by a symphony orchestra (worth keeping in mind when you consider the resistance some people had to talking pictures). In smaller cities, the movie would be accompanied by a piano player.
1914 - Lee de Forest develops the electronic amplifying valve, or vacuum tube, finally making it possible to successfully project sound to an audience.
1920 - KDKA in Pittsburgh inaugurated commercial radio when it became the first radio station to receive its commercial call letters from the Department of Commerce on Oct. 27. It began regularly scheduled broadcasting Nov. 2 with the returns of the presidential election, and continued broadcasting every evening from 8:30-9:30 pm.
1926 - Warner Bros., then a minor studio undergoing financial difficulties, bought the disc-based VITAPHONE system, designed by AT&T, with financial backing from Wall Street; for $800,000 they secured the rights to it. That year Warner Bros. released Don Juan with John Barrymore. It had synced music, but it was preceded by a one-hour program of shorts and by a short segment in which the head of the MPPDA proclaimed, talking, "the beginning of a new era in music and motion pictures." It was a big hit with the public, but its future was still uncertain.
1927 - The Jazz Singer was supposed to be a singing picture, not a talking one, but during shooting Al Jolson ad-libbed some lines that remained in the final cut. He was not only talking, he was talking spontaneously. The film was a hit... everyone jumped on the bandwagon... there was no going back. But switching to sound was not easy, given the financial and technological problems involved.
1933 - Invention of the multitrack recording system, which allowed separation of dialogue, music, and effects. Before this, musicals had to be filmed and recorded at the same time; up until this point there was no way to treat those elements independently.
1938 - Les Paul; multitrack recording with discs; electric amplification for guitars
1940 - Multi-channel sound used with Fantasia, in a system called Fantasound. Disney was interested in emphasizing the directional character of a symphony orchestra, e.g., brasses clearly separated from strings. It was a double system based on a separate interlocked 35 mm print that carried 3 optical tracks plus a control track. For the LA premiere Disney added a primitive "surround" channel of 96 small speakers that could pick up sound from one or more of the main channels, e.g., the choir was heard throughout the theater.
1951 - Mag sound (magnetic tape), developed in Germany during WWII, became the leading film audio technology. It had been used as early as 1945 for non-sync sound (music especially). The system was very large and difficult to move.
1955 - A suitcase-sized recorder was designed for The Ten Commandments: sound was recorded for some shots filmed in Egypt.
1966-67 - The Beatles with George Martin; Sgt. Pepper's Lonely Hearts Club Band done with 4-track technology.
Early 1970's - There was a better sound system in the average American teenager's bedroom than in the neighborhood theater.
1974- Todd Rundgren; 24 track solo recording artist; "sounds of the studio"
1975 - Optical stereophonic sound on film pioneered by Dolby Laboratories. This system allowed the 2 optical tracks on the film to be encoded and split into 4 tracks: L, R, C, S (in a way based on the Quad system). It was cheaper and quicker than magnetic tracks. It also provided noise reduction and a broader dynamic and frequency range through the use of companding.
1976- Brian Eno; recording studio as an instrument; Ambient series
1977 - Star Wars needed to get the low sounds up for the space battles. Because the 35 mm print only had room for 4 tracks, they reverted to 70 mm with 6 magnetic stripes for play in certain specially equipped theaters: those that had leftover equipment from the '50s. The sixth channel was dedicated to the lowest frequencies, creating a theater subwoofer with its own amplification.
1979 - Superman and Apocalypse Now were released in surround sound, standardizing the 5.1 surround layout used today. This led to the final developments in the Dolby surround system and THX. This was also the first year an Oscar was given for sound design.
1989 - Digidesign Sound Tools on the Macintosh SE; digital audio editing for the desktop and project studio.
1993- Digidesign Pro Tools with TDM plug-ins; multi-track digital audio editing with accelerated software DSP.
- Increases in technology have led audio producers toward more careful sound design.
- Bombarded by ever-deepening visual information, audiences must have heightened sound effects, if only to perceive them at all.
-Improved theater speaker systems make further demands on filmmakers who must stretch their sonic creativity to compensate and compete with home stereo systems.
- "Action" movies, whose dialogue is often trivialized, especially depend on music and sound effects to carry their emotive levels.
-We must consider the soundtrack in terms of "quality" of
the sound (technical issue), and artistic quality (aesthetic issue).
- When TV was black and white, and the sound came out of tinny speakers, it was easy to accept technical limitations. We knew that Lucy's hair and Ricky's skin weren't gray, but we did not care. Or we filled in the red and tan in our minds. Color television made it harder to suspend our disbelief. Although gray hair was acceptable for Lucy, orange wasn't. Lighting and makeup became much more important. The same thing happened to sound. The increased audio clarity of digital tape, better speakers and amplifiers on TV sets, and the prevalence of stereo conspire to let us hear more of the track. Likewise, gaps in audio and audio quality are "seamed over" between the ear and the brain.
- Lower-quality sounds and technology can be used in the background because the ear is focused on the quality of the foreground sound.
SOUND PRIORITY FOR INTERACTIVE AND LINEAR MEDIA:
2. SOUND EFFECTS
Recording in Pro Tools (CD or MIDI source)
When recording to an analog medium such as magnetic tape, recording
engineers always try to keep their meters as close to 0 VU (stands for
Volume Unit, which is based on electrical currents) as possible. This
ensures a high signal-to-noise ratio while preserving enough headroom
to keep the tape from saturating and distorting. Recording a few peaks
that go above 0 usually doesn’t cause any problems since the tape saturation
point is not an absolute.
At what level, then, should a signal be recorded digitally? The standard
method for digital metering is to use the maximum possible sample amplitude
as a reference point. This value (32768) is referred to as 0 decibels,
or 0 dB. Decibels are used to represent fractions logarithmically. In
this case, the fraction is: sample amplitude divided by the maximum
possible amplitude. The actual equation used to convert to decibels
is: dB = 20 log (amplitude/32768)
Why do we use dBs? Well, for one, it's easier to say -90 dB than 0.000030
(1/32768). Decibels have been used for a very long time when dealing
with sound pressure levels because of the huge range (about 120 dB)
that the human ear can perceive. One confusing thing about using decibels
is that 0% is referred to as minus infinity (-Inf. throughout this manual
and in Sound Forge dialogs).
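The conversion above can be sketched in a few lines of Python. This is an illustrative helper, not part of Sound Forge or Pro Tools; `amplitude_to_db` and `FULL_SCALE` are names made up for this sketch, assuming 16-bit samples where 32768 is the full-scale reference:

```python
import math

FULL_SCALE = 32768  # maximum 16-bit sample amplitude, used as the 0 dB reference


def amplitude_to_db(amplitude):
    """Convert a sample amplitude to decibels relative to full scale."""
    if amplitude == 0:
        return float("-inf")  # 0% amplitude is minus infinity dB
    return 20 * math.log10(abs(amplitude) / FULL_SCALE)


print(amplitude_to_db(32768))  # 0.0 (full scale)
print(amplitude_to_db(1))      # about -90.3, the "-90 dB" mentioned above
```

Note that halving the amplitude always subtracts about 6 dB, which is why engineers talk about headroom in dB rather than in raw sample values.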
Now, let's get back to the real question: at what level should audio be digitized? If you know in advance what the very loudest section of the audio is, you can set your record levels so that the peak is as close to 0 dB as possible, and you'll have maximized the dynamic range of the digital medium. However, in most cases you don't know the peak in advance, so it's safer to leave some headroom below 0 dB; unlike tape saturation, digital clipping is abrupt and harsh.
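When a take was recorded with headroom, the dynamic range can still be maximized after the fact by scaling the buffer so its peak hits full scale. A minimal sketch of this peak-normalization idea, assuming signed 16-bit integer samples (`normalize_peak` is a made-up helper for illustration, not a Sound Forge function):

```python
def normalize_peak(samples, full_scale=32767):
    """Scale a buffer of integer samples so its loudest sample reaches full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # pure silence: nothing to normalize
    gain = full_scale / peak
    return [int(round(s * gain)) for s in samples]


# A quiet take peaking at 16384 (about -6 dB) is brought up to 0 dB.
quiet = [0, 8000, -16384, 4000]
print(normalize_peak(quiet))  # the peak sample now sits at -32767
```

The same gain is applied to every sample, so the relative balance of the recording is preserved; only the overall level changes.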
Now get in there and have some fun with Sound Forge.
Copyright ©1998 Sonic Foundry, Inc.
Intro to Pro Tools
Copyright © 2012 - 2015 David Javelosa