Discussion: Sound of film: The Conversation (Coppola, 1974)

Description

The Conversation (Coppola, 1974)

Answer one of the prompts below regarding the use of sound (and, to a lesser extent, editing, cinematography, and mise-en-scene) in The Conversation. Your response should be between 300 and 500 words and should include minimal plot summary. Assume that your reader has seen the film, but please do set up specific scenes with a sentence or two if you plan on discussing them in depth. It's often helpful to focus on a specific scene or shot for careful, detailed analysis as a means of supporting a larger claim. Please be sure to proofread, use specific terms from lectures/readings when relevant, and review the included rubric!

You are required to use 2 terms from the sound lectures (week 4), 1 term from the editing lectures (week 3), 1 term from the cinematography lectures (week 2), and 1 term from the mise-en-scene lectures (week 1).

Some questions to help guide your thought process are below (you do not need to answer all of them, or any of them necessarily; they are simply suggestions to guide your analysis if you need help getting started):

1. How does the film trouble the distinction between internal and external diegetic sound? How does that ambiguity contribute to our understanding of the plot or characters?
2. How does sound contribute to our sense of perspective (i.e., where we are and whose point of view we share) in any given scene?
3. How does sound participate in the creation of the mood of paranoia, or the film's themes of exposure and vulnerability?



Sound and Film: Part I
While there's an entire era of film that we call "silent cinema," the truth is that cinema has never really been silent. Popular histories of film tend to stress an imaginary shift in the late 1920s when we moved from "silent" cinema, or films with no accompanying prerecorded sound, to "sound" cinema, but this transitional narrative ignores the fact that cinema has always relied on sound as a means of shaping experience. Whether it's Edison's early attempts at linking the kinetoscope, an early film-viewing box, to the phonograph for simultaneous sound and image; pianists accompanying screenings of films in nickelodeons; orchestras performing during screenings at the movie palaces of the early 20th century; benshi in Japan calling out narration that shaped how audiences understood the screen action; or live sound effects added by performers, sound has been present in film since almost the very beginning.
The difference is that sound was not always tied directly to the film text in the way
that it is now: aside from experiments such as Edison’s early attempt at synchronized
sound, the move from what we call silent cinema to what we call sound cinema might
better be understood as the movement of sound during the exhibition of a film from
live performance in the theater, to the film strip itself. That said, as studios and
exhibitors moved toward a more automated synchronization of prerecorded sound,
new artistic challenges and possibilities arose for the medium of film. From a
technical perspective, creating films that could be played back with synchronous
sound, in other words, sound that is matched temporally with the movements
occurring in the images, was not easy. Before the advent of multitrack recording and
mixing, or combining two or more audio tracks onto a single track, film productions
were entirely reliant on capturing direct sound. Direct sound, which refers to music, noise, and speech recorded from the event at the moment of filming, wasn't just necessary for recording dialogue but for scores as well. The technical inefficiencies and pitfalls of early sound recording are captured in this clip from the 1952 classic Singin' in the Rain.
As we can see in the clip, early sound cinema required a degree of subservience to the
sound at the expense of other filmmaking techniques. Quick edits, drastic shifts in camera angles and distances of framing, and camera movement, all techniques embraced by many filmmakers in the 1920s, gave way to a greater use of static shots, long takes, and consistent, repetitive distances of framing and angles that could better accommodate shooting with multiple cameras simultaneously to ensure that the sound was consistent between shots. In these two shots from the 1929 film The Broadway Melody, for example, these two women are not only dragging the male performer away from the crowd: they're dragging him towards the microphone.
The Singin' in the Rain clip from before attempted to humorously convey the challenges of recording direct sound that arose with the new possibility of synchronized sound in the cinema. Later on in the film, Singin' in the Rain also plays with asynchronous sound, or sound that is not matched temporally with the movements occurring in the image.
Here, the fact that the footage and the audio track are asynchronous is played for laughs: the intentional use of asynchronous sound in the cinema remains rare, as asynchronous sound is generally understood to be the product of machine error or poor foreign-language dubbing.
The lack of creative use of asynchronous sound in most cinema would likely disappoint the Soviet Montage theorists like Eisenstein and Pudovkin we discussed last week. In their short manifesto, "A Statement on Sound," the Soviet filmmakers called for the incorporation of sound not as a means of producing a more 'naturalistic' representation of reality by creating the illusion of talking people and audible objects in the cinema, but instead called for the use of contrapuntal sound. Contrapuntal sound, or sound that contrasts strongly with the image, could, according to these filmmaker/theorists, allow film to reach new heights as a universal language, one capable of conveying meaning not present within image or sound alone. Basically, sound could create meaning through juxtaposition, adding a new layer to Eisenstein's earlier conception of intellectual montage. Nowadays, contrapuntal sound is heavily utilized, particularly in trailers, where we often see happy or upbeat music accompanying dark, scary, or violent visual content. We can see an example of contrapuntal sound in the film Dr. Strangelove, in which Vera Lynn's "We'll Meet Again" plays over footage of nuclear bombs exploding, documentary footage repurposed here to suggest the onset of the nuclear armageddon that begins at the end of the film, creating an ironic contrast between music and image.
Unfortunately for Eisenstein and his comrades, this vision of the use of sound in cinema did not become the dominant approach, and even the use of contrapuntal sound as ironic satire, as in Dr. Strangelove, has become an ineffectual cliché. That said, there are early examples of surprising uses of sound, even in popular cinema. Take the overarching use of diegetic sound (in other words, any voice, musical passage, or sound effect presented as originating from a source within the film's world) in a film often considered to be the first "talkie," the 1927 film The Jazz Singer.
In truth, The Jazz Singer is only partially a talkie: while the film was the first feature-length movie to contain a fully prerecorded soundtrack with synchronous dialogue and vocal performance, only some of the dialogue was actually prerecorded.
Throughout the film, we cut from synchronous and simultaneous sound (diegetic sound that is represented as occurring at the same time in the story as the image it accompanies) to scenes which have only nondiegetic sound (that is, sound, such as the score or a narrator's commentary, represented as coming from a source outside the space of the narrative). In the scenes with only nondiegetic sound, dialogue is still shown using intertitles. In The Jazz Singer, Al Jolson plays the
son of a cantor, a Jewish prayer leader who sings liturgical content. His conservative
father wants him to follow in his footsteps but Jolson’s character instead dreams of
being a popular performer, leading to a generational conflict as well as an internal
conflict for Jolson's character over his Jewish identity. In this scene, notice how the direct sound of Jolson singing and ad-libbing creates an exciting, unpredictable (and, if we're being honest, kinda incestuous) performance, while his father, whom we might understand as representing the old guard of silent cinema, steps in and ends the use of synchronous sound with one sharp word. Here we pit older, conservative silence against youthful, more 'modern' sound: sound is being deployed to hype its future incorporation into cinema while critiquing those who would try to hang on to the cinematic practices of the past.
It’s worth pausing for a minute to acknowledge and discuss the fact that The Jazz
Singer includes extensive scenes of blackface – in fact, the film’s climax is Al Jolson,
a Russian-born Jewish man, performing in blackface. The use of blackface brings to
light a tension that is already implied in the film’s title: The Jazz Singer’s key conflict
of identity within a Jewish family takes place on the terrain of a musical genre which
was created by, and has continued to be associated with, black performers. Jolson’s
character in The Jazz Singer is coded as progressive and modern through his pilfering
of music from a black cultural tradition, an appropriation that finds its full expression
in blackface. The weirder thing is that it's only appropriation in name: Jolson is performing his own songs, as far as I know, songs that sometimes dip, rather unsurprisingly, into outright minstrelsy (the final song that Jolson performs in the film is titled "My Mammy"). While it might be inaccurate to say that Jolson intends to be hateful (the film demonstrates an appreciation for jazz, at least in the abstract), Jolson nonetheless deploys the deeply racist trope of blackface, performing a false stereotype of blackness for a largely white diegetic audience during the film. Actual black people are the structuring absence upon which The Jazz Singer rests: black performers, who hardly appear at all in The Jazz Singer, are reduced to a minstrel stereotype through the combination of Jolson's use of blackface and their exclusion from the film. I mention this to acknowledge that the vision of a youthful, progressive sound cinema which The Jazz Singer anticipates is nonetheless tied up in much older, racist cultural practices.
For entirely different reasons, famed comedian Charlie Chaplin resisted the
incorporation of synchronized sound in the cinema. While The Jazz Singer presents
synchronous and simultaneous sound as cinema’s future, almost a decade later,
Chaplin would continue to make films in what we could call the ‘style’ of silent
cinema, using pre-recorded scores and sound effects but still primarily using intertitles
for dialogue, and only really deploying simultaneous, diegetic sound (such as
recorded dialogue) as critique. For example, Chaplin restricted the use of diegetic
sound in his 1936 film Modern Times to only those who have the "authority to speak," and he always uses some mode of technological mediation, present in the diegetic content of the film itself, to deliver this sound. In this film, sound is both a signifier of authority and a product of a technological apparatus, rather than something which simply emanates from a human body. Sound, for Chaplin, functions as a tool of discipline and technological control. We'll leave off this lecture with a scene that demonstrates this critique.
Sound and Film: Part II
As technology improved and multitrack mixing and postsynchronization practices (the process of adding sound to images after they have been shot and assembled, as in dubbing or ADR) became the norm, spectators began to see a greater degree of experimentation and a level of flexibility denied to earlier productions.
In Fritz Lang's 1931 classic film M, for example, we see the use of a leitmotif, or a melodic passage or phrase associated with a specific character, situation, or element, that becomes linked to the child-killing antagonist. In this case, the leitmotif, actually postsynchronized and recorded by Lang himself, is the whistling of Grieg's "In the Hall of the Mountain King," and it becomes a significant element of the plot: it ultimately leads to the killer's capture, owed largely to the aural perceptivity of a blind man, a gesture we might read as alluding to the power of sound even in a largely ocularcentric medium like film.
There’s some interesting play with the notions of onscreen and offscreen sound in this
scene as well. Onscreen sound is, of course, sound that evidently originates from a
source onscreen: more precisely, we would use terms from the last video to call this
kind of sound diegetic, synchronized, and simultaneous. Offscreen sound is sound
which is simultaneous and diegetic, but it comes from a source that is outside of what
is visible onscreen. That is to say that offscreen sound originates from offscreen space:
offscreen space consists of the six areas that are blocked from our view as the
audience: what is above and below the frame, what is to the left and right of the frame,
what is behind the camera, and what is behind the mise-en-scene (that is, what is blocked from our view by what is in the frame: the space behind the building in this shot, for example, is offscreen). In this scene from M, the source of the whistling
is offscreen, placing us in the position of the blind man we watch, who can hear the
tune but can’t see where it’s coming from.
Lang also plays with the use of nonsimultaneous sound, or diegetic sound that
comes from a source in time either earlier or later than the images it accompanies.
Nonsimultaneous sound often takes the form of something like voiceover, as in this scene from The World's End, in which Simon Pegg narrates the events of a pub crawl he and his friends undertook in high school, narration which is revealed to be Pegg's character telling the story of this pub crawl to a therapy group. These two scenes, the pub crawl in the past and the therapy session in the present, are linked through a sound bridge. A sound bridge is a technique which links two scenes
through a shared sound. This can either be a sound from one scene carrying over into
another or, as in The World's End and the scene from M that we are about to watch, a sound from a future scene heard at the end of the previous scene. Notice how in this upcoming scene, Lang makes us believe that the voice reading the flyer is in the scene with us. Instead, we come to realize that we were actually hearing the voice of an entirely different character in a different space, presumably at a different time. M
is a film without one central protagonist: the film follows the efforts of a town to
mobilize and catch the aforementioned child-killer. The sound bridge here, which in this case is also an example of nonsimultaneous sound, given that we hear a voice reading from another time and place, drives home the point that the entire community
is united in their preoccupation with these disappearing children. While it’s evident in
the clip from The World’s End that the narration is nonsimultaneous, in M our visual
perception tricks us into thinking the sound is continually coming from the same
space and time. Seeing might be believing, but believing doesn’t reveal the truth: we
may simply have been convinced to believe in a convenient and easy lie.
We might take that statement, that synchronized sound in the cinema involves more
trickery than truth, as the key idea motivating David Lynch’s playful use of sound in
Mulholland Drive. In this scene, which takes place in the oneiric “Club Silencio,” we
watch as a magician declares: “there is no band, there is no orchestra, this is all a tape
recording." We then watch and listen as sounds are played and their apparent sources are revealed, only to see each time that we have been tricked. This scene is, of
course, not only about the magician’s power over sound in the theater space, but a
metacommentary about the manipulation of sound in cinema: we believe that the trumpet player is playing and that Rebekah Del Rio is singing because we can see them performing while a sound plays that seems to emit from them. As Michel Chion argues in the reading for this week, what makes a sound an offscreen sound is that we can't find the symbolic site of production of that sound: that is, we can't see the mouth that sings or the musical instrument that is being played. When we
can see such things, we believe that the sounds we hear come from things we see. But
that belief misleads us, because all cinema can do is awkwardly suture sound and
image together: synchronized sound is not unified sound – it is not a reproduction of
someone speaking or playing an instrument in reality, but two recordings jammed
together that try to convince us that we’re watching a truthful account of someone
singing, or speaking…
…or, say, dancing to music. David Lynch is working in a long tradition of manipulating sound in cinema: here, Jean-Luc Godard self-reflexively uses sound in a similar way, completely removing what we imagine to be diegetic music to allow the voiceover narration and the sounds of dancing, the rhythmic noises made by the characters, which now feel weirdly empty without the accompanying music, to be heard more clearly. [clip] Whereas most filmmakers would choose to subtly lower the volume of the diegetic music to allow the narration to stand out, Godard instead cuts the music out altogether, reminding us that all elements of film can be, and are, modified, adjusted, and manipulated. Our magician friend might say, "no hay disco de música," or "il n'y a pas de disque de musique": there is no music record.
On the other hand, though, sometimes il y a un orchestre, there is an orchestra, as in the next clip from Blazing Saddles, where music we believe to be nondiegetic (to review, music represented as coming from outside the story space of the film) is revealed to be diegetic: actually coming from inside the story space.
These are some pretty overt manipulations of the soundtrack, which toy with our
expectations of the sources of sound, but we can return to Mulholland Drive for a
second to look at an example which is a little harder to notice. The following scene
from Mulholland Drive is dubbed, which means that the dialogue has been
re-recorded and added to the images in postproduction. Watch this scene closely and see if you notice anything that seems off.
The effect of the dubbing is subtle – the sound isn’t completely asynchronous, but
some sounds seem to last a fraction of a second too long, and some words seem to
escape the exaggerated grin on Naomi Watts's face a bit too easily. Here, dubbing functions as a way to unsettle the viewer, as if to suggest that this romanticized entrance to Los Angeles isn't quite taking place in reality. The voices and faces just don't quite line up in the way we might expect, and as a result the characters seem split from themselves, as if their personhood were somehow misaligned.
Boots Riley’s recent film Sorry to Bother You takes this artistic potentiality of
dubbing, the way it can seem to uncannily split a person in two, and uses it to
comment on race and class. In Sorry to Bother You, Lakeith Stanfield plays Cassius
Green, who works as a telemarketer. To get ahead at his job, Cassius takes the advice
of a black coworker, who tells Cassius that to make sales he ought to use his "white voice" when making calls. Cassius is eventually promoted, and his unnamed trainer tells him that in the offices of the upper echelon of the business, where the callers are selling guns and slaves and are paid handsomely for it, he must always use his 'white voice'. Cassius, for the most part, continually uses his white voice when working and socializing with his upper-class colleagues, until the CEO of the company demands that Cassius drop the 'white voice' at a party.
As Chion notes in the reading for this week, we take the temporal coincidence of
words and lips in the cinema – that is, that what we hear seems to match what we see
– as a guarantee that we're in the real world. This temporal coincidence, this synchronization, is part of how we confirm the wholeness of the people we're watching onscreen: that's Lakeith Stanfield, and therefore his character, Cassius Green, because I can both hear and see him, and what I hear matches what I see. Like Mulholland Drive, Sorry to Bother You uses dubbing to unsettle that sense that we're in reality, that sense that we are watching real people. But in Sorry to Bother You,
that unsettling quality is being deployed to critique a racial performativity which is
demanded by the racial hierarchy that is entangled with class hierarchy. Cassius’
move into the upper class is predicated on his ability to vocally perform an idealized
whiteness which allows him to be partially culturally integrated into the upper class.
That performance splits him as a human being: Sorry to Bother You uses the nasal voices of well-known white comedians David Cross and Patton Oswalt as its black characters' white voices, leading to a mismatch between voice and body. When we
watch Cassius Green, we are frequently watching and listening to two different
performances of the same character by both Lakeith Stanfield and David Cross
simultaneously. That is, until Cassius’ employer demands a different kind of
performance, a performance of a stereotypical blackness which allows Cassius to shed
the white voice, and yet still demands a racial performance. We would call much of
David Cross’ dubbed performance of Lakeith Stanfield’s dialogue simultaneous,
diegetic, onscreen, mostly synchronous sound, and yet these terms don’t tell the
whole story: the very fact that this dubbed dialogue is supposedly onscreen,
simultaneous, and synchronous is precisely what makes it feel wrong: dubbing
functions as a means of communicating alienation, splitting the cinematic body from
the cinematic voice to show how the intersection of race and class hierarchies demands
performances of the self that split human beings apart.
Next, we can discuss some techniques of sound that position us with a character. We
might look at the title sequence of Edgar Wright’s 2017 film Baby Driver for an
example of sound perspective. Sound perspective refers to the sense of a sound’s
position in space, yielded by volume, timbre, pitch, and, in stereophonic reproduction
systems, binaural information. In this sequence, we see a long tracking shot that
follows our protagonist, as he heads to a coffee shop. While the music we hear might
seem to be a nondiegetic score, the music is actually diegetic sound, emanating from
the earbuds our protagonist is wearing. The mise-en-scene, cinematography, and
sound design work in unity as set dressing and ambient street noises match the music
our protagonist listens to. In this film, the protagonist's world is music, and we see
that quite clearly articulated in how sounds emanating from the protagonist’s
headphones spill out into the diegetic world of the film during this sequence. As you
watch the scene, especially if you listen with headphones, you might notice how
ambient sound moves with our protagonist from left ear to right ear as he moves
through space. Moreover, notice how when our protagonist, Baby, removes his earbud
in the coffee shop, we similarly hear a reduction in the level and tone of the music. If
you’re listening with headphones, you’ll notice that the music shifts to the left speaker
in the stereo mix, but dominates the mix once again when Baby puts the earbud back
in. We are hearing exactly what our protagonist hears – basically, we’re hearing from
his perspective.
Often, sound perspective is less showy – in this scene from First Reformed, notice
how the conversations happening at three of the tables sound distant, while the
conversation at the table on the bottom left, which is where our protagonist is seated,
dominates the mix. Even though we are viewing the entire dining area in a long shot,
we are aurally situated with the two characters in the bottom left. This becomes
evident when we cut to the next shot of these two characters: even though the camera
is now closer to them, in approximately a medium long shot, their conversation has stayed at the same volume.
Sometimes, however, the sound perspective is that of the camera. Take this scene
from Down By Law, in which the characters’ footsteps get softer as they move further
from the camera, and their shared chuckle as they go their separate ways is much quieter to us than the conversation they held at the beginning of the scene, when they were shot at a tighter distance of framing.
Finally, I’ll leave you with this last clip from Apocalypse Now. In this opening
sequence, our protagonist, a burnt-out US soldier dying to get back out onto the
battlefield, sits alone in a hotel room. This sequence makes use of all of the
techniques we’ve discussed so far and adds one additional element, internal diegetic
sound. Defined as sound represented as coming from the mind of a character within
the story space, the voiceover narration we hear in this scene is that of our protagonist
Willard’s thoughts. We might oppose internal diegetic sound to external diegetic
sound, which is sound that appears to come from a physical source within the story
space that we can assume characters can also hear: it is sound which exists externally
to the interior consciousnesses of characters. Everything we heard earlier in the clip
from Baby Driver, First Reformed, and Down By Law was external diegetic sound.
Apocalypse Now plays on the concepts of internal diegetic sound and external
diegetic sound. Here, the whirring of helicopter blades begins as what we might
understand as a flashback taking place in Willard’s mind and, eventually, that sound
carries on as a sound bridge, transitioning to the whirring of a ceiling fan before
finally becoming what we think is the whirring of a helicopter outside of Willard’s
building. What starts as internal diegetic sound, the memory of the sound of
helicopters, becomes external diegetic sound, the sound of the ceiling fan, and then
ultimately it leads to what we might presume to be external diegetic sound, the
sound of a helicopter outside his hotel room, which we actually learn might very well
be an aural hallucination taking place in Willard's mind, which would make it internal diegetic sound. There is much more to say about the incredibly complex use of sound
design in this sequence, so see how many distinct sound techniques you can discern
and, as always, ask why.
