Category: Emotional Prosody

Friends of the Nonverbal Communication Blog, this week we present the paper “Nonverbal Auditory Cues Allow Relationship Quality to be Inferred During Conversations”, by Dunbar, R. I. M., Robledo, J. P., Tamarit, I., Cross, I., and Smith, E. (2021), in which the authors ask whether, and how, the quality of the relationship between people in conversation can be inferred from auditory nonverbal cues.

Language is undoubtedly one of the most important evolutionary developments achieved by humans. Apart from its obviously central role in enriching culture, it is also invaluable as a medium through which we transmit information, negotiate cooperation, or convey emotions.

For some years now, there has been growing interest in which aspects of communication matter most: the verbal or the nonverbal.

Early investigations claimed that the nonverbal elements predominated. Mehrabian was one of the best-known proponents of this idea, arguing that, at least for the communication of feelings and attitudes, more than 90% of the message was conveyed by nonverbal signals such as intonation, volume, or facial expressions.

Although many other experts dispute these figures, no one doubts that nonverbal signals provide a great deal of information during verbal exchanges. In fact, they often allow us to infer the intended meaning of a sentence.

This also connects to Mehrabian and his famous claim that only 7% of the meaning of such a message is found in its verbal component.

Other experts, meanwhile, have found in their experiments that the auditory and visual channels each independently convey characteristics such as social dominance or trustworthiness.

The authors point out that the most frequent criticism of previous studies on the subject is that they have focused on the transfer of very low-level information, such as the recognition of emotional states. Simply recognizing the expression of an emotion, or an affective disposition, is not comparable to, for example, recognizing the degree of rapport between two individuals who are having a conversation.

A recent attempt to overcome this limitation found that listening to a short clip of two people laughing together was enough for listeners to judge whether the pair were friends or strangers, with an accuracy of 53-67% across 24 different cultures.

Although this is only just above chance level, the results suggest that it may be possible to infer some information about the quality of a social interaction from nonverbal cues alone.

The authors' study differs from the others in that it uses natural recordings of real situations in which two or more people interact, whereas previous studies focused on how we interpret emotional information from a single speaker.

Using natural conversations ensures that the stimuli are ecologically valid and free of the prosodic exaggerations that actors introduce in laboratory studies.

Moreover, while most previous studies have focused on the emotional cues in expressions, the authors focus on interpreting the quality of the relationship itself.

The objective of the study, therefore, is to evaluate the extent to which semantic and prosodic information is required for listeners to identify the quality of the relationship between speakers.

Participants listened to three versions of the same audio clips: the original clip, with all prosodic and verbal cues preserved; a delexicalized version, in which prosodic cues were preserved but the verbal content was made unintelligible; and a version in which the audio stream was reduced to tones and rhythm alone.
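For readers curious about how a delexicalized stimulus can be produced, below is a minimal Python sketch of one common technique, low-pass filtering, which keeps the pitch contour and rhythm (the prosody) while making the words unintelligible. This is an illustrative assumption, not necessarily the authors' exact stimulus pipeline, and the file name and cutoff frequency are placeholders.

```python
# Minimal sketch: removing verbal content from speech while keeping prosody.
# Assumes a common low-pass filtering approach; the authors' exact pipeline
# may differ. "conversation.wav" is a placeholder for a mono 16-bit PCM file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("conversation.wav")
audio = audio.astype(np.float64)

# Filtering out everything above ~400 Hz keeps the pitch contour and rhythm
# but removes most of the spectral detail needed to identify words.
sos = butter(8, 400, btype="lowpass", fs=rate, output="sos")
delexicalized = sosfiltfilt(sos, audio)

wavfile.write("delexicalized.wav", rate, delexicalized.astype(np.int16))
```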

The study involved 199 native English speakers and 139 native Spanish speakers, making it possible to test whether familiarity with the language of the clips had any effect.

The authors made three predictions: if verbal content is essential, performance should be above chance only when participants listen to the full audio; if nonverbal cues play the more important role, performance should remain above chance even when the verbal content is degraded.

And finally, if verbal content is crucial, participants should perform better when listening to their own language, with which they are more familiar.

When classifying the clips, participants could choose among friendly situations: free agreement; difference of opinion with respect (where the speakers still want to maintain a good relationship); phatic communion (where the speakers are not concerned with the topic of conversation but are simply spending time together); and friendly provocation/jokes.

They could also choose among unfriendly interactions: enforced agreement, disagreement without respect, malicious gossip, and aggressive provocation.

The first result surprised the authors because it contradicted their prediction: there were no significant differences in performance between Spanish and English speakers, whether they listened to their own language or the other one.

The lowest rates of correct responses were obtained for clips that actually corresponded to enforced agreement and malicious gossip. This may be because a broader range of cues is needed to disambiguate the meaning of these interactions.

There was also a tendency to misclassify friendly provocation/jokes as free agreement, and vice versa, an understandable confusion given how similar the two can sound.

With the delexicalized clips, participants were 80% correct when making the binary decision of classifying an interaction as positive or negative.
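As a rough illustration of why 80% on a binary choice is well above the 50% chance level, the short sketch below runs a one-sided binomial test; the number of judgments is a hypothetical figure for illustration, not taken from the paper.

```python
# Hedged sketch: how far above chance is 80% accuracy on a binary task?
# The trial count is assumed for illustration, not reported in the study.
from scipy.stats import binomtest

n_trials = 100    # hypothetical number of binary judgments
n_correct = 80    # 80% correct, as reported for the delexicalized clips

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value vs. 50% chance: {result.pvalue:.2e}")
```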

The overall results confirmed that nonverbal cues from conversational exchanges alone provide significant information about the quality of the relationship between those who interact.

This study is interesting because, among other things, it has implications for understanding online communication, where, depending on the medium, fewer verbal and nonverbal channels are available.

NonVerbal Communication Blog