The master element of music’s emotional content lies in the nature of its tune, the melody. Drawing together rhythm, timbre, harmony, dynamics, and form, the tune can depict, divert, and convert emotions and feelings, because it resonates immediately and directly with inner physiological operations that do not rely on semantic interpretation, cognitive logic, or language in order to function. In short, melody gets right to the point; it immediately engages one’s basic emotional instincts. Donna Williams (1998) claims that the autistic system, which can sense things without dismissing anything, is actually better able to make sense of the world because instinct and intuition have not been lost, and perhaps, also, that integrated sensing through cognitive channels has actually limited human intuition and instinctive responses. For many diagnosed populations, the tune speaks louder than words in tapping into inner physiological functions (Dolan 2002).
In The Symbolic Species: The Co-evolution of Language and the Brain, Terrence Deacon (1997) conjectures that language development may have diminished the instinct for using human calls to convey emotional states and needs, since humans could now articulate verbally (though less accurately) what a call, or calls, would have symbolized. The more the neocortex expanded for cognition, the less need there was for instinctive (and non-verbal) communication. How unfortunate, for while language absolutely requires cognitive processing, melody does not; and while cognitive processing is comparatively slow and prone to misinterpretation, melody makes immediate sense to human instinct and intuition. Deacon notes that human calls remain limited to six basic forms, and variations thereof, while speech employs many vocalizations to elucidate emotional "pitches" of word meanings.
Speech inflection is a form of emotional vocal communication that provides parallel channels to verbalization. One seems to understand the meanings of foreign-language words largely through the emotional content of their inflected pitches, though those pitches can as often be misunderstood as understood. In spoken language, vocal intensities change, attitudes permeate inflection, pitches rise and fall, and one intuitively seems to comprehend meanings, regardless of whether or not the words are recognized ("reading between the lines," so to speak). An infant will surely recognize the intent behind an adult’s loud shriek long before he or she learns the meaning of "No!" Deacon (1997) suggests that "Though tonal shifts in prosodic emissions can be used as phonemes, the changes in tonality, volume, and phrasing that constitute prosodic features are most often produced without conscious intention." Prosody and inflection may thus be both instinctive and culturally dependent. So, too, with melody.