Psychoacoustics

Psychoacoustics is essentially the study of the perception of sound. This includes how we listen, our psychological responses, and the physiological impact of music and sound on the human nervous system.

In the realm of psychoacoustics, the terms music, sound, frequency, and vibration are interchangeable, because they are different approximations of the same essence. The study of psychoacoustics dissects the listening experience.

Traditionally, psychoacoustics is broadly defined as “pertaining to the perception of sound and the production of speech.” The abundant research that has been done in the field has focused primarily on the exploration of speech and of the psychological effects of music therapy. Currently, however, there is renewed interest in sound as vibration.

An understanding of psychoacoustics can improve not only how a song sounds, but also how it is perceived. The following techniques put these principles to work in practice:

1. The Haas Effect

Named after Helmut Haas, who first described it in 1949, the principle behind the Haas Effect can be used to create an illusion of spacious stereo width starting from just a single mono source.

Haas was actually studying how our ears interpret the relationship between originating sounds and their ‘early reflections’ within a space. He came to the conclusion that as long as the early reflections (and, for our purposes, identical copies of the original sound) were heard less than 35ms after the original and at a level no more than 10dB louder, the two discrete sounds were interpreted as one sound. The perceived direction of the original sound is essentially preserved, but because of the subtle phase differences the early reflections or delayed copy add extra spatial presence to the perceived sound.

So in a musical context, if you want to thicken up and/or spread out distorted guitars, for example, or any other mono sound source, a good trick is to duplicate the part, pan the original hard to one side and pan the copy to the opposite extreme. Then delay the copy by somewhere between about 10 and 35ms (every application will want a slightly different amount within this range), either by shifting the part back on the DAW timeline or by inserting a basic delay plugin on the copy channel with the appropriate delay time dialled in. This tricks the brain into perceiving fantastic width and space, while of course also leaving the centre completely clear for other instruments.
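If you want to try the same move outside the DAW, here's a minimal Python/NumPy sketch of it; the `mono` array, sample rate and 25ms delay are hypothetical placeholders rather than anything prescribed above:

```python
import numpy as np

def haas_widen(mono: np.ndarray, sr: int, delay_ms: float = 25.0) -> np.ndarray:
    """Hard-pan the dry signal left and a short-delayed copy right."""
    delay_samples = int(sr * delay_ms / 1000.0)            # keep this within ~10-35 ms
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    dry = np.concatenate([mono, np.zeros(delay_samples)])  # pad so both channels match in length
    return np.stack([dry, delayed], axis=1)                # column 0 = left, column 1 = right

# Hypothetical usage: one second of noise standing in for a real mono part
sr = 44100
mono = 0.1 * np.random.randn(sr)
stereo = haas_widen(mono, sr, delay_ms=25.0)               # shape (N, 2), ready to write out
```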

2. Frequency Masking

There are limits to how well our ears can differentiate between sounds occupying similar frequencies. Masking occurs when two or more sounds occupy the same frequency range: in the ensuing fight, the louder of the two will generally either partially or completely obscure the other, which seems to literally ‘disappear’ from the mix.
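You can hear this for yourself with a couple of test tones. The sketch below (frequencies, levels and file name are all made up for illustration) writes two sine waves a semitone apart to a wav file, one of them about 12dB louder than the other:

```python
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(3 * sr) / sr                                      # three seconds
masker = 0.5 * np.sin(2 * np.pi * 440.0 * t)                    # the louder tone, A4
quiet = 0.5 * 10 ** (-12 / 20) * np.sin(2 * np.pi * 466.2 * t)  # ~12dB quieter, one semitone up

wavfile.write("masking_demo.wav", sr, ((masker + quiet) * 32767 * 0.9).astype(np.int16))
# Listen to `quiet` on its own and it is clearly audible; mixed with the
# louder tone it all but disappears, even though it is still in the file.
```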

3. The Ear’s Acoustic Reflex

When confronted with a high-intensity stimulus – or as Brick Tamland from Anchorman would put it, ‘Loud noises!’ – the muscles of the middle ear involuntarily contract, which decreases the amount of vibrational energy transferred to the sensitive cochlea (the bit that converts sonic vibrations into electrical impulses for processing by the brain). Basically, the muscles clam up to protect the more sensitive bits.

4. Create the impression of power and loudness even at low listening levels

If you take only one thing away from this article, it should be this: the ear's natural frequency response is non-linear. More specifically, our ears are more sensitive to mid-range sounds than to frequencies at the extreme high and low ends of the spectrum. We generally don't notice this, as we've always heard sound this way and our brains take the mid-range bias into account, but it becomes more apparent when mixing, where you'll find that the relative levels of instruments at different frequencies change depending on the overall volume you're listening at.
Before you give up entirely on your producing aspirations with the realisation that even your own ears are an obstacle to achieving the perfect mix, take heart that there are simple workarounds to this phenomenon. Not only that, you can also manipulate the ear's non-linear response to different frequencies and volumes to create an enhanced impression of loudness and punch in a mix, even when the actual listening level is low.
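The classic move – the flip side of the ‘push things back’ tip in the next section – is a gentle lift at the extreme lows and highs, so the mid-biased ear still hears a full-range balance at low volume. Here's a rough Python/SciPy sketch of that idea; the crossover frequencies and 3dB boost are illustrative starting points, not rules from this article:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def smile_eq(signal: np.ndarray, sr: int,
             low_hz: float = 100.0, high_hz: float = 8000.0,
             boost_db: float = 3.0) -> np.ndarray:
    """Add a gentle boost below low_hz and above high_hz to the dry signal."""
    gain = 10 ** (boost_db / 20.0) - 1.0                    # how much extra to add back
    low_sos = butter(2, low_hz, btype="lowpass", fs=sr, output="sos")
    high_sos = butter(2, high_hz, btype="highpass", fs=sr, output="sos")
    lows = sosfilt(low_sos, signal)
    highs = sosfilt(high_sos, signal)
    return signal + gain * lows + gain * highs

sr = 44100
mix = 0.1 * np.random.randn(2 * sr)                         # stand-in for a summed mix
fuller_sounding = smile_eq(mix, sr)
```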

5. Equal Loudness Part II: Fletcher-Munson Strikes Back

Of course the inverse of the closer/louder effect of the ear's non-linear response is also true, and equally useful for mix purposes: to make things appear further away, instead of boosting you roll off the extreme highs and lows. This creates a sense of front-to-back depth in a mix, pushing certain supporting instruments into the imaginary distance and keeping the foreground clear for the lead elements.
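In DSP terms that is essentially a band-pass on the supporting part. A minimal sketch along those lines; the cutoff frequencies are assumptions you would tune by ear:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def push_back(signal: np.ndarray, sr: int,
              low_cut: float = 200.0, high_cut: float = 5000.0) -> np.ndarray:
    """Keep only the mid band; the missing extremes read as distance."""
    sos = butter(2, [low_cut, high_cut], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

sr = 44100
pad = 0.1 * np.random.randn(2 * sr)                         # stand-in for a supporting part
distant_pad = push_back(pad, sr)
```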

6. Transients appear quieter than sustained sounds of the same level

This is the key auditory principle behind how compression makes things sound louder and more exciting without actually increasing the peak level. Compressors are not as intuitively responsive as the human ear, but many are designed to respond in a similar way, in the sense that short-duration sounds aren't perceived as being as loud as longer sounds of exactly the same level. That is why many compressors use an RMS (‘Root Mean Square’) response, a mathematical means of determining average signal levels rather than instantaneous peaks.
So using compression (plus make-up gain) to bring up the tails of sounds such as drums, which are relatively quiet compared to the high-energy initial transient attack, fools the brain into thinking the drum hit as a whole is significantly louder and punchier, even though the peak level – the transient – has not actually changed.
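The peak-versus-RMS distinction is easy to check numerically. In the sketch below (the drum-like decaying sine is a made-up stand-in), both signals peak at the same level, yet their RMS readings sit roughly 20dB apart:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                                      # one second

transient = np.sin(2 * np.pi * 200 * t) * np.exp(-60 * t)   # fast-decaying, drum-like hit
transient /= np.max(np.abs(transient))                      # match its peak to the sine's
sustained = np.sin(2 * np.pi * 200 * t)                     # held tone at the same peak level

for name, x in (("transient", transient), ("sustained", sustained)):
    peak_db = 20 * np.log10(np.max(np.abs(x)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    print(f"{name}: peak {peak_db:+.1f} dBFS, RMS {rms_db:+.1f} dBFS")
# Both peak at 0 dBFS, but the sustained tone's RMS (average) level is far
# higher -- closer to how the ear and an RMS detector judge loudness.
```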

7. Reverb Early Reflection ‘Ambience’ For Thickening Sounds

If you combine part of the principle behind the Haas Effect with the previous tip about sustained sounds being perceived as louder than short transients at the same level, you'll already understand how adding just the early reflections from a reverb plugin can attractively thicken sounds. It can take a moment to get your head around if you're used to the idea that reverb generally diffuses and pushes things into the background. Here, we're using it without the characteristic reverb ‘tail’, essentially to multiply the initial transient attack portion of the sound and spread it over a very short span of time. By extending this louder part of the sound we get a slightly ‘thicker’ sound, but in a very natural ‘ambient’ way that is easily sculpted and tonally fine-tuned with the various reverb controls. And without the distancing and diffusion effects of the long tail, the sound retains its ‘upfront’ character.
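If your reverb plugin won't let you isolate or mute the tail, a handful of short delay taps gets you most of the way there. A rough sketch, with tap times and gains invented purely for illustration:

```python
import numpy as np

def early_reflections(dry: np.ndarray, sr: int) -> np.ndarray:
    """Mix a few short, attenuated delay taps back in with the dry sound."""
    taps_ms = (7.0, 13.0, 19.0, 29.0)         # all well inside the ~35 ms Haas window
    gains = (0.6, 0.5, 0.4, 0.3)              # each 'reflection' quieter than the dry signal
    out = dry.copy()
    for ms, g in zip(taps_ms, gains):
        n = int(sr * ms / 1000.0)
        out[n:] += g * dry[:-n]               # add the delayed, attenuated copy
    return out

sr = 44100
t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-20 * t)   # stand-in for a plucked note
thickened = early_reflections(dry, sr)
```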

8. Decouple a sound from its source

A sound as it is produced, and the same sound as it is perceived in its final context, are really not the same thing at all.
This is a principle that is exploited quite literally in movie sound effects design, where the best sound designers develop the ability to completely dissociate the sonic qualities and possibilities of a sound from its original source. This is how Ben Burtt came up with the iconic sound of the lightsabers in Star Wars: the idle hum of an old projector motor combined with the buzz of a television set picked up through an unshielded microphone cable.

 

