Saturday, May 17, 2008
Exporting Audio from Reason
What is Exporting, and why do you need to do it?
Well, right now your songs are Reason files (*.rns). They only play back in Reason. You can't burn them to a CD, you can't listen to them in iTunes, you can't upload them to Myspace/imeem/etc. You can only open them on a computer that has Reason on it.
In order for you to do all the things I just mentioned, you need to convert the song into an audio file. Doing this is super simple, but before you do it, you really need to be aware of a couple of things. The main one is that you need to make sure that nothing is clipping.
Clipping is digital distortion. It happens when the volume of your track(s) goes past the loudest level the computer can actually store (the very top of the meter). Usually it is really obvious and sounds like things are crackling in an ugly way. Sometimes, though, you can't immediately hear it, so you don't do anything about it until it's too late.
In general, the way you avoid clipping/distortion is to make sure that none of your tracks' volume levels are getting all the way to the top of the meter.
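By the way, if you're curious what clipping looks like to the computer, here's a tiny Python sketch that checks an audio file for samples stuck at full scale. This is just an illustration, not something you need for class: it assumes you have the numpy and soundfile libraries installed, and "my_song.wav" is a made-up file name.

import numpy as np
import soundfile as sf

# Load the audio file as floating-point samples between -1.0 and +1.0
audio, sample_rate = sf.read("my_song.wav")

# Samples sitting at (or basically at) full scale were probably clipped:
# the waveform wanted to go louder, but the file can't store anything higher.
clipped = np.abs(audio) >= 0.999
print(clipped.sum(), "samples out of", audio.size, "are at full scale")

If that number is anything other than zero, go back and turn your levels down before you export.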
Tuesday, May 13, 2008
LAST DAY OF CLASS!
Your final tasks for today are as follows:
1. Bounce final mixes of your songs out of Pro Tools and put a copy of the file in the Instructor Share folder.
File format: WAV
Stereo Interleaved
Sample rate: 48,000 Hz
Bit depth: 24 bit
2. Bounce out instrumentals of your songs, and put those on the Instructor Share folder, as well.
3. Back up all the work you are proud of and would potentially want to share with other people to a data CD. Potential items might include:
- PSAs
- Interviews
- MLK songs
- Post Production stuff (video)
- Any music
It has been a pleasure working with all of you. I wish you the best in whatever you choose to pursue in your life. Be sure to keep in touch!
Chris Runde
Digital Pathways Instructor
(415) 558-2181
crunde@bavc.org
Thursday, May 1, 2008
Mixing, part 3 - EQ
So why do we use EQ when we mix? There are three main reasons:
1. To make an instrument sound clearer and more defined.
2. To make the instrument/mix sound LARGER THAN LIFE
3. To make all the elements of a mix fit together better so that each instrument has its own place in the frequency range.
So, before we can talk about how we can use EQ to make our tracks better, we have to review some fundamental sound stuff...
Remember that humans can hear frequencies from about 20 Hz all the way up to 20,000 Hz (20 kHz). The middle of this range is 1 kHz.
What you need to understand is this:
Even though different instruments do have different frequency ranges, most of them overlap somewhat.
Here is a chart that shows some instruments and their ranges:
For example, a violin, an MC's voice, and a snare drum may all contain a lot of the same frequencies. This is important to understand when you're mixing music because if you have a bunch of instruments all playing in the same general area of the frequency spectrum, it means that they are all competing for the listener's attention. So, what you want to do is give each one its own special spot in the mix. You do that by cutting certain frequencies and boosting others.
Cutting and boosting: that's the basic concept. Now let's talk about the tools you have to accomplish this. There are basically two types of EQ that you use in mixing:
1. Shelving EQ - This is simple. With a shelving EQ, you're just boosting or cutting everything above or below a specific frequency. This is a more general tool that lets you make adjustments to big sections of your sound. You will generally have one for dealing with the High Frequencies, and one for the Low Frequencies. Here's a chart:
2. Peaking (aka Parametric) EQ - This one lets you zero in on a very specific frequency range to cut/boost. This is a more precise tool for working with really detailed parts of the sound. Generally, you will have a couple of these that are meant to be used in the Low-Mid and Hi-Mid ranges. Here's a chart:
This is what the Digirack EQ plugin that comes with Pro Tools looks like:
Notice that there are five sets of EQs. The middle three sets are all Peaking EQs. The ones on the far left and far right can be EITHER Peaking or Shelving, depending on how you set them. They will normally be set to Shelving.
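If it helps to actually see the difference between those two shapes, here's a rough Python sketch that draws idealized versions of them. This is purely an illustration (these are not the actual curves the plugin uses, and the 8 kHz and 400 Hz numbers are just examples I made up); it assumes you have numpy and matplotlib installed.

import numpy as np
import matplotlib.pyplot as plt

freqs = np.logspace(np.log10(20), np.log10(20000), 500)   # 20 Hz up to 20 kHz

# Idealized high shelf: everything above roughly 8 kHz gets boosted by 6 dB
shelf_db = 6.0 / (1.0 + (8000.0 / freqs) ** 4)

# Idealized peaking EQ: a 4 dB cut centered at 400 Hz, about an octave wide
octaves_from_center = np.log2(freqs / 400.0)
peak_db = -4.0 * np.exp(-(octaves_from_center / 0.7) ** 2)

plt.semilogx(freqs, shelf_db, label="High shelf: +6 dB above 8 kHz")
plt.semilogx(freqs, peak_db, label="Peaking: 4 dB cut at 400 Hz")
plt.xlabel("Frequency (Hz)")
plt.ylabel("Gain (dB)")
plt.legend()
plt.show()

The shelf grabs a whole side of the spectrum at once; the peak zeroes in on one neighborhood and leaves everything else alone.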
Really knowing how to EQ is an art form and, just like any other art, takes years of practice to really master. Here are some basic tips to get you started:
1. LESS IS MORE. Something that is recorded halfway decently should not need more than a little EQ adjustment. Anything more than +/- 6 dB is a pretty big adjustment.
2.
Thursday, April 17, 2008
Mixing, part 2 - Compression
What is compression?
The short answer is that it's a type of processing that allows you to automatically control the loudness of your tracks. The type of device that allows you to perform this magical processing is called (wait for it)...a compressor.
For example, say you have a vocal track where the MC's performance is at a pretty consistent level for most of the song, but then he/she suddenly gets really loud at one part. In this case, compression could be used to just turn down the loud part and leave the rest of the performance the same. This is what compression was originally used for...
But, in most modern pop music, a TON of compression is used on pretty much every track. Why? Here are a couple of reasons:
1. To make things sound smooth (aka "clean").
2. To make things sound punchy (aka "slap").
3. To make the song loud.
So, if you want your song to have any of the above qualities, you should probably take some time to learn how to compress your tracks properly.
The basic concept is that you set a certain volume level on your compressor, called the threshold. If the volume of your track goes over the threshold, then the compressor kicks in and turns the volume down for as long as the signal stays over the threshold. How much it turns it down is set by the ratio: with a 4:1 ratio, for every 4 dB the signal goes over the threshold, only 1 dB actually comes through, so a peak that's 12 dB over gets turned down by 9 dB. These are the two most important settings on a compressor - they tell the compressor when to start working and how much. Check it out...
The next two settings you need to consider are the attack and release. The attack tells the compressor how quickly to start working once the signal crosses the threshold. The release tells it how quickly to let go once the signal goes back below the threshold.
The last setting you should know is the gain, or makeup gain. This setting allows you to turn the overall volume of the track up. Why would we want to turn the volume back up when we just used the compressor to turn it down? Good question. The short answer: the slap factor. Think of the gain knob as the slap control. BUT, the slap control only really works if you've set the other settings properly.
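To make those settings a little more concrete, here's a very simplified compressor written out in Python. This is only a sketch of the general idea (it is definitely not how the Digidesign plugin works internally), and it assumes numpy plus a mono track stored as floating-point samples between -1.0 and +1.0.

import numpy as np

def compress(samples, sample_rate, threshold_db=-18.0, ratio=5.0,
             attack_ms=10.0, release_ms=150.0, makeup_db=6.0):
    # How quickly the level detector reacts when the signal gets louder (attack)
    # and how quickly it lets go when the signal gets quieter (release)
    attack_coef = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))

    env_db = -120.0                  # running estimate of the track's level, in dB
    out = np.zeros_like(samples)
    for i, x in enumerate(samples):
        level_db = 20.0 * np.log10(max(abs(x), 1e-6))
        coef = attack_coef if level_db > env_db else release_coef
        env_db = coef * env_db + (1.0 - coef) * level_db

        # How far over the threshold are we? Only let 1/ratio of that through.
        over_db = max(env_db - threshold_db, 0.0)
        reduction_db = over_db - over_db / ratio   # this is the "GR" the meter shows
        gain = 10.0 ** ((makeup_db - reduction_db) / 20.0)
        out[i] = x * gain
    return out

Notice how the threshold and ratio decide how much gain reduction happens, the attack and release decide how fast it happens, and the makeup gain just turns the whole thing back up at the end.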
Here is a picture of the Digidesign Compressor/Limiter that comes standard with Pro Tools software. It features all the controls we just discussed, plus a few that you don't need to worry about just yet. You can Insert this on any of your audio tracks:
Here is a general formula for compressing a vocal track:
1. Solo a vocal track and insert a compressor on it.
2. Set the Ratio to 5:1
3. Now adjust the Threshold until you see a maximum of about -6.0 dB of gain reduction happening in the column called "GR"
4. Set the Release to about 150 ms.
5. Now turn the Attack all the way to the right and slowly start turning it left (counterclockwise) until you hear the vocal just start to get muffled. Stop.
6. Now adjust the Release until you see the Gain Reduction moving nice and smoothly in time with the music. Close your eyes and listen. The volume of the vocal should sound pretty even and consistent. If you hear any sudden jumps, then you should try to adjust the Attack and Release settings until the jumps get smoothed out.
7. Turn up the Gain to 6.0 dB.
8. Hit the Bypass button to check what the track sounds like with the compressor on and off.
9. Now unsolo the track and listen to it in context with the rest of the song. Turn the bypass on and off to hear how the compressor is affecting the overall feel of the song.
Now, the formula above is a very general set of instructions and you should definitely try to adjust these settings to whatever sounds best on your music. I would recommend not using a ratio higher than 8:1. Try compressing everything from kick drums and bass to didgeridoos and Andean flutes. But just know this...
When you overuse compression, you run the risk of taking out all the subtle parts that make music sound human.
Some people say that almost all mainstream pop music these days is totally overcompressed and plastic sounding. I personally tend to agree, but this is the sound we're all used to hearing on the radio, MTV, etc., so it's getting hard for us to even imagine music sounding any other way.
I guess it's up to you to decide how to best use this tool to make your own music sound as good as it possibly can. At the end of the day that's the only thing that matters, so use your ears and always pay attention to what's happening to the song as a whole!
Tuesday, April 15, 2008
Mixing, part 1: Basics
So, at this point most folks have finished the majority of the recording for their songs. Now it is time to turn our attention to mixing...
Mixing is the second part of the process of producing a professional recording. This is where an engineer (usually not the same as the one who did the tracking) takes all the raw tracks and fine tunes everything to make the song sound as powerful, polished, and interesting as possible. Really successful engineers get popular because they have a unique "sound" that they bring to the music they work on.
Different genres of music tend to have different styles of mixing.
For example, most pop music (including most rock and hip hop) tends to use a lot of processing in the mixes to make songs sound LARGER THAN LIFE. Listening to pop music is almost like watching a movie. Like, Kanye West basically wants you to believe that his music is the product of a superhuman badass, so the mixing on his songs reflects that: everything hits really hard, there are lots of FX, etc.
With other types of music, like jazz and classical, the goal is to make everything sound as realistic as possible. You're trying to make the listener feel as if he or she is right in the club or concert hall where the performance is happening. You don't want to add anything that sounds like it was artificially produced in a studio (even though the mix is happening in a studio!).
So, considering that most of you guys are pretty much making various forms of pop music, and most of your instrumentals were created "artificially" with software, which approach do you think you'll take?
When you're ready to mix you have a lot of ways you could approach it, but here is a simple formula for you to follow:
1. Pull down all of your faders and then bring them up, one at a time, to build a good basic balance of all your instruments, especially the melodic instruments (synths, samplers, guitars, saxophones, etc.). If you have a couple of different instruments playing at the same time, then you have to decide which one is the most important and turn the other ones down relative to that one.
*Note* Main vocals will almost always be THE most important thing in the mix and must be loud enough to be CLEARLY heard over the other instruments.
2. Pan all instruments at least a little bit to the left or the right. The only exceptions are kick drums, snares/claps, bass, and main vocals. Try to get a balance, so that instruments are spread out evenly between the left and right sides.
3. Clean up your tracks. Get rid of all the stuff you don't need, like the parts on the vocal tracks where the vocalist isn't performing. In Pro Tools, use the trim tool to get rid of the extra bits. Make sure you're either trimming at a zero crossing or using fades to avoid pops and clicks! Use crossfades in the parts where you're sticking two regions together. I also often use high pass filters (HPF) at this stage just to get rid of super low frequencies that I don't need. (See Mixing, part 3.)
4. Use compression to smooth out the volume levels of certain instruments and make them sound punchier (see Mixing, part 2 for more details). In a modern professional mix, almost everything will have at least a little compression on it. Most important things to compress: vocals, drums (esp. kick and snare), bass, guitars.
5. Use EQ to balance out the frequencies of all instruments (see Mixing, part 3). Possibly the hardest aspect of mixing to master. Good mixers know how to find specific frequencies from different tracks that clash with each other and then dip or boost them to make certain instruments stand out and put others more in the background.
6. Add FX to add space and interesting textures to certain tracks. The two most common FX are reverb and delay. Both of these add "echo" and make things sound like they are in a real acoustic space (a church, for example). More extreme types of FX include flangers, phasers, distortion, etc. As a rule of thumb, less is more when it comes to these. Check them out, experiment, but use them sparingly!
7. Automate tracks to give the song more flow and movement.
Figure out where it might help to bring the volume of certain tracks down. For example, maybe a certain synthesizer would be good to have low in the mix during the verses, but at the hooks you could bring it up to make that part hit harder. You can also automate panning and pretty much anything else to create interesting movements in the song's flow. Again, less is more!
8. Bounce your final mix out of Pro Tools!
Remember to bounce to whatever format the final tracks are going to be collected in. For this class, bounce at:
Stereo Interleaved
Format: WAV
Sample rate: 48,000 Hz
Bit Depth: 24 bit
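If you're wondering what those bounce settings actually mean in file terms, here's a tiny Python sketch that writes a file with exactly those specs. It assumes the numpy and soundfile libraries, and the two-second test tone is obviously just a stand-in for your real mix (the file name is made up, too).

import numpy as np
import soundfile as sf

sample_rate = 48000                              # 48,000 Hz
t = np.arange(2 * sample_rate) / sample_rate     # two seconds of time
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)       # stand-in for your mix

stereo = np.column_stack([tone, tone])           # left and right stored together ("interleaved")

# WAV, stereo interleaved, 48,000 Hz, 24 bit
sf.write("final_mix.wav", stereo, sample_rate, subtype="PCM_24")

The sample rate (48,000 Hz) is how many snapshots of the sound get stored per second, and the bit depth (24 bit) is how much detail each snapshot gets.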
Thursday, March 6, 2008
Lyrics Workshop - Day 1
Tuesday, March 4, 2008
Catch up day
1) Finish mixing your movie trailers and turn them in. The final file you will be turning in is:
- Quicktime movie
- Sample rate: 48,000 Hz
- Bit depth: 16 bit
Put your final projects in the Instructor Share folder in the folder called "Movie Trailers"
2) Finish all other unfinished projects, especially the Sound Design project.
3) Finish any Reason beats that you plan to record vocals over and export all the different tracks as audio files. Import the files into Pro Tools. Start discussing ideas for collaborations.
Tuesday, February 26, 2008
Movie Trailer Post Project - part 2
1. Finish recording and mixing all dialog.
2. Divide up duties for getting all the different audio elements together:
- Foley
- FX
- Music
4. Mix everything down and export to a single Quicktime movie.
In the real world, people work completely separately on the different audio elements of a film. Foley people do their thing, FX people do their thing, music people do their thing. Generally, these people aren't interacting, and their work doesn't get put together until the final mix. Each group mixes its work down to stems, which are stereo mixes of their particular stuff (one for foley, one for FX, one for music). These stems are then given to the mixing engineer, who puts them all together to create a nice balance. This is what we're doing today...
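If you want to picture what "putting the stems together" boils down to, here's a rough Python sketch. The file names are made up, and it assumes all three stems are the same length and sample rate (plus the numpy and soundfile libraries).

import numpy as np
import soundfile as sf

# Load the three stems (made-up file names)
foley, sr = sf.read("foley_stem.wav")
fx, _ = sf.read("fx_stem.wav")
music, _ = sf.read("music_stem.wav")

# The final mix is basically the stems added together, each at its own level
final_mix = 1.0 * foley + 0.8 * fx + 0.6 * music

# If the sum goes over full scale, turn the whole thing down to avoid clipping
peak = np.max(np.abs(final_mix))
if peak > 1.0:
    final_mix = final_mix / peak

sf.write("trailer_final_mix.wav", final_mix, sr, subtype="PCM_24")

Of course, a real mixing engineer is doing a lot more than picking three volume levels, but that's the basic shape of the job.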
Movie Trailer Post Project - part 1
Take a minute to look through these clips and then, as a group, decide which one you want to work on.
Groups will switch off working in the DAS and recording the appropriate dialog.
Some things to remember:
- Make separate tracks for every different character who is speaking
- Make a note about what kinds of environments characters are in (indoor? outdoor? OUTER SPACE???) How do you think this will affect the sound of the characters' voices?
- Always pay attention to the placement of the microphone. Make sure it is adjusted to the appropriate height and is aimed towards the speaker's mouth. The speaker should be about 6-12 inches away from the mic.
Thursday, February 14, 2008
Audio Post-Production Exercise
We’ll be working with the opening sequence from “Grind & Glory: From the Streets to the Stage.” I’ve presented you with a Pro Tools session file that contains a simplified version of the OMF file I was given, plus the film clip so you can see what you’re working with. For the sake of time, I’ve organized all the sounds and mixed them down into continuous WAV files on separate tracks.
Tuesday, February 12, 2008
Elements of Audio Post-Production
1. Dialog - the MOST important part of the soundtrack. Includes anything being said by characters on or off screen. In many cases, the dialog you hear in a movie is actually NOT what was recorded at the time of filming; it is rerecorded in a studio by the actors who say their lines while watching the video of their performances. This is called ADR (Automatic Dialog Replacement).
2. Foley sounds - sounds made by the characters as they move around in the scene (footsteps, clothing rustling, picking things up, etc.). These sounds are performed by foley artists who specialize in making sounds that realistically match the actions on the screen (even though the sounds may actually be made using all kinds of crazy materials).
3. Sound effects - sounds not made by the characters. These can include realistic sounds (cars, animals, everyday things), ambience, bigger-than-life sounds (explosions), and imaginary sounds (lightsabers).
4. Music - there are four main types of music you might hear in a video
- Score - Music composed specifically for a film or video
- Jingle - Music composed for a commercial
- Environmental music - music that is actually part of the background of a scene (from a radio, playing in a bar, etc.)
- Soundtrack music - any music where the music is the main audio focus (a music video, the end credits of a movie)
ASSIGNMENT:
Today we're going to practice importing a movie into Pro Tools, and then add some effects and ambience.
- Create a new PT session in your folder on the Media drive. Name it: (your name)_video import exercise
- Import a Quicktime movie (same as process of importing audio):
- Go to File>Import>Video.
- Look in the Instructor Share>Video Files
4. If you have time, import some music.
Thursday, February 7, 2008
DP at the Yerba Buena Center for the Arts!
The time and location of the event are as follows:
Yerba Buena Center for the Arts
701 Mission Street
San Francisco, CA 94103-3138
Reception - 6:30 pm
Screening - 7:00 pm
This is a great chance for you to show off your work, so please invite family and friends!
Tuesday, February 5, 2008
Sound Design/Musique Concrete Assignment
Assignments
- Transfer all audio collected over the break to Pro Tools and edit your sounds.
Name your PT session: (your name)_Sound Design 1
To Import your recordings from your Pure Digital camera, do the following:
- Plug the Pure Digital camera into the USB port on the front of your computer (NOT the one on your keyboard).
- Go to File>Import>Audio
- Navigate to the Pure Digital icon (it should show up in the left panel).
- Go into the folder called "DCIM"
- Scroll down until you see the files that end with .AVI
- Single-click on the first .AVI file.
- Now scroll down to find the last .AVI file. Single-click on that.
- Click on the Convert All button.
- Click Done.
- Now choose between "New Track" or "Region Bin". THINK very carefully before you hit OK! What is going to happen when you choose either of these options? Which is better for your workflow? Your call...
3. Practice and show the instructor an example of each of the following processes:
- Pitch shifting
- Reversing a sound using the Audio Suite plugin
- Inserting a reverb or other effect on the sound
5. Bounce your final songs out of Pro Tools and put them in the folder called "Sound Design songs" in the Instructor Share folder. Use the following format:
-File name: (Your name)_Sound Design song
-AIF file format
-Stereo Interleaved
-Sample rate = 48,000 Hz
-Bit depth = 16 bit
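One last conceptual note: the Audio Suite reverse you practiced above is, underneath it all, just playing the samples back-to-front. Here's a small Python sketch that does the same thing and saves the result in the same AIF/16-bit format as the bounce spec above. The file names are made up, it keeps whatever sample rate the source file already has, and it assumes the soundfile library is installed.

import soundfile as sf

# Load a recording (made-up file name)
audio, sample_rate = sf.read("my_recording.wav")

# Reversing a sound is just flipping the samples end-to-end
reversed_audio = audio[::-1]

# Save as an AIF file, 16 bit (stereo stays stereo, mono stays mono)
sf.write("my_recording_reversed.aif", reversed_audio, sample_rate,
         format="AIFF", subtype="PCM_16")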