You need to appreciate that there is a lot to learn and do before your first paid voice acting job. Yes, you want to get out there and start auditioning. But first, you’re going to need proper training, equipment, resources, and yes, some natural talent. The great news is that even though the voice over industry is competitive, there is plenty of voice over work out there for everyone. This guide on how to become a voice actor will give you a good idea of where to start.

I use the Adobe Audition package to edit my own voiceover work, but the techniques described below can be used in most audio-editing software. Start by saving a new version of your original recording. (I keep the name of the file the same, but change the suffix from _original to _edit.)

Next, I'll turn my attention to the end of the recording, where I stepped away from the mic after the second take and recorded about 10 seconds of silence. Wearing headphones, I can zoom in on this 'silence', turn up the volume, and listen for any background noise. If I notice any constant noise seeping into the recording, I may consider using Adobe Audition's noise-reduction tool to capture a one- or two-second profile of the silence, and then reduce the offending noise by about 75 percent throughout the recording. I should stress that this is rarely needed, and it's always a last resort, because such processing can generate swirly, metallic artifacts that draw more attention to themselves than the noise you intended to eliminate! But it's good to listen for such issues with fresh ears at the start of your editing session. If you find yourself using noise reduction often, that's a sign that you should find a better place to record, or improve the isolation of the space you've chosen.

Don't forget that there are other ways to deal with some noise: often, a high-pass filter is all you will need to clean up a recording, for example. Although processing is usually kept to a minimum, dealing with sibilance is important, and a dedicated tool such as Eiosis' De-esser plug-in is much more precise than most. Also remember that if your recording is to be mixed with music, slight background noise won't be noticeable, and that the more exposed your voice is in the end product, the more problematic noise will become.
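To make the high-pass idea concrete, here's a minimal sketch, assuming Python with the soundfile and scipy packages; the file names, the 80Hz cutoff and the filter order are illustrative choices, not a rule, and this isn't a substitute for your editor's own filters.

```python
# Roll off low-frequency rumble with a high-pass filter (illustrative values).
import soundfile as sf
from scipy.signal import butter, sosfilt

data, sr = sf.read("voiceover_edit.wav")                     # load the edit copy
sos = butter(4, 80, btype="highpass", fs=sr, output="sos")   # 4th-order high-pass at 80 Hz
cleaned = sosfilt(sos, data, axis=0)                         # filter along the time axis
sf.write("voiceover_edit_hpf.wav", cleaned, sr)              # write a new file, keep the original
```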
Not all noise can be tackled in this way, though: you need to listen for clicks, plosives, digital glitches and the like. These can normally be acceptably repaired by using a 'heal' tool, or a pencil tool to redraw the waveform. Popped 'p's can often be 'fixed' using a high-pass filter set at 100Hz. For a single glitch, you can zoom in and cut out the cycle of the waveform in which the glitch appears. Just be careful to start and end the cut where the waveform crosses the centre line, otherwise you'll inadvertently add another digital glitch. If glitches are frequent, it's likely that there's a problem with your audio interface's buffer settings, though check whether they're actually in the recording: it may be just a playback issue.
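Here's a rough sketch of that zero-crossing splice done in code, assuming a mono recording and that you already know the approximate sample position of the glitch (the glitch_index value below is purely hypothetical); in practice you'd do this by eye in your editor's waveform view.

```python
import numpy as np
import soundfile as sf

def cut_glitch(audio, glitch_index):
    """Cut out the waveform cycle containing a glitch, starting and ending
    the cut at the nearest zero crossings so no new discontinuity is added."""
    signs = np.signbit(audio)
    crossings = np.where(signs[:-1] != signs[1:])[0] + 1   # samples where the waveform crosses zero
    before = crossings[crossings <= glitch_index]
    after = crossings[crossings > glitch_index]
    if before.size == 0 or after.size == 0:
        return audio                                       # no safe cut point on one side; leave it alone
    return np.concatenate([audio[:before[-1]], audio[after[0]:]])

data, sr = sf.read("voiceover_edit.wav")
mono = data if data.ndim == 1 else data.mean(axis=1)       # keep the sketch simple: work on a mono mix
repaired = cut_glitch(mono, glitch_index=123456)           # hypothetical position of the click
sf.write("voiceover_edit_declicked.wav", repaired, sr)
```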

Before you get down to the nitty-gritty, though, I recommend doing a test recording to ensure your equipment works properly and your audio levels are strong. You don’t need to record the entire script, but a few paragraphs will give you enough to ensure that the audio is clear, at an appropriate level, and doesn’t include any stray or ambient noises.
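If you'd like an objective sanity check to go with your ears, a quick script like this (the file name is a placeholder) reports the peak and average levels of the test recording, so you can confirm there's headroom below 0dBFS and the signal isn't sitting down near the noise floor.

```python
import numpy as np
import soundfile as sf

data, sr = sf.read("test_recording.wav")                  # the short test take
peak = np.max(np.abs(data))
rms = np.sqrt(np.mean(np.square(data)))
print(f"Peak: {20 * np.log10(peak + 1e-12):.1f} dBFS")    # should sit comfortably below 0 dBFS
print(f"RMS:  {20 * np.log10(rms + 1e-12):.1f} dBFS")     # rough indication of overall level
```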
A great performance recorded on mediocre gear will always sound better than a mediocre performance recorded on great gear. By preparing well before the session, you'll find it far easier to relax and focus on the message and the listener. As a producer, you also need to be clear about the expected format for the final audio file: although there are certain 'standards', your clients' expectations will vary.

Read through the script, look up any questionable words and check their pronunciation. For American English, which accounts for the majority of my work, my favourite pronunciation resource is the Merriam-Webster on-line dictionary (www.m-w.com), which includes audio examples. If you don't know how to pronounce the name of a company or product, call the company's customer service centre (if anyone can pronounce it right, they can!). If you don't know how to pronounce the name of a person or city, try searching YouTube for news reports on the subject.
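Coming back to delivery formats: as one hypothetical example, a client might ask for 16-bit, 44.1kHz WAV files when you recorded at 48kHz. A conversion along these lines (file names assumed) would cover that case, but always confirm the spec rather than guessing.

```python
import soundfile as sf
from scipy.signal import resample_poly

data, sr = sf.read("voiceover_edit.wav")
if sr == 48000:                                      # recorded at 48 kHz, delivering at 44.1 kHz
    data = resample_poly(data, 147, 160, axis=0)     # 44100/48000 = 147/160
    sr = 44100
sf.write("voiceover_delivery.wav", data, sr, subtype="PCM_16")   # 16-bit WAV delivery
```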
It's only after this sort of processing that I'll start cleaning up the recording and dealing with the gaps of silence between spoken passages. A noise gate (or an off‑line 'strip‑silence' function) can be used to automatically mute sections that fall below a certain level. However, if not used carefully, these tools will clip the 'T's and 'P's off the ends of words, and shut out natural breathing sounds. Worse, if your recording is noisy, a noise gate will actually draw attention to the problem, since your client will be able to hear the difference between room tone and absolute muting. Another trick is to use a downward expander to reduce the noise floor of quiet sections, rather than cut out the noise completely.
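To illustrate the difference, here's a very crude downward expander in code: instead of muting quiet passages, it simply pulls them down by a fixed amount. The threshold and amount of reduction are arbitrary example values, and a real plug-in (or your editor's own dynamics processing) will handle attack and release far more gracefully than this sketch.

```python
import numpy as np
import soundfile as sf

def downward_expander(audio, sr, threshold_db=-45.0, reduction_db=-12.0, window_ms=20.0):
    """Reduce, rather than mute, anything below the threshold, so room tone
    is lowered without the hard on/off switching of a noise gate."""
    win = max(1, int(sr * window_ms / 1000))
    smoother = np.ones(win) / win
    power = np.convolve(audio ** 2, smoother, mode="same")         # smoothed signal power
    level_db = 10 * np.log10(power + 1e-12)
    gain = np.where(level_db < threshold_db, 10 ** (reduction_db / 20), 1.0)
    gain = np.convolve(gain, smoother, mode="same")                # soften gain changes to avoid clicks
    return audio * gain

data, sr = sf.read("voiceover_edit.wav")
mono = data if data.ndim == 1 else data.mean(axis=1)
sf.write("voiceover_expanded.wav", downward_expander(mono, sr), sr)
```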
Next, go back to the beginning and start editing out your mistakes. I also like to edit out any abnormally long silences between sentences or statements, and any weird sounds that don't belong. Remember, though, that pauses are OK (and even necessary) to help break up the audio and make it feel more natural and conversational, so don't go hog wild trimming them out.
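Rather than cutting pauses automatically, it's safer to flag where the unusually long ones are and decide case by case. A small sketch like this (the -50dBFS threshold and two-second limit are arbitrary) just prints the locations of long silences so you can review them in context.

```python
import numpy as np
import soundfile as sf

data, sr = sf.read("voiceover_edit.wav")
mono = np.abs(data if data.ndim == 1 else data.mean(axis=1))
quiet = (mono < 10 ** (-50 / 20)).astype(int)          # below -50 dBFS counts as silence
edges = np.diff(np.concatenate(([0], quiet, [0])))     # pad so runs at the start/end are counted
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
for s, e in zip(starts, ends):
    if (e - s) / sr > 2.0:                             # flag pauses longer than two seconds
        print(f"Long pause from {s / sr:.1f}s to {e / sr:.1f}s")
```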

At this point, unless you're tracking for broadcast applications (which work to loudness standards rather than peak levels), consider 'normalising' your recording to an optimal level. Normalisation can be used to push the loudest peak to around -1dB and increase the volume of the entire recording by the same ratio. Of course, you can also tighten up the dynamic range (the difference between the softest and loudest parts of your recording) by manually reducing the loudest sections of the recording before you apply normalisation. On a final note, if you do plan to use a compressor, then you might as well add gain at that stage instead of normalising.
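For completeness, peak normalisation itself is a simple calculation: find the highest absolute sample value and scale the whole file so that peak lands at the target. The sketch below (file names assumed) aims for roughly -1dB, as mentioned above.

```python
import numpy as np
import soundfile as sf

data, sr = sf.read("voiceover_edit.wav")
peak = np.max(np.abs(data))
target = 10 ** (-1.0 / 20)                 # -1 dBFS expressed as a linear value (about 0.891)
if peak > 0:
    data = data * (target / peak)          # one gain ratio applied to the entire recording
sf.write("voiceover_normalised.wav", data, sr)
```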
