Moving from Analog to Digital Recording
| Hello all,
Believe it or not, I have never used a computer program to record anything other than mixdowns from tape. My current setup is a Tascam 1516, a TSR-8, and a small rack of outboard gear, recording synth punk with analog and digital synths. BUT I'm finding recently that the hassle involved with tape machines and analog recording (bulky, limited tracks, setup and teardown time, maintenance time and expense) has been keeping me away from just sitting down and banging out a track. I have less time than I used to, so setting everything up just to work on something for a few hours makes me not want to do it at all.
SO I've been looking at using a DAW (gasp!) so I can just sit down and throw on an overdub when I have some spare time but I'm pretty lost. I downloaded an Ableton trial. I get the basic concept, sure. But there are a lot of things I feel like I'm missing.
How do you deal with latency on overdubs? Is latency still noticeable if you're using a FireWire mixer like the Allen & Heath ZED-R16? (I'm still definitely not going to stray from an actual mixing desk.)
Would I be better off starting with an Apogee unit?
Anyone have any general tips or guidance for someone trying to move from analog to digital?
| It's a big topic, and the choices depend on how you want to work.
-PC as tape recorder or used for mixing, plug-ins, MIDI as well?
-Do you want to track everything in one go or do you see yourself overdubbing one sound at a time?
-Compose on hardware or compose 'inside the mix' through sequencing, edit/arranging and automation?
-Simplicity or options, or a mix of both?
System latency depends on your computer, audio hardware, and particularly your OS settings and drivers. Some systems can get down to very low latencies of 7 ms or less, but a certain amount is inevitable. How big an issue that is for you depends on your answers above.
In my system I use a fast desktop PC with Reaper, and an RME PCI interface. I use an external mixer and hardware for tracking and submixing, but the final mix is done in Reaper itself with minimal use of plug-ins, because the automation and internal routing capabilities are simply too useful and creatively powerful to overlook.
It took me a long time experimenting with many different setups to arrive at my current solution, but I finally feel I have 'come home' to a setup that works for me with minimal fighting with technology.
| Latency is always an issue for me in every DAW I've used, especially if I'm doing complicated mixes with a lot of effects.
To cope, I generally turn off all plug-ins while I track and crank the buffer down to, say, 128 or 256 samples. As low as it will go without clicking and sounding crappy.
When it's time to mix/sweeten/add reverb, I put the plug-ins back online and increase the playback buffer to, say, 1024 or 2048 samples. That works for playback only. If you need to track again, turn off the plug-ins and crank back down to a smaller buffer.
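The buffer sizes being discussed here map directly to delay: one buffer of N samples takes N divided by the sample rate to fill before the computer even sees the audio. A quick sketch of the arithmetic (the function name is mine; real round-trip latency also adds converter and driver overhead on top of this):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """Time to fill one buffer, in milliseconds; the minimum delay it adds."""
    return 1000.0 * buffer_samples / sample_rate

for size in (128, 256, 1024, 2048):
    print(f"{size:4d} samples -> {buffer_latency_ms(size):4.1f} ms per buffer")
# 128 -> ~2.9 ms, 2048 -> ~46.4 ms at 44.1 kHz
```

So at 44.1 kHz a 128-sample tracking buffer adds roughly 3 ms each way, while a 2048-sample mixing buffer adds over 46 ms, which is why tracking and mixing want different settings.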
| Why not use a tool like the Zoom LiveTrak or the Tascam Portastudio?
Cheaper than a computer, faster and more convenient than tape, familiar workflow, reasonable options.
| umma gumma
| I'm relatively new to this, but this is working well for me:
I use Reaper on my computer (Mac or Windows) basically like a giant multitrack recorder. I don't really use plug-ins, maybe just some EQ here and there.
Most of my audio sources are hardware, and I use an 8-channel USB interface (Roland Octatrack) to record line or mic inputs. I have messed with VSTs a bit, and use non-USB MIDI for controlling multiple live instruments as well as the VST controller. A MOTU MIDI splitter feeds all the instruments.
I've never had a problem with latency doing overdubs, but I don't think I've gone over 18 tracks.
I've also used the Octatrack to multitrack rehearsals with a laptop. Worked great.
| www.audacityteam.org is free and does tracking, has useful effects like EQ and limiting, shows the waveform (which helps with fine repairs like removing clicks), and can export to useful file formats with the option to add tags like artist name, album name, and year that show up in digital playback programs.
I like Audacity but here’s a summary that should be useful in every program:
Sampling rate and bit depth are recording quality settings. These affect file size which is an important consideration in digital distribution.
A practical rule is that to reproduce a signal from a digital representation, you need samples at double the rate of the highest frequency you want to represent. For sound, 20,000 Hz is often accepted as the upper limit of human hearing, so by sampling 40,000 times a second the signal sent to the speaker should be fully represented for human listeners. My understanding is there were other considerations behind the 44,100 Hz number (44.1 kHz), but generally this will give you a full-quality recording and is probably a number you'll see in your DAW.
If you want to slow down the recording you’ll want a higher sampling rate for full quality, and people argue that a higher sampling rate can reduce distortion in the audio frequency range when effects like EQ are applied later. If you double the sampling rate then you’ll double the amount of processing your computer is doing when working on it.
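The double-the-highest-frequency rule above can be checked numerically: a tone above half the sample rate produces exactly the same samples as a lower "alias" tone, so the original can never be recovered. A small sketch, assuming numpy and arbitrarily chosen frequencies:

```python
import numpy as np

fs = 1000                  # sample rate in Hz; the Nyquist limit is 500 Hz
n = np.arange(32)          # 32 sample instants

above = np.cos(2 * np.pi * 600 * n / fs)   # 600 Hz: above the Nyquist limit
alias = np.cos(2 * np.pi * 400 * n / fs)   # 400 Hz = 1000 - 600: its alias

# once sampled, the two tones are indistinguishable
print(np.allclose(above, alias))           # True
```

This is why recorders put an anti-aliasing filter before the converter: anything above half the sample rate would otherwise fold back down into the audible range.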
Bit depth is how accurately each sample captures the original signal. 16-bit means there are 65,536 steps the signal can slot into (16 bits can represent 65,536 numbers; a bit is a switch). 24-bit means there are more steps, but also that the file will be 1.5 times bigger than a 16-bit file. What I've read is that 16 vs. 24 can have a perceptible effect in recordings with both quiet and loud sounds.
Your audio interface determines which quality settings can be used. If your computer and DAW can handle the work, then I suggest recording at the highest settings (perhaps 192 kHz/24-bit), then reducing to 44.1 kHz/24-bit for distribution, or whatever setting is demanded.
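The file-size cost of these settings is easy to estimate: seconds × sample rate × bytes per sample × channels. A sketch (the function name is mine; real files add a small header on top):

```python
def pcm_size_mb(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Approximate uncompressed PCM audio size in megabytes."""
    return seconds * sample_rate * (bit_depth / 8) * channels / 1_000_000

# a 3-minute stereo recording at common settings
print(pcm_size_mb(180, 44100, 16, 2))    # ~31.8 MB
print(pcm_size_mb(180, 44100, 24, 2))    # ~47.6 MB, 1.5x the 16-bit size
print(pcm_size_mb(180, 192000, 24, 2))   # ~207 MB at the highest settings
```

The 1.5x jump from 16-bit to 24-bit mentioned above falls straight out of the bytes-per-sample term.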
File formats are either lossy or lossless. Lossless files can be compressed.
Lossy means the original signal is adjusted using rules that make the file smaller by removing information that ideally isn't perceptible anyway. Lossy encoding might introduce some distortion, though, because these rules aren't perfect. Lossy formats will be the smallest files.
Compressed means the digital representation of the original signal is compressed for storage on your computer, so the file is smaller. When decompressed, the original signal is still intact; the tradeoff is the additional processing needed to decompress for playback (which could translate into extra battery use on a phone, for example). The compression might use similarities between stereo channels to reduce the file size, so a mono recording stored as uncompressed stereo would unnecessarily be double the size of the equivalent compressed file.
The mp3 format is lossy and, in my experience, is portrayed as widely used. My understanding is the patent expired recently, so encoding with it without paying for a license is now a legal option. The ogg format is another lossy format.
wav and aiff are lossless, uncompressed formats, so you'll get the exact recording this way. wav cannot have tags (like artist name) but aiff can.
alac (which has a .m4a extension) and flac are compressed lossless formats. I think these are the way to go: minimal file size, no distortion, and they can have tags. I haven't gotten Audacity to write alac (.m4a can also be a lossy format, and I guess Audacity assumes that), but the www.ffmpeg.org program can do it on the command line.
In digital recording you never want to overdrive the signal, because the saturated part of the signal is clipped and lost, and I think that's generally accepted as not a pleasant-sounding distortion. A common practice I've read is to record with reduced volume (like -10 dB or -20 dB), then amplify the final mixed recording to a reasonable digital volume.
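The decibel figures here map to linear gain as 10^(dB/20), so recording at -10 dB means peaks around 32% of full scale, and normalizing later is a single multiplication. A sketch, assuming numpy (the names and sample values are mine):

```python
import numpy as np

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# track with peaks around -10 dBFS to leave clipping headroom
recorded = np.array([0.05, -0.2, db_to_gain(-10)])   # peak ~0.316 of full scale

# after mixing, normalize the peak up to -0.5 dBFS in one step
target = db_to_gain(-0.5)                            # ~0.944 of full scale
normalized = recorded * (target / np.abs(recorded).max())

print(round(np.abs(normalized).max(), 3))            # 0.944
```

Because the math is exact, recording quietly in a 24-bit session costs very little quality while protecting you from clipping, which is unrecoverable.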
A consideration with digital volume is that the signal played by the speakers is a continuous approximation of the digital samples, so with enough momentum the speaker can swing past the highest sample value you see and distort. I think I heard this effect on my phone speaker; it disappeared when I reduced the digital volume by 0.3 or 0.6 dB. If you resample unevenly (like 48 kHz to 44.1 kHz), the program has to approximate samples, which could leave the approximated neighbor of a full-amplitude sample clipped above full amplitude. That's just a guess, since the details depend on who wrote the resampling part of the program, but I've read a rule of keeping full volume at something like -0.5 dB.
There's a lot of detail in every part of this, so what I wrote here might not be perfectly accurate.
Anyway, having spectrum analysis graphs, accurate EQ, and precise tracking available is a great strength of computer sound recording. Also, I don't think any personal computer program interfaces are very good, but they can get the work done. If you get frustrated, then maybe just getting it to work can be a good focus.
| matthewjuran is mostly right; a couple of tweaks I'd add:
For best quality, try to record at at least 96 kHz and 24 bits. If your computer and audio interface will give you higher, then go higher. You can always drop bits and sample rate when rendering, but you can never get them back. The additional headroom is very useful when mixing or adding effects.
-0.5 dB normalisation is a common choice, but it's about the maximum practical level (for several boring reasons). I usually do -1 dB.
The FLAC format is widely read and accepted, and a good lossless format that saves space over uncompressed formats. It's pretty flexible and freely available; I would use it for archiving originals.
Another great tool in the Audacity box is a nice, solid-quality noise reduction algorithm that takes a profile from some silence, then applies it to the whole track. Watch out for that one when you're mixing.
| How do you guys deal with not having an actual mixer while recording?
|hybrid_theory wrote: |
|How do you guys deal with not having an actual mixer while recording. |
DAW tracking is like a mixer: usually there's a volume slider and other controls available per track.
| Funny, you are moving against the tide of people going back to analog.
I would suggest an RME interface. My Babyface Pro is invisible, in that it never crashes, sounds transparent, and has very low latency. You can monitor through the hardware instead of Ableton, so you need not worry about latency. Trust me, RME have been the best at making reliable drivers for over a decade.
Live is a good choice for quick sound making. I would suggest you track through tape into it, so you get the sound of tape with the editing capabilities of a DAW. You lose little and gain a lot.
The worst thing about going to a DAW: ergonomics. If you are going to edit a lot, get your workspace optimized: standing/sitting position, left/right-hand editing, a light mouse, a touch pad. You want to move around, take breaks, and drink lots of water. Staring at screens is a killer for the back once you hit 30, which is part of why I have a very analog setup.
| The lowest-latency solutions are Lynx and RME PCIe cards.
Turnaround latency is below 1 ms.
| Search for 'Zoom R24': a simultaneous 8-track digital recorder, with up to 24 tracks per project, recording onto an SDHC card.
Small and portable. It is also a USB audio interface.
Around $350. 48 kHz/24-bit resolution. That's enough.
| Just to give another perspective on this issue: one of my goals is to not use my computer for music production. I love the coding I do for my day job, but if I wanted to be making music on my laptop I would be using Sonic Pi or Overtone to live-code it.
What I do do, however, is run my audio through a Zoom U-44 interface that's plugged into my iPad via a USB 3 to Lightning adapter. I can use a very simple layout in AUM to take audio in, toss a peak limiter on it, and record my modular to a file; or I can have the modular output be one track in a Cubasis project; or I can even feed something out of my iOS apps, through the second set of outputs on the Zoom, and back into my rack (for example, to feed something through Rings).
I have never noticed any latency, likely because I can monitor directly from the interface instead of going through the iPad apps and back out. The one challenge with this setup is that I currently lack a way to sync clock between the modular and the MIDI host on the iPad. That's something I could fix by buying or building something small to translate a clock gate into MIDI, but to be honest I don't find myself wanting to play both iOS and hardware "live" at once, so I've lived without it happily so far.
| DAW-wise, I'd suggest downloading a bunch of trials (Bitwig, Ableton, Reaper, FL Studio, Cubase, Logic) and seeing which software's approach meshes best with how you think. Once you learn one DAW, learning the others is pretty easy, so you can move to a different one later.
| I gotta admit I miss my Tascam 388. Finally moved over to a Zoom R24, which I can at least move by myself.
| For your journey I concur with Mkc, and would recommend trying all the usual DAW suspects until you find one that is intuitive and inspiring for you. Then stick with it and ignore the others, because the grass always appears greener but rarely is, and most DAWs are very capable.
I can recommend and have tried Cubase, Ableton, Logic Pro, Studio One, FL Studio, and Bitwig. I'd add Pro Tools too, but I am not such a fan of their pricing policy (needlessly expensive, and they suck your cash), and the MIDI side, despite what some people say, is not as slick as in other DAWs.
Powered by phpBB © phpBB Group