Forum Replies Created

Viewing 8 posts - 16 through 23 (of 23 total)
    #50413
    croydon_clothears
    Participant

    We use the AR2412 on D-Snake with Qu16/24/32 mixers for live broadcast on premium UK channels (BBC/ITV) as well as A-list corporate presentations and regular broadcast recordings.
    I don’t know how much convincing you need, but we have never been let down by the kit, despite the extreme reliability demands of the genres we work in.
    Yes, there’s a fan in the box – in the most intimate of studio configurations this could be an issue but keeping the box out of the vocal booth will solve almost all of your problems there.
    Broadcast is a livelier environment so any potential noise is swamped, mostly by the abysmal drone of the lighting fixtures!
    Don’t despair, properly placed the AR2412 in a music studio is the perfect partner to your QU-series mixer.

    #49049
    croydon_clothears
    Participant

    OK here’s what we do when using one desk for FOH and monitors. This isn’t specific to A&H, it applies to any mixer, digital or analogue.
    As has already been said, the crucial thing is to get a proper gain structure through the desk, so setting the input gain sensibly for each channel is the most important thing you will do.
    We always engage the channel HPF, set somewhere between 90Hz and 120Hz, on ALL vocals and on most instruments, with the exception of bass DI, keys DI, kick drum, floor toms and any other instrument that heads towards the subs – tenor sax, for example.
    It’s a total waste of headroom to keep loads of LF in channels that don’t need it – anything down there is usually junk, and it sucks masses of power from the amp rack.
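    If you want to see what that HPF is actually doing away from a gig, here’s a rough self-contained Python sketch (my own illustration, assuming numpy/scipy are to hand – nothing Qu-specific) of a second-order high-pass in that 90–120Hz zone eating stage rumble:

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 48_000  # typical live sample rate
    # 2nd-order (12dB/octave) Butterworth high-pass, corner in the range above
    sos = butter(2, 100, btype="highpass", fs=FS, output="sos")

    t = np.arange(FS) / FS
    rumble = 0.5 * np.sin(2 * np.pi * 50 * t)  # junk one octave below the corner
    filtered = sosfilt(sos, rumble)

    before = np.sqrt(np.mean(rumble**2))
    after = np.sqrt(np.mean(filtered**2))
    print(f"50Hz junk knocked down by {20 * np.log10(before / after):.1f} dB")  # ~12dB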
    So, in the sound-check we start with the drum kit, as that’s usually the most complicated, getting the drummer to give us each element individually.
    We begin with the basic engine of rock and roll, kick/snare/hi-hat, getting a level from each, setting the drummer’s monitors as we do so, then balancing them on FOH while adjusting gates and compressors.
    Then we balance rack and floor toms, again individually for level and tonality, and then get the drummer to play them in cadence so we can tweak them to the same level. This is often the point at which we find it’s the drum that needs attention rather than the desk!
    Now we ask the drummer to batter the whole kit while we add the stereo overheads and balance the whole kit for FOH.
    While it seems horrifically complicated, we usually get this sorted in about 5 minutes.
    Next is bass guitar, then keyboards and then lead guitar.
    The concept is the same, engineer the FOH balance, reverb, compression and any gates, while also tweaking the monitor mixes for each musician.
    Apply the same principle to other instruments.
    Vocals is where the hard work begins.
    Most vocalists sing at least 6dB louder on the night than they do in sound-check – allow for it!
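    For anyone who wants the arithmetic behind that figure, it’s standard dB maths, nothing desk-specific – a quick Python one-liner to convince yourself:

    import math
    print(20 * math.log10(2))  # ~6.02 dB: +6dB is roughly double the amplitude
    # So if a vocal peaks at -12dBFS in sound-check, budget for nearer -6dBFS on the night.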
    The same principles apply – set gain structure then tweak your FOH and monitor feeds.
    DON’T add any reverb to the monitor mix unless it’s IEM – it’ll screw you up royally.
    Remember that monitor feeds, especially on most digital desks, should have graphic EQ available to stomp on individual howl-round resonances.
    The RTA function on A&H (and other) desks can help you here.
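    If you’re curious what that RTA is doing under the hood, here’s a toy Python sketch (my own, emphatically not the A&H implementation) of the FFT peak-pick idea – find the loudest bin in the monitor feed and that’s the graphic band to pull down:

    import numpy as np

    FS = 48_000

    def dominant_frequency(block: np.ndarray) -> float:
        """Return the loudest frequency (Hz) in a block of monitor-feed samples."""
        windowed = block * np.hanning(len(block))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(block), d=1 / FS)
        return freqs[np.argmax(spectrum)]

    # Demo: simulate a 2.5kHz ring building up in a wedge feed
    t = np.arange(8192) / FS
    feed = 0.8 * np.sin(2 * np.pi * 2500 * t) + 0.05 * np.random.randn(8192)
    print(f"Notch the GEQ band nearest {dominant_frequency(feed):.0f} Hz")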
    If any channel needs more than a subtle EQ tweak for something other than artistic effect, then might I suggest that you need to look at a different microphone or some other means of acquisition?
    Keep the sound as pure as you can and you will find that it naturally falls into shape by itself.
    While most modern mixers offer enormous amounts of EQ, it’s a bit like fire extinguishers – nice to know it’s there for an emergency but you’d rather not have to use it on a daily basis.

    #49036
    croydon_clothears
    Participant

    If you’re using AudioRacks connected via DSnake to your Qu Mixer, AND you have firmware version 1.7, you can easily map the same input to more than one fader strip. That would elegantly give you a channel for chatting and a channel for belting it out.
    However, if you’re plugging up locally on the back of the desk, perhaps the fastest solution would be to make a modest investment in a mic splitter or two?

    #48494
    croydon_clothears
    Participant

    I’m assuming you’re referring to the non-electronic type of megaphone, reminiscent of the 1920s!
    I would have thought that you could get pretty close to that using parametric EQ. The sound we associate with the megaphone results mostly from the materials the cone is made from absorbing some frequencies and resonating at others. It tends to be quite nasal with a human voice and lacking at the LF end.
    Perhaps try some experimentation with generous helpings of LF and HF cut, plus some high-Q (narrow-band) upper and lower-mid peaks, sweeping them up and down the spectrum until you find something you like. I’d start by building a nice honk at around 1.75 to 2.5kHz into the sound, with maybe another one around 500Hz.
    In a real megaphone the amplification is done mechanically and some compression will help replicate the effect.
    You may not be able to get all the way there on the channel’s PEQ, so maybe route it through a separate mix channel and tweak it some more with the GEQ on that mix.
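    If you fancy a starting point to play with offline first, here’s an experimental Python sketch (my own numbers and code, assuming scipy – definitely not an A&H preset) of that chain: steep LF/HF cuts plus two narrow mid boosts built from the standard RBJ “Audio EQ Cookbook” peaking biquad:

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 48_000

    def peaking_sos(f0: float, gain_db: float, q: float, fs: float = FS) -> np.ndarray:
        """One RBJ cookbook peaking band as a normalised scipy sos row."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return (np.concatenate([b, a]) / a[0])[None, :]

    chain = np.vstack([
        butter(2, 300, btype="highpass", fs=FS, output="sos"),  # generous LF cut
        butter(2, 4000, btype="lowpass", fs=FS, output="sos"),  # generous HF cut
        peaking_sos(2000, gain_db=9, q=8),  # the “honk” – sweep 1.75 to 2.5kHz
        peaking_sos(500, gain_db=6, q=8),   # the lower-mid peak
    ])

    def megaphone(x: np.ndarray) -> np.ndarray:
        return sosfilt(chain, x)  # add some compression after this, as noted above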
    You don’t say if you’re going to have to do this live!
    If you are, naturally you can store the settings in the PEQ library, but my instinct would be to get a mic splitter and have two channels for your vocalist, one clean and one for the effect.
    Too many things to go wrong otherwise, with all that button-pressing.

    #48434
    croydon_clothears
    Participant

    Thank you Nicola and everyone else who responded!
    Good news that what we thought we could do can actually be done satisfactorily.

    #48406
    croydon_clothears
    Participant

    That was my initial conclusion too, but as I indicated, without the assembled hardware it’s not so easy.
    In the case of a Qu-32 working alongside a Qu-16, I can see how the app can differentiate between them from the default device names.
    However, just to play Devil’s advocate here, and admittedly without the hardware on hand to tinker, just supposing that this scenario was using two Qu-24s, can they be renamed in network terms to establish unique sources?
    Hands up for being lazy here, I know this could be sorted out easily, but all of my kit is presently ensconced in its warehouse!

    #47927
    croydon_clothears
    Participant

    Can you rename? I was under the impression that Qu-Drive was very specific in its file/folder naming architecture.
    Besides, the work-rate in broadcast production is far too high to allow time for touch-screen tinkering – the sort of renaming that would really need a separate QWERTY keyboard.
    So…I’m assuming that the Qu-series doesn’t have an internal battery and that stored data is flashed into EEPROM or stored in a similar non-volatile way.
    This, I fully appreciate, precludes any form of real-time clock because, unlike PC/Mac computers, there’s nothing there to keep the clock ticking while the system is shut down.
    However, as a computational device, a Qu mixer DOES have, while operational, a whole bunch of highly accurate clocks running – if it didn’t, the entire digital processing chain would collapse.
    Might I therefore suggest, as this particular forum is focussed towards possible software upgrades, that a future iteration include a simple user-settable time/date option that kept stable time from the internal CPU clock while powered on and was then fed into the Qu-Drive file metadata?
    Personally I don’t have a problem with having to set something like that up every time I switch on – usually that would be once a day, and the benefit would more than outweigh the few moments’ work.
    Obviously all of this is solved when using the streaming output, because the host computer does all of the file management in ProTools or whatever DAW software one is using, but not every gig requires or allows the luxury of an attached workstation.
    All I want is a metadata timestamp on the Qu-Drive file that allows me to quickly sort the files and identify which one was created when.
    Is that one step too far?
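    In the meantime, a crude host-side workaround I’ve been pondering (a hypothetical Python script of my own – the folder name and date are just examples) is to stamp each session folder’s modified-time by hand once it’s copied off the stick, so at least the OS can sort them:

    import os
    import time

    def stamp_session(folder: str, when: str) -> None:
        """Set a copied session folder's mtime/atime from 'YYYY-MM-DD HH:MM'."""
        t = time.mktime(time.strptime(when, "%Y-%m-%d %H:%M"))
        os.utime(folder, (t, t))

    stamp_session("QU-REC-001", "2014-06-03 14:30")  # hypothetical folder name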

    #47737
    croydon_clothears
    Participant

    I don’t personally have a problem with either dialling in the time for each session or if the time can be harvested from an iSomething.
    Where a metadata time-stamp WOULD be useful is for rapidly working out which of the files belongs where, rather than absolute sync.
    At the moment, polling Qu-Drive file and folder properties produces a big fat blank, and it’s impossible to know what DAY a project was recorded, never mind at what time.
    Your issue with SMPTE stream drift relative to the audio sample rate is, I would suggest, a blind alley.
    While it’s perfectly correct that, without a proper lock, SMPTE from one source may frame-drift relative to a data stream from elsewhere (in this case the mixer), the only value that is needed is the SMPTE one at the start of the file. The contiguous but drifting SMPTE stream is ignored, as timecode is actually calculated against the audio by backtracking through the samples generated since “timecode midnight”.
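    To make that concrete, here’s a toy Python illustration (my own simplification, assuming 48kHz audio and 25fps EBU timecode) of deriving a position purely from the sample count, with only the start-of-file SMPTE value ever read from the stream:

    FS = 48_000  # audio sample rate
    FPS = 25     # PAL/EBU frame rate; other rates change the arithmetic

    def samples_to_smpte(sample_offset: int) -> str:
        """Convert samples since 'timecode midnight' to HH:MM:SS:FF."""
        total_seconds, remainder = divmod(sample_offset, FS)
        frames = remainder * FPS // FS
        hours, rest = divmod(total_seconds, 3600)
        minutes, seconds = divmod(rest, 60)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    # A file starting 90 minutes and 12,000 samples after timecode midnight:
    print(samples_to_smpte(90 * 60 * FS + 12_000))  # -> 01:30:00:06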
    In any event, I’m not too bothered about sample-accuracy; what I’d like is just the ability to see at a glance, even in something as basic as Windows Explorer, which of the 40 or 50 folders I’ve recorded in a day happened when.
    I sense that I’m not alone in this frustration from the similar threads you’ve highlighted.
