Forum Replies Created

Viewing 15 posts - 46 through 60 (of 249 total)
    #57790

    No, they’re proprietary formats. At most you could run the analogue outs of the 2412 into a P16-I and go Ultranet from there, but then you can’t use the outputs of the 2412 for anything else (unless you Y-split from them, and then you’d have the same signals as in the Ultranet.)

    #57776

    If you’ve got a 2412, the surface itself has 4x preamps. Maybe get hold of a multicore to run alongside your Cat5 if you need those inputs on stage, but a 2412 + console will get you your 28 inputs.

    #57775

    I’m guessing the chrome faders are a lot more visible in low light than the traditional black. No internal differences that I know of, just the firmware (older consoles can load the chrome firmware anyway, so that’s a moot point.)

    IIRC FaderGlow is on Soundcraft’s other consoles too. It’s definitely on the Si Performer.

    #57765

    I like the 2nd method there. I use it to make the GLD behave like the Qu.

    #57752

    That was a while ago. I had to re-read the thread to re-familiarize myself with what we were trying to achieve. IIRC this was the one about emulating your old TC M-One XL unit, which allowed you to dump delay into the reverb engine without really noticing the delay.

    The reason for the assign\un-assign was that we didn’t want the delay on the dry vox to come through to LR by itself: we only wanted to push the delay into the reverb unit, and then have only the reverb unit’s return feeding LR.

    We wanted to hear dry vox + FX2(delay+verb), not dry vox + FX1(delay) + FX2(delay+verb). If we hadn’t un-assigned FX1 from LR, we would still have heard the first lot of delay by itself; also, if you’d pulled FX2Ret down, you would have pulled the verb out of LR but not the delay.

    We could have avoided using the assign\un-assign method if we’d sent FX1Ret to FX2 pre-fader and just kept the FX1Ret fader down at -inf in LR, but then we’d have needed to control the amount of delay being dumped into the verb engine by jumping around layers and changing the “active” mix on the RHS of the console all the time. If instead we used a post-fader send from FX1, you can just control the level using the FX1Ret fader while you’re in the LR mix (which I imagine is where you’d spend most of your time.) IME less layer switching and changing of “active” mixes means less chance of mistakes, especially if you’re new to digital or new to motorized faders (you said you were coming over from PreSonus.) As we were going to use post-fader to allow us to use the FX1Ret fader while in LR, we needed to make sure FX1Ret was un-assigned from the LR mix, so you didn’t hear the first lot of “dry delay.”
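
    If it helps to see the logic written out, here’s a rough sketch in Python (nothing the console actually runs, and the names are made up) of what ends up in LR with and without FX1Ret assigned:

    # Toy model of the routing above (illustration only, not console code).
    # Names are made up; "fader" values are linear gains, not dB.
    def lr_mix(vox, fx1_ret_fader, fx1_assigned_to_lr, fx2_ret_fader):
        """List the signals audible in LR for the delay-into-verb trick."""
        delay = f"delay({vox})"                         # FX1 output (delay engine)
        # Post-fade send from FX1Ret into FX2, so the FX1Ret fader sets how
        # much delay gets dumped into the reverb engine:
        verb = f"verb({fx1_ret_fader}*{delay})"         # FX2 output (reverb engine)

        audible = [vox]                                 # dry vox assigned to LR
        if fx1_assigned_to_lr:
            audible.append(f"{fx1_ret_fader}*{delay}")  # the bare "dry delay"
        audible.append(f"{fx2_ret_fader}*{verb}")       # delay+verb return
        return audible

    # FX1Ret un-assigned from LR: dry vox + FX2(delay+verb) only.
    print(lr_mix("vox", 0.5, fx1_assigned_to_lr=False, fx2_ret_fader=1.0))
    # FX1Ret still assigned: you also hear the bare delay, which we wanted to avoid.
    print(lr_mix("vox", 0.5, fx1_assigned_to_lr=True, fx2_ret_fader=1.0))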

    Edit: typos.

    #57733

    You can choose which parameters are and aren’t linked in the gang, and you can bury one of the faders on another layer, or even un-assign it from the surface completely (once you’ve set its pan.)

    #57721

    When the FX1 mix is active on the RHS of the console, the fader levels on the LHS of the console represent the send levels from those channels to the FX1 bus (like they do for any other mix when using fader flip, assuming they’re not masters.)

    If you’ve got the FX1 mix active, you can push the FX1 return into the FX1 send, creating a feedback loop, which as people have said, will end badly.

    The reason the console allows you to push FX returns into sends is so that you can, for example, push a delay return from FX1 into a reverb send; that way you get verb on your delay tails. You just have to watch that you don’t send an FX return into the SAME send that generated that return.
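
    If the maths helps, here’s a quick back-of-envelope illustration (plain Python, with an assumed loop gain) of why that ends badly: anything at or above unity around the loop grows on every pass.

    # Rough illustration of sending an FX return back into its own send.
    loop_gain = 1.2      # assumed overall gain around the return -> send -> return loop
    level = 1.0          # arbitrary starting level
    for pass_num in range(1, 11):
        level *= loop_gain
        print(f"pass {pass_num}: level = {level:.2f}")
    # After ten passes the level has grown ~6x and keeps climbing: runaway feedback.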

    #57675

    AFAIK yes, but obviously the features in the 24’s scene that don’t apply to the Qu16 (eg mtxs, groups, ch17-24, extra FX buses, last 8x faders on the custom layer, extra softkeys etc) won’t be recalled on the Qu16. If you then save the scene on the 16 and put it back on the 24, I think you lose all those extra things too. You’d also have to watch your patching with Qu-Drive, USB-B and dSnake if any of those aren’t on defaults when you recall your scene (eg if Qu-Drive track5 was set to record input24 on the Qu24, then you loaded the scene on the Qu16, the Qu16 doesn’t have an input24: not sure if it would record the default for track5 (input5) or if it would record nothing.) Best to check anyway.
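
    For the Qu-Drive patching in particular, a quick hypothetical sanity check (not a real A&H tool or scene format, just the sort of thing worth checking by hand before the gig) would look something like:

    # Hypothetical check: does the Qu24 scene's Qu-Drive patch reference inputs
    # that don't exist on a Qu16? (Made-up data structure, not the real scene format.)
    QU16_MAX_MONO_INPUT = 16

    qu24_qudrive_patch = {5: 24}   # e.g. track 5 was set to record input 24 on the Qu24

    for track, source in qu24_qudrive_patch.items():
        if source > QU16_MAX_MONO_INPUT:
            print(f"Track {track} records input {source}, which doesn't exist on a Qu16 - "
                  f"check what it actually records after the scene loads.")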

    #57468

    You can change the 4x stereo groups (on a group-by-group basis) from being subgroups (essentially a post-fader send with the send level locked at unity) to being pre-fader mixes, suitable for monitors. Then you can mix to them like any of the other stereo mixes (mix5-10); P41-42 of the manual explain how to do this. Then it’s just a matter of assigning those “group” (now “mix”) outputs to a stagebox output via Setup>IO Patch>dSnake out (P73 of the manual.)

    #57416

    1.) Not sure. It could be that you’re using an FX engine as an insert instead of send-return. If you upload your scene file someone could have a look at it and try to figure out what’s going on (not me though, I don’t have access to a Qu atm. Please A&H, bring out an offline editor.)

    2.) Yes. You can insert a reverb processor across the whole mix, or you can route a reverb return to the mix and just pretend it’s another input channel, adjusting its level using sends-on-faders like any other input channel feeding that mix.

    3.) Yes. You’ll need to play around with pre/post settings depending on your workflow though, and possibly use one reverb engine for monitors, a separate one for FOH (and hope you don’t run out of FX engines too quickly.)

    #57369

    +1 to M-Dante. Although there are other options with MADI, ADAT etc, Dante seems to be rapidly gaining market share, and seems a lot more easily distributable than the other standards. I’d only go with one of the other standards if you\your venues had specific ADAT\MADI etc hardware that needed to be interfaced with.

    The M-Dante card + DVS + a laptop with a NIC (100Mb for 32×32, 1Gb for anything higher) will allow you to do your multitrack recording, as well as open up options for virtual soundcheck. You can also use it to interface with other Dante-enabled consoles to share audio, eg you can use your console as a monitor console, send signals to another FOH\broadcast console via Dante, and receive their returns via Dante and route them straight to your stagebox’s outputs, something you couldn’t do with the Qu series. Adding a cheap switch will allow you to do the multitrack + FOH\monitor split at the same time.
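
    The 100Mb-vs-1G NIC advice is easy to sanity-check with rough numbers (assuming 48kHz and 32 bits per sample on the wire, and ignoring packet overhead, so treat these as ballpark figures):

    # Back-of-envelope one-way audio bandwidth for a Dante stream.
    SAMPLE_RATE_HZ = 48_000
    BITS_PER_SAMPLE = 32      # assumed on-the-wire sample size; overhead ignored

    def approx_mbps(channels):
        return channels * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e6

    for ch in (32, 48, 64):
        print(f"{ch} channels ~ {approx_mbps(ch):.0f} Mbps")
    # 32 channels ~ 49 Mbps, so 32x32 fits a 100Mb NIC with some headroom;
    # higher channel counts eat that margin quickly, hence gigabit.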

    #57269

    I’d go with different scenes with the appropriate recall safes. Depending on how quickly you need to swap between settings, you could assign the recall of those scenes to a couple of softkeys.

    #56704

    ^This

    Think of inputs and channels as 2 separate things. The QuPac can have many (approx 100) “inputs” connected to it at once: local preamps, local line inputs, remote preamps over dSnake, and USB inputs from Qu-Drive\DAW.

    Of all these, the QuPac can only process a maximum of 32 mono + 3 stereo channels at a time. Adding AR\AB boxes doesn’t give you more processing channels, it just gives you alternative locations to source signals for your 32 mono + 3 stereo channels.

    So yes, you can use an AR2412 to have 24x remote preamps, but if you do you’ll then only be able to use 8x of the local preamps on the QuPac before hitting the maximum of 32 mono signals to feed the 32x mono channels (ignoring having stereo channels source their signals from mono preamps rather than line inputs.) If you wanted to, you could use an AR2412 + AB1608 to give yourself 40x remote preamps. You’d only be able to use 38 of them at a time (32 feeding the mono channels and 6 feeding the 3x stereo channels,) but if you did this then you wouldn’t be able to use any of the local preamps.

    Luckily you can choose on a channel-by-channel basis whether you’re sourcing signals from a local preamp, remote preamp, or USB.
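
    If it helps, the arithmetic above boils down to something like this (just a tally, nothing console-specific):

    # Tally of sources vs processing channels on the Qu-Pac (illustrative only).
    MONO_CHANNELS = 32
    STEREO_CHANNELS = 3                  # each stereo channel can take 2 mono sources

    remote_preamps = 24 + 16             # AR2412 + AB1608
    max_signals_at_once = MONO_CHANNELS + 2 * STEREO_CHANNELS   # 38

    print(f"Remote preamps available:  {remote_preamps}")       # 40
    print(f"Signals processed at once: {max_signals_at_once}")  # 38
    print(f"Remote preamps left idle:  {remote_preamps - max_signals_at_once}")  # 2
    # Use all 38 from the stageboxes and there are no channels left for the local preamps.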

    #56267

    The IO Patch\Monitor section is for the MEU\ME1 (40 channels.) It’s not for streaming USB audio (limited to 32 channels.) Setup>IO Patch>USB Audio is where you need to be. By default you get the first 30 channels and LR (though LR is on ch17+18.) The stereo ins are not part of the 32 channel USB stream. You have to replace some of the channels already there with your stereo ins if you want them to appear in your DAW.

    #56222

    Sadly the only option for sidechaining sources on the channels’ compressors on the GLD is “self.”

    My first thought was that if you had an external compressor with a sidechain input you could use that: insert the compressor across the bass bus and sidechain it to the kick’s direct out, but it sounds like you’re already doing that.
