Forum Replies Created

Viewing 15 posts - 1 through 15 (of 218 total)

    I dig the iPad app for some things, but it has some awkward limitations if you’re leaving the desk and need to run the show. For those situations I’ve used TouchOSC coupled with Osculator on a host machine somewhere on the network. (I have a Mac mini that functions as an OSC/MIDI host for any number of communications between A/V devices on the network.) Any OSC app will do, though.

    While the A&H iOS app lets you customize layers, giving you quick access to faders you might need in a pinch, there’s no way to trigger cues or custom buttons (which you *can* do with TouchOSC). IMO it’s a major deficiency.

    Given the anemic iOS app and the inherent limitations of TouchOSC (you have to build everything you want to control in advance, and you can trigger cues but not store them), the ideal mobile solution is a Surface Pro running Director. That’s what we’ve done at our larger campus so the engineer can leave the desk, wander the auditorium, and make adjustments. It has some weird quirks of its own (we’ve had issues storing cues, for instance), but it’s a lot more feature-rich on the go.


    I would even add that – *because* dLive maintains phase relationships at every point of processing, it’s arguably a better choice than many other systems for studio work. Obviously, things like Pro Tools will do this to a point, but even in Pro Tools there are weird quirks occasionally when committing tracks. (I’ve had things *clearly* not properly phase-align when you look at the unprocessed and processed tracks side-by-side.)

    If I had a dLive in my personal studio it would be my primary mix tool for most things. I would still use UAD and FabFilter processing for certain aspects of the mix; reverbs and multiband compression come to mind in particular. (Dyn8 is okay, but there are many things it doesn’t do particularly well.)

    My previous studio interface was an Apollo Quad; the A/D was phenomenal, and the preamps were amazing too. I abandoned ship on that for a Focusrite Red4, which gives me Dante I/O in the space. I also had some qualms about UAD Console limitations. I still have a UAD Satellite for DSP. (The Red4 is also ridiculously transparent.) …I am still considering a DM0 to upgrade the studio and live rigs.

    A good friend of mine bought into Antelope instead of UAD when I got my Apollo. …Great hardware unless you’re in a Windows environment, and the support is… pretty nonexistent. The converters are great, but there are so many weird quirks with the software.


    Probably the easiest way to do this is to copy/paste the mix from an existing stereo channel. IE you have to set your pre/post/level one time, but then you can use that channel as a template for every other stereo channel that needs it.


    Hope this was resolved. If not, though… have you checked your clock settings in Dante Controller? I’ve seen this kind of thing happen when the clock can’t be resolved. (Typically in systems where two or more devices are competing for grandmaster.) Generally speaking you want your CDM48 to be the master clock: make sure “Enable Sync to External” and “Preferred Master” are selected for the CDM48, and you should be good to go.


    Yeah, agreed: Custom Control is really the way to go for IEM controls/setting limits.

    I also use Osculator and TouchOSC to give volunteers a more comprehensive set of controls with built-in fader limits, for multiple systems on one screen. It takes a bit more time to set up, but it’s proven to be quite bullet-resistant. (We’ve been running it since 2017 with zero system crashes to date.) In that instance I have a “money” bank for mission-critical faders plus the master fader, lighting status/go/back buttons, ProPresenter go/back buttons, and several buttons that recall specific dLive cues. …And beyond that first screen, some additional in-depth controller pages for more skilled users.


    …And I stand corrected: I hadn’t noticed it was contextual! It looks like it follows whatever is currently selected in the main scenes menu. I could still see situations where it would be useful to have the soft rotary section *locked* to the current cue list, as opposed to whatever is currently selected. IE if I’m moving around a festival show and having to jump back and forth between mics, the current cue list, and all my cues (because I’m building additional cues beyond the current list while still needing to pay attention to the current one), it would be nice to have the choice to lock it.


    In my personal experience, having it in the MixRack makes you a little more bullet-resistant. IE if you lose the surface connection, you still pass audio through the card. I would be particularly reticent to put it in the surface without some kind of redundancy. If you had to choose between them, I would lean toward Dante in the rack too.


    Yes, it should work. You just need to be very specific with the MIDI messaging.

    Issuing scene recall commands should be straightforward (a program change), but if you wanted to control, say, a particular fader or send level (I’m thinking delay swell/feedback-moment type programming), that is a very specific NRPN message you’ll have to program on the uTrack, which might be a bit more complicated. It all depends on how deep you can go with message formatting ON the uTrack.
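    For reference, here’s a rough sketch of the raw bytes involved, built by hand in Python so the framing is explicit. The Bank Select + Program Change scheme for scene recall and the 0x17 fader-level NRPN parameter are my reading of A&H’s MIDI protocol notes, so treat them as assumptions and verify against the protocol document for your firmware and MIDI channel setup before programming the uTrack:

    ```python
    # Sketch of two common dLive commands as raw MIDI bytes.
    # Assumed framing (verify against A&H's dLive MIDI protocol doc):
    #   scene recall = Bank Select (CC 0) + Program Change
    #   fader level  = NRPN, parameter 0x17

    def scene_recall(midi_channel: int, scene: int) -> bytes:
        """Recall a scene: CC 0 picks the bank of 128, Program Change
        picks the scene within that bank."""
        bank, program = divmod(scene, 128)
        return bytes([
            0xB0 | midi_channel, 0x00, bank,  # CC 0: Bank Select MSB
            0xC0 | midi_channel, program,     # Program Change
        ])

    def fader_level(midi_channel: int, strip: int, level: int) -> bytes:
        """Set a fader via NRPN: CC 99 = strip number, CC 98 = parameter
        (0x17 assumed to be fader level), CC 6 = value (0-127)."""
        status = 0xB0 | midi_channel
        return bytes([
            status, 0x63, strip,  # NRPN MSB: channel strip
            status, 0x62, 0x17,   # NRPN LSB: fader level parameter
            status, 0x06, level,  # Data Entry: the level itself
        ])
    ```

    If the uTrack only lets you enter literal byte strings, this at least shows how few bytes each command actually is.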


    Yep, the BomeBox is the correct interface for that, *unless* you want to just use the TCP MIDI driver over the network and have Reaper spit out the commands that way. I’m pretty positive you could accomplish that with SWS *or* just build MIDI triggers into your tracks. (Say you wanted a panic button or something; SWS might be useful for issuing that kind of command.) You don’t technically need the BomeBox if you’re running the TCP driver. I rely on TCP for communications between dLive and Live Professor pretty consistently, and it’s been rock solid for me.

    One thing that *is* nice about the BomeBox is the MIDI translator: if you use that, you don’t have to build your specific messages in Reaper. You can use, say, a particular note on/off and have Midi Translator turn that into a specific dLive command.

    I use Osculator to do that in certain circumstances. (In that case changing OSC messages like “/thingy1/go” into dLive commands, but it also works for straight MIDI.)
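    To illustrate what that translation layer boils down to, here’s a minimal sketch in Python: a lookup table mapping an incoming trigger (an OSC address, or a note from Reaper) to the raw dLive MIDI bytes you want sent. The scene numbers are placeholders, and the Bank Select + Program Change framing is my assumption from A&H’s MIDI notes, so check it before relying on it:

    ```python
    # Minimal sketch of a translation layer (the job Midi Translator or
    # Osculator does for you): incoming trigger -> outgoing dLive bytes.
    # Scene numbers are placeholders; verify the Bank Select + Program
    # Change framing against A&H's dLive MIDI protocol document.

    def program_change(channel: int, scene: int) -> bytes:
        """Bank Select (CC 0) + Program Change bytes for a scene recall."""
        bank, program = divmod(scene, 128)
        return bytes([0xB0 | channel, 0x00, bank, 0xC0 | channel, program])

    # Triggers can be OSC addresses (TouchOSC) or MIDI events (Reaper).
    TRANSLATIONS = {
        "/thingy1/go": program_change(0, 10),    # OSC button -> scene 10
        ("note_on", 60): program_change(0, 11),  # middle C -> scene 11
    }

    def translate(trigger):
        """Return the dLive MIDI bytes for a trigger, or None if unmapped."""
        return TRANSLATIONS.get(trigger)
    ```

    The point is just that the mapping is dumb and declarative: all the intelligence lives in which bytes you put on the right-hand side.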

    Good luck!


    Big +1 here


    Yes, exactly. You need to set a “MIDI device” and then tell that device to look to the TCP driver for input. See attached.


    FWIW, I’m on Mojave and run the TCP MIDI driver on *several* machines, and I’ve had zero issues to date.


    I absolutely *love* Live Professor. You don’t need DAW control, though; you need TCP MIDI up and running. Once you’re good with that connection, in LP go to Audio/MIDI Settings > MIDI, add a MIDI device, and tell it to look to TCP MIDI for input. …And that should be all you need to get MIDI from dLive to LP. …Then you can start mapping. 🙂




    How is that remotely helpful? The point of TB and TB groups is that I have a quick way to swap between them, and that the audio is only active when I push the talkback button. If you patch the socket, that’s a bunch of always-on noise from FOH. Nosir, that is absolutely stupid.
