V2.0

    #122986
    MichaelStelz
    Participant

    To me it’s strange that the new vocal effects have such a high latency. All the major auto-tune-style software has a latency of at most 3ms.

    All RackUltra effects incur a transport latency of 0.1875ms.

    In addition:
    Plate Reverb and Spaces have zero processing latency.
    Amb+Cab distortion and Saturator have 0.0104ms of processing latency, but the wet/dry path when inserted is compensated to avoid any phasing issue.

    With Harmonisers, Tuner, Gridder and Shifter, the processing latency varies (this is common across tuning/shifting FX or plugins). Some of these FX use the ARM core, which also adds a block latency of around 10ms.

    Total latency for Vocal Shifter and Harmonisers is 16-26ms (block + processing + transport).
    Total latency for Vocal Gridder is 21-32ms.
    Total latency for Vocal Tuner is 6-11ms.
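
    If it helps, here is a minimal Python sketch of how those totals decompose as block + processing + transport. Only the 0.1875ms transport figure, the ~10ms ARM block figure and the published totals are from this thread; the per-effect processing ranges (and the guess that the Tuner skips the ARM block) are back-solved assumptions, not official numbers.

    # Rough composition of the totals above: total = block + processing + transport.
    TRANSPORT_MS = 0.1875   # transport latency common to all RackUltra FX
    ARM_BLOCK_MS = 10.0     # approximate block latency when the ARM core is used

    # name: (assumed ARM block used?, assumed processing latency range in ms)
    effects = {
        "Vocal Tuner":   (False, (5.8, 10.8)),   # -> ~6-11 ms total
        "Vocal Shifter": (True,  (5.8, 15.8)),   # -> ~16-26 ms total
        "Vocal Gridder": (True,  (10.8, 21.8)),  # -> ~21-32 ms total
    }

    for name, (uses_arm, (lo, hi)) in effects.items():
        block = ARM_BLOCK_MS if uses_arm else 0.0
        total_lo = block + lo + TRANSPORT_MS
        total_hi = block + hi + TRANSPORT_MS
        print(f"{name}: {total_lo:.1f}-{total_hi:.1f} ms (block + processing + transport)")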

    #122988
    Nicola A&H
    Keymaster

    FYI a first issue of the V2.0 Firmware Reference Guide is available on our website, together with updated Release Notes.
    More details on RackUltra FX will be added to the guide in the next few days.

    Resources

    #122991
    Brian
    Participant

    To me it’s strange that the new vocal effects have such a high latency. All the major auto-tune-style software has a latency of at most 3ms.

    All RackUltra effects incur a transport latency of 0.1875ms.

    First, I feel like you are under the impression that the original FX on dLive don’t have any latency. That is not true, as many of the original FX actually have latency that is not included in the “0.7ms latency compensated” time of the console. The latency on some of these original FX is actually longer than the latency of some of the new UltraFX, so the idea that the new plugins have “such high latency” is flawed. It also shouldn’t be surprising that adding a second processing chip to the system increased the “transport latency” for these new plugins. After all, the data has to go from the original processing chip, out to the new RackUltra FX chip, and back again. That data movement takes time (0.1875ms to be exact).
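
    As a quick sanity check on that transport figure (my own arithmetic, not an A&H spec): at the console’s 96kHz sample rate, 0.1875ms works out to an exact whole number of samples, which is what you would expect from a fixed chip-to-chip round trip.

    # 0.1875 ms expressed in samples at dLive's 96 kHz sample rate
    SAMPLE_RATE_HZ = 96_000
    transport_ms = 0.1875
    samples = transport_ms / 1000 * SAMPLE_RATE_HZ
    print(f"{transport_ms} ms at {SAMPLE_RATE_HZ} Hz = {samples:g} samples")  # -> 18 samples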

    Second, I don’t think you are including any transport latency when you say that “All the major auto-tune-style software has a latency of at most 3ms”. Unless we are talking about the Waves SoundGrid Server system, which has wicked low transport latency, any other plugin method is going to include transport latency that is even higher than the RackUltra FX transport latency. So when A&H says the total latency for vocal tuning is 6-11ms, that is honestly in line with most plugin solutions outside of Waves SoundGrid Servers.

    Finally, I don’t think you will find that the latency times on the new vocal FX actually matter in real life usage. That’s because you should never be sending “tuned” or “harmonized” vocals back through the monitors to an artist and no one in the audience is going to notice the latency on these effects. Long story short, it’s not a problem that will appear in actual normal usage IMHO.
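
    To put those numbers in perspective (back-of-envelope arithmetic only, using ~343m/s for the speed of sound at room temperature), here is each worst-case total from earlier in the thread expressed as an equivalent acoustic distance:

    # Latency expressed as "how far away would the loudspeaker have to be"
    SPEED_OF_SOUND_M_S = 343.0

    for label, latency_ms in [("Vocal Tuner (worst case)", 11),
                              ("Harmonisers/Shifter (worst case)", 26),
                              ("Vocal Gridder (worst case)", 32)]:
        metres = SPEED_OF_SOUND_M_S * latency_ms / 1000
        print(f"{label}: {latency_ms} ms ≈ {metres:.1f} m of acoustic path")

    Even the longest of those is roughly the delay you get standing ten metres or so from a speaker, which nobody in an audience is going to pick out.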

    For more information on the latency of the DLive system, please look at this article from A&H. https://support.allen-heath.com/hc/en-gb/articles/25863491742353-Dyn8-and-FX-latency-dLive-Avantis

    #123000
    MichaelStelz
    Participant

    Hey Brian,

    thanks for stepping into the conversation about this 🙂

    First:
    I’m aware of all the latency happening in the dLive system. I also measured different things in Smaart to confirm the official numbers.

    Second:
    Why should I include ADDA conversion latency when I compare plugin latencies, since all the processing happens inside a digital environment, just like with the RackUltra FX card? The main competitors in tuning speed and quality are Waves and UAD. Also, Waves can now host any VST3 plugin (I really like Slate Digital’s MetaTune, which has 2.9ms of latency – Waves Tune Real-Time and Antares Auto-Tune are still faster than that).

    So finally, I don’t agree with your point. Of course many singers nowadays like to hear their voice tuned in their in-ears! That’s common practice in the studio world and people are really asking for it! But with a latency of at least 6ms it is not an option to do so in the monitor world. So my big question is: why didn’t Allen & Heath take the chance and reduce the need for third-party solutions? It should be doable with ARM chips.
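
    For context, here is a minimal sketch of the monitor path I have in mind. The 0.7ms console figure and the 6-11ms Vocal Tuner figure are taken from this thread; the ADDA conversion figures are placeholder assumptions, not measurements.

    # Hypothetical mic-to-in-ear latency budget; ADDA figures are assumptions.
    path_ms = {
        "mic preamp + ADC (assumed)": 0.5,
        "console processing (latency compensated)": 0.7,
        "Vocal Tuner (best .. worst)": (6.0, 11.0),
        "DAC to IEM pack (assumed)": 0.5,
    }

    lo = sum(v[0] if isinstance(v, tuple) else v for v in path_ms.values())
    hi = sum(v[1] if isinstance(v, tuple) else v for v in path_ms.values())
    print(f"Estimated voice-to-ear latency: {lo:.1f}-{hi:.1f} ms")  # ~7.7-12.7 ms

    The rule of thumb I usually hear for comfortable in-ear monitoring is only a few milliseconds end to end, which is exactly why the 6ms floor is the sticking point for me.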

    #123001
    Ark Worship
    Participant

    Where can I buy this new RackUltra FX card? We are in Russia and we currently have many difficulties, but it is possible to buy equipment in Turkey or the United Arab Emirates. Are there any dealers there?

    #123003
    Brian
    Participant

    Why should I include ADDA conversion latency when I compare plugin latencies

    Why are you confusing ADDA conversion latency with total plugin latency times? No one is talking about ADDA conversion time by itself – although if your plugin chain required an ADDA conversion, then that time would matter to the overall total plugin latency time.

    Talking about the latency time of the plugin by itself (without factoring in the total transport latency time as well) is pointless in real-life use. You can’t use any plugins without utilizing some sort of transport mechanism as well, so comparing only plugin latency times without considering the total latency (which is the transport latency plus the plugin latency) is a waste of time. It doesn’t matter if the plugins are hosted natively on the console or externally on other hardware – every method has some transport latency that must be accounted for along with the actual plugin latency.

    #123008
    MichaelStelz
    Participant

    No one is talking about ADDA conversion time by itself

    got it now 👍🏼
    I just made the assumption that you had also added the latency for ADDA conversion in your transport calculation, because without it I can’t imagine anyone arguing that the latency of the A&H Vocal Tuner is anywhere near competitive on the market! But ok, I guess we just have different use cases 🙂

    #123053
    BWF
    Participant

    I, for one, am really excited about all the new features in V2.0 and the RackUltra FX.

    However, with only two insert points available, it’s going to be even more difficult picking the plugins you want to use, especially if you would like to use Dyn8, Vocal Tuner and something like the de-esser together.

    It would be really nice to have more insert points available, even if some of them were only for the RackUltra FX.

    #123116
    Dave
    Participant

    Of course many singers nowadays like to hear their voice tuned in their in-ears!

    Seriously? Why? They wouldn’t be able to tell what pitch they’re singing at if it’s being tuned all the time.

    #123117
    MichaelStelz
    Participant

    Seriously? Why? They wouldn’t be able to tell what pitch they’re singing at if it’s being tuned all the time.

    It gives confidence, and depending on the settings vocal tuning can be quite subtle and practically inaudible. Many bands do it successfully, both live and in the studio.

    especially if you would like to use Dyn8, Vocal Tuner and something like the de-esser together.

    I would be really careful using EQ before vocal tuning, since the phase shift introduced by EQing can really mess up the tuner’s pitch detection. So in that case I would just use Dyn8 after the Vocal Tuner. Also, I prefer de-essing with Dyn8 instead of the dedicated plugin.
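
    If anyone wants to see the kind of phase shift I mean, here is a rough Python/scipy illustration using a textbook (RBJ cookbook) peaking biquad. The filter settings are arbitrary examples, and whether a given amount of phase shift actually upsets the Vocal Tuner is a question only A&H can answer; this just shows that a simple bell EQ is not phase-neutral around a vocal fundamental.

    # Phase response of a +6 dB peaking EQ around 250 Hz at 96 kHz (illustrative only)
    import numpy as np
    from scipy.signal import freqz

    fs = 96_000.0                       # dLive sample rate
    f0, gain_db, q = 250.0, 6.0, 1.4    # example bell near a vocal fundamental

    # RBJ cookbook peaking-EQ biquad coefficients
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

    freqs = np.array([150.0, 200.0, 250.0, 300.0, 400.0])
    _, h = freqz(b, a, worN=freqs, fs=fs)
    for f, ph in zip(freqs, np.degrees(np.angle(h))):
        print(f"{f:6.0f} Hz: phase shift {ph:+5.1f} degrees")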

    #123148
    Jasen
    Participant

    I have not installed 2.0 yet, but I did pull down MixPad to a spare iPad to check out the new Scenes capabilities.

    It looks like you can move between ALL the scenes in a show, not the current cue list. Is that correct?

    If so, that is suboptimal for my particular use case. We have dozens of scenes, but only use a subset of them for each performance (cue list), which are the scenes we would want to move between.

    #125096
    Bob Briessinck
    Participant

    100% agree.
