Separate channel delay on stereo inputs

This topic contains 12 replies, has 5 voices, and was last updated by Wolfgang 5 years, 11 months ago.

    #69711
    chrismock
    Participant

    Hi there.
    I would love to see a way to set different delays within a stereo input channel. Right now, for a fader with two inputs, it’s only possible to set a delay for the whole stereo channel, not for each signal within it.

    I hope I explained it right; my English skills are limited 😉

    Cheers
    Chris

    #69729
    Wolfgang
    Participant

    I don’t speak English well, but I understood it anyway, because I have a good translation program 😉

    If you want different delays on the two inputs, don’t create a stereo channel; work with two mono channels and gang them instead. That’s how it works.

    #69730
    chrismock
    Participant

    Hi Wolfgang,
    I know, and that’s what I do now. But I think it would be great to have the same possibility using just one fader. Maybe that’s only me 😉

    #69744
    Wolfgang
    Participant

    Depending on how you set up the ganging, you can control both channels with a single fader on the surface 😉

    #70818
    Jens-Droessler
    Participant

    I’d like to see that on EVERY channel, even the mono ones. Some people like to work with delays instead of, or along with, the usual pan, which makes sense in many situations. dLive could become one of the first mixing consoles, if not the first, to offer that option. Maybe even on the usual pan rotaries, selectable per channel: volume-based pan, delay-based pan, or a hybrid of both.
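
    For illustration only, the per-channel pan-mode idea could look roughly like this (a minimal Python sketch, not anything dLive actually offers; the 0.7 ms maximum delay and the constant-power gain law are assumptions):

        import math

        MAX_PAN_DELAY_MS = 0.7  # assumed ceiling, roughly one inter-aural time difference

        def pan(position, mode="volume"):
            """position: -1.0 (hard left) .. +1.0 (hard right).
            Returns (gain_left, gain_right, delay_left_ms, delay_right_ms)."""
            # constant-power gain pan (-3 dB in the centre)
            angle = (position + 1.0) * math.pi / 4.0
            gain_l, gain_r = math.cos(angle), math.sin(angle)

            # delay pan: delay the far side so the earlier side sets the image
            delay = abs(position) * MAX_PAN_DELAY_MS
            delay_l = delay if position > 0 else 0.0
            delay_r = delay if position < 0 else 0.0

            if mode == "volume":
                return gain_l, gain_r, 0.0, 0.0
            if mode == "delay":
                return 1.0, 1.0, delay_l, delay_r
            if mode == "hybrid":
                # split the work: half the level difference plus half the delay
                angle = (position / 2.0 + 1.0) * math.pi / 4.0
                return math.cos(angle), math.sin(angle), delay_l / 2.0, delay_r / 2.0
            raise ValueError(f"unknown pan mode: {mode}")

    The hybrid mode simply splits the job between a gentler level offset and a shorter delay; the exact split would of course be a design choice.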

    #70823
    chrismock
    Participant

    Absolutely! I didn’t think of it that way, but this would be brilliant!

    #70829
    Wolfgang
    Participant

    That’s quite an interesting idea.

    #70885
    SteffenR
    Participant

    Yes, please, let’s destroy the phase-coherent mixes…

    #70893
    Jens-Droessler
    Participant

    What does this have to do with the coherence of the mix? As long as you keep those signals stereo, there is no problem. I know people who already do this, live as well as in the studio, and it is actually how binaural hearing works. Of course you can’t sum the stereo to mono afterwards, and it should not be used on signals going to the subwoofers, but that’s for the experienced user to know.
    Also, nobody forces you to use the option if it’s available.

    #70894
    chrismock
    Participant

    Correct. I see it used more and more by engineers on big productions, and I’ve even started using it in smaller venues, on guitars for example.
    It works much better for the audience and helps you save energy in your mix. I’m all for it!

    #70907
    Mark Oakley
    Participant

    Are you describing the “Haas” effect? See:

    Creating depth: The Haas effect


    https://en.wikipedia.org/wiki/Precedence_effect

    -Mark

    #70916
    Jens-Droessler
    Participant

    Well, it is actually the use of the Haas effect to create stereo imaging. The Haas effect describes under what circumstances a second, identical signal, delayed in time, will be interpreted as belonging to the first incidence of that signal. What I mean is using this effect in a way similar to what happens in “natural” stereo: sound coming from, let’s say, 45° to the right does not arrive only at your right ear, but at your left ear too. Say the difference in path length from the source to each ear is 25 cm, and the sound source is 10 m away. There is not much difference in SPL between the left ear and the right ear, because 25 cm is negligible at 10 m from the source, so that can’t be how we locate sounds. There is some “shadowing” going on, because certain frequencies travelling to the left ear are blocked by the head; that’s part of it. The other part is that 25 cm difference in the time the sound takes to arrive at the left ear, PLUS (if present) reflections of the sound off walls, arriving much later, which fill in the “shadowed” frequencies the left ear doesn’t get directly. As said, the Haas effect describes how all of this summed-up sound is still recognized as ONE sound from one source.

    The interesting part for us in live sound is to use this to place a sound in the stereo image instead of using volume differences. The why is easy: stereo only works for people in certain positions between the mid/high speakers. People standing far to the right will only hear the right speakers. If I pan one guitar mostly to the left by reducing its volume on the right speakers, people on the right side won’t hear it well. But if I use a delay to create the stereo image of the guitar coming from the left, I still get full or only slightly reduced volume on the right, so people hear both guitars everywhere AND people in the middle still get a decent stereo image.
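
    For concreteness, the numbers in the post work out roughly as follows (a small Python check; the 25 cm path difference and 10 m distance are the figures quoted above, and 343 m/s is taken as the speed of sound):

        import math

        SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 °C

        # extra path length to the far ear, as quoted in the post
        path_difference_m = 0.25
        itd_ms = path_difference_m / SPEED_OF_SOUND * 1000.0
        print(f"inter-aural time difference: {itd_ms:.2f} ms")                   # about 0.73 ms

        # level difference caused by the extra 25 cm at 10 m from the source
        near_m, far_m = 10.0, 10.25
        level_diff_db = 20.0 * math.log10(far_m / near_m)
        print(f"level difference from distance alone: {level_diff_db:.2f} dB")   # about 0.21 dB

    So at 10 m the level cue is a fraction of a decibel while the timing cue is most of a millisecond, which is why the delay, not the level, carries the localisation.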

    #70978
    Wolfgang
    Participant

    Jens is absolutely right, and the knowledge behind this is actually decades old.
    At the moment you can do this on any digital console that allows delay on the inputs, by simply placing the signal on two channels and then delaying one of them.
    A solution that is already integrated into the PAN control would be a dream 😉
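
    As a rough illustration of that two-channel workaround in code (a minimal NumPy sketch of the signal flow, nothing console-specific; the 0.5 ms figure in the example is just an assumption):

        import numpy as np

        def haas_pan(mono, sample_rate, delay_ms, pan_right=True):
            """Duplicate a mono signal to left/right and delay one side by a few
            milliseconds; the earlier side sets the perceived direction."""
            delay_samples = int(round(delay_ms * 1e-3 * sample_rate))
            delayed = np.concatenate([np.zeros(delay_samples), mono])
            padded = np.concatenate([mono, np.zeros(delay_samples)])
            # delaying the left channel pulls the image to the right, and vice versa
            left, right = (delayed, padded) if pan_right else (padded, delayed)
            return np.stack([left, right], axis=0)

        # example: pull a guitar towards the left with ~0.5 ms of delay on the right
        # stereo = haas_pan(guitar_mono, 48000, delay_ms=0.5, pan_right=False)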
