Michael G Wagner
  • Videos: 231
  • Views: 973,798
Sound Particles InDelay: The Swiss Army Knife of Delay Plugins?
In this video, I review Sound Particles' InDelay plugin, showcasing its innovative features for both stereo and immersive audio production, while also discussing its strengths and potential areas for improvement.
This video has proper subtitles in English and German.
Transcript available on Patreon: www.patreon.com/MichaelGWagner
► Join our free Discord community: discord.spaceforaudio.com
► Subscribe to my secondary channel: www.youtube.com/@MichaelsSecondChannel
► Become a Patreon of this channel (free option available): www.patreon.com/MichaelGWagner
► Get the "In Immersion We Find Presence" merch (channel members get 15% off): shop.spaceforaudio.com
Affiliate links:
► Check out my channel gea...
Views: 405

Videos

Pro Tools Dolby Atmos Tutorial, Part 2: Working With the External Renderer
Views: 292 · 1 day ago
This video provides a comprehensive guide on using the external Dolby Atmos renderer with Pro Tools, covering two different methods of implementation, setup processes, exporting, and working with re-renders. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner ► Join our free Discord community: discord.spaceforaudio.com ► Subscri...
Pro Tools Dolby Atmos Tutorial, Part 1: From Stereo to 3D Sound in Minutes
Views: 952 · 14 days ago
This tutorial provides a comprehensive walkthrough on setting up and using the internal Dolby Atmos renderer in Pro Tools, covering everything from basic configuration to advanced features like track presets, live re-renders, and working with master files. Edit: Studio One and Logic can also import ADM master files. This video has proper subtitles in English and German. Transcript available on ...
Digital Media Expert Reacts: The AI Music Lawsuit That Could Change Everything
Views: 1.4K · 21 days ago
My reaction to the RIAA's copyright infringement lawsuit against AI music generation services. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner ► Join our free Discord community: discord.spaceforaudio.com ► Subscribe to my secondary channel: www.youtube.com/@MichaelsSecondChannel ► Become a Patreon of this channel (free optio...
Dolby Atmos Secrets: Why This Powerful Audio Tool Might Be Overkill for You
Views: 876 · 1 month ago
In this video, we explore AudioMovers' Omnibus 3 and its integration with Dolby Atmos and binaural rendering, revealing why this powerful tool might be overkill for many users' audio workflows. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner ► Join our free Discord community: discord.spaceforaudio.com ► Subscribe to my secon...
Top 20 DAWs for Dolby Atmos: Ultimate Tier List 2024 (Pro Tools vs. Logic vs. Cubase)
Views: 1.9K · 1 month ago
In this video, I compare and rank 20 popular Digital Audio Workstations based on their Dolby Atmos capabilities, providing insights for both professionals and hobbyists looking to dive into spatial audio production. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner ► Join our free Discord community: discord.spaceforaudio.com ►...
Finally! A New Approach to Dolby Atmos Mastering
Views: 3K · 1 month ago
In this video, we explore the Fiedler Audio Mastering Console for Dolby Atmos, a standalone tool that allows users to easily remaster Dolby Atmos master files using the Gravitas dynamics processing module for tasks such as adding sidechain compression without the need for a digital audio workstation. This video has proper subtitles in English and German. Transcript available on Patreon: www.pat...
Lumina Delay: The Best Immersive Delay Plugin?
Views: 1.1K · 1 month ago
In this video, we explore the features and quirks of Lumina Delay, an affordable and intuitive multi-tap delay plugin from Mountain Road DSP that can handle immersive audio formats up to 7.1.4. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner ► Join our free Discord community: discord.spaceforaudio.com ► Subscribe to my secon...
The Dolby Atmos Maschine 2: More Immersive Maschine Madness
Views: 403 · 2 months ago
In this video, we explore the integration of Fiedler Audio's SpaceLab and Gravitas plugins with our Dolby Atmos Maschine setup, creating immersive reverbs and dynamic sidechaining effects that showcase the endless creative possibilities of this powerful combination. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner Affiliate l...
Let's Create a Dolby Atmos Maschine
Views: 1.9K · 2 months ago
In this video, we explore the surprisingly easy and cost-effective process of producing Dolby Atmos within the Native Instruments Maschine ecosystem using the Fiedler Audio Dolby Atmos Composer plugin. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagner Affiliate links to products mentioned in this video: ► NI Maschine at Amazon...
Tough Comments on My Take on AI in the Creative Disciplines
Views: 485 · 2 months ago
In this video, I address viewer comments and questions about the impact of generative AI on creative disciplines, discussing philosophical debates, potential socio-economic consequences, technical aspects of working with AI-generated music, and the misconception that AI is stealing from human artists. This video has proper subtitles in English and German. Transcript available on Patreon: www.pa...
Sound Effects 101: Boom Library, ElevenLabs, or something else?
Views: 566 · 2 months ago
In this video, we explore five different ways to source sound effects for your game projects, including creating them yourself, using royalty-free or commercial sound libraries, leveraging sound effects software, and the emerging trend of using generative artificial intelligence. This video has proper subtitles in English and German. Transcript available on Patreon: www.patreon.com/MichaelGWagn...
Multichannel Side-Chaining With Gravitas
Views: 513 · 2 months ago
Fiedler Audio introduced a new compressor plugin called Gravitas. The dynamics processor can be used as a stereo or multichannel plugin as well as an insert for the Dolby Atmos Composer. In this video we dive into one of the most unique features of Gravitas: multichannel dynamics processing with multichannel sidechaining. This video has proper subtitles in English, German and Hindi. Transcript ...
Is This the Ultimate Dynamics Processing Tool for Dolby Atmos? (Gravitas Review)
Views: 1.5K · 3 months ago
Fiedler Audio introduced a new compressor plugin called Gravitas. The dynamics processor can be used as a stereo or multichannel plugin as well as an insert for the Dolby Atmos Composer. In conjunction with the Dolby Atmos Composer it adds master bus processing to Dolby Atmos projects. In this video, we look at the different use cases and check out the functionality of Gravitas. This video has ...
My 10 Favorite Plugins for Remastering Udio Tracks
Views: 866 · 3 months ago
Udio.com shook up the music production world. But it is not perfect. Follow along in this tutorial on the plugins I consider essential for remastering Udio tracks. Transcript available on Patreon: www.patreon.com/MichaelGWagner Listen to the final track on my second channel: ► Sands of Time: ruclips.net/video/fP5QLTGkq8k/видео.html ► Subscribe to my second channel: www.youtube.com/@MichaelsSec...
Dolby Atmos Composer One-Year Anniversary: A Conversation With Fiedler Audio
Views: 14K · 3 months ago
Two Spatial Audio Features Of Mixbus 10 That Nobody Is Talking About
Views: 693 · 3 months ago
Apple Spatial Audio Monitoring in Mixbus 10
Views: 688 · 3 months ago
Getting Started with Dolby Atmos in Mixbus 10
Views: 2.9K · 4 months ago
How About Apple Spatial Audio on Windows?
Views: 737 · 4 months ago
Two New Ways of Rendering Apple Spatial Binaural Audio in Your DAW
Views: 1.1K · 4 months ago
MiniDSP Flex: Perfect Sound Through Digital Room Correction?
Views: 5K · 4 months ago
A Review of the Zylia ZM-1 3rd Order Ambisonics Mic
Views: 885 · 5 months ago
Let's Talk About Suno.ai! (Digital Dreamer Edit)
Views: 401 · 5 months ago
Getting Started With Sony's 360 Reality Audio and the 360 WalkMix Creator
Views: 1K · 5 months ago
The One Microphone That Can Do It All? A Review of the Voyage Audio Spatial Mic
Views: 1.2K · 5 months ago
New VST Plugin Alert! Astrometry Delay and SideRack
Views: 813 · 5 months ago
An Educator's Perspective on the Use of Artificial Intelligence in the Creative Industries
Views: 634 · 5 months ago
The Future of AI Based Stem Separation: A Chat About Ultimate Vocal Remover
Views: 599 · 6 months ago
Dolby Atmos Multichannel Madness with Ableton Live and DearVR Pro 2
Views: 1.6K · 6 months ago

Comments

  • @alejandroacosta1128
    @alejandroacosta1128 16 hours ago

    I have problems with the profiling part: I get disconnected every time I press the start button in Unity. Does anyone know how I can solve this problem?

  • @kayhellmich8837
    @kayhellmich8837 1 day ago

    This is amazing! Thank you, thank you, thank you for this video! I'm a Bitwig user; I switched from Ableton Live 8 to BW because the PDC didn't work. I'm currently working on an immersive multimedia live concert for 2025 (the University of Graz will probably be involved too, by the way) and I'm hacking it together in BW with the Grid, FX sends, tons of modulators, etc., including self-built 3D panners and so on. It works, but with the result that the BW GUI now only refreshes at 3 fps. The BW Grid offers no way to realize bus encoding like M4L does, which is why the mixer no longer works in a 3D sound project. It's a nightmare. By contrast, the method presented here with the two-channel encoding and decoding is simply ingenious. I'm going to try it all out now, and if it really works, I'll be back on Ableton. You've done me a huge favor here!! Many thanks!

    • @michaelgwagner
      @michaelgwagner 1 day ago

      😂 Yes, don't underestimate Ableton. Thanks for the kind words!

    • @kayhellmich8837
      @kayhellmich8837 3 hours ago

      @@michaelgwagner Hi Michael, a few more questions related to this video to understand the workflow: As far as I understood, E4L and its derivatives work in a way that they encode and pass signals along to other instances of themselves, i.e. an encoder sends the audio directly to the decoder. So routing happens outside the regular workflow, correct? How does that affect mixing/gain staging/leveling, especially as I assume that everything happens pre-fader? Can I still use the mixer's faders to control the volume of individual tracks, or do I need to do that in a different way? Does the mute button work? Just thinking of live scenarios where I want to fade in certain tracks; I usually do this via the Mackie protocol, which controls faders and mute buttons. Would that still work? Many thanks for your help!

  • @polarsoundproductions
    @polarsoundproductions 3 days ago

    Such a helpful tutorial! Thank you for your hard work on this. Can't wait to check out your next videos!

  • @electreelife
    @electreelife 6 days ago

    You didn't mention Audio Brewers.

  • @immerseaudio
    @immerseaudio 6 days ago

    That's a really cool delay. I'm surprised they didn't implement the low/high-pass filtering as well. I love Sound Particles, but I really feel they should work on more methods of coloring and effecting the particles in different ways, timbre-wise.

    • @michaelgwagner
      @michaelgwagner 5 days ago

      Totally agree! But that should be easy for them to fix.

  • @ImaplanetJupiteeeerr
    @ImaplanetJupiteeeerr 8 days ago

    The end really caught my ear: both Ambisonics and Atmos can also be used in game design, but in ways other than Wwise etc. How are they different, how can they still be used for game audio, what are their limitations, and why? Thank you for these awesome videos! 💯

    • @michaelgwagner
      @michaelgwagner 8 days ago

      Thanks! Those are big questions. I might do a video.

    • @ImaplanetJupiteeeerr
      @ImaplanetJupiteeeerr 8 days ago

      @@michaelgwagner That's what I was thinking; these questions might be a good idea/theme for a video! :)

  • @matthewpattersoncurry8795
    @matthewpattersoncurry8795 8 days ago

    Excited to hear your take on Sound Particles' InDelay

  • @ImaplanetJupiteeeerr
    @ImaplanetJupiteeeerr 8 days ago

    Hey Michael, thanks for the video! Have there been any developments in this field in the last few years? What would you recommend these days? :) (I use Reaper and Bitwig)

    • @michaelgwagner
      @michaelgwagner 8 days ago

      The Ambisonics space is fairly stable. A couple of plugins from Audio Brewers came out. And you can now also do Ambisonics in DaVinci Resolve.

  • @tobiasjone
    @tobiasjone 10 days ago

    Thanks Arnold

  • @Banjoikey
    @Banjoikey 10 days ago

    Thanks so much for this informative and well-reasoned video. A very great deal for the courts to consider. I hope they will seek expert witnesses who have the type of understanding of AI and AI training that you have. Fascinating that it appears that in training the AI "just listens" and of course in that way it does not take copies. The longer I study copyright law, the more convinced I become that it is not fit for purpose in our rapidly changing world. I am more minded to agree with the late Jean-Luc Godard, who said: "It's not where you take things from - it's where you take them to."

    • @michaelgwagner
      @michaelgwagner 10 days ago

      Love the quote! Yes, copyright law needs to be updated. US copyright law had a long run, btw. Many European countries already had to update theirs at least once during the rise of the home computer, because their laws would not cover software.

  • @robertjones9598
    @robertjones9598 11 days ago

    Michael, what are the chances of something similar but which allows third-party plug-ins to be loaded onto the "Dolby Atmos master bus"?

    • @michaelgwagner
      @michaelgwagner 11 days ago

      Unlikely, because these third-party plugins would need to be able to handle 128 channels simultaneously.

    • @robertjones9598
      @robertjones9598 11 days ago

      @@michaelgwagner Thank you once again for the fast response! That makes total sense. I’m actually not ready to master anything yet though, so I will watch for further developments… I already bought the renderer before the Fiedler stuff came along. 🤦‍♂️

  • @electreelife
    @electreelife 13 days ago

    Hi Michael, I always come back to your videos after watching manufacturers' tutorials, and then it fully clicks… thank you. I would like to learn how to use Audio Brewers' abverb in conjunction with the Fiedler Audio DA Composer… hopefully then I won't need to pan my reverbs… is that possible? I haven't bought abverb and tested it because I'm not sure it will work. I'm using Fiedler, and I'm not a fan of their reverb workflow, tbh.

    • @michaelgwagner
      @michaelgwagner 12 days ago

      Thanks! It depends a bit on the DAW you are using, but generally yes, you can use Audio Brewers with the DAC as long as your DAW is multichannel-capable.

  • @SamHocking
    @SamHocking 13 days ago

    Michael, you're not quite correct re. Windows. You can't use the Dolby Panner plugin or the Dolby Audio Bridge with the Windows DAR because Dolby doesn't release them for Windows, but the Pro Tools Atmos panner and the DAR work fine through ASIO, JACK ASIO, MADI, and Dante. You do need Ultimate, though, because Pro Tools is limited to 64 I/O through ASIO in the Studio version.

  • @dfhm-pq2cf
    @dfhm-pq2cf 13 days ago

    So you can't bounce an MP4 directly from Pro Tools?

    • @michaelgwagner
      @michaelgwagner 13 days ago

      As far as I know, no, unfortunately not.

  • @dracau1176
    @dracau1176 16 days ago

    Incredibly useful, thanks a lot!

  • @manuelgiasson-leger3693
    @manuelgiasson-leger3693 17 days ago

    I have tried with the Dolby Atmos Composer Essential and I don't have what you have in your video. Can you do a video with the Dolby Atmos Composer Essential and Dolby Atmos Beam Essential?

    • @michaelgwagner
      @michaelgwagner 17 days ago

      The video was made before the Essentials version got released. But most things should work identically. I might do an Essentials video if there is broader interest.

    • @manuelgiasson-leger3693
      @manuelgiasson-leger3693 17 days ago

      @@michaelgwagner Cool, thank you.

  • @TheJaswant82
    @TheJaswant82 17 days ago

    Good information

  • @wesjones7090
    @wesjones7090 17 days ago

    It's pretty insane that Ableton decided not to pursue multichannel support for Live 12 after years spent on the update, while Dolby Atmos exploded as a household format in that stretch of time. That said, your workarounds are much appreciated. Question on the area around 25 min in your video here: what happens without the Bidule plugin? Would those channels just get trimmed out of the signal, or would they fold in? For example, would the .6 just sum to .2? I simply want to exploit the M4L multichannel support without encoding or panning, using the Dolby Atmos Renderer and the Dolby Panner per my usual workflow within Live, but am reluctant to spend yet another $100 on a plugin which you admit crashes regularly to solve the 9.1.6-to-7.1.2 dilemma shown here. Thanks much!

    • @michaelgwagner
      @michaelgwagner 17 days ago

      Have you tried Kushview Element in lieu of Bidule? It's free and should work in the latest version.

  • @TeeCee-qq4ev
    @TeeCee-qq4ev 18 days ago

    The end-user of these models has the same responsibility not to infringe as they ever did. I've done about 50 songs with A.I. In 48 cases it worked as it should have, as they claimed. But 2 cases sounded damned close to famous voices. One case, because I loved the song so much, I put in a DAW and changed the pitch of the voice so it didn't sound like the artist. The other case I don't intend to try to monetize in any shape or form, but I will give it away for free because it's a good song with a much-heralded singer. I'll just use another version on the album.

  • @dfhm-pq2cf
    @dfhm-pq2cf 18 days ago

    What are the benefits of using the external renderer? Are all the features in the “internal renderer”?

    • @michaelgwagner
      @michaelgwagner 18 days ago

      Not everything is in the internal renderer. I have a video about the external renderer coming up this week. Make sure to subscribe. ;)

  • @misterringer
    @misterringer 19 days ago

    Have you tried the Dsoniq Realphones plugin? It's another mixroom-style plugin, and I've really enjoyed using it. They don't have an HD 490 profile yet, but they work with a ton of headphones, and you can load custom EQ profiles or just use the environment functions without EQ.

  • @user-ty9ho4ct4k
    @user-ty9ho4ct4k 19 days ago

    Why is the LFE omitted from the 2.0 mix? I actually prefer the binaural mix with the binauralization settings all set to off more than the 2.0 mix, even on monitors.

    • @michaelgwagner
      @michaelgwagner 19 days ago

      The LFE is not to be confused with a subwoofer channel. It is used for low frequency effects, primarily in film and TV productions. It is not included in the 2.0 mix by default following Dolby specs.

    • @user-ty9ho4ct4k
      @user-ty9ho4ct4k 19 days ago

      @@michaelgwagner I understand the difference. I also understand the concept of bass management in consumer audio systems. I still don't see why the LFE signal couldn't be folded into a 2.0 mix. I have made stereo masters from binaural re-renders (with effects set to off) and they have a huge low end. My only thought is that not all consumer stereo systems can reproduce those frequencies.
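
For readers wondering what this fold-down looks like in practice, here is a minimal Python sketch of a conventional Lo/Ro-style 5.1-to-stereo downmix. The 0.707 (about -3 dB) coefficients follow common Dolby/ITU practice, but the exact gains and channel order here are illustrative assumptions, not a renderer specification; note how the LFE never enters the sum.

```python
import numpy as np

def fold_down_51_to_20(l, r, c, lfe, ls, rs):
    """Fold a 5.1 signal (one sample array per channel) down to 2.0.

    Center and surrounds are attenuated by ~3 dB (x 0.707); the LFE is
    intentionally omitted, matching the default behavior described above.
    """
    lo = l + 0.707 * c + 0.707 * ls
    ro = r + 0.707 * c + 0.707 * rs
    return lo, ro  # `lfe` is deliberately unused

# Example: pure LFE content vanishes from the stereo fold-down entirely.
silence = np.zeros(48000)
rumble = np.sin(2 * np.pi * 40 * np.arange(48000) / 48000)  # 40 Hz tone
lo, ro = fold_down_51_to_20(silence, silence, silence, rumble, silence, silence)
print(abs(lo).max(), abs(ro).max())  # 0.0 0.0
```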

  • @matthewpattersoncurry8795
    @matthewpattersoncurry8795 20 days ago

    Thanks for the run-through on the PT built-in renderer. Is it possible to still use the Dolby panner in addition, but with the built-in renderer? Asking since I do love the Dolby panner's sequencer for easily adding movement without having to write automation.

    • @michaelgwagner
      @michaelgwagner 20 days ago

      I don't think so, but I am not 100% sure tbh.

  • @shawnyvibes
    @shawnyvibes 20 days ago

    Thank you so much, great video!

  • @Beniemacaulay
    @Beniemacaulay 20 days ago

    Amazing video as always. Unless you didn't want to, you did, however, forget to set the monitoring format to binaural. You can do that easily with the monitoring settings button at the top right of the Renderer window. Cheers!

    • @michaelgwagner
      @michaelgwagner 20 days ago

      Great point. I usually do not set the monitoring to binaural. It's a personal thing.

    • @Beniemacaulay
      @Beniemacaulay 20 days ago

      @@michaelgwagner Alright, thought so. I've learned so much from your Dolby Atmos tutorials, btw. Thank you so much.

    • @michaelgwagner
      @michaelgwagner 20 days ago

      @Beniemacaulay You’re very welcome! 😊

  • @GhostWriter_Music
    @GhostWriter_Music 21 days ago

    They could have gotten musicians to feed in training data; I bet a lot of musicians would have. Now as for these labels, all they are interested in is creating music without paying the big artists. I think a few artists will end up using AI to generate music with lyrics in certain styles, and then reproduce the music with real instruments and a real singer, etc.

  • @Only4YGuitars
    @Only4YGuitars 21 days ago

    A very nice review, Michael! I have a question: do you know if the unit can work without Dirac Live? I've already invested a lot of time in REW/RePhase, so I would prefer not to change the calibration software.

  • @Nathankaye
    @Nathankaye 22 days ago

    Cool mastering software. Currently I use Hornet SAMP as a mastering plugin for Dolby Atmos. I see in the comments on the Fiedler Audio video that they have an EQ module coming soon. Is the module version of Gravitas included for free with Mastering Console, or do you have to purchase Gravitas MDS separately for that function?

  • @solarion33
    @solarion33 23 days ago

    It seems like a better deal than Nugen Upmix, as you get a downmixer to any format included (binaural too), and it does HOA. But the main audience seems to be people who need to convert from various formats to Atmos.

    • @michaelgwagner
      @michaelgwagner 23 days ago

      It does not really upmix like Nugen. It’s more straightforward format conversion.

  • @pokepress
    @pokepress 23 days ago

    One thing you didn't mention is that apparently Suno was in negotiations with one or more labels when the suits were filed, and that apparently the AI companies are not opposed to discovery. This suggests that either: the AI companies are bluffing on the discovery aspect, or the communications with the record labels contain information that the AI companies think puts the labels in a bad light (it could be a wide variety of things: comments about users, pricing, artists, etc.).

    • @michaelgwagner
      @michaelgwagner 23 days ago

      I don’t think they can oppose discovery. Imho, the lack of legal rules favors AI companies.

  • @live360studio
    @live360studio 23 days ago

    Thanks for such good info! We just bought it! Thanks for the plugin info, because the Zylia site does not show it clearly.

  • @peterbulyaki
    @peterbulyaki 23 days ago

    Hi, in your Audio MIDI Setup you have speaker layouts I have never seen in my macOS; I can only select 7.1.4, that is the maximum. Did you have to install extra software to enable these layouts? Or are they part of macOS, and I have to do something to enable them? I have audio devices with 60+ outputs, so I have enough outputs, yet I still can't get these options. (Never mind, I figured it out: you have Ventura in the video, which did have these settings, but in Sonoma there is only 7.1.4, and no one knows why Apple removed the rest.)

    • @michaelgwagner
      @michaelgwagner 23 days ago

      They took them out in newer OS versions.

  • @pokepress
    @pokepress 23 days ago

    I appreciate your open-minded approach to the situation. I have a few things to add: it's worth noting that certain elements might be too small to merit copyright, even if the output mimics them very closely; I could see producer tags falling into that category, or being considered a trademark rather than a copyright issue. Any discussion of music AI training needs to consider what's happening in other forms of expression; the artist lawsuits against image generators seem to be trending in the image generators' favor, which could make the record labels' cases more difficult. And sound generators aren't and won't be exclusively made by companies with well-defined structures; the optics will change drastically if the record labels eventually start taking individuals or small groups of people to court.

  • @davidchapdelaine
    @davidchapdelaine 23 days ago

    Thanks so much for these tutorials! As someone who is just getting into game sound implementation, is there a way around adding custom code? I have never used any sort of scripting before. Is that knowledge generally required to put sound into a game? Or is there different software that is more straightforward? I am following line by line what is done here and it's working, but I don't really understand why or what I am doing. Thanks!

    • @michaelgwagner
      @michaelgwagner 23 days ago

      Thanks! You can’t avoid adding code completely, unfortunately.

    • @davidchapdelaine
      @davidchapdelaine 22 days ago

      @@michaelgwagner Appreciate the reply! Would you have any resources you recommend for wrapping my head around that skill set more?

    • @michaelgwagner
      @michaelgwagner 22 days ago

      The Unity learning resources are usually really good.

  • @jj.vargasmusic5019
    @jj.vargasmusic5019 24 days ago

    Great video. Thanks! But what if I need many beds with different FX on them? What if I need, for instance, a bed for the reverb to route some objects to, and then another bed with a delay to route other objects to, but don't necessarily want to put the two FX on the same bed? Can I make multiple beds for a more complex mix?

    • @michaelgwagner
      @michaelgwagner 24 days ago

      Afaik the Dolby specs allow multiple beds, but this is usually not implemented. What you can do is use so-called object beds, where you recreate a bed with the help of one object per speaker location.

    • @jj.vargasmusic5019
      @jj.vargasmusic5019 23 days ago

      @@michaelgwagner Great! Thanks for the response. I will certainly look into how to set up object-beds in Cubase... Or maybe you already did a video about that? I will search. Anyway, thanks for responding. :)

  • @top10sandthings
    @top10sandthings 24 days ago

    Think of this... an idiot copies an Elvis song word for word, like all of the people trying to show that music generation and AI are so evil... those are people similar to idiots who grab rocks and use them instead of updating, trying to blame the technology because they don't understand hammers. Even if you say something like auto-generate... the AI and music is so dumb it likes NEON and other silly phrases. Thousands of AI sites and people are using the tool of technology to make amazing things... but just like artists and all humans, they want to go try to copy and get as many experiences, like listening to the radio, and they don't pay any artist for it. The record label is greedy.

  • @user-iw6yx9bf6s
    @user-iw6yx9bf6s 25 days ago

    Hey there, you're talking about a Dolby Atmos decoder... but is it hardware or an app?

  • @rolandropnack4370
    @rolandropnack4370 25 days ago

    Step back two or three steps and you gain the perspective that the choice of words surrounding AI blinds the eye to some very basic facts. AI is not an autonomous entity that secretly asks itself "will I dream?". It is software. A tool, like a gun or a car. The wielder of the tool is responsible for whatever damage the tool does. The AI doesn't infringe on copyrights; it's the owner who didn't check its output. The question of how the AI did it is as irrelevant as the question of what made the neighbour's mastiff break into my yard and kill all the chickens. Also, AI doesn't listen to music. The computer on which the software runs reads the data and analyses the values. That means it downloads it into its RAM. In the end, the method of recreation is irrelevant. You can recreate something by storing and rereading it bit for bit, or you can establish a procedure that computes the values of the bitmap anew. In the end you have copied the bitmap nonetheless. So the big question is not: what happened in the process? But: what came out? Is it too similar?

    • @michaelgwagner
      @michaelgwagner 25 days ago

      I agree completely, with the one exception that whether or not AI "listens" depends on the definition of listening. Yes, it has to be in RAM for the AI to be able to process it, but whether that constitutes "storing" in a copyright-infringement-relevant sense is up for debate. I am using the term "listening" for that purpose in order to avoid using the term "storing".

  • @KasasagiWad3
    @KasasagiWad3 25 days ago

    AI always takes the easy path when training, maximizing "rewards", and if that means overfitting parameters to data, it will do it. This is why making good generalized AI is hard: you need absolutely massive data sets so the risk of overfitting goes down, or you need to spend countless VERY expensive training sessions patching up whatever shortcuts the AI finds.

    • @KasasagiWad3
      @KasasagiWad3 25 days ago

      An IRL example of overfitting is using very few media sources for inspiration, which is a recognized step in becoming an artist. So it's similar to (but not exactly the same as) sampling, and still likely a very grey, case-by-case thing.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      That is a very interesting point!

  • @ultimatums1337
    @ultimatums1337 25 days ago

    So unless they played the music and listened to it via a mic, you kind of always end up reading data from a download. Even streaming buffers data on your system, storing it until it is read and later discarded, so you're almost certainly downloading and then reading the data into the AI.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      Sure, but whether that is considered "storing" in a copyright-infringement-relevant sense is something the judicial system needs to figure out.

  • @pihi42
    @pihi42 25 days ago

    This war with AI on all fronts is just stupid. AI is just a tool to quickly get to results, like any other tool. Are we going to ban an AI tool just because it is deemed "too effective" by the people doing the work manually right now? Just imagine if some big accounting company had sued IBM for using "AI" to replace the manual "innovative" work of adding up numbers in the 1950s. We wouldn't have the internet today, probably. Law salad aside (it's just irrelevant to the point and philosophy of this), every living musician learned from a large body of other people's work, and didn't have the copyright for all the stuff they learned from. Listening to average music, it's all the same: copies of copies of copies of something good written a long time ago. The same thing these LLMs do in a couple of seconds. Who has the copyright is of course a large money grab and has nothing to do with morality, philosophy, common sense, or logic. Just a big industry trying to survive and burning money on lawyers. And of course, from now on, every musician on the planet who now claims AI is "stealing the work" is going to use the same AI tools to write their "copyrighted" music, together with the labels themselves, who are just trying to cut the "authors" out of the money altogether. Well, breaking news: in the most money-producing music today there are no real "authors" left, perhaps a few; it's just a bloody industry producing hits like sausages in a factory. The real music you can find if you dig, or go to a local pub, is not going to be affected by AI at all. The real creative stuff is only going to shine more in the sea of AI-generated crap.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      I probably would not formulate it that way, but in essence you are right imho.

  • @Jeffcrocodile
    @Jeffcrocodile 26 days ago

    Training AI should never be subject to barriers, any more than I have to pay anyone if I train myself by listening to some record. That's just beyond stupid. If they are stealing music, voices, or screenplays, in part or in total, to make their own pieces, then yes, go after them. People have lost sight of reasonable thinking.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      I agree.

    • @paulhiggins5165
      @paulhiggins5165 24 days ago

      What you're missing here is that the AIs in question are being trained to directly compete with the artists whose work is being used to train them. And in order to use those artists' work, the AI developers first had to make copies of that work. So the scenario is this: I use your work to build a machine that then competes directly with your work. This cannot be justified as 'fair use', either legally or morally.

    • @michaelgwagner
      @michaelgwagner 24 days ago

      We don’t know if they made copies. Technically it is not necessary for training.

    • @paulhiggins5165
      @paulhiggins5165 24 days ago

      @@michaelgwagner Is that really true? How likely is it that zero work was done on the training data prior to use? "You start with collecting data from various sources like databases, spreadsheets, or APIs. Next, you clean the data by removing or correcting missing values, outliers, or inconsistencies. Then, you transform the data through processes like normalization and encoding to make it compatible with machine learning algorithms. Finally, you reduce the data's complexity without losing the information it can provide to the machine learning model, often using techniques like dimensionality reduction. Preparing data is a continuous process rather than a one-time task. As your model evolves or as you acquire new data, you'll need to revisit and refine your data preparation steps." Could all this be achieved without ever once making a copy of the source data in order to perform operations on it? It seems unlikely. But I am certainly no expert, so I can't say for sure.

  • @e-frame5344
    @e-frame5344 26 days ago

    That sounds like storing the data in a different format, nothing else.

    • @paulhiggins5165
      @paulhiggins5165 25 days ago

      I think it's the copying of the data itself for a commercial purpose that's the problem, but this is not really clear. It's interesting that the 'fair use' defence, which seems to be the one the AI developers are counting on, is a double-edged blade, because to invoke it does amount to an admission that you have made use of other people's IP.

    • @e-frame5344
      @e-frame5344 25 days ago

      @@paulhiggins5165 Tell me something: if I record someone's music and then store it in a new file format, did I copy it?

    • @michaelgwagner
      @michaelgwagner 25 days ago

      It really isn't.

    • @bornach
      @bornach 25 days ago

      @@michaelgwagner What if I made a deep neural network guess the coefficients of a discrete cosine transform (the DCT is used in many audio codecs) and adjusted its weights and biases using stochastic gradient descent (the training algorithm used by every AI tech company) until it was able to generate a nearly indistinguishable audio replica of a well-known producer tag? How is this different from MP3-transcoding an audio sample?

    • @paulhiggins5165
      @paulhiggins5165 25 days ago

      @@e-frame5344 If you record someone's music, then you make a copy of it; that is just a literal fact. Making a further copy in a new format is just an extension of that fact. I am old enough to remember people copying songs off the radio onto cassette tapes; technically this was illegal. Just as ripping CDs was (is?) illegal. Technically, making any unauthorised copy of a commercially sold music track is a violation of copyright, in my understanding. Fair use allows for this in specific circumstances, but using copies to create competing commercial products is not one of those circumstances. I am puzzled as to why the AI developers are so confident that they are covered by fair use, given the fact that they clearly are creating a product that competes with the artists whose work they have taken. But I admit that I am no copyright lawyer, so there may be aspects of fair use that I don't understand.
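
The DCT thought experiment above is easy to make concrete. The toy sketch below (purely illustrative; it claims nothing about how any commercial music model actually works) uses plain gradient descent to fit a vector of DCT coefficients until they reproduce one fixed target clip. Functionally, the "training" is just a lossy re-encoding of that clip, which is the memorization-as-copying scenario discussed in this thread.

```python
import numpy as np

N = 256
n = np.arange(N)
# A fixed "clip" standing in for, say, a producer tag.
target = np.sin(2 * np.pi * 5 * n / N) + 0.5 * np.sin(2 * np.pi * 12 * n / N)

# Orthonormal DCT-II basis: row k samples cos(pi * (i + 0.5) * k / N).
basis = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N).T
basis[0] /= np.sqrt(2.0)

coeffs = np.zeros(N)              # the "weights" being trained
for _ in range(2000):             # gradient descent on mean squared error
    residual = coeffs @ basis - target
    coeffs -= 0.5 * (2.0 / N) * (residual @ basis.T)

print("reconstruction MSE:", np.mean((coeffs @ basis - target) ** 2))
```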

  • @Raptorman0909
    @Raptorman0909 26 days ago

    By my observation, the industry that's been most affected by AI is voiceover acting. I've heard what appear to be AI-generated voices that sound SO MUCH like a real person I've heard. I do believe that many actresses and actors, as well as other people with notable voices, have had their voices used to train AI, such that the real people no longer own their voice and all the money generated from their voice goes to the AI company. Scarlett Johansson had her voice stolen even after she told them no.

    • @michaelgwagner
      @michaelgwagner 26 days ago

      They used a different voice actress who happened to sound similar to Scarlett Johansson to train the voice model. I always found that a very unfair situation for that other actress. It is not her fault that her voice sounds similar to somebody with power in the industry.

    • @pokepress
      @pokepress 23 days ago

      @@michaelgwagner Imagine what it would be like if you happened to be related to a famous person. 😉

  • @mrmikis
    @mrmikis 26 days ago

    Not all music is owned by record labels, and as an artist, I don't want my music training AI models. But if Suno and Udio just got given a login to Spotify, then I think my copyright has been breached. I don't want labels to act on my behalf, as it has been proven many times that they look after their own interests first…. Always…..

    • @michaelgwagner
      @michaelgwagner 26 days ago

      Good point. If your music is strictly behind a paywall, it should not have been used for training. But the vast majority of music is freely accessible through services like YouTube Music. That's where it gets a lot more complicated.

  • @vjrei
    @vjrei 26 days ago

    AI should be washing dishes and doing laundry, so we can create in peace.

    • @michaelgwagner
      @michaelgwagner 26 days ago

      :)

    • @c0unt_WAVnstein
      @c0unt_WAVnstein 26 days ago

      Or better still, it should be doing the things we can't do, like analysing how genes network or how proteins fold, and the things humans can do, like driving cars or making audio recordings, should be left to us.

    • @vylbird8014
      @vylbird8014 26 days ago

      That would just be a different sort of disaster. We don't currently have an economic framework that could handle such a situation. Remember that you don't really have a right to food, or water, or shelter, or clothing. You have a right to buy these things. If AI actually could perform as advertised (it can't, yet), it would lead to mass unemployment - entire sectors would disappear overnight. Millions of people suddenly jobless, unable to support themselves unless they turn to crime, and very, very angry.

    • @vjrei
      @vjrei 25 days ago

      @@vylbird8014 That is the idea, depopulation.

    • @vylbird8014
      @vylbird8014 25 days ago

      @@vjrei I imagine the AI-ified future as a farm. The robots grow vast amounts of food... which then gets crushed and burned by more robots, because no one can afford to buy it. While automatic gun turrets hold back the starving, penniless mob.

  • @PanAthen
    @PanAthen 26 days ago

    The main defensive argument from the side of the AI companies is that AI is trained like a young musician who learns songs and later plays similar songs, so it's not like sampling and therefore falls under fair use. This is bogus, to say the least. A young musician has to pass through years of learning and training and eventually will create some songs, while an AI computing system can be trained much faster, retain and access a vast amount of information instantly, and spit out thousands of songs per second. So, any judge and jury with common sense should understand that we are talking about two different things; we cannot say that a musician and an AI system are the same thing. It's like a country that develops nuclear weapons saying that what happens inside a nuclear weapon is essentially what happens when we light a candle, just in a more efficient, powerful, and innovative way, and that therefore prohibiting the development of nuclear weapons is like prohibiting Prometheus from giving fire to humans. As I said, it's a bogus argument, to say the least. The AI companies' lawyers also know what they're doing with their counterargument that the music industry is trying to monopolize creativity. They hit the nerves of new musicians and creatives who have not fully developed into professionals, taking advantage of populism and invoking a sense of innovative creativity that covers their true motive: staying in business without regulations. Any advanced technology that shifts the economy so rapidly must be regulated. All those AI companies that advocate "ethical" AI are nothing more than companies for profit, and that's how we should see them; that's their true image. I'm all for new tech, and I love what we can achieve with innovations like GenAI, but we should regulate it and do it right for the future of human creativity.

    • @michaelgwagner
      @michaelgwagner 26 days ago

      I am not sure the fair use argument these companies make is a good one. It assumes that there is no commercial interest in the end product, which there clearly is here. Additionally, fair use is a US concept. It does not exist in other parts of the world, including Europe. The problem with regulating emerging technology is that you do not know what exactly you are supposed to regulate. The result of premature regulation is unintended consequences that can drive entire economies out of competition.

    • @PanAthen
      @PanAthen 26 days ago

      Yes, that is also true about the fair use argument. Regarding regulating emerging technology, I would say that at a minimum they should regulate the use of material without consent early on, which is what this legal battle is about. Don't forget that many research spinoff companies started making GenAI years ago but tried to make it work using restricted datasets, because they wanted to do it within their legal rights, and they failed to provide a widespread solution and achieve wide public adoption. So the companies that finally did it outside their legal rights already drove the older, more ethical companies out of the competition by not adhering to the rules. In the end, it's the difference between asking permission to do something questionable, or just doing it and preferring to say "I'm sorry" if it is later proven wrong. AI is a powerful tool that will bring us closer to our wildest dreams, and I'm very happy that we now see it in GenAI, but massive-scale changes in any economic sector that seem beneficial to the lay public can yield catastrophic effects in the long run, even for their creators. Regulation from proper official bodies that work for the public good (non-profits, governments, public organizations, etc.) is something that is needed in the free market of the for-profit sector, so innovative companies don't become the proverbial beasts with no head. Overdue regulation will only try to solve a problem that we didn't care about early enough. What are we going to do? Are we going to lay off thousands of people, not care about copyrights, train those people for other jobs and skills, and then say: hey, we should have done something from the beginning, let's limit it now and get all those people back to their old jobs? In the end, these are for-profit companies; there is no reason we should let them test-drive new technologies in the real world without regulating them, at least on the basics that our society already has in place for the economic system. I believe that copyrighted material should be restricted from GenAI for now, and we'll see later.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      @PanAthen That is a somewhat romantic position, because it assumes that everybody follows those regulations. As long as we don't have something like a Federation of Planets that controls everything, regulation will be relatively meaningless. OpenAI restricted access to Sora for fear of unethical usage. The result was that Kling, a Chinese model, took over leadership in the video AI space.

    • @PanAthen
      @PanAthen 25 days ago

      @@michaelgwagner I never said there is one federation, and I wouldn't agree with describing my position as 'romantic'. We might not have a 'Federation of Planets', as you say, but we do have communication between countries and organization at a very high level. Nuclear weapons are regulated, bio-weapons are forbidden, heroin is illegal, and in the COVID pandemic we agreed to rules that benefited all. Countries work together for the greater good all the time. In a sense, we do have a Federation of Planets if you think about it: it's called the United Nations. I'm only saying that people who don't profit from GenAI should regulate GenAI, and that's going to happen eventually. I find it offensive when the CEO of OpenAI, a private company, has the audacity to claim that when AI evolves enough to take away creative jobs, governments should be prepared to pay us salaries for up to ten years until the economy balances out and we don't have to work. First of all, he takes advantage of the classic position that the government should have a solution for problems introduced by the private sector, and second, he assumes that people work only because they are forced to, not because they are doing what they love. Doing what you love is a valid path in the pursuit of happiness, and creative work is a great antidote to the human condition. People don't do well when idle. I'm aware of the reality and the human factor; I would say that governments or high-level organizations that are non-profit and look out for the greater good should take action, and probably will, to balance things out. GenAI is a great tool, but we should apply it properly.

  • @QFXmusic
    @QFXmusic 26 days ago

    Nice video, Michael, very interesting.

  • @coloradoing9172
    @coloradoing9172 26 days ago

    Hello, I found a different way of doing this for free with Opentrack and "Head Tracker OSC Bridge." Basically, you use the OSC output in Opentrack and then use the bridge to convert the quat data that Opentrack outputs into whatever format you want. No need for the OSCII bridge or any scripting, and there are lots of presets in the OSC bridge program. This probably also works if you use your phone to do the head tracking, as you can just send the data directly to the OSC bridge. Still, I think it's best to use Opentrack and do the tracking on your computer for minimal latency. That's all.

    • @michaelgwagner
      @michaelgwagner 25 days ago

      That's interesting! I need to look into this. I haven't touched Opentrack yet.
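
As a rough sketch of what such a bridge does internally, the Python below receives quaternion head-tracking data over OSC, converts it to yaw/pitch/roll, and forwards the angles to a renderer. It assumes the python-osc package, and the OSC addresses and ports are hypothetical placeholders; check what your tracker and renderer actually send and expect.

```python
import math

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Forward converted angles to the renderer's OSC input (port is an assumption).
client = SimpleUDPClient("127.0.0.1", 9001)

def quat_to_ypr(w, x, y, z):
    """Convert a unit quaternion to yaw/pitch/roll in degrees (ZYX convention)."""
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    return [math.degrees(a) for a in (yaw, pitch, roll)]

def on_quaternion(address, w, x, y, z):
    # "/ypr" is a hypothetical target address; check your renderer's OSC spec.
    client.send_message("/ypr", quat_to_ypr(w, x, y, z))

dispatcher = Dispatcher()
dispatcher.map("/opentrack/quaternion", on_quaternion)  # assumed source address
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```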

  • @Dave-rd6sp
    @Dave-rd6sp 26 days ago

    I don't think the producer tags are overfitting. If something appears many times in the training data, then the model will consider it a normal part of music, no different than, say, a triangle ding. It doesn't know that it's a sort of signature; it just knows it's popular in certain music. A generative AI model *should* copy things if those things are a core part of what it's generating. Imagine a world in which *every* piece of music started with the same tone. Generative AI should reproduce that tone if it's generating music. Since it doesn't know what a producer tag is, the producer tag is no different. This is fairly easily solved by explicitly training it on what a producer tag is and then adding it as a negative prompt during generation, or just filtering it out of outputs manually.

    • @nullid1492
      @nullid1492 26 days ago

      Theoretically, the model should learn patterns that occur in music in general. The accurate reproduction of these tags (which are essentially arbitrary) suggests that the model contains patterns from specific samples in the training data. This is also known as overfitting. A human composing in a particular style will not copy the tags, because they are not an intrinsic part of that style.

    • @michaelgwagner
      @michaelgwagner 26 days ago

      Agree with @nullid1492 here.

    • @Dave-rd6sp
      @Dave-rd6sp 26 days ago

      @@nullid1492 Some level of reproducibility is required; otherwise it would never be able to reproduce standard instruments at standard pitches, and thus be unable to make music. And it doesn't know what it should and should not be reproducing. If it sees something often enough, it'll be able to reproduce it, because it has no concept of what it is and isn't supposed to reproduce.

    • @nullid1492
      @nullid1492 26 days ago

      @@Dave-rd6sp Much like how a text-based model outputs tokens instead of letters, I would strongly guess that the music models output notes rather than waveforms. This saves a lot of time and difficulty in training, as reliably getting the same style of notes is very difficult with a neural network (as you have pointed out). The fact that the model doesn't know what it should be reproducing is the entire point. The model needs to 'learn' which patterns are applicable. Rather than reproducing arbitrary decisions from a specific piece of training data, a model should produce a unique piece based on patterns generally found in the genre. To determine whether a model is overfitted, one can test it using data that was not in the training set (usually called validation data). If the model's performance is significantly worse on this validation data than on the training data, then the model is overfitted. Reducing overfitting is more of an art than a science and requires increasing the training data and tweaking the training parameters.
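
The train-versus-validation check described above is simple to demonstrate. In this minimal sketch, a polynomial fit stands in for a generative model and all data is synthetic: the over-flexible model drives its training error down while its error on held-out points blows up, which is the overfitting signature.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.size)  # noisy ground truth
x_train, y_train = x[:30], y[:30]                       # training data
x_val, y_val = x[30:], y[30:]                           # held-out validation data

for degree in (3, 15):  # a modest model vs. an over-flexible one
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, validation MSE {val_mse:.4f}")
```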