Amazon's Deep Composer

Any music gear discussions that don't fit into one of the other forums.

Moderators: Kent, Joe., analogdigital, infradead, lisa, parasitk, plord

artieTwelve
Common Wiggler
Posts: 100
Joined: Fri Sep 01, 2017 10:43 am
Location: Baltimore

Amazon's Deep Composer

Post by artieTwelve » Mon Dec 02, 2019 9:30 pm

I watched the video and I'm not very impressed. It's an interesting start, and not something I would have expected from Amazon. It kind of feels like early GarageBand. It would be far more interesting if the hardware had CV in/out, but at $99, I doubt it does.

And that's kind of a bummer because the people who put this together have to have some knowledge of the home brew synth side of the world. Eh, maybe in version 2.0

https://aws.amazon.com/deepcomposer/

User avatar
Orgia Mode
Common Wiggler
Posts: 106
Joined: Mon Nov 04, 2019 5:24 pm

Post by Orgia Mode » Mon Dec 02, 2019 9:44 pm

I don't do Amazon.

User avatar
hlprmnky
Common Wiggler
Posts: 132
Joined: Mon Jul 02, 2018 9:51 pm
Location: Indiana, USA

Post by hlprmnky » Mon Dec 02, 2019 10:15 pm

Amazon (at least AWS) is what I do for work - it is Not Permitted to be part of my musicking time.

User avatar
nangu
Super Deluxe Wiggler
Posts: 1638
Joined: Thu Aug 04, 2011 1:07 am
Location: near Chicago

Post by nangu » Mon Dec 02, 2019 10:27 pm

I read an Engadget article that has more info..
The keyboard itself costs $99, making it -- at least initially -- the most affordable of Amazon's three machine learning tools. Developers can also skip buying the keyboard entirely and instead use the included virtual keyboard. However, actually using the keyboard for experimentation and developing new musical genres could become expensive quickly. After a free three-month trial, Amazon will bill DeepComposer users $1.26 per hour of training usage and $2.14 per hour of inference usage.

artieTwelve
Common Wiggler
Posts: 100
Joined: Fri Sep 01, 2017 10:43 am
Location: Baltimore

Post by artieTwelve » Mon Dec 02, 2019 10:48 pm

nangu wrote:I read an Engadget article that has more info..
The keyboard itself costs $99, making it -- at least initially -- the most affordable of Amazon's three machine learning tools. Developers can also skip buying the keyboard entirely and instead use the included virtual keyboard. However, actually using the keyboard for experimentation and developing new musical genres could become expensive quickly. After a free three-month trial, Amazon will bill DeepComposer users $1.26 per hour of training usage and $2.14 per hour of inference usage.
True that. No matter how much they try to mitigate the cost, training an AI model is computationally expensive, and therefore, wallet expensive. It will be interesting to see how much adoption this gets.
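For a rough sense of the scale involved, here's a back-of-envelope sketch using the per-hour rates quoted from the Engadget article. The usage hours are made-up assumptions for illustration, not anything Amazon publishes:

```python
# Back-of-envelope DeepComposer cost estimate, using the hourly rates
# quoted above ($1.26/h training, $2.14/h inference after the trial).
# The usage figures below are hypothetical.
TRAINING_RATE = 1.26   # USD per hour of training
INFERENCE_RATE = 2.14  # USD per hour of inference

def monthly_cost(training_hours: float, inference_hours: float) -> float:
    """Estimated monthly bill once the free three-month trial ends."""
    return training_hours * TRAINING_RATE + inference_hours * INFERENCE_RATE

# Hypothetical hobbyist: 10 h of training and 5 h of noodling per month.
print(f"${monthly_cost(10, 5):.2f}")  # -> $23.30
```

Not ruinous at hobbyist hours, but "experimentation and developing new musical genres" adds up fast at those rates.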

The closest thing to this is Google's Project Magenta. They came out with hardware device specs and a few fab shops created the PC board. I bought one but when I asked a few questions about component substitution on the Magenta forum I was met with... crickets. It seems almost no one built these things. The forum is still active, but it's all about the models.

Google, IMHO, was willing to invest in the research and software, just not the hardware. Or maybe that was never the point. Maybe the hardware was just a gimmick. Amazon, on the other hand, seems willing to put the money into hardware and give a somewhat easier on ramp to creators.

Look, I'm no fan of Amazon, but they like to play the long game and they (cough... Prime) usually win. Let's see how much support this gets down the road.

User avatar
Gribs
Super Deluxe Wiggler
Posts: 1295
Joined: Sun Jan 10, 2010 12:19 am
Location: San Ramon, CA

Post by Gribs » Mon Dec 02, 2019 11:58 pm

hlprmnky wrote:Amazon (at least AWS) is what I do for work - it is Not Permitted to be part of my musicking time.
Me too. I work at Lab126 in hardware engineering in the display-optics-touch area of Architecture and Technology. The job is challenging and interesting but it is also draining. I do a lot of computational work (computational optics) and anything that has to do with code or scripting feels like work. Also, my mind is just spent by the end of the day. I worked in R&D at 3M before this job; it was the same there.
----------------------------------------

cs1729
Common Wiggler
Posts: 161
Joined: Fri Oct 07, 2011 5:39 pm

Post by cs1729 » Tue Dec 03, 2019 7:58 am

looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of users' data without paying them for it.

p_shoulder
Learning to Wiggle
Posts: 50
Joined: Mon Oct 16, 2017 9:33 am

Post by p_shoulder » Tue Dec 03, 2019 8:59 am

artieTwelve wrote:True that. No matter how much they try to mitigate the cost, training an AI model is computationally expensive, and therefore, wallet expensive. It will be interesting to see how much adoption this gets.
"Machine learning" is one of those Silicon Valley buzzword-bingo terms that tends to fall apart on closer analysis. There are applications where it works well, but machine learning is currently being (cough) "over-fitted" into too many applications where it honestly makes no sense.

Most AI tools are essentially pattern recognition at their core. They can be great for that. For music, though, I have been thoroughly unimpressed with every machine-learning algorithmic music attempt I've heard so far. Sure, generative music can do some nifty ambient noodlings, and if you need infinite ambient noodlings, great. But AI has no concept of "narrativium" or human interaction (DJs and bar bands can respond to the crowd vibe; do you think AI has a chance here?). Training for basic musical structure sounds like it would take a while to get even passable results, and also be a waste of time. You could probably use that time to practice, say, scales, chords, and fingering, to *much* greater effect.

Instead of paying up the wazoo to The Amazon Empire, it makes more sense to me to train your *own* "pattern-recognition engine" (e.g. your brain) by actually listening to a lot of tunes, reading a little theory, playing out, etc.

User avatar
JES
Veteran Wiggler
Posts: 551
Joined: Tue Jun 18, 2013 10:03 pm
Location: Montreal

Post by JES » Tue Dec 03, 2019 9:33 am

Also there's the whole question of why a person wants to make music.

I have yet to hear AI-generated music I like. But even if I did, I would still rather listen to music made by people I can interact with one way or another.

But I don't think this is really about music. Amazon is promoting AWS with music. I clicked on the preview form and it's all about business uses of ML.

jorg
Wiggling with Experience
Posts: 383
Joined: Fri Apr 03, 2015 9:38 am
Location: East Coast USA

Post by jorg » Tue Dec 03, 2019 10:56 am

cs1729 wrote:looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of users' data without paying them for it.
Or better yet, "how can we get a pool of millions of users' data and bamboozle them into paying us for it"

User avatar
EPTC
Super Deluxe Wiggler
Posts: 1176
Joined: Thu May 14, 2015 9:01 am
Location: Austin TX
Contact:

Post by EPTC » Tue Dec 03, 2019 11:19 am

Laughing my genuine ass off at the shitshow of a song at 12:22


User avatar
Flounderguts
Common Wiggler
Posts: 152
Joined: Thu Dec 07, 2017 5:56 pm
Location: SLC

Post by Flounderguts » Tue Dec 03, 2019 12:25 pm

Amazon Decomposer. Compost for your brain.
----------------------

Flounderguts

User avatar
commodorejohn
Ultra Wiggler
Posts: 860
Joined: Fri May 03, 2013 4:19 pm
Location: Placerville, CA

Post by commodorejohn » Tue Dec 03, 2019 12:33 pm

p_shoulder wrote:Instead of paying up the wazoo to The Amazon Empire, it makes more sense to me to train your *own* "pattern-recognition-engine" (eg your brain) by actually listening to a lot of tunes and reading a little theory and playing out etc.
That's a bingo. Algorithmic music is a neat, interesting challenge from a programmer's standpoint, but as a commercial product? Pffft hahaha no.
Computers: Amiga 1200, DEC VAXStation 4000/60, DEC MicroPDP-11/73
Synthesizers: Roland JX-10/SH-09/MT-32/D-50, Yamaha DX7/V50/TX7/TG33/FB-01, Korg MS-20 Mini/ARP Odyssey/DW-8000, Ensoniq SQ-80

"'Legacy code' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup

onthebandwagon
Ultra Wiggler
Posts: 981
Joined: Mon Feb 11, 2019 9:53 am
Location: jersey

Post by onthebandwagon » Tue Dec 03, 2019 12:42 pm

Had to cancel my Prime...felt it damaging my soul, something I can’t afford to degrade any more...
“no matter how fine you grind the dead meat, you’ll not bring it to life again“

User avatar
BailyDread
Wiggling with Experience
Posts: 254
Joined: Tue Oct 23, 2018 12:55 pm

Post by BailyDread » Tue Dec 03, 2019 2:08 pm

p_shoulder wrote:But AI has no concept of "narrativium" or human interaction (DJs and bar bands can respond to the crowd vibe, do you think AI has a chance here?)
I'm skim-reading this thread at work, so this is possibly not relevant to the point you were making, but the only thing stopping an AI from doing this is that the tech doesn't yet exist to simultaneously monitor the nervous-system reactions of an entire crowd. If it could, it could hypothetically run second-by-second trial and error on which intervals, sounds, etc. caused the greatest pleasure response in the greatest number of people in the room, then do more of that and less of whatever the inverse was. Within no time at all, the crowd would have their vibe responded to and accounted for.

It would not be one-size-fits-all, since a crowd wanting something "heavy" would only show the neurological signature of pleasure if presented w/ something suitably "heavy", even if in their other listening they respond pleasurably to jazz or something. It's basically like having a taste for hot chocolate but being in the mood for coffee. A machine w/ suitably detailed readings of your nervous-system response could detect that you were in the mood for coffee over hot chocolate by measuring your physiological responses to both stimuli at a given moment. This could occur so subtly you would never notice, and possibly so effectively that you'd prefer the drink machine to the barista, or the robot DJ to the dude behind the booth.

The AI would not need any kind of "taste", it would merely have to have a suitable picture of what "crowd having a great time enjoying music that is personally suited to them" looks like, neurologically. That's not particularly hard to think up, hypothetically, so I don't see why this isn't right around the corner.
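The trial-and-error loop described above is essentially a multi-armed bandit problem. A toy epsilon-greedy sketch, where `crowd_response()` is a simulated stand-in for the hypothetical biometric pleasure reading (no such feed exists; the arm names and pleasure values are invented for illustration):

```python
import random

# Toy epsilon-greedy bandit: each "arm" is a musical choice, and
# crowd_response() fakes the second-by-second crowd pleasure reading
# imagined in the post above. All values here are made up.
random.seed(42)

ARMS = ["heavy riff", "four-on-the-floor", "ambient pad", "jazz chords"]
# Tonight's (simulated) crowd wants it heavy:
TRUE_PLEASURE = {"heavy riff": 0.8, "four-on-the-floor": 0.5,
                 "ambient pad": 0.3, "jazz chords": 0.4}

def crowd_response(arm: str) -> float:
    """Noisy simulated pleasure reading for one musical choice."""
    return TRUE_PLEASURE[arm] + random.gauss(0, 0.1)

def avg(arm: str) -> float:
    return totals[arm] / counts[arm] if counts[arm] else 0.0

counts = {a: 0 for a in ARMS}
totals = {a: 0.0 for a in ARMS}

for step in range(500):
    if random.random() < 0.1:      # explore: try something at random
        arm = random.choice(ARMS)
    else:                          # exploit: play what's worked best so far
        arm = max(ARMS, key=avg)
    totals[arm] += crowd_response(arm)
    counts[arm] += 1

best = max(ARMS, key=avg)
print(best)  # converges on "heavy riff" for this simulated crowd
```

The hard part isn't this loop; it's getting a real, trustworthy "pleasure signal" out of a room full of people, which is exactly the tech that doesn't exist yet.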

This is why considerations of "transgressive" and "challenging" art are important; they violate the logical form of art appreciation I've presented here, potentially giving a neurologically non-pleasurable response (at least initially) but nevertheless impacting the person taking it in. And so there may still be unturned stones in the arts that machines cannot touch, at least insofar as they can't figure out the weirder parts of human life.

I'm fairly certain that part of what allows transgressive art to go unaccounted for by AI is that it's psychologically unsound, as in mentally illogical. Such left turns might in fact be very difficult to account for within an AI-analyzing-human picture simply b/c it makes no damn sense to be drawn to something that legitimately disgusts you, for example... kind of in the way that economic models have a hard time accounting for people who are just completely bonkers about how they handle money. Maybe the equivalents within the arts are the only non-machine-learnable moves to make.

TL;DR: AI will be able to manipulate the emotions and beliefs of crowds of people very directly with an extreme degree of effectiveness, very, very soon

cheers! :deadbanana:

User avatar
dumbledog
Super Deluxe Wiggler
Posts: 1165
Joined: Sun Aug 16, 2015 10:10 pm

Post by dumbledog » Tue Dec 03, 2019 3:39 pm

What if you feed cloud music... into Clouds

User avatar
InsectInPixel
Common Wiggler
Posts: 171
Joined: Sat Jan 21, 2012 11:34 am
Location: Naples, Florida
Contact:

Post by InsectInPixel » Tue Dec 03, 2019 6:24 pm

It would be interesting to "train" it w/ music from current works by Autechre. Or, to be original, train it w/ a self-generating patch just to mess with it. I just found out about another AI-ish web app called DrumBot (drumbot.glitch.me). I haven't messed with it yet, but I'd like to see what type of rhythm it would generate with some bizarre arp or modular patch. Things are getting interesting out there.
I make UnderBridge for the Elektron Analog Keys. https://youtu.be/dowAKQkFAE0

artieTwelve
Common Wiggler
Posts: 100
Joined: Fri Sep 01, 2017 10:43 am
Location: Baltimore

Post by artieTwelve » Tue Dec 03, 2019 10:18 pm

InsectInPixel wrote:Things are getting interesting out there.
Yes. Yes they are. I'm going to wait out the beta period on this and then try it when it's opened up. I would like to see what it makes of a well-constructed Krell patch. Will it search forever looking for a pattern in the randomness? What will it find? A counterpoint no human could find? Or nothing? Probably nothing. Just surface noise.

But there is no doubt these models will get better. It's the future.

Les Paul took all kinds of shit about the electric guitar. Bob Moog caught the same when he was told he "destroyed western music". Is this on the same level? Hell, I don't know.

User avatar
(((EMP)))
Learning to Wiggle
Posts: 30
Joined: Sat Oct 19, 2019 9:38 pm

Post by (((EMP))) » Tue Dec 03, 2019 10:34 pm

jorg wrote:
cs1729 wrote:looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of users' data without paying them for it.
Or better yet, "how can we get a pool of millions of users' data and bamboozle them into paying us for it"
Definitely this. This is essentially what all social media does.

onthebandwagon
Ultra Wiggler
Posts: 981
Joined: Mon Feb 11, 2019 9:53 am
Location: jersey

Post by onthebandwagon » Tue Dec 03, 2019 10:34 pm

artieTwelve wrote:
InsectInPixel wrote:Things are getting interesting out there.
Yes. Yes they are. I'm going to wait out the beta period on this and then try it when it's opened up. I would like to see what it makes of a well constructed Krell patch. Will it search forever looking for a pattern in the randomness? What will it find? A counter point no human could find? Or nothing? Probably nothing. Just surface noise.

But there is no doubt these models will get better. It's the future.

Les Paul took all kinds of shit about the electric guitar. Bob Moog caught the same when he was told he "destroyed western music". Is this on the same level? Hell, I don't know.
How deep.
“no matter how fine you grind the dead meat, you’ll not bring it to life again“

User avatar
coolshirtdotjpg
Super Deluxe Wiggler
Posts: 1260
Joined: Wed May 06, 2015 4:13 pm
Location: Santa Cruz, California

Post by coolshirtdotjpg » Wed Dec 04, 2019 1:31 am

How long is it going to be until one of my friends opens up their code and tells me it's Korg Wavestation-esque technology, like the Google "neural net" synth?

edit: ok it's not a synth, never mind. Looks goofy anyway.
New album Innsport 86 available now:

https://repairerofreputations.bandcamp. ... nnsport-86

User avatar
VM
Common Wiggler
Posts: 168
Joined: Wed Apr 10, 2019 4:30 am

Post by VM » Wed Dec 04, 2019 2:04 am

More data-farm malware from one of the world's most ethically compromised companies. Can't wait.
Current curator of Perth Ambient.

Stides
Wiggling with Experience
Posts: 252
Joined: Fri Feb 08, 2013 11:26 pm
Location: Pittsburgh

Post by Stides » Wed Dec 04, 2019 1:55 pm

Haven’t had a chance to watch the video, but create 2 accounts and feed output to input. Machines training machines. Might try it, always eager to try to break things.

electricanada
Ultra Wiggler
Posts: 918
Joined: Wed Nov 29, 2017 12:26 am
Location: Norfolk, VA

Post by electricanada » Wed Dec 04, 2019 6:40 pm

Stides wrote:Haven’t had a chance to watch the video, but create 2 accounts and feed output to input. Machines training machines. Might try it, always eager to try to break things.
Now see, you try that and you’re gonna fuck around and build Skynet, and no one wants that.
Eléctrica (electric) Nāda (the yoga of sound).

p_shoulder
Learning to Wiggle
Posts: 50
Joined: Mon Oct 16, 2017 9:33 am

Post by p_shoulder » Wed Dec 04, 2019 7:21 pm

BailyDread wrote:The AI would not need any kind of "taste", it would merely have to have a suitable picture of what "crowd having a great time enjoying music that is personally suited to them" looks like, neurologically. That's not particularly hard to think up, hypothetically, so I don't see why this isn't right around the corner.
Music tends to be part of social identity, and I cannot come up with a single definition of "enjoy" (from a pattern-recognition perspective) that would fit everything. Screaming tweens at an idol-band show don't look the same as the mosh pit at a metal show, to say nothing of more localized traditions (like, say, a New Orleans second line). And that's not counting activities where participation counts, like drum circles or rap battles. Yes, experimental/transgressive art is a whole other ballpark.

This is a problem for "training" AI. A theoretical robot that trains on, say, generating K-pop is going to have a tough time at, say, a filk convention. (Even among humans, the number of artists who actually cross over to widely different genres is rather small.) So multiple models are probably going to be needed.

I consider myself as having a much higher than average understanding of musical genres, but even so, I know I'm nowhere near knowing everything. Even in the US, I'm sure I don't know more than a sliver of all the little "local traditions" that are around, let alone the whole frickin' world. So there's a huge risk of training bias here. This applies even to the actual generation of the music: my guess is that even if they get it "right", the models will probably (unless they are careful) fit the training profile of the typical AI-researcher demographic, and so not be so useful elsewhere. (Of course, since the AI made an awful attempt at a Jonathan Coulton song, which fits perfectly within the US IT demographic, we're a long way from even this consideration.)

In the end, I'm sure AI can come up with *something*. But I think there's a great chance it will end up in uncanny-valley territory and be socially useless. If they get it right, I'll concede that AI might eventually make sanitized global pop music that beautiful idols can do choreographed dances to. That type of thing isn't the best in music, though. Maybe a more narrowly focused tool designed for musicians to fool with, instead of one farting out a bad tune on command, actually could work too, for all I know...
