MUFF WIGGLER Forum Index
Amazon's Deep Composer
MUFF WIGGLER Forum Index -> General Gear Goto page 1, 2  Next [all]
artieTwelve
I watched the video and I'm not very impressed. It's an interesting start, and not something I would have expected from Amazon. It kind of feels like early GarageBand. It would be far more interesting if the hardware had CV in/out, but at $99, I doubt it does.

And that's kind of a bummer because the people who put this together have to have some knowledge of the home brew synth side of the world. Eh, maybe in version 2.0

https://aws.amazon.com/deepcomposer/
Orgia Mode
I don't do Amazon.
hlprmnky
Amazon (at least AWS) is what I do for work - it is Not Permitted to be part of my musicking time.
nangu
I read an Engadget article that has more info..

Quote:
The keyboard itself costs $99, making it -- at least initially -- the most affordable of Amazon's three machine learning tools. Developers can also skip buying the keyboard entirely and instead use the included virtual keyboard. However, actually using the keyboard for experimentation and developing new musical genres could become expensive quickly. After a free three-month trial, Amazon will bill DeepComposer users $1.26 per hour of training usage and $2.14 per hour of inference usage.
artieTwelve
nangu wrote:
I read an Engadget article that has more info..

Quote:
The keyboard itself costs $99, making it -- at least initially -- the most affordable of Amazon's three machine learning tools. Developers can also skip buying the keyboard entirely and instead use the included virtual keyboard. However, actually using the keyboard for experimentation and developing new musical genres could become expensive quickly. After a free three-month trial, Amazon will bill DeepComposer users $1.26 per hour of training usage and $2.14 per hour of inference usage.


True that. No matter how much they try to mitigate the cost, training an AI model is computationally expensive, and therefore, wallet expensive. It will be interesting to see how much adoption this gets.
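Back-of-envelope, just to make the sticker shock concrete. The hours below are invented for illustration; only the two hourly rates come from the Engadget piece:

```python
# Back-of-envelope on Engadget's quoted DeepComposer rates. The usage
# hours below are pure guesses, not anything Amazon published.
TRAIN_RATE = 1.26  # USD per hour of training
INFER_RATE = 2.14  # USD per hour of inference

def monthly_cost(train_hours, infer_hours):
    """Estimated bill for one month, once the free trial lapses."""
    return train_hours * TRAIN_RATE + infer_hours * INFER_RATE

# A hobbyist retraining a model 10 hrs/month and noodling 20 hrs/month:
print(f"${monthly_cost(10, 20):.2f}")  # roughly $55 a month
```

And that's assuming modest use; "experimentation and developing new musical genres" sounds like a lot more than 30 hours.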

The closest thing to this is Google's Project Magenta. They came out with hardware device specs and a few fab shops created the PC board. I bought one but when I asked a few questions about component substitution on the Magenta forum I was met with... crickets. It seems almost no one built these things. The forum is still active, but it's all about the models.

Google, IMHO, was willing to invest in the research and software, just not the hardware. Or maybe that was never the point. Maybe the hardware was just a gimmick. Amazon, on the other hand, seems willing to put the money into hardware and give a somewhat easier on ramp to creators.

Look, I'm no fan of Amazon, but they like to play the long game and they (cough... Prime) usually win. Let's see how much support this gets down the road.
Gribs
hlprmnky wrote:
Amazon (at least AWS) is what I do for work - it is Not Permitted to be part of my musicking time.


Me too. I work at Lab126 in hardware engineering in the display-optics-touch area of Architecture and Technology. The job is challenging and interesting but it is also draining. I do a lot of computational work (computational optics) and anything that has to do with code or scripting feels like work. Also, my mind is just spent by the end of the day. I worked in R&D at 3M before this job; it was the same there.
cs1729
looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of users' data without paying them for it.
p_shoulder
artieTwelve wrote:
True that. No matter how much they try to mitigate the cost, training an AI model is computationally expensive, and therefore, wallet expensive. It will be interesting to see how much adoption this gets.


"Machine learning" is one of those Silicon Valley buzzword-bingo terms that tends to fall apart on further analysis. There are applications where it works well, but machine learning is currently being (cough) "over-fitted" into too many applications where it honestly makes no sense.

Most AI tools are essentially pattern recognition at their core. They can be great for that. For music, though, I have been thoroughly unimpressed with every machine-learning algorithmic music attempt I've heard so far. Sure, generative music can do some nifty ambient noodlings, and if you need infinite ambient noodlings, great. But AI has no concept of "narrativium" or human interaction (DJs and bar bands can respond to the crowd vibe; do you think AI has a chance here?). Training for basic musical structure sounds like it would take a while to even get passable results, and also be a waste of time. You could probably use that time to practice, say, scales, chords, and fingering, to *much* greater effect.

Instead of paying up the wazoo to The Amazon Empire, it makes more sense to me to train your *own* "pattern-recognition-engine" (eg your brain) by actually listening to a lot of tunes and reading a little theory and playing out etc.
JES
Also there's the whole question of why a person wants to make music.

I have yet to hear AI-generated music I like. But even if I did, I would still rather listen to music made by people I can interact with one way or another.

But I don't think this is really about music. Amazon is promoting AWS with music. I clicked on the preview form and it's all about business uses of ML.
jorg
cs1729 wrote:
looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of user's data without paying them for it.


Or better yet, "how can we get a pool of millions of users' data and bamboozle them into paying us for it"
EPTC
Laughing my genuine ass off at the shitshow of a song at 12:22

Flounderguts
Amazon Decomposer. Compost for your brain.
commodorejohn
p_shoulder wrote:
Instead of paying up the wazoo to The Amazon Empire, it makes more sense to me to train your *own* "pattern-recognition-engine" (eg your brain) by actually listening to a lot of tunes and reading a little theory and playing out etc.

That's a bingo. Algorithmic music is a neat, interesting challenge from a programmer's standpoint, but as a commercial product? Pffft hahaha no.
onthebandwagon
Had to cancel my Prime...felt it damaging my soul, something I can’t afford to degrade any more...
BailyDread
p_shoulder wrote:
But AI has no concept of "narrativium" or human interaction (DJs and bar bands can respond to the crowd vibe, do you think AI has a chance here?)


I'm skim reading this thread at work so this is possibly not relevant to the point you were making, but the only thing stopping an AI from doing this is that the tech doesn't yet exist to simultaneously monitor the nervous-system reactions of an entire crowd of people. If it could, it could hypothetically run second-by-second trial and error on which intervals and sounds etc caused the greatest pleasure response in the greatest number of people in the room, and it would then do more of that, and less of whatever the inverse was. Within no time at all, the crowd would have their vibe responded to and accounted for.

It would not be one-size-fits-all, since a crowd of people wanting something "heavy" would only show the neurological signature of pleasure if presented w/ something suitably "heavy", even if in their other listening they respond pleasurably to jazz or something. It's basically like having a taste for hot chocolate but being in the mood for coffee. A machine w/ suitably detailed readings of your nervous-system response would be able to detect that you were in the mood for coffee over hot chocolate by measuring your physiological responses to both stimuli at a given moment. This could occur so subtly you would never notice, and possibly so effectively that you prefer the drink machine to the barista, or the robot DJ to the dude behind the booth.

The AI would not need any kind of "taste", it would merely have to have a suitable picture of what "crowd having a great time enjoying music that is personally suited to them" looks like, neurologically. That's not particularly hard to think up, hypothetically, so I don't see why this isn't right around the corner.
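To make the mechanism concrete, here's a toy sketch of that trial-and-error loop. Everything in it is invented for illustration: `crowd_response()` is a fictional stand-in for the nervous-system readings, and the "learning" is just a bog-standard epsilon-greedy bandit:

```python
import random

# Hypothetical crowd-feedback loop. crowd_response() fakes a scalar
# pleasure reading; the DJ mostly exploits whatever has scored best
# so far, and explores at random a small fraction of the time.

CHOICES = ["four_on_floor", "halftime", "ambient_wash", "breakcore"]

def crowd_response(choice):
    """Fictional sensor reading: tonight's crowd wants it heavy."""
    base = {"four_on_floor": 0.5, "halftime": 0.7,
            "ambient_wash": 0.2, "breakcore": 0.9}[choice]
    return base + random.uniform(-0.1, 0.1)  # noisy, like real people

def run_set(n_tracks=300, epsilon=0.1):
    """Play n_tracks and return the choice the machine settled on."""
    totals = {c: 0.0 for c in CHOICES}
    counts = {c: 0 for c in CHOICES}
    for _ in range(n_tracks):
        if random.random() < epsilon or not any(counts.values()):
            pick = random.choice(CHOICES)  # explore
        else:
            # exploit: highest average response so far
            pick = max(CHOICES,
                       key=lambda c: totals[c] / counts[c] if counts[c] else 0.0)
        totals[pick] += crowd_response(pick)
        counts[pick] += 1
    return max(CHOICES, key=lambda c: counts[c])
```

With enough tracks it nearly always converges on whatever the fictional sensor rewards; swap the base scores and it chases the new vibe instead. Note it never needs "taste", only the feedback signal, which is exactly the point.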

This is why considerations of "transgressive" and "challenging" art are important; they violate the logical form of art appreciation I've presented here, potentially producing a neurologically non-pleasurable response (at least initially) but nevertheless affecting the person taking it in. And so there may still be unturned stones in the arts that machines cannot touch, at least insofar as they can't figure out the weirder parts of human life.

I'm fairly certain that part of what allows transgressive art to go unaccounted for by AI is that it's psychologically unsound, as in mentally illogical. Such left turns might in fact be very difficult to account for within an AI-analyzing-humans picture, simply b/c it makes no damn sense to be drawn to something that legitimately disgusts you, for example... kind of the way economic models have a hard time accounting for people who are just completely bonkers about how they handle money. Maybe the equivalent moves within the arts are the only non-machine-learnable ones.

TL;DR: AI will be able to manipulate the emotions and beliefs of crowds of people very directly with an extreme degree of effectiveness, very, very soon

cheers! Dead Banana
dumbledog
What if you feed cloud music... into Clouds
InsectInPixel
it would be interesting to "train" it w/ music from current works by Autechre. Or, to be original, train it w/ a self-generating patch just to mess with it. i just found out about another AI-ish web app called DrumBot (drumbot.glitch.me). I haven't messed with it yet, but i'd like to see what type of rhythm it would generate with some bizarre arp or modular patch. Things are getting interesting out there.
artieTwelve
InsectInPixel wrote:
Things are getting interesting out there.

Yes. Yes they are. I'm going to wait out the beta period on this and then try it when it's opened up. I would like to see what it makes of a well-constructed Krell patch. Will it search forever, looking for a pattern in the randomness? What will it find? A counterpoint no human could find? Or nothing? Probably nothing. Just surface noise.

But there is no doubt these models will get better. It's the future.

Les Paul took all kinds of shit about the electric guitar. Bob Moog caught the same when he was told he "destroyed western music". Is this on the same level? Hell, I don't know.
(((EMP)))
jorg wrote:
cs1729 wrote:
looks gross. like if you turned a facial recognition filter into a keyboard. how can we get a pool of millions of users' data without paying them for it.


Or better yet, "how can we get a pool of millions of users' data and bamboozle them into paying us for it"


Definitely this. This is essentially what all social media does.
onthebandwagon
artieTwelve wrote:
InsectInPixel wrote:
Things are getting interesting out there.

Yes. Yes they are. I'm going to wait out the beta period on this and then try it when it's opened up. I would like to see what it makes of a well-constructed Krell patch. Will it search forever, looking for a pattern in the randomness? What will it find? A counterpoint no human could find? Or nothing? Probably nothing. Just surface noise.

But there is no doubt these models will get better. It's the future.

Les Paul took all kinds of shit about the electric guitar. Bob Moog caught the same when he was told he "destroyed western music". Is this on the same level? Hell, I don't know.


How deep.
coolshirtdotjpg
How long is it going to be until one of my friends opens up the code and tells me it's Korg Wavestation-esque technology, like the Google "neural net" synth?

edit: ok it's not a synth, never mind. Looks goofy anyway.
VM
More data-farm malware from one of the world's most ethically compromised companies. Can't wait.
Stides
Haven’t had a chance to watch the video, but create 2 accounts and feed output to input. Machines training machines. Might try it, always eager to try to break things.
electricanada
Stides wrote:
Haven’t had a chance to watch the video, but create 2 accounts and feed output to input. Machines training machines. Might try it, always eager to try to break things.


Now see, you try that and you’re gonna fuck around and build Skynet, and no one wants that.
p_shoulder
BailyDread wrote:
The AI would not need any kind of "taste", it would merely have to have a suitable picture of what "crowd having a great time enjoying music that is personally suited to them" looks like, neurologically. That's not particularly hard to think up, hypothetically, so I don't see why this isn't right around the corner.


Music tends to be part of social identity, and I can't come up with a single definition of "enjoy" (from a pattern-recognition perspective) that would fit everything. Screaming tweens at the idol bands don't look the same as the mosh pit at the metal show, to say nothing of more localized traditions (like, say, a New Orleans second line). And that's not counting participation-based activities like drum circles or rap battles. Yes, experimental / transgressive art is a whole other ballpark.

This is a problem for "training" AI. A theoretical robot that trains on, say, generating K-pop is going to have a tough time at, say, a filk convention. (Even among humans, the number of artists who actually cross over to widely different genres is rather small.) So multiple models are probably going to be needed.

I consider myself as having a much better than average understanding of musical genres, but even so, I know there's an enormous amount I'm clueless about. Even in the US, I'm sure I don't know but a sliver of all the little "local traditions" that are around. Let alone the whole frickin' world. So there's a huge risk of training bias here. This applies even to the actual generation of the music... my guess is that even if they get it "right", the models will probably (unless they are careful) fit the training profile of the typical AI-researcher demographic, and so not be very useful elsewhere. (Of course, since the AI made an awful attempt at a Jonathan Coulton song, which fits perfectly in the US IT demographic, we're a long way from even this consideration.)

In the end, I'm sure AI can come up with *something*. But I think there's a great chance it will end up in uncanny-valley territory and be useless socially. If they get it right, I'll concede AI might eventually make sanitized global pop music that beautiful idols can do choreographed dances to. That type of thing isn't the best in music, though. Maybe a more narrowly focused tool designed for musicians to fool with, instead of farting out some bad tune on command, could actually work, too, for all I know...
Page 1 of 2
Powered by phpBB © phpBB Group