So I want to make a simple waveform editor for my samplesource site. I was a bit leery at first, but after a bit of googling today I found a few seemingly decent resources: one specifically for Java (though the damn site is still loading after a good 10 minutes; I'm saving that puppy to disk so I don't have to reload it again), and a few questions on Usenet...
Basically, from what I can tell, I'd have to analyze the file. With a WAV I think this is easier than with MP3 or Ogg, but there are libs for those, so worst case I could just decode the Ogg/MP3 to a WAV and go from there (dunno for sure). Then, as one example suggested, scan the samples in blocks of 256. That sounds like low resolution, but at 44,100 samples per second that still works out to around 170 plotted points per second. For each block, just bump x by one pixel and plot y from the data in that block, which is the amplitude of the signal; the thing I read suggested taking the strongest sample in the block and plotting that peak. I'm still quite hazy on it, but I have a general grasp of the concept. When I have more time I'll look at the docs in further depth and think it through...
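Just to convince myself the block-and-peak idea holds up, here's a minimal sketch of that step in Java, assuming a 16-bit PCM WAV and the standard javax.sound.sampled API. The class name, the 256-frame block size, and dumping x/peak pairs to stdout are all my own choices for illustration, not anything from the resources I found.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class WaveformPeaks {
    public static void main(String[] args) throws Exception {
        // Assumes a 16-bit PCM WAV; anything else would need decoding/converting first.
        AudioInputStream in = AudioSystem.getAudioInputStream(new File(args[0]));
        AudioFormat fmt = in.getFormat();
        int frameSize = fmt.getFrameSize();   // bytes per frame (all channels)
        int blockFrames = 256;                // frames per plotted point
        byte[] buf = new byte[blockFrames * frameSize];
        boolean bigEndian = fmt.isBigEndian();

        int x = 0;
        int read;
        while ((read = in.read(buf)) > 0) {
            int peak = 0;
            // Walk the block two bytes at a time (16-bit samples) and keep the loudest one.
            for (int i = 0; i + 1 < read; i += 2) {
                int lo = buf[bigEndian ? i + 1 : i] & 0xff;
                int hi = buf[bigEndian ? i : i + 1];   // signed high byte
                int sample = (hi << 8) | lo;
                peak = Math.max(peak, Math.abs(sample));
            }
            // One point per block: x advances by one, y is the peak amplitude (0..32768).
            System.out.println(x++ + "\t" + peak);
        }
        in.close();
    }
}
```

Feeding those pairs into any plotting surface (a Canvas, an applet, whatever) should give the familiar waveform outline.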
The next issue is going to be which framework to use. I want something that's usable in the browser, so it'll be Java, Flash, or JavaScript. The next trick is figuring out how to let a user draw a selection over the sample and export that to a file to be uploaded to the wiki. That part would be done on the server, I believe, encoding the little selection as an Ogg file, since that's what MediaWiki supports. A lot of shit to deal with, of course, but I think it's doable.
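For the server side, here's a rough sketch of how the cut-and-encode step could go, assuming the source is already a WAV and that oggenc (from vorbis-tools) is installed on the box. ClipExporter and export are just names I made up; the real thing would need proper error handling and cleanup.

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class ClipExporter {
    // Cut [startSec, endSec) out of a WAV and hand it to oggenc for the
    // Ogg encoding, since that's the format MediaWiki accepts.
    public static void export(File wav, double startSec, double endSec, File oggOut) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(wav);
        float rate = in.getFormat().getFrameRate();
        int frameSize = in.getFormat().getFrameSize();

        long startFrame = (long) (startSec * rate);
        long lengthFrames = (long) ((endSec - startSec) * rate);

        // Skip to the selection start, then wrap a stream limited to the selection length.
        in.skip(startFrame * frameSize);
        AudioInputStream clip = new AudioInputStream(in, in.getFormat(), lengthFrames);

        File tmpWav = File.createTempFile("clip", ".wav");
        AudioSystem.write(clip, AudioFileFormat.Type.WAVE, tmpWav);

        // Shell out to the encoder; assumes oggenc is on the server's PATH.
        Process p = new ProcessBuilder("oggenc", tmpWav.getPath(), "-o", oggOut.getPath())
                .inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("oggenc failed");
        }
        tmpWav.delete();
    }
}
```

The browser piece would only need to send the start/end times of the selection; everything heavy stays on the server.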
I tried looking at the Audacity source code, but since it's in CVS, I'm not sure where to begin. I looked at a couple of spots that seemed promising, but there wasn't much info there I could actually use.