I've not written any code, but I've been doing a lot of reading and studying.

I spent a bunch of time looking into Neural Networks, running demos and even writing one from scratch.

While I don't think I'll be using any for vocal synthesis in the immediate future, they are clearly the future as far as vocal synthesis is concerned.

I've also been revisiting old articles on formant synthesis on the off-chance that I could figure out how to salvage some of my old projects.

But the reality is that formants are basically a simplified way of representing the spectral envelope, with a number of built-in flaws. Fixing those flaws would mean building more complicated synthesizers with more parameters (among other things), while capturing spectral envelopes directly via DFTs simply gives better results.
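For what it's worth, here's a rough sketch of what I mean by capturing a spectral envelope from a DFT. It's in Python with NumPy, the function names and parameter values are my own, and cepstral liftering is just one common way to smooth the log-magnitude spectrum into an envelope rather than reducing it to a handful of formant peaks:

```python
import numpy as np

def spectral_envelope(frame, sample_rate, n_cepstral=30):
    """Estimate a smooth spectral envelope for one audio frame
    using the DFT magnitude plus cepstral liftering (a sketch,
    not the only way to do this)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.fft.rfft(windowed)
    log_mag = np.log(np.abs(spectrum) + 1e-10)

    # Cepstrum: inverse DFT of the log-magnitude spectrum.
    cepstrum = np.fft.irfft(log_mag)

    # Keep only the low-quefrency coefficients (the slowly varying
    # envelope) and zero out the rest, which carries the harmonics.
    liftered = np.zeros_like(cepstrum)
    liftered[:n_cepstral] = cepstrum[:n_cepstral]
    liftered[-(n_cepstral - 1):] = cepstrum[-(n_cepstral - 1):]

    envelope = np.exp(np.fft.rfft(liftered).real)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs, envelope

# Toy usage: a synthetic frame with a few harmonics of a 120 Hz tone.
sr = 16000
t = np.arange(1024) / sr
frame = sum(a * np.sin(2 * np.pi * f * t)
            for f, a in [(120, 1.0), (240, 0.8), (360, 0.6), (480, 0.4)])
freqs, env = spectral_envelope(frame, sr)
print(freqs[np.argmax(env)])  # frequency where the envelope peaks
```

The point is that the envelope comes out as a full curve over frequency, so there's no need to decide in advance how many formants to track or how to model their bandwidths.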

The main reason for not moving forward - beyond just being busy - is that I'm still not satisfied with the quality of the resynthesis. I think it's good, but not to the degree I set out to reach when I started this summer.

So I'll continue exploring how to get better resynthesis results.
