Here’s a very short post. On my road to understanding autoencoders for recommenders, I became interested in variational autoencoders. These are autoencoders where the latent code is sampled from a parametrized distribution: the latent space of the encoding-decoding mapping is constrained to follow that distribution. And since we know this distribution, we can sample from it and generate new images!
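To make that sampling idea concrete, here is a minimal sketch in PyTorch. This is not the notebook’s code; the layer sizes, names, and image dimensions are placeholders I made up for illustration.

```python
# A minimal sketch of the VAE idea (not the notebook's exact code; sizes are made up).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, image_dim=3 * 64 * 64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.to_log_var = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.to_mu(h), self.to_log_var(h)
        # Reparameterization trick: sample the latent code from N(mu, sigma^2).
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decoder(z), mu, log_var

    def generate(self, n):
        # Because the latent space is tied to a known prior N(0, I),
        # we can sample fresh codes and decode them into brand-new images.
        z = torch.randn(n, self.to_mu.out_features)
        return self.decoder(z)
```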
For deeper details, click here.
The results below come from this Kaggle Notebook. I wanted to generate plausible images of flowers from this Kaggle Dataset. About 90% of the code follows William Falcon’s post.
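For context, training a VAE like the sketch above roughly looks like the loop below: the loss is the usual ELBO, a reconstruction term plus a KL term pulling q(z|x) toward the prior. This is a rough sketch, not the notebook’s code; the dataloader, pixel range, and hyperparameters are assumptions.

```python
# Rough training sketch for the TinyVAE above (dataloader and hyperparameters are placeholders).
import torch
import torch.nn.functional as F

def train(model, dataloader, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for images, _ in dataloader:           # assumes (image, label) batches with pixels in [0, 1]
            x = images.view(images.size(0), -1)  # flatten to match the Linear encoder
            recon, mu, log_var = model(x)
            # ELBO = reconstruction error + KL(q(z|x) || N(0, I))
            recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
            loss = recon_loss + kl
            opt.zero_grad()
            loss.backward()
            opt.step()
```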
Here’s the original set of images.

And here are the generated images after 50 epochs. They look a lot like overly zoomed-in, blurred flowers, really.

After this, one can reuse the pre-trained weights for classifiers or GANs. The latter can capitalize on those weights to generate better images of flowers!
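For the classifier case, one way that could look (a hypothetical sketch reusing the TinyVAE above, not something from the notebook; the number of classes is a placeholder):

```python
# Hypothetical sketch: reuse the VAE's pre-trained encoder as a frozen feature extractor.
import torch
import torch.nn as nn

def build_classifier(vae, num_classes=5):
    # Freeze the encoder learned by the VAE so only the new head is trained.
    for p in vae.encoder.parameters():
        p.requires_grad = False
    return nn.Sequential(
        vae.encoder,                  # pre-trained feature extractor
        nn.Linear(256, num_classes),  # new classification head
    )

# classifier = build_classifier(TinyVAE())
# logits = classifier(torch.rand(8, 3 * 64 * 64))
```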
Thanks for reading!