Categories: code, computer vision, deep learning

Studying Variational Autoencoders

Variational autoencoders learn to encode and decode inputs through a latent distribution. Bonus: they can generate images!

Here’s a very short post. On my road to understanding autoencoders for recommenders, I am currently dissecting autoencoders, and I became interested in variational autoencoders. These are autoencoders where the latent code is sampled from a parametrized distribution: the encoder outputs the parameters of that distribution, and the latent space of the encoding-decoding mapping is regularized to follow it. And since we have this distribution, we can sample from it to generate new images!
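To make that concrete, here is a minimal VAE sketch in PyTorch. It is not the architecture from my notebook; the layer sizes, latent dimension, and image shape are placeholder assumptions, and I use a simple MLP encoder/decoder just to show the reparametrization trick and the KL term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE sketch: the encoder outputs the parameters (mu, log_var)
    of a Gaussian over the latent code z; the decoder maps z back to pixels."""

    def __init__(self, input_dim=3 * 64 * 64, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_log_var = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, log_var):
        # Sample z = mu + sigma * eps so gradients flow through mu and log_var.
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.fc_mu(h), self.fc_log_var(h)
        z = self.reparameterize(mu, log_var)
        return self.decoder(z), mu, log_var

def vae_loss(recon_x, x, mu, log_var):
    # Reconstruction term plus the KL divergence pulling q(z|x) toward N(0, I).
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```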

For deeper details, click here.

The notebook that generated the results below is this Kaggle Notebook. I wanted to generate plausible images of flowers from this Kaggle Dataset. I followed William Falcon’s post for about 90% of this code.
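Falcon’s post is built around PyTorch Lightning, so the training loop roughly takes the shape below. This is a hedged sketch wrapping the toy VAE above, not the notebook’s exact code: `train_loader` and the (image, label) batch format are assumptions.

```python
import pytorch_lightning as pl
import torch

class LitVAE(pl.LightningModule):
    """Lightning wrapper so the framework handles the training loop."""

    def __init__(self, vae):
        super().__init__()
        self.vae = vae

    def training_step(self, batch, batch_idx):
        x, _ = batch                       # assume batches of (image, label)
        x = x.view(x.size(0), -1)          # flatten images for the MLP encoder
        recon, mu, log_var = self.vae(x)
        loss = vae_loss(recon, x, mu, log_var)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Assuming train_loader yields batches from the flowers dataset:
# trainer = pl.Trainer(max_epochs=50)
# trainer.fit(LitVAE(VAE()), train_loader)
```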

Here’s the original set of images.

And here are the generated images after 50 epochs. They look a lot like overly zoomed-in, blurred flower images, really.
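Generation itself is just sampling latent codes from the standard normal prior and decoding them. A minimal sketch, assuming the trained toy model from above and its placeholder shapes:

```python
import torch

# `model` is the trained VAE from the sketch above; shapes are illustrative.
model.eval()
with torch.no_grad():
    z = torch.randn(16, 32)                      # 16 samples, latent_dim=32
    generated = model.decoder(z)                 # (16, 3*64*64) pixel vectors
    generated = generated.view(16, 3, 64, 64)    # reshape back into images
```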

After this, one can reuse the pre-trained weights for classifiers or GANs. The latter can capitalize on those weights to create better images of flowers!
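For the classifier case, one hedged way to reuse the weights is to freeze the trained encoder as a feature extractor and attach a small linear head; `num_classes` and the flattened-image input are assumptions carried over from the sketch above.

```python
import torch.nn as nn

num_classes = 5  # placeholder: number of flower classes

# Frozen pre-trained encoder + mu head, with a trainable linear classifier on top.
# Inputs must be flattened images, matching the MLP encoder above.
classifier = nn.Sequential(model.encoder, model.fc_mu, nn.Linear(32, num_classes))

for p in classifier[0].parameters():   # freeze the encoder
    p.requires_grad = False
for p in classifier[1].parameters():   # freeze the mu head as well
    p.requires_grad = False
```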

Thanks for reading!

By krsnewwave

I'm a software engineer and data science guy working on recommender systems, natural language processing, and computer vision.
