Python

KL Divergence

Colab Notebook. Before seeing KL divergence, let's look at a very simple concept called entropy. Entropy is the expected information contained in a distribution; it measures uncertainty: $H(X) = \sum_x p(x)\,I(x)$, where $I(x)$ is called the information content of $x$. "If an event is very probable, it is no surprise (and generally uninteresting) when that event happens as expected. However, if an event is unlikely to occur, it is much more informative to learn that the event happened or will happen."
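The definition above can be sketched in a few lines of Python. This is a minimal illustration (not the post's notebook code), using the standard Shannon information content $I(x) = -\log_2 p(x)$, so entropy comes out in bits:

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = sum_x p(x) * I(x), with I(x) = -log2 p(x).

    probs: probabilities of each outcome (must sum to 1); zero-probability
    outcomes contribute nothing, so they are skipped.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: exactly 1 bit per flip.
fair = entropy([0.5, 0.5])
# A biased coin is more predictable, so its entropy is lower.
biased = entropy([0.9, 0.1])
```

A highly probable outcome carries little information, which is why the biased coin's entropy is below the fair coin's 1 bit.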

Off-Policy Monte Carlo Prediction with Importance Sampling

Off-Policy Monte Carlo with Importance Sampling. Off-Policy Learning. Link to the Notebook. By the exploration-exploitation trade-off, the agent must sometimes take sub-optimal exploratory actions, for which it may receive less reward. One way to explore is with an epsilon-greedy policy, where the agent takes a non-greedy action with a small probability. In on-policy learning, improvement and evaluation are done on the same policy that is used to select actions. In off-policy learning, improvement and evaluation are done on a policy different from the one used to select actions.
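The idea can be sketched as follows (a simplified illustration, not the linked notebook's code): returns generated by the behavior policy are reweighted by the importance-sampling ratio $\rho = \prod_t \pi(a_t \mid s_t) / b(a_t \mid s_t)$ so that their average estimates the start-state value under the target policy. The episode format and the dict-based policies here are assumptions for the example:

```python
def off_policy_mc_estimate(episodes, target, behavior, gamma=1.0):
    """Ordinary importance-sampling estimate of the start-state value.

    episodes: list of episodes, each a list of (state, action, reward)
              tuples generated by following `behavior`.
    target, behavior: dicts mapping (state, action) -> probability of
              taking `action` in `state` under that policy.
    """
    total = 0.0
    for episode in episodes:
        g, rho, discount = 0.0, 1.0, 1.0
        for state, action, reward in episode:
            # Accumulate the importance ratio pi(a|s) / b(a|s) over the episode.
            rho *= target[(state, action)] / behavior[(state, action)]
            # Accumulate the discounted return under the behavior policy.
            g += discount * reward
            discount *= gamma
        # Reweight the return so the average is unbiased for the target policy.
        total += rho * g
    return total / len(episodes)
```

When the target and behavior policies coincide, every ratio is 1 and this reduces to the ordinary on-policy Monte Carlo average.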

GAN 5

Conditional GAN

GAN 4

Deep Convolutional GAN

GAN 3

MNIST Linear GAN

GAN 2

Theory of Game between Generator and Discriminator