Python

KL Divergence

Colab Notebook. Before getting to KL Divergence, let’s look at a very simple concept called entropy. Entropy is the expected information contained in a distribution; it measures uncertainty. $H(X) = \sum_{x} p(x)\, I(x)$, where $I(x) = -\log p(x)$ is called the information content of $x$. “If an event is very probable, it is no surprise (and generally uninteresting) when that event happens as expected. However, if an event is unlikely to occur, it is much more informative to learn that the event happened or will happen.”
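
As a quick illustration of the formula above, here is a minimal sketch that computes the entropy of a discrete distribution with NumPy. It assumes a plain probability vector and natural-log units; it is not code from the linked notebook.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """H(X) = sum_x p(x) * I(x), with information content I(x) = -log p(x)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                              # normalize to a valid distribution
    return float(-np.sum(p * np.log(p + eps)))   # eps guards against log(0)

# A fair coin is maximally uncertain; a biased coin is less surprising on average.
print(entropy([0.5, 0.5]))  # ~0.693 nats (log 2)
print(entropy([0.9, 0.1]))  # ~0.325 nats
```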

Off-Policy Monte Carlo Prediction with Importance Sampling

Off-Policy Learning. Link to the Notebook. Because of the exploration-exploitation trade-off, the agent must take sub-optimal exploratory actions, for which it may receive less reward. One way to explore is to use an epsilon-greedy policy, where the agent takes a non-greedy action with a small probability. In on-policy learning, improvement and evaluation are performed on the same policy that is used to select actions. In off-policy learning, improvement and evaluation are performed on a policy different from the one used to select actions.
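
To make the off-policy idea concrete, below is a minimal sketch of ordinary importance-sampling prediction: episodes are collected with the behavior policy, and each return is re-weighted by the ratio of target-policy to behavior-policy action probabilities before averaging. The function name and the target_probs/behavior_probs callables are hypothetical, for illustration only.

```python
def off_policy_mc_prediction(episodes, target_probs, behavior_probs, gamma=1.0):
    """Ordinary importance-sampling estimate of V(s) under the target policy.

    episodes: list of trajectories [(state, action, reward), ...] generated
              by the behavior policy.
    target_probs(s, a), behavior_probs(s, a): action probabilities under the
              target and behavior policies (hypothetical callables).
    """
    sums, counts = {}, {}
    for episode in episodes:
        G, rho = 0.0, 1.0
        # Walk the episode backwards so G is the return from each step onward
        # and rho is the importance-sampling ratio from that step to the end.
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            rho *= target_probs(state, action) / behavior_probs(state, action)
            sums[state] = sums.get(state, 0.0) + rho * G
            counts[state] = counts.get(state, 0) + 1
    # Ordinary (unweighted) importance sampling: divide by the visit count.
    return {s: sums[s] / counts[s] for s in sums}
```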

DRL Navigator

Deep Reinforcement Learning Agent for Navigator Environment

Policy Based Reinforcement Learning

Policy-based RL methods for the LunarLander and CartPole environments.

Temporal Difference

TD methods like SARSA(0), SARSAMax and Expected SARSA.

Monte Carlo Prediction

Monte Carlo method for the Blackjack environment.

Facial Emotion Recognition PyTorch ONNX

Recognizing facial emotions with a deep learning model trained in PyTorch and deployed with TF.js after conversion through ONNX.

Pneumonia Diagnosis with Deep Learning

Web application for diagnosing pneumonia with a deep learning model trained and backed by the PyTorch framework.

Character Generating RNN

Character-level language model using an RNN (LSTM) in PyTorch.

Computer Vision Security System

Computer vision security system server built with Python, OpenCV, and a Flask web server.