Biological Neuron, From Spring to Winter of AI, The Deep Revival, From Cats to Convolutional Neural Networks, The Curious Case of Sequences, Motivation from Biological Neurons, McCulloch-Pitts Neuron, Thresholding Logic, Perceptrons, Errors and Error Surfaces, Perceptron Learning Algorithm, Proof of Convergence of the Perceptron Learning Algorithm
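To make the perceptron learning algorithm concrete, here is a minimal NumPy sketch on a toy, linearly separable dataset (the AND function). The data, the zero initialization, and the choice to fold the bias into the weight vector via a constant input are illustrative assumptions, not part of the syllabus.

```python
import numpy as np

# Toy data: the AND function, with a constant 1 prepended so the bias is learned as w[0].
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(3)
converged = False
while not converged:
    converged = True
    for x_i, y_i in zip(X, y):
        pred = 1 if w @ x_i >= 0 else 0
        if pred != y_i:                 # update only on a mistake
            w += (y_i - pred) * x_i     # add x for a false negative, subtract for a false positive
            converged = False

print("learned weights:", w)
```

Because the data is linearly separable, the convergence theorem guarantees this loop terminates after a finite number of mistakes.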
Linearly Separable Boolean Functions, Representation Power of a Network of Perceptrons, Sigmoid Neuron, A typical Supervised Machine Learning Setup, Learning Parameters: (Infeasible) guess work, Learning Parameters: Gradient Descent, Representation Power of Multilayer Network of Sigmoid Neurons
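A minimal sketch of learning the parameters of a single sigmoid neuron by gradient descent on a squared-error loss; the 1-D data, learning rate, and number of epochs are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 1-D data: y is a noisy, monotone function of x.
x = np.array([0.5, 2.5, 1.0, 3.0, 4.0])
y = np.array([0.2, 0.9, 0.4, 0.8, 0.95])

w, b, eta = 0.0, 0.0, 1.0
for epoch in range(2000):
    y_hat = sigmoid(w * x + b)
    # Gradients of the loss 0.5 * sum((y_hat - y)^2) w.r.t. w and b
    dw = np.sum((y_hat - y) * y_hat * (1 - y_hat) * x)
    db = np.sum((y_hat - y) * y_hat * (1 - y_hat))
    w, b = w - eta * dw, b - eta * db

loss = 0.5 * np.sum((sigmoid(w * x + b) - y) ** 2)
print(f"w={w:.3f}, b={b:.3f}, loss={loss:.4f}")
```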
Multilayer Perceptron: Introduction, Model, Learning, Evaluation, Geometry Basics, Geometric Interpretation, Perceptron: Learning - General Recipe, Learning Algorithm, Perceptron: Learning - Why Does it Work?, Perceptron: Learning - Will it Always Work?, Perceptron: Evaluation, A simple deep neural network, A generic deep neural network, Understanding the computations in a deep neural network, The output layer of a deep neural network, Output layer of a multi-class classification problem, How do you choose the right network configuration?, Loss function for binary classification, Learning Algorithm (non-mathy version)
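A minimal sketch of the computations in a generic deep neural network: a forward pass through sigmoid hidden layers, a sigmoid output unit for binary classification, and the corresponding cross-entropy loss. The layer sizes, random weights, and input are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [4, 5, 3, 1]                                   # input, two hidden layers, one output unit
W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """Forward pass: h_k = sigma(W_k @ h_{k-1} + b_k) for every layer."""
    h = x
    for W_k, b_k in zip(W, b):
        h = sigmoid(W_k @ h + b_k)
    return h

x = rng.normal(size=4)
y = 1.0                                                # true binary label
y_hat = forward(x)[0]
loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))   # binary cross-entropy
print(f"prediction={y_hat:.3f}, loss={loss:.3f}")
```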
Backpropagation
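A minimal sketch of backpropagation for a single-hidden-layer network with sigmoid activations and squared-error loss: the forward pass caches activations, the backward pass applies the chain rule layer by layer, and the weights are updated by gradient descent. The shapes, data, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), np.array([1.0])          # one training example

W1, b1 = rng.normal(0, 0.5, (4, 3)), np.zeros(4)
W2, b2 = rng.normal(0, 0.5, (1, 4)), np.zeros(1)
eta = 1.0

for step in range(200):
    # Forward pass (cache pre-activations and activations).
    a1 = W1 @ x + b1;  h1 = sigmoid(a1)
    a2 = W2 @ h1 + b2; y_hat = sigmoid(a2)
    # Backward pass: chain rule, starting from the squared-error loss 0.5*(y_hat - y)^2.
    d_a2 = (y_hat - y) * y_hat * (1 - y_hat)         # dL/da2
    dW2, db2 = np.outer(d_a2, h1), d_a2
    d_a1 = (W2.T @ d_a2) * h1 * (1 - h1)             # dL/da1
    dW1, db1 = np.outer(d_a1, x), d_a1
    # Gradient descent update.
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

print("final prediction:", sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2))
```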
Optimization Techniques, Contour Maps, Momentum-based Gradient Descent, Nesterov Accelerated Gradient Descent, Stochastic and Mini-Batch Gradient Descent, Tips for Adjusting Learning Rate and Momentum, Line Search, Gradient Descent with Adaptive Learning Rate, Bias Correction in Adam
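A minimal sketch of an adaptive-learning-rate update, Adam with bias correction, written for a scalar parameter minimizing a simple quadratic. The loss is illustrative; beta1, beta2, and eps are set to their commonly used defaults.

```python
import numpy as np

def grad(w):
    # Gradient of the illustrative loss L(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

w = 0.0
m, v = 0.0, 0.0                          # first- and second-moment estimates
eta, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g      # exponential average of gradients (momentum)
    v = beta2 * v + (1 - beta2) * g**2   # exponential average of squared gradients
    m_hat = m / (1 - beta1**t)           # bias correction: both moments start at 0
    v_hat = v / (1 - beta2**t)
    w -= eta * m_hat / (np.sqrt(v_hat) + eps)

print(f"w after Adam: {w:.4f} (minimum at 3.0)")
```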
Bias and Variance, Train error vs Test error, Train error vs Test error (Recap), True error and Model complexity, L2 regularization, Dataset augmentation, Parameter sharing and tying, Adding Noise to the inputs, Adding Noise to the outputs, Early stopping, Ensemble Methods, Dropout
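A minimal sketch contrasting two of the regularizers listed above: an L2-regularized gradient update (weight decay) and inverted dropout applied to a hidden activation. The dropout probability, weights, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# L2 regularization: the loss gains (lambda/2)*||w||^2, so the gradient gains lambda*w.
w = rng.normal(size=5)
grad_data = rng.normal(size=5)           # stand-in for the data term dL/dw
lam, eta = 0.01, 0.1
w -= eta * (grad_data + lam * w)         # weight-decay update

# Inverted dropout: zero each hidden unit with probability p at train time,
# scaling survivors by 1/(1-p) so the expected activation is unchanged.
h = rng.normal(size=8)                   # a hidden-layer activation
p = 0.5
mask = (rng.random(8) >= p) / (1 - p)
h_train = h * mask                       # used during training
h_test = h                               # no dropout (and no scaling) at test time
print(h_train, h_test, sep="\n")
```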
Convolutional Neural Networks
The convolution operation, Relation between input size, output size and filter size, Convolutional Neural Networks, CNNs (success stories on ImageNet), Image Classification continued (GoogLeNet and ResNet), Visualizing patches which maximally activate a neuron, Visualizing filters of a CNN, Occlusion experiments, Finding influence of input pixels using backpropagation, Guided Backpropagation, Optimization over images, Create images from embedding, Deep Dream, Deep Art, Fooling Deep Convolutional Neural Networks
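A minimal sketch of the 2-D convolution operation and of the relation output = (input - filter + 2*padding)/stride + 1; the image, filter, and padding/stride values are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Valid cross-correlation (what CNN layers compute) with optional zero padding."""
    if padding:
        image = np.pad(image, padding)
    H, W = image.shape                   # already includes the padding
    k, _ = kernel.shape
    out_h = (H - k) // stride + 1        # i.e. (input - filter + 2*padding)/stride + 1
    out_w = (W - k) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.]] * 3)   # a simple vertical edge detector
print(conv2d(image, kernel, stride=1, padding=1).shape)   # (6, 6): "same"-sized output
```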
Eigenvalues and Eigenvectors, Linear Algebra: Basic Definitions, Eigenvalue Decomposition, Principal Component Analysis and its Interpretations, Singular Value Decomposition, Introduction to Autoencoders, Link between PCA and Autoencoders, Regularization in Autoencoders, Denoising Autoencoders, Sparse Autoencoders, Contractive Autoencoders
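A minimal sketch of PCA computed through the SVD of mean-centred data, which is the same subspace a linear autoencoder with squared-error loss recovers; the data dimensions and the choice of k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))     # 200 samples, 5 correlated features

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
components = Vt[:k]                        # top-k principal directions (rows)
Z = Xc @ components.T                      # codes: projection onto the principal subspace
X_rec = Z @ components + X.mean(axis=0)    # reconstruction, as a linear autoencoder would produce

explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by {k} components: {explained:.2%}")
print("mean squared reconstruction error:", np.mean((X - X_rec) ** 2))
```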
Sequence Learning Part-1
One-hot representations of words, Distributed Representations of words, SVD for learning word representations, Continuous bag of words model, Skip-gram model, Contrastive estimation, Hierarchical softmax, GloVe representations, Evaluating word representations, Relation between SVD and Word2Vec, Sequence Learning Problems
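A minimal sketch of the skip-gram model trained with a full softmax and SGD on a toy corpus; the corpus, window size, embedding dimension, and learning rate are illustrative assumptions (a practical model would use negative sampling or hierarchical softmax instead of the full softmax).

```python
import numpy as np

corpus = "the dog chased the cat the cat chased the mouse".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, d, window, eta = len(vocab), 8, 1, 0.1

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, d))          # centre-word embeddings
W_out = rng.normal(0, 0.1, (d, V))         # context-word (output) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for epoch in range(200):
    for pos, word in enumerate(corpus):
        for off in range(-window, window + 1):
            ctx = pos + off
            if off == 0 or ctx < 0 or ctx >= len(corpus):
                continue
            c, o = idx[word], idx[corpus[ctx]]
            h = W_in[c]                            # hidden layer = centre-word vector
            p = softmax(W_out.T @ h)               # predicted context distribution
            err = p.copy(); err[o] -= 1.0          # gradient of cross-entropy w.r.t. the scores
            grad_out, grad_in = np.outer(h, err), W_out @ err
            W_out -= eta * grad_out
            W_in[c] -= eta * grad_in

print("embedding for 'cat':", np.round(W_in[idx["cat"]], 2))
```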
Sequence Learning Part-2
Recurrent Neural Networks, Backpropagation through time, The problem of Exploding and Vanishing Gradients, Long Short Term Memory (LSTM) and Gated Recurrent Units (GRUs), Introduction to Encoder-Decoder Models, Applications of Encoder-Decoder Models, Attention Mechanism, Attention over images, Hierarchical Attention
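A minimal sketch of a vanilla recurrent neural network unrolled over a short sequence, followed by soft attention over its hidden states; all sizes, weights, inputs, and the use of the final state as the attention query are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 4, 5                       # input size, hidden size, sequence length
xs = rng.normal(size=(T, d_in))              # an illustrative input sequence

W_xh = rng.normal(0, 0.5, (d_h, d_in))
W_hh = rng.normal(0, 0.5, (d_h, d_h))
b_h = np.zeros(d_h)

# Unroll the recurrence h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h).
h = np.zeros(d_h)
states = []
for x_t in xs:
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    states.append(h)
states = np.stack(states)                    # (T, d_h)

# Soft attention over the hidden states, using the final state as the query.
query = states[-1]
scores = states @ query                      # dot-product alignment scores
alphas = np.exp(scores - scores.max()); alphas /= alphas.sum()
context = alphas @ states                    # attention-weighted summary of the sequence
print("attention weights:", np.round(alphas, 3))
print("context vector:", np.round(context, 3))
```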