Meerkat Statistics

Neural Networks with Python

Curriculum

  • 11 Sections
  • 42 Lessons
  • Lifetime access
  • Intro – 4 lessons
    • 1.1
      Administration
    • 1.2
      Intro – Long
    • 1.3
      Notebook – Intro to Python
    • 1.4
      Notebook – Intro to PyTorch
  • Comparison to other methods – 3 lessons
    • 2.0
      Linear Regression vs. Neural Network
    • 2.1
      Logistic Regression vs. Neural Network
    • 2.2
      General Linear Model (GLM) vs. Neural Network
  • Expressivity (Capacity) – 1 lesson
    • 3.0
      Hidden Layers: 0 vs. 1 vs. 2
  • Training – 7 lessons
    • 4.1
      Backpropagation – Part 1
    • 4.2
      Backpropagation – Part 2
    • 4.3
      Implement a NN in NumPy
    • 4.4
      Notebook – Implementation Redo: Classes instead of Functions (NumPy)
    • 4.5
      Classification – Softmax and Cross Entropy – Theory
    • 4.6
      Classification – Softmax and Cross Entropy – Derivatives
    • 4.7
      Notebook – Implementing Classification (NumPy)
  • Autodiff – 2 lessons
    • 5.0
      Automatic Differentiation
    • 5.1
      Backpropagation vs. Forward Propagation (Forward vs. Reverse mode autodiff)
  • Symmetries in weight space – 2 lessons
    • 6.0
      Tanh & Permutation Symmetries
    • 6.1
      Notebook – Symmetries: tanh, permutations, ReLU
  • Generalization – 6 lessons
    • 7.1
      Generalization and the Bias-Variance Trade-off
    • 7.2
      Generalization Code
    • 7.3
      L2 Regularization / Weight Decay
    • 7.4
      Dropout Regularization
    • 7.5
      Notebook – Dropout implementation (NumPy)
    • 7.6
      Notebook – Early Stopping
  • Improved Training – 11 lessons
    • 8.1
      Weight initialization – Part 1 – What not to do
    • 8.2
      Notebook – Weight initialization Part 1
    • 8.3
      Weight initialization – Part 2 – What to do
    • 8.4
      Notebook – Weight initialization Part 2
    • 8.5
      Notebook – TensorBoard
    • 8.6
      Learning Rate Decay
    • 8.7
      Notebook – Input Normalization
    • 8.8
      Batch Normalization – Part 1: Theory
    • 8.9
      Batch Normalization – Part 2: Derivatives
    • 8.10
      Notebook – BatchNorm (PyTorch)
    • 8.11
      Notebook – BatchNorm (NumPy)
  • Activation Functions – 3 lessons
    • 9.0
      Classical Activations
    • 9.1
      ReLU Variants
    • 9.2
      A Brief History of ReLU
  • Optimizers – 2 lessons
    • 10.0
      SGD Variants: Momentum, NAG, AdaGrad, RMSprop, AdaDelta, Adam, AdaMax, Nadam – Part 1: Theory
    • 10.1
      SGD Variants: Momentum, NAG, AdaGrad, RMSprop, AdaDelta, Adam, AdaMax, Nadam – Part 2: Code
  • Auto Encoders – 1 lesson
    • 12.1
      Variational Auto Encoders (VAE)
Copyright © 2025 - Meerkat Statistics