Convolutional Neural Networks: Step by Step

Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.

Notation:

  • Superscript $[l]$ denotes an object of the $l^{th}$ layer.

    • Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.

  • Superscript $(i)$ denotes an object from the $i^{th}$ example.

    • Example: $x^{(i)}$ is the $i^{th}$ training example input.

  • Subscript $i$ denotes the $i^{th}$ entry of a vector.

    • Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.

  • $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.

  • $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
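
To make the shape notation concrete, here is a minimal numpy sketch (the array name A_prev and the example dimensions are illustrative; the (m, height, width, channels) layout matches the convention used throughout this assignment):

import numpy as np

# A batch of m = 10 examples from the previous layer,
# laid out as (m, n_H_prev, n_W_prev, n_C_prev)
A_prev = np.random.randn(10, 4, 4, 3)

m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
print(m, n_H_prev, n_W_prev, n_C_prev)   # 10 4 4 3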

We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!

Updates

If you were working on the notebook before this update...
  • The current notebook is version "v2a".
  • You can find your original work saved in the notebook with the previous version name ("v2").
  • To view the file directory, go to the menu "File -> Open"; this opens a new tab that shows the file directory.
List of updates
  • Clarified the example used for the padding function and updated the padding function's starter code.
  • conv_forward has additional hints to help students if they're stuck.
  • conv_forward places the code for vert_start and vert_end within the for h in range(...) loop, to avoid redundant calculations; horiz_start and horiz_end are updated similarly (see the sketch after this list). Thanks to our mentor Kevin Brown for pointing this out.
  • conv_forward breaks down the Z[i, h, w, c] single-line calculation into 3 lines, for clarity.
  • The conv_forward test case checks that students don't accidentally use n_H_prev instead of n_H or n_W_prev instead of n_W, and don't accidentally swap n_H with n_W.
  • pool_forward properly nests the calculations of vert_start, vert_end, horiz_start, and horiz_end to avoid redundant calculations.
  • pool_forward has two new test cases that check for a correct implementation of stride (the height and width of the previous layer's activations must be large enough relative to the filter dimensions for a stride to take place).
  • conv_backward: Z and the cache variables are initialized within the unit test, to make it independent of the unit testing that occurs in the conv_forward section of the assignment.
  • Many thanks to our course mentor, Paul Mielke, for proposing these test cases.
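
For illustration only (this is a sketch of the nesting described above, not the graded conv_forward or pool_forward code), the point of these updates is that vert_start/vert_end depend only on h and horiz_start/horiz_end only on w, so each pair is computed in the innermost loop that actually needs it. The values of n_H, n_W, f, and stride below are arbitrary placeholders.

n_H, n_W = 3, 3          # output height and width (placeholder values)
f, stride = 2, 2         # filter size and stride (placeholder values)

for h in range(n_H):                  # loop over output rows
    vert_start = h * stride          # depends only on h: computed once per row
    vert_end = vert_start + f
    for w in range(n_W):             # loop over output columns
        horiz_start = w * stride     # depends only on w
        horiz_end = horiz_start + f
        # the current window would be
        # a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]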

1 - Packages

Let's first import all the packages that you will need during this assignment.

  • numpy is the fundamental package for scientific computing with Python.
  • matplotlib is a library to plot graphs in Python.
  • np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
import numpy as np
import h5py
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2 - Outline of the Assignment

You will be implementing the building blocks of a convolutional neural network! Each function you implement comes with detailed instructions that walk you through the necessary steps:

  • Convolution functions, including:
    • Zero Padding
    • Convolve window
    • Convolution forward
    • Convolution backward (optional)
  • Pooling functions, including:
    • Pooling forward
    • Create mask
    • Distribute value
    • Pooling backward (optional)

This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
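
As a small taste of that from-scratch numpy style, here is a minimal sketch of zero padding built on np.pad, assuming the (m, n_H, n_W, n_C) layout from the notation above (the helper name zero_pad is illustrative; the graded function's specification may differ in its details):

import numpy as np

def zero_pad(X, pad):
    # Pad the height (axis 1) and width (axis 2) of every image with `pad`
    # zeros on each side, leaving the batch and channel axes untouched.
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  mode='constant', constant_values=0)

X = np.random.randn(4, 3, 3, 2)
print(zero_pad(X, 2).shape)   # (4, 7, 7, 2)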

[Figure: the convolutional model you will build with TensorFlow in the next notebook]

Note that every forward function has a corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
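
Here is a minimal sketch of that cache pattern, using a plain linear step as a stand-in for the CONV/POOL computations (layer_forward and layer_backward are illustrative names, not assignment functions):

import numpy as np

def layer_forward(A_prev, W, b):
    Z = A_prev @ W + b                # the layer's forward computation
    cache = (A_prev, W, b)            # stash everything backprop will need
    return Z, cache

def layer_backward(dZ, cache):
    A_prev, W, b = cache              # retrieve what the forward pass stored
    dA_prev = dZ @ W.T                # gradients follow from dZ and the cache
    dW = A_prev.T @ dZ
    db = dZ.sum(axis=0)
    return dA_prev, dW, db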