
Neural Network with Backward Propagation

by JK from Korea 2022. 12. 16.


 

Date : 2022.10.11

 

*The contents of this book are heavily based on Stanford University’s CS231n course.

 

In the previous post, we implemented the forward and backward propagation for each layer (Affine, ReLU, Sigmoid, Softmax) as a separate class. Now we only need to import those classes and assemble the network.

 

[TwoLayerNet Code 1]
[TwoLayerNet Code 2]

The benefit of implementing each layer as a separate class is that we can stack as many layers as we want simply by importing each one and chaining them together, as in the sketch below.
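
What the two TwoLayerNet screenshots show is roughly the following. This is a minimal sketch: the layer classes here are compact stand-ins for the Affine, ReLU, and SoftmaxWithLoss classes from the previous post, so names and details may differ from my actual files.

import numpy as np
from collections import OrderedDict

class Affine:                        # fully connected layer
    def __init__(self, W, b):
        self.W, self.b = W, b
    def forward(self, x):
        self.x = x
        return x @ self.W + self.b
    def backward(self, dout):
        self.dW = self.x.T @ dout    # gradient w.r.t. weights
        self.db = dout.sum(axis=0)   # gradient w.r.t. biases
        return dout @ self.W.T       # gradient passed to the previous layer

class ReLU:
    def forward(self, x):
        self.mask = x <= 0
        out = x.copy()
        out[self.mask] = 0
        return out
    def backward(self, dout):
        dout = dout.copy()
        dout[self.mask] = 0          # no gradient where the input was <= 0
        return dout

class SoftmaxWithLoss:               # softmax + cross-entropy, fused
    def forward(self, x, t):
        x = x - x.max(axis=1, keepdims=True)               # numerical stability
        self.y = np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)
        self.t = t                                          # one-hot targets
        return -np.sum(t * np.log(self.y + 1e-7)) / x.shape[0]
    def backward(self, dout=1):
        return dout * (self.y - self.t) / self.t.shape[0]

class TwoLayerNet:
    def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01):
        self.params = {
            'W1': weight_init_std * np.random.randn(input_size, hidden_size),
            'b1': np.zeros(hidden_size),
            'W2': weight_init_std * np.random.randn(hidden_size, output_size),
            'b2': np.zeros(output_size),
        }
        # The layers share the parameter arrays, so updating self.params
        # updates the network in place.
        self.layers = OrderedDict()
        self.layers['Affine1'] = Affine(self.params['W1'], self.params['b1'])
        self.layers['ReLU1'] = ReLU()
        self.layers['Affine2'] = Affine(self.params['W2'], self.params['b2'])
        self.last_layer = SoftmaxWithLoss()

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        return x

    def loss(self, x, t):
        return self.last_layer.forward(self.predict(x), t)

    def gradient(self, x, t):
        self.loss(x, t)                       # forward pass caches intermediates
        dout = self.last_layer.backward(1)
        for layer in reversed(list(self.layers.values())):
            dout = layer.backward(dout)       # backward pass, last layer first
        return {'W1': self.layers['Affine1'].dW, 'b1': self.layers['Affine1'].db,
                'W2': self.layers['Affine2'].dW, 'b2': self.layers['Affine2'].db}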

 

We have established two methods for computing the gradients of the weights: first, the numerical gradient method, and second, the backward propagation method. The mathematical simplicity of the latter makes it far more efficient than the former. But how good is backpropagation in terms of accuracy? We can run a gradient check on both methods and compare the results.
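
A minimal sketch of such a gradient check, assuming the TwoLayerNet sketch above: the numerical gradient perturbs each parameter by a small h and takes a central difference, and we compare it element-wise against the backprop gradient. The random batch is a stand-in for real MNIST data, and numerical_gradient is an illustrative helper, not necessarily the one in my repository.

def numerical_gradient(f, x, h=1e-4):
    # Central difference: perturb one element of x at a time, in place
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + h
        fxh1 = f(x)
        x[idx] = orig - h
        fxh2 = f(x)
        grad[idx] = (fxh1 - fxh2) / (2 * h)
        x[idx] = orig                # restore the original value
        it.iternext()
    return grad

# Dummy batch so the sketch runs stand-alone; the real check uses MNIST.
x_batch = np.random.rand(3, 784)
t_batch = np.eye(10)[np.random.choice(10, 3)]

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
grads_bp = network.gradient(x_batch, t_batch)

for key in ('W1', 'b1', 'W2', 'b2'):
    # The params arrays are shared with the layers, so perturbing them
    # in place changes the loss.
    grad_num = numerical_gradient(lambda _: network.loss(x_batch, t_batch),
                                  network.params[key])
    print(key, np.mean(np.abs(grads_bp[key] - grad_num)))  # should be tiny, ~1e-10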

 

[Compare multivariable method (numerical gradient) with backpropagation method]

The difference is close to negligible. (For some reason the training file and the two-layer net file are not outputting the expected results. I am going to create a branch and try to find the errors.)

 

I decided to delete the numerical gradient method because of its comparatively lower efficiency, and because my main branch needed to change anyway.

 

After creating a branch solely for getting the backpropagation code working, I successfully had the code running.

 

[Backpropagation Method. Accuracy seems Legit.]
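
For reference, a training loop of the kind behind that result might look like the following minimal sketch. The random arrays stand in for the MNIST training set, and the hyperparameters (learning rate 0.1, batch size 100) are illustrative, not necessarily what my files use.

# Stand-in data; in the real run this is the MNIST training set.
x_train = np.random.rand(600, 784)
t_train = np.eye(10)[np.random.choice(10, 600)]

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
learning_rate = 0.1
batch_size = 100

for i in range(1000):
    # Sample a random mini-batch
    mask = np.random.choice(x_train.shape[0], batch_size)
    x_batch, t_batch = x_train[mask], t_train[mask]

    # One backprop pass gives all the gradients; update with plain SGD
    grads = network.gradient(x_batch, t_batch)
    for key in ('W1', 'b1', 'W2', 'b2'):
        network.params[key] -= learning_rate * grads[key]

    if i % 100 == 0:
        pred = np.argmax(network.predict(x_train), axis=1)
        acc = np.mean(pred == np.argmax(t_train, axis=1))
        print(f"iter {i}: train accuracy {acc:.3f}")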

 
