Tech Development (35)

PyTorch: CPU vs GPU Date: 2023.02.02 * The PyTorch series will mainly touch on the problems I faced. For actual code, check out my GitHub repository. [CPU vs GPU] The guidelines I follow recommend Google Colab as the default development environment. However, I am working in my local VS Code, which has several disadvantages. So far, the most significant difference is the accessibility of the GPU. Google Colab provides a.. 2023. 4. 23.
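The gap the post describes — Colab offering a GPU while a local setup may not — is usually bridged with a device check; a minimal sketch (not the post's actual code):

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors and models must live on the same device before they interact.
x = torch.randn(3, 3).to(device)
model = torch.nn.Linear(3, 1).to(device)
y = model(x)  # runs on whichever device was selected
```

Written this way, the same script runs unchanged on Colab's GPU and on a CPU-only local machine.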
PyTorch: Avoiding Dimension Problems through Basic Operations Date: 2023.01.18 * The PyTorch series will mainly touch on the problems I faced. For actual code, check out my GitHub repository. [A Few Techniques to Avoid Errors] A crucial part of deep learning is turning data into numerical representations. For example, an image can be represented as a 3-dimensional tensor of shape [224, 224, 3]. Since tensors represent many dimensions,.. 2023. 4. 23.
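The [224, 224, 3] image mentioned above is a typical source of dimension errors, since most PyTorch layers expect channels first and a batch dimension; a sketch of the basic reshaping operations:

```python
import torch

# A fake RGB image: height 224, width 224, 3 channels (channels last).
img = torch.randn(224, 224, 3)

# permute reorders dimensions; PyTorch conv layers expect channels first.
chw = img.permute(2, 0, 1)    # -> [3, 224, 224]

# unsqueeze adds a batch dimension; squeeze removes size-1 dimensions.
batch = chw.unsqueeze(0)      # -> [1, 3, 224, 224]
back = batch.squeeze(0)       # -> [3, 224, 224]
```

Checking `.shape` after each such call is the cheapest way to catch a mismatch before it surfaces as an opaque error deep inside a layer.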
PyTorch: The Fundamentals & Applications in CNN Date: 2023.01.11 * The PyTorch series will mainly touch on the problems I faced. For actual code, check out my GitHub repository. [Intro to PyTorch] In the previous posts, I built a CNN from scratch using Python. The process was fruitful because it touched on the specific functionalities of particular functions and variables. However, the code could have been shorter and more efficient. Thus, using Py.. 2023. 4. 23.
Python Backend - Study Notes 10 Date: 2022.09.30 [Unit Test] A unit test is a procedure for verifying that the system we built works correctly, and an important step. Let's briefly look at why testing matters, then implement a unit test right away. [Why Unit Test?] When testing a 'system' such as the API server currently under development, the most important thing is automation. Manual testing is slow and time-consuming. Through test automation, testing should have the following three qualities: Repetitive Frequent Accurate Tests can be divided into three broad kinds: UI Test / End-To-End Test Integration Test Unit Test A UI test tests the system that users will actually use and.. 2023. 1. 1.
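The automated testing the notes call for might look like this minimal `unittest` sketch — `add` is a hypothetical stand-in for a unit of the API server's logic, not the actual code:

```python
import unittest

def add(a, b):
    """Hypothetical stand-in for one unit of application logic."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test runs automatically and identically every time --
    # the qualities listed above: repetitive, frequent, accurate.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite explicitly so the script works anywhere.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
unittest.TextTestRunner().run(suite)
```

Because the suite runs in seconds, it can be re-run on every change, which is exactly what manual testing cannot offer.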
A Recipe for Applying Neural Networks to Novel Problems Date : 2022.12.25 *The original contents of this post are from Andrej Karpathy's blog. [A Recipe for Training Neural Networks] While building my own CNN, I had numerous encounters with the huge gap between "here is how a convolutional layer works" and "our network achieves state of the art results." Thankfully Andrej has some words of w.. 2022. 12. 29.
CNN: The Afterwork Date : 2022.12.25 *The original contents of this post are from Andrej Karpathy's blog. [Additional Thoughts After Completing Deep CNN] After completing a convolutional neural network, I couldn't help but read additional blogs and research related to computer vision. It is truly amazing how we can program the computer to understand, categorize, and produce images as humans do. Andrej .. 2022. 12. 29.
An Intro to Natural Language Processing Date : 2022.10.28 *The contents of this book are heavily based on Stanford University's CS224d course. [Welcome to NLP] Natural Language Processing (NLP) is basically making computers understand human language. The idea is quite simple, but the methods may not be.. Our language is constructed from letters and the definitions (meanings) that lie within w.. 2022. 12. 29.
Deep Convolutional Neural Network Date : 2022.10.27 *The contents of this book are heavily based on Stanford University's CS231n course. [The Deeper We Go] Everything up to this point sums up into building a deep CNN. The deep version will contain the following properties: 3x3 filters for conv layers; He weight initialization; ReLU as the activation function; Adam for weight optimization; imple.. 2022. 12. 25.
Image Visualization and Primary Networks (feat. LeNet, AlexNet) Date : 2022.10.25 *The contents of this book are heavily based on Stanford University's CS231n course. [What's Happening in the Conv Layer?] We've seen 'why' conv layers are so crucial. But 'what' are they exactly doing? More specifically, what is the conv layer looking for? The weight (filter) is the main parameter that guides the imag.. 2022. 12. 25.
Completing the CNN Date : 2022.10.23 *The contents of this book are heavily based on Stanford University's CS231n course. [Implementing Conv & Pooling Layers] Like every other function (layer), the conv and pooling layers require both forward and backward propagation. However, 4-dimensional matrix operations are no walk in the park. So, we're going to implement a technique called 'im2col' (ima.. 2022. 12. 25.
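The im2col trick the post introduces unrolls every receptive field into a row, so convolution becomes a single matrix multiplication; a minimal NumPy sketch (stride 1, no padding, single channel — a simplification of the general 4-D version):

```python
import numpy as np

def im2col(x, fh, fw):
    """Unroll each (fh, fw) window of a 2-D input into one row."""
    h, w = x.shape
    oh, ow = h - fh + 1, w - fw + 1  # output height and width
    cols = np.empty((oh * ow, fh * fw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + fh, j:j + fw].ravel()
    return cols

x = np.arange(16.0).reshape(4, 4)     # toy 4x4 input
filt = np.ones((2, 2))                # toy 2x2 filter
# Convolution collapses to one matmul over the unrolled windows.
out = (im2col(x, 2, 2) @ filt.ravel()).reshape(3, 3)
```

The payoff is that the inner loops over windows are replaced by one large, highly optimized matrix product — the same reason im2col makes the 4-D implementation tractable.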
Convolutional & Pooling Layers Date : 2022.10.22 *The contents of this book are heavily based on Stanford University's CS231n course. [Convolutional Neural Network] CNN is a widely used technique in Computer Vision. A CNN is similar to the network we've already built; we need to add the convolution and pooling layers to turn the multilayer network into a CNN. The structure looks something like the f.. 2022. 12. 25.
Batch Normalization, Overfitting, Dropout, and Optimization Date : 2022.10.16 *The contents of this book are heavily based on Stanford University's CS231n course. [Batch Normalization] In the previous post, we explored various methods for weight initialization. The purpose of weight initialization was to evenly spread the activation outputs among all nodes. Batch normalization is a method to spr.. 2022. 12. 16.
Weight Initialization, Xavier Weights, Dropout, and Setting Hyperparameters Date : 2022.10.14 *The contents of this book are heavily based on Stanford University's CS231n course. [Weight Initialization] We've explored gradient descent methods designed to optimize the weights. Now let's focus on the initialization: "What value shall we start with?" So far, we used weight decay in order to prevent overfitti.. 2022. 12. 16.
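The Xavier initialization named in the title scales random weights by 1/sqrt(n_in) so the spread of activations stays roughly constant from layer to layer; a sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 100  # e.g. a flattened 28x28 input into a hidden layer

# Xavier (Glorot) initialization: std = 1 / sqrt(n_in) keeps the
# variance of each layer's outputs roughly equal to its inputs'.
w = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
```

Starting with weights that are too large or too small instead pushes activations toward saturation or zero, which is the failure mode this answers.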
Stochastic Gradient Descent (SGD), Momentum, AdaGrad, and Adam Date : 2022.10.11 *The contents of this book are heavily based on Stanford University's CS231n course. Optimization is the process of finding the optimal variable values. We will explore different methods of optimization to initialize hyperparameters and input variables. The purpose of these "methods" is to increase both efficiency and accur.. 2022. 12. 16.
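The optimizers compared in the post differ only in how they turn a gradient into a step; plain SGD and momentum side by side on a toy quadratic, as a sketch:

```python
def sgd(w, grad, lr=0.1):
    # Vanilla SGD: step straight down the current gradient.
    return w - lr * grad

def momentum(w, grad, v, lr=0.1, beta=0.9):
    # Momentum: accumulate a velocity so past gradients keep pushing.
    v = beta * v - lr * grad
    return w + v, v

# Minimize f(w) = w^2 (gradient 2w), starting from w = 1.0.
w_sgd, w_mom, v = 1.0, 1.0, 0.0
for _ in range(100):
    w_sgd = sgd(w_sgd, 2 * w_sgd)
    w_mom, v = momentum(w_mom, 2 * w_mom, v)
# Both trajectories approach the minimum at w = 0.
```

AdaGrad and Adam follow the same pattern but additionally rescale the step per parameter using accumulated squared gradients.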
Neural Network with Backward Propagation Date : 2022.10.11 *The contents of this book are heavily based on Stanford University's CS231n course. In the previous post, we programmed forward and backward propagation for each layer (Affine, ReLU, Sigmoid, Softmax) as separate classes. Now we only need to import them and build the CNN. The benefit of coding each layer as a separate class is that we can bu.. 2022. 12. 16.