
PyTorch: CPU vs GPU

by JK from Korea 2023. 4. 23.

<PyTorch: CPU vs GPU>

 

Date: 2023.02.02                

 

* The PyTorch series will mainly touch on the problems I faced. For the actual code, check out my GitHub repository.

 

[CPU vs GPU]

The guidelines I follow recommend Google Colab as the default development environment. However, I am working in my local VS Code setup, which has several disadvantages. So far, the most significant difference is access to a GPU: Google Colab provides a Tesla K80 GPU. (I have no idea what this is supposed to mean, so let’s dive in.)

* The post is based on “CPU vs GPU for Machine Learning.”
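
As a first sanity check, this is roughly how you can ask PyTorch which device it actually sees; the printed name depends entirely on the runtime (Colab happened to hand out a Tesla K80 at the time):

```python
import torch

# Ask PyTorch whether it can see a CUDA-capable GPU at all.
if torch.cuda.is_available():
    # On Colab this prints whatever GPU the runtime assigned (e.g. a Tesla K80).
    print("GPU:", torch.cuda.get_device_name(0))
else:
    # My local VS Code setup lands here.
    print("No GPU visible to PyTorch; running on CPU")
```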

 

[CPU]

The CPU is a familiar concept among computer lovers. The Central Processing Unit does exactly what its name says: it processes general tasks sequentially and handles I/O operations. It is called the computer’s brain because it interprets and executes most software and hardware operations. Read the previous post for what it means to “interpret” a program. To multitask, a CPU splits work across its handful of cores, but each core still works through its tasks sequentially.
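
For what it’s worth, PyTorch does spread CPU tensor math across those cores. A minimal sketch of how to inspect (and, if needed, tune) its CPU thread pool:

```python
import torch

# PyTorch parallelizes CPU tensor math over an intra-op thread pool,
# typically sized to the number of physical cores.
print("CPU threads used by PyTorch:", torch.get_num_threads())

# The pool size can be tuned explicitly if the machine is oversubscribed.
torch.set_num_threads(4)
print("Now using:", torch.get_num_threads())
```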

 

[GPU]

At first, “Graphics Processing Unit” sounds like it is responsible for processing anything graphics-related, such as images and videos. GPUs were initially designed to render high-resolution 2D and 3D graphics, a heavy workload, but today they are also widely used for big data analytics and machine learning applications.

 

[Why GPU?]

The critical difference between a CPU and a GPU is that a GPU uses parallel processing, which divides a task into smaller subtasks distributed among the GPU’s many processor cores. CPUs are excellent general-purpose processors, but GPUs are better suited to specialized computations because their thousands of cores can run the same operation in parallel on many data points.

 

[How Does a GPU Work?]

While CPUs typically have fewer cores that run at high speeds, GPUs have many processing cores that operate at low speeds. When given a task, a GPU will divide it into thousands of smaller subtasks and process them concurrently instead of serially.
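
To make that difference tangible, here is a rough, deliberately unscientific timing sketch that runs the same matrix multiplication on the CPU and, if one is available, on the GPU (the 4096×4096 size is arbitrary, and the numbers will vary by hardware):

```python
import time
import torch

# The same large matrix multiplication, once on the CPU and once on the GPU.
x = torch.randn(4096, 4096)

start = time.time()
_ = x @ x
print(f"CPU matmul: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    x_gpu = x.to("cuda")
    torch.cuda.synchronize()      # wait for the host-to-device copy
    start = time.time()
    _ = x_gpu @ x_gpu
    torch.cuda.synchronize()      # wait for the kernel before stopping the clock
    print(f"GPU matmul: {time.time() - start:.3f} s")
```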

 

[Which one do we use?]

Machine learning is a form of artificial intelligence that uses algorithms and historical data to identify patterns and predict outcomes with little to no human intervention. Machine learning requires the input of large continuous data sets to improve the algorithm's accuracy.

 

But don’t get me wrong. CPUs are still powerful. They might not be as efficient as GPUs when it comes to intensive machine learning, but they are capable of complex computations as well. We want to know if CPUs are suitable for neural networks.

 

The answer is YES for SMALL models and NO for LARGE models.

 

While it’s possible to train smaller-scale neural networks on a CPU, the CPU becomes less efficient at processing large volumes of data, so training time grows quickly as more layers and parameters are added.

 

Neural networks form the basis of deep learning (a neural network with three or more layers counts as “deep”) and are designed to run in parallel, with each task running independently. This makes GPUs better suited to the enormous data sets and heavy matrix math used to train neural networks. Because they have thousands of cores, GPUs are optimized for training deep learning models and can process multiple parallel tasks up to three times faster than a CPU.
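
In PyTorch, “training on the GPU” mostly comes down to where the model’s parameters and the input batches live. A minimal sketch of the placement pattern (the toy network and its layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

# A toy network just to show the device-placement pattern;
# the layer sizes are made up for illustration.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
batch = torch.randn(64, 784)      # a hypothetical input batch

if torch.cuda.is_available():
    model = model.to("cuda")      # parameters now live in GPU memory
    batch = batch.to("cuda")      # inputs must sit on the same device

logits = model(batch)
print(logits.shape)               # torch.Size([64, 10])
```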

 

However, we aren’t competing for speed or efficiency in this project. The goal is to build and run a neural network on PyTorch. Thus, we will continue to build on a CPU environment. I will find an alternative if the run time or accuracy goes out of control.
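
So the plan is to write the code device-agnostically from the start, roughly like this, so that switching environments later costs nothing:

```python
import torch

# Default to the CPU, but pick up a GPU automatically if one ever shows up
# (for example, if I move this project back to Colab).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Training on:", device)

# From here on, models and tensors only need a `.to(device)` call,
# so switching environments later should not require rewriting anything.
```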
