Commit 622595c ("gpuvstpu")
1 parent 8a8f70e

10 files changed: 96 additions & 5 deletions

_pages/blog.md (1 addition, 1 deletion)

@@ -73,7 +73,7 @@ pagination:
 <div class="float-right">
 <i class="fa-solid fa-thumbtack fa-xs"></i>
 </div>
-<h3 class="card-title text-lowercase">{{ post.title }}</h3>
+<h3 class="card-title">{{ post.title }}</h3>
 <p class="card-text">{{ post.description }}</p>

 {% if post.external_source == blank %}

_posts/2025-12-11-cutile.md (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ date: 2025-12-11 16:00:00
 description:
 tags: gpu
 categories: tutorials
-featured: true
+featured: false
 giscus_comments: true
 ---

_posts/2025-12-11-welcome-to-my-blog.md (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ date: 2025-12-11 16:00:00
 description: My first blog post - welcome to my academic journey
 tags:
 categories: tutorials
-featured: true
+featured: false
 giscus_comments: true
 ---

_posts/2025-12-14-work_with_LLM.md (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ date: 2025-12-14 12:00:00
 description: how to use AI tools to improve your work/research
 tags: AI
 categories: introduction
-featured: true
+featured: false
 giscus_comments: true
 ---

_posts/2025-12-15-gpu-basics.md (1 addition, 1 deletion)

@@ -4,7 +4,7 @@ title: "The Beginner's Guide to Understanding NVIDIA GPUs"
 date: 2025-12-15 12:00:00
 description: how GPU works to speed up your code
 tags: gpu
-categories: tutorials
+categories: introduction
 featured: true
 giscus_comments: true
 ---

_posts/2025-12-16-TPU-vs-GPU.md (new file, 91 additions)

---
layout: post
title: "The Generalist and the Specialist: Understanding the NVIDIA GPU and Google TPU Architectures"
date: 2025-12-16 12:00:00
description: Compare the NVIDIA GPU and Google TPU
tags: gpu
categories: introduction
featured: true
giscus_comments: true
---
Today I learned that the **TPU** is a big part of what makes Google Gemini 3.0 so impressive, so I took a deeper look at the differences between the **NVIDIA GPU** and the **Google TPU**. While they are often discussed as direct competitors, they represent two fundamentally different approaches to the same problem: how to process the massive mathematical workloads required by modern neural networks.

Rather than a simple comparison of specifications, understanding the choice between these two requires looking at their underlying design philosophies: **versatility** versus **specialization**.
---

## 1. The NVIDIA GPU: The Versatile Parallel Engine

{% include figure.liquid loading="eager" path="assets/img/gpu_vs_tpu/gpu_arch.png" title="gpu_arch" class="img-fluid rounded z-depth-1" %}

The NVIDIA GPU (Graphics Processing Unit) is a **general-purpose parallel processor**. Its design legacy comes from the world of computer graphics, where millions of pixels must be updated simultaneously. That requirement evolved into a chip capable of handling a vast number of independent tasks at once.

### Design Logic: Flexibility First

NVIDIA's architecture is built around **Streaming Multiprocessors (SMs)**. Inside each SM are **CUDA cores** for general arithmetic and, in recent generations, specialized **Tensor Cores** optimized for AI math; a full chip packs thousands of CUDA cores across its many SMs.

* **Logic:** The GPU is designed to be programmable for almost any task that can be parallelized. Beyond AI, it handles scientific simulations, 3D rendering, and complex data analytics.
* **Workflow:** Because it is a general-purpose processor, the GPU uses a traditional instruction cycle (fetch-decode-execute). It pulls data from High Bandwidth Memory (HBM) into various levels of cache (L1, L2) before processing it.
* **Memory & Cache:** GPUs rely on a complex cache hierarchy to manage data that may be accessed unpredictably. This makes them exceptionally good at handling models with complex, dynamic architectures or sparse data.
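To make the "many independent tasks" idea concrete, here is a toy sketch in plain NumPy (an illustration only, not actual GPU code): one instruction is applied across a large array of independent elements, which is the same data-parallel pattern an SM's cores exploit under SIMT execution.

```python
import numpy as np

# One "instruction" applied to a million independent elements.
# On a GPU, each element would be handled by its own thread, with the
# SM's cores executing the same operation in lockstep (SIMT); NumPy's
# vectorized form mimics that pattern on the CPU.
pixels = np.linspace(0.0, 1.0, 1_000_000)
brightened = np.clip(pixels * 1.2 + 0.05, 0.0, 1.0)  # same op, every element

print(brightened.shape)  # (1000000,)
```

Because no element depends on any other, the work splits perfectly across however many cores are available, which is exactly why this workload shape suits a GPU.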
---

## 2. The Google TPU: The Matrix Multiplication Specialist

{% include figure.liquid loading="eager" path="assets/img/gpu_vs_tpu/tpu_arch.png" title="tpu_arch" class="img-fluid rounded z-depth-1" %}

The Google TPU (Tensor Processing Unit) is an **Application-Specific Integrated Circuit (ASIC)**. Unlike the GPU, it was not designed to do everything well. It was designed from the ground up for one specific mathematical operation: **matrix multiplication**, which makes up the vast majority of deep learning computation.

### Design Logic: The Systolic Array

The heart of the TPU is the **systolic array**. If a GPU is like a massive team of workers each handling their own small task, a TPU is like a high-speed factory assembly line.

* **Logic:** In a systolic array, data flows rhythmically through a grid of processing elements (PEs). Once a piece of data enters the grid, it is passed directly from one cell to the next neighbor.
* **The "Heartbeat":** This "pulse" of data, passing from neighbor to neighbor, means the chip does not have to constantly reach back to main memory. This significantly reduces the energy cost and latency of moving data.
* **Workflow:** By specializing in this one data flow, the TPU eliminates much of the overhead required for general-purpose execution (such as complex branch prediction), dedicating almost all of its silicon area to raw computation.

---
## 3. Deep Dive: The Core Difference

The distinction between these architectures becomes clearest when you look at how they handle data movement and software.

### The "Systolic" Advantage

In a GPU, data is typically fetched from memory, processed, and written back. In a TPU's systolic array, data flows in a wave:

1. **Weights** are loaded into the array and held stationary.
2. **Data** streams across them.
3. **Results** accumulate as they pass through.
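The three steps above can be sketched as a toy simulation in plain NumPy. This is a deliberately simplified, un-pipelined model (a real array skews its inputs so every PE works on every cycle), and `systolic_matmul` is just an illustrative name, but it captures the weight-stationary idea: weights sit still while activations stream past and partial sums accumulate.

```python
import numpy as np

def systolic_matmul(a, w):
    """Toy weight-stationary systolic array for C = A @ W.

    W (k x n) is held fixed in a k x n grid of processing elements.
    Each row of A streams through; on each "pulse", one row of PEs
    multiplies its stationary weights by the incoming activation and
    adds the product to the partial sums flowing down the columns.
    """
    m, k = a.shape
    k2, n = w.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n))
    for i in range(m):             # each input row streams through the array
        acc = np.zeros(n)          # partial sums passed from PE to PE
        for step in range(k):      # one pulse: PE row `step` fires
            # PE (step, j) holds weight w[step, j]; it multiplies the
            # streaming activation a[i, step] and accumulates.
            acc += a[i, step] * w[step]
        out[i] = acc               # results emerge at the bottom edge
    return out

a = np.arange(6.0).reshape(2, 3)
w = np.ones((3, 4))
print(np.allclose(systolic_matmul(a, w), a @ w))  # True
```

Note that memory is touched only at the edges of the grid: each weight is fetched once and then reused for every row of `a` that streams past it.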
This provides immense **arithmetic intensity**: many calculations are performed for every byte of data fetched from memory. This is why TPUs often achieve higher performance-per-watt on dense, large-scale training tasks.
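Arithmetic intensity is easy to quantify for a matrix multiply. The sketch below (assuming fp16 operands and counting only the minimum memory traffic of reading each input and writing the output once; `arithmetic_intensity` is an illustrative helper, not a standard API) shows that intensity grows with matrix size, since each fetched byte participates in more multiply-accumulates.

```python
def arithmetic_intensity(m, k, n, bytes_per_element=2):
    """FLOPs per byte for C = A @ B with A (m x k) and B (k x n).

    Counts one multiply + one add per inner-product term, against the
    minimum traffic of reading A and B and writing C exactly once
    (2 bytes/element, i.e. fp16, by default).
    """
    flops = 2 * m * k * n
    bytes_moved = bytes_per_element * (m * k + k * n + m * n)
    return flops / bytes_moved

# Larger tiles reuse each fetched byte in more calculations:
print(round(arithmetic_intensity(128, 128, 128), 1))     # 42.7
print(round(arithmetic_intensity(4096, 4096, 4096), 1))  # 1365.3
```

For square matrices this ratio reduces to n/3: compute grows with the cube of the matrix size while data grows only with the square, which is exactly the regime a systolic array is built to exploit.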
### Software Ecosystems: CUDA vs. XLA

If the hardware is the body, the software stack is the brain.

* **NVIDIA & CUDA:** NVIDIA provides **CUDA**, a mature, low-level platform that gives developers fine-grained control. It supports **dynamic computation**, meaning the GPU can decide what to do on the fly. This makes debugging easier and allows for "messy," experimental code. It is the default platform of AI research.
* **Google & XLA:** TPUs rely on **XLA (Accelerated Linear Algebra)**. This compiler acts like a master architect: it analyzes the *entire* model at once and "fuses" operations together into a single, static graph. This requires more discipline from the developer (e.g., fixed data shapes), but the result is a highly optimized execution plan that squeezes maximum performance from the hardware.
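To see what "fusing" means structurally, here is a NumPy sketch. To be clear about the assumptions: NumPy does not actually fuse anything (both versions still materialize temporaries), and the function names are made up for illustration; the point is only the shape of the transformation an XLA-style compiler performs, turning a chain of separate kernels into one pass with no intermediates written back to memory.

```python
import numpy as np

def unfused_layer(x, w, b):
    # Three conceptually separate "kernels", each writing an
    # intermediate result back to memory before the next one reads it.
    t1 = x @ w              # kernel 1: matmul
    t2 = t1 + b             # kernel 2: bias add
    return np.maximum(t2, 0.0)  # kernel 3: ReLU

def fused_layer(x, w, b):
    # What a fusing compiler aims to emit: matmul, bias add, and ReLU
    # as a single pass, with no intermediates round-tripped to memory.
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
w = rng.normal(size=(16, 4))
b = np.zeros(4)
print(np.allclose(unfused_layer(x, w, b), fused_layer(x, w, b)))  # True
```

The results are identical; what fusion changes is memory traffic, which is why XLA insists on seeing the whole static graph (fixed shapes included) before it runs anything.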
---
## 4. The Hybrid Workflow: Coupling Them Together

{% include figure.liquid loading="eager" path="assets/img/gpu_vs_tpu/hybrid.png" title="hybrid_workflow" class="img-fluid rounded z-depth-1" %}

In practice, modern AI development often couples these architectures to leverage the strengths of both. A typical high-performance workflow might look like this:

1. **The "Lab" (GPU):** Engineers use NVIDIA GPUs for data preprocessing, experimental coding, and initial prototyping. The flexibility of the GPU allows for complex data augmentation pipelines (processing images or text) that might not fit neatly into a matrix multiplication grid.
2. **The "Factory" (TPU):** Once the model architecture is finalized and the goal shifts to training on massive datasets, the workload moves to a TPU Pod. The code is compiled via XLA, and the heavy lifting is performed where the cost per FLOP is lowest.
3. **The "Field" (GPU/CPU):** After training, the model is often converted back to a format compatible with NVIDIA GPUs or even CPUs for deployment, ensuring it can run anywhere, from a cloud server to a user's laptop.

### Conclusion

The NVIDIA GPU is the **versatile workhorse**, offering the freedom to innovate and the compatibility to run anywhere. The Google TPU is the **specialized sprinter**, offering the streamlined power required to train the world's largest AI models. For the modern AI engineer, knowing when to use the flexibility of the GPU and when to deploy the efficiency of the TPU is key to building scalable, effective systems.
5.15 MB

assets/img/gpu_vs_tpu/gpu_arch.png

5.99 MB

assets/img/gpu_vs_tpu/hybrid.png

4.66 MB

assets/img/gpu_vs_tpu/tpu_arch.png

5.18 MB

0 commit comments