NumPy Lab
Build deep intuition for NumPy internals, vectorization, and performance — the way FAANG expects ML engineers to think.
Goal: write fast, memory-efficient, interview-ready NumPy code and explain *why* it is efficient.
Setup
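A minimal environment check, assuming NumPy is installed (`pip install numpy`):

```python
# Verify that NumPy imports and report the installed version.
import numpy as np

print("NumPy version:", np.__version__)
```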
Section 1 — ndarray Fundamentals
Task 1.1: Array Creation & Shapes
Explain:
- What does `.shape` represent?
- Why does contiguous memory matter?
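A sketch of what the task is probing (the variable names are illustrative, not part of the lab):

```python
import numpy as np

a = np.arange(12)          # 1-D array, shape (12,)
b = a.reshape(3, 4)        # a view with shape (3, 4); no data is copied

# .shape is a tuple of sizes, one per axis; .ndim is its length
assert a.shape == (12,)
assert b.shape == (3, 4) and b.ndim == 2

# C-contiguous layout: all elements sit in one back-to-back memory block,
# which lets NumPy's C loops (and the CPU cache) stream through the data.
assert b.flags['C_CONTIGUOUS']
print(b.strides)  # bytes to step per axis, e.g. (32, 8) when dtype is int64
```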
Section 1.2 — dtype & Memory
Task 1.2: Compare memory usage
Interview Question:
Why does dtype selection matter in large ML pipelines?
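One way to make the comparison concrete (a sketch, not the lab's official solution): `nbytes` is simply element count times `itemsize`, so halving the dtype width halves the footprint of every large tensor in a pipeline.

```python
import numpy as np

n = 1_000_000
x64 = np.ones(n, dtype=np.float64)   # 8 bytes per element
x32 = np.ones(n, dtype=np.float32)   # 4 bytes per element

# nbytes = number of elements * itemsize
print(f"float64: {x64.nbytes / 2**20:.1f} MiB")
print(f"float32: {x32.nbytes / 2**20:.1f} MiB")

assert x64.nbytes == 2 * x32.nbytes
```

At ML scale (billions of parameters, large batches), this factor of two decides whether a workload fits in RAM or GPU memory at all.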
Section 2 — Indexing, Views & Copies
Task 2.1: Views vs Copies
Explain:
- Why did the original array change (or not)?
Section 2.2 — Boolean Masking
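A minimal demonstration of the view/copy distinction (variable names are my own):

```python
import numpy as np

a = np.arange(6)
v = a[1:4]          # basic slicing returns a *view*: shares a's buffer
c = a[1:4].copy()   # explicit copy: independent buffer

v[0] = 99           # writes through to a
assert a[1] == 99
c[0] = -1           # does not touch a
assert a[1] == 99

# A view remembers the array that owns its memory
assert v.base is a and c.base is None
```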
Task 2.2: Boolean masking
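A sketch of the masking behavior the task targets, including the key asymmetry: boolean *indexing* returns a copy, while boolean *assignment* mutates in place.

```python
import numpy as np

x = np.array([3, -1, 4, -1, 5])
mask = x < 0                    # boolean array, same shape as x

# Reading through a mask returns a new array (a copy)
neg = x[mask]
assert neg.tolist() == [-1, -1]

# Writing through a mask modifies the original in place
x[mask] = 0
assert x.tolist() == [3, 0, 4, 0, 5]
```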
Section 3 — Broadcasting
Task 3.1: Broadcasting Rules
Explain broadcasting step-by-step.
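The rules can be walked through on a small example (shapes chosen for illustration): align shapes from the trailing axis, and stretch any size-1 axis to match its partner.

```python
import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, 4)   # shape (1, 4)

# Right-to-left: 1 vs 4 -> stretch to 4; 3 vs 1 -> stretch to 3.
# Result shape: (3, 4). No data is actually copied -- NumPy uses
# zero-stride tricks internally.
grid = col + row
assert grid.shape == (3, 4)
assert grid[2, 3] == 2 + 3
```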
Section 3.2 — Broadcasting Trap
Task 3.2: Fix a broadcasting trap
What was wrong with the original shapes?
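The lab does not show the trap's exact shapes, but a classic instance (my choice of example) is subtracting a `(n,)` vector from a `(n, 1)` column: instead of an error, broadcasting silently produces an `(n, n)` matrix.

```python
import numpy as np

y_true = np.arange(5, dtype=float)   # shape (5,)
y_pred = y_true.reshape(-1, 1)       # shape (5, 1) -- the trap

# (5,) - (5, 1) broadcasts to (5, 5): silently wrong, no error raised
wrong = y_true - y_pred
assert wrong.shape == (5, 5)

# Fix: make the shapes agree before subtracting
right = y_true - y_pred.ravel()      # shape (5,)
assert right.shape == (5,)
assert np.all(right == 0)
```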
Section 4 — Vectorization vs Loops
Task 4.1: Loop vs Vectorized
Why is vectorization faster?
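A sketch of the comparison: the Python loop pays interpreter dispatch and boxing costs per element, while the vectorized call runs one compiled C loop over contiguous memory.

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

t0 = time.perf_counter()
loop_sum = 0.0
for v in x:                  # one interpreter round-trip per element
    loop_sum += v * v
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_sum = np.dot(x, x)       # single C-level (often SIMD) pass
t_vec = time.perf_counter() - t0

assert np.isclose(loop_sum, vec_sum)
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.5f}s")
```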
Task 4.2: Pairwise Distance (FAANG Classic)
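One standard vectorized approach (the function name `pairwise_dist` is my own) expands the squared norm, ||x - y||² = ||x||² - 2·x·y + ||y||², so the whole distance matrix comes from one matmul plus broadcasting, avoiding an explicit (n, m, d) intermediate.

```python
import numpy as np

def pairwise_dist(X, Y):
    """Euclidean distances between rows of X (n, d) and Y (m, d) -> (n, m)."""
    sq_x = np.sum(X**2, axis=1)[:, None]   # (n, 1)
    sq_y = np.sum(Y**2, axis=1)[None, :]   # (1, m)
    d2 = sq_x - 2.0 * (X @ Y.T) + sq_y     # broadcasts to (n, m)
    return np.sqrt(np.maximum(d2, 0.0))    # clamp tiny negatives from roundoff

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((5, 3))
D = pairwise_dist(X, Y)

# Spot-check one entry against the direct definition
assert np.isclose(D[0, 0], np.linalg.norm(X[0] - Y[0]))
```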
Section 5 — Numerical Stability
Task 5.1: Softmax
Softmax converts logits into probabilities:

    softmax(x)_i = exp(x_i) / sum_j exp(x_j)

Stable form (subtract the max logit in each row):

    softmax(x)_i = exp(x_i - max(x)) / sum_j exp(x_j - max(x))
Why does subtracting max work?
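A sketch of the stable implementation (function name mine): dividing numerator and denominator by exp(max) cancels exactly, so the shift changes nothing mathematically but caps the largest exponent at 0, preventing overflow.

```python
import numpy as np

def softmax(z):
    # Shift so the largest logit per row is 0: exp() can no longer
    # overflow, and the exp(-max) factor cancels top and bottom.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive exp() overflows here
p = softmax(logits)

assert np.all(np.isfinite(p))
assert np.isclose(p.sum(), 1.0)
```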
Section 6 — Linear Algebra
Task 6.1: Matrix Multiplication
Explain the difference between `dot`, `@`, and `matmul`.
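A quick illustration of where they agree and where they diverge: for 2-D inputs all three are the same matrix product, but for stacks of matrices `matmul` (and `@`) broadcast over leading batch axes while `dot` performs a different contraction.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# 2-D: identical results
assert np.array_equal(A @ B, np.matmul(A, B))
assert np.array_equal(A @ B, np.dot(A, B))

# ndim > 2: they diverge
S = np.ones((5, 2, 3))
T = np.ones((5, 3, 4))
# matmul / @ treat leading axes as a batch -> (5, 2, 4)
assert np.matmul(S, T).shape == (5, 2, 4)
# dot contracts S's last axis with T's second-to-last -> (5, 2, 5, 4)
assert np.dot(S, T).shape == (5, 2, 5, 4)
```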
Task 6.2: Solving Linear Systems
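A minimal sketch for this task: prefer `np.linalg.solve` over computing `inv(A) @ b`, since it factorizes `A` directly, which is both faster and numerically more stable.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solves A x = b via LU factorization, without forming A^-1
x = np.linalg.solve(A, b)

assert np.allclose(A @ x, b)
print(x)  # solution of 3x + y = 9, x + 2y = 8
```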
Section 7 — Performance & Memory
Task 7.1: In-Place Operations
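A sketch of the in-place vs out-of-place distinction, using the buffer address to show when memory is reused (the address check via `__array_interface__` is just for demonstration):

```python
import numpy as np

x = np.ones(1_000_000)
buf = x.__array_interface__['data'][0]   # address of x's data buffer

x += 1           # in-place: same buffer, no temporary array
assert x.__array_interface__['data'][0] == buf

x = x + 1        # out-of-place: allocates a fresh array, then rebinds x

# ufuncs with out= also write in place, avoiding the temporary
np.add(x, 1, out=x)
assert x[0] == 4.0
```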
Section 8 — Mini Case Study
Task 8.1: Mini case study (NumPy PCA)
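One way the case study might be sketched (function name and return signature are my own): center the data, take the SVD of the centered matrix, and read the principal directions off the right singular vectors. This is equivalent to eigendecomposing the covariance matrix but numerically more stable.

```python
import numpy as np

def pca(X, k):
    """Project X (n, d) onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # (k, d) directions
    explained_var = S[:k] ** 2 / (len(X) - 1)      # variance per component
    return Xc @ components.T, components, explained_var

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Z, comps, var = pca(X, 2)

assert Z.shape == (100, 2)
# Principal directions are orthonormal rows of Vt
assert np.allclose(comps @ comps.T, np.eye(2), atol=1e-8)
```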