Big-O Notation Explained: How to Make Your Code Faster and Scalable
Learn Big-O notation to understand algorithm efficiency, and optimize your code for speed and scalability with clear examples and tips.
When most people start learning to code, the main goal is pretty simple: just get it to work. I preach this because getting it working with your own logic is so important.
Make the program run, see the right result, and move on. Later you can optimize the code and make it more efficient.
That’s a great first step. But as your projects get bigger, especially when you start working with lots of data, just getting it to work isn’t enough anymore.
This is where our dev mind starts asking another question: “How well does it work?”
Every week you’ll be introduced to a new topic in Python. Think of this as a mini starter course: a structured roadmap that actually builds toward a solid foundation in Python. Join us today!
This is such a big topic, so I’ll try to keep it short and sweet while hitting all the main points, giving you a decent understanding of Big-O notation.
Last week I compared Computer Science vs. Programming, and my hope is that this article builds on that foundation for you all.
If coding is about solving problems, then Big-O is about figuring out how much those solutions cost in time and memory. Once you understand it, it changes how you look at every piece of code you write.
I’ll break it all down for you over the next few minutes. No heavy math because I’m not a math whiz, just clear reasoning, quick examples, and a few simple Python blocks.
Thank you guys for allowing me to do work that I find meaningful. This is my full-time job so I hope you will support my work by joining as a premium reader today.
If you’re already a premium reader, thank you from the bottom of my heart! You can leave feedback and recommend topics and projects at the bottom of all my articles.
I’m launching my Live Python Cohort that starts early next month! You can register here - Join the Live Python Cohort (Only 20 Seats)
👉 I genuinely hope you get value from these articles, if you do, please help me out, leave it a ❤️, and share it with others who would enjoy this. Thank you so much!
Thinking Like an Algorithm Designer
Think about what it means to think like someone who designs algorithms (just reading that back to myself sounds somewhat intimidating).
So try to picture yourself in a grocery store looking for a certain kind of cereal. If all the boxes are mixed up and in no order, you’d have to check them one by one until you find the right one.
That’s what’s called a linear search. The more boxes there are, the more work you have to do. It kind of stinks when you’re crunched for time.
Now, imagine the store arranged the cereals alphabetically. You could skip right to the section with the right letter and find what you need much faster.
That’s a binary search, a smarter way to narrow things down step by step.
In both cases, you’re doing the same task: finding a cereal. But one approach takes a lot longer than the other. The difference in how your effort grows as the problem gets bigger is what Big-O notation helps describe.
Big-O isn’t about exact milliseconds but rather about understanding how a program scales as the amount of work increases. It helps us think in terms of growth. It answers questions like:
If I double the amount of data, will my program take twice as long to run?
If I go from 10 items to 1,000, will things slow down a bit, or will they crawl to a stop?
Let me start things off here with the simplest kind of growth to show how this works.
O(1): Constant Time — The Ideal Scenario
When an algorithm runs in constant time, it means the amount of work it does never changes. Whether you’re working with 10 items or 10 million, it still does the same thing.
A simple way to picture this is flipping a light switch. It doesn’t matter if that switch controls one bulb or an entire room full of lights, flipping it takes the same effort.
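In code, that looks something like this (a minimal sketch; the name get_first is just a placeholder):

```python
def get_first(items):
    # Indexing position 0 is a single lookup,
    # no matter how long the list is.
    return items[0]

print(get_first([7, 14, 21]))  # 7
```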
This function grabs the first item in a list. It doesn’t matter if the list has 3 elements or 3 million, it just looks up the first spot and returns it. That’s one operation, which makes it constant time, or O(1).
Constant time is about as efficient as it gets. But in practice, most problems aren’t that simple or even that fast.
O(n): Linear Time — One Step at a Time
Now let’s flip back to that messed up grocery store. You start at one end of the aisle and check each cereal box until you find the one you want.
If there are 10 boxes, you might look 10 times. If there are 1,000, you might look 1,000 times. The amount of work grows right along with the number of boxes.
That’s what we call O(n) time complexity.
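Here’s a rough sketch of that search in Python (linear_search is my own name for it):

```python
def linear_search(items, target):
    # Check each item one by one until we hit the target.
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1  # target isn't in the list

cereals = ["bran flakes", "granola", "oat rings"]
print(linear_search(cereals, "oat rings"))  # 2
```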
If there are n items in the list, this function might need to check all n of them before finding the right one or not finding it at all. That’s why it’s called linear time, or O(n).
You’ll run into O(n) operations all the time when you loop through data, whether you’re reading a file, filtering a list, or checking each record. That’s not bad; it just scales with the size of your data. But as n gets bigger, that scaling adds up.
If it takes one second to handle 1,000 items, it’ll take about 10 seconds to handle 10,000. As your data grows, you start to feel a real difference in time.
Learn Python. Build Projects. Get Confident!
Most people get stuck before they even start… But that doesn’t have to be you!
The Python Masterclass is designed to take you from “I don’t know where to start” to “I can build real-world Python projects” — in less than 90 days.
👉 I’m giving you my exact system that’s been proven and tested by over 1,500 students over the last 4+ years!
My masterclass is designed so you see your first win in less than 7 days — you’ll build your first working Python scripts in week one and finish projects in your first month.
The sooner you start, the sooner you’ll have projects you can actually show to employers or clients.
Imagine where you’ll be 90 days from now if you start today.
👉 Ready to get started?
P.S. — Get 20% off your first month with the code save20now. Use it at checkout!
O(n²): Quadratic Time — The Nested Loop Trap
Quadratic time shows up when your code uses loops inside other loops: basically, when each item is compared with every other item.
Think of trying to check every student in a class against everyone else to see if there are any duplicates. I’ve done this countless times for my classes over the years.
As my class sizes grew, that process got slow very quickly.
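Here’s what that pattern might look like in code (a sketch; has_duplicates is an illustrative name):

```python
def has_duplicates(names):
    # Compare every name against every other name: roughly n * n checks.
    for i in range(len(names)):
        for j in range(len(names)):
            if i != j and names[i] == names[j]:
                return True
    return False

print(has_duplicates(["Ava", "Ben", "Ava"]))  # True
```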
If you have 10 items, that’s 100 comparisons. With 1,000 items, it jumps to a million. That’s why algorithms like bubble sort or insertion sort can be painfully slow when the data set gets large.
You’ll often run into quadratic time when your code deals with comparisons, combinations, or relationships between elements. Things like sorting, finding duplicates, or matching pairs. These algorithms can hurt performance if you’re not careful.
So when people say “nested loops are slow,” this is what they’re talking about. The work doesn’t just grow, it blows up and we don’t want that.
O(log n): Logarithmic Time — Divide and Conquer
Okay, back to the grocery store (let’s say it’s a Walmart), but now imagine it’s perfectly organized. You’re looking for a cereal that starts with “K.” Instead of checking every box, you go to the middle of the aisle, see what’s there, and decide whether to search left or right. You keep cutting the section you’re looking at in half until you find the cereal.
That’s the basic idea behind a binary search, which runs in O(log n) time.
With each step, you cut the number of items you have to check in half. Even if there are a million items, you only need about 20 checks to find what you’re looking for.
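In Python, a binary search might look like this (a minimal sketch, and it assumes the list is already sorted):

```python
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # check the middle
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # search the right half
        else:
            high = mid - 1               # search the left half
    return -1  # not found

cereals = ["Bran Flakes", "Cheerios", "Kix", "Raisin Bran"]
print(binary_search(cereals, "Kix"))  # 2
```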
This “divide and conquer” approach is the starting point for a lot of efficient algorithms, from sorting methods like Merge Sort (O(n log n)) to how databases quickly find the data you need.
O(n log n): The Efficient Sorting Hack
Sorting is one of the most common things we do in programming, and it shows clearly how the way you write an algorithm affects speed.
A simple way to sort (comparing every item with every other item) takes O(n²) time. But smarter algorithms, like Merge Sort or Quick Sort, work in O(n log n) time.
Think of it like this: you still have to look at every item at least once (that’s the “n” part), but you also split the data into smaller pieces and handle each piece efficiently (that’s the “log n” part).
When you use Python’s built-in sorting, like sorted() or .sort(), you’re usually getting that O(n log n) efficiency without having to do the heavy lifting yourself.
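For example (Python’s built-in sort is Timsort, which runs in O(n log n)):

```python
cereals = ["Kix", "Cheerios", "Raisin Bran", "Bran Flakes"]

print(sorted(cereals))  # returns a new sorted list
cereals.sort()          # sorts the same list in place
print(cereals)
```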
This kind of performance hits the sweet spot. It’s fast enough for everyday tasks and able to handle large amounts of data without breaking a sweat.
Space Complexity: Memory Use
So far, I’ve mostly hit on time: how long a program takes to run. But there’s another side, how much memory it uses, and that’s called space complexity.
It’s kind of like cooking in a way. You can move faster if you have more counter space, but your kitchen only has so much room. Sometimes you have to decide: do you want to save time or save space?
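For example, something like this (a sketch; doubled is just an illustrative name):

```python
def doubled(values):
    # Builds a brand-new list the same size as the input: O(n) extra space.
    return [v * 2 for v in values]

print(doubled([1, 2, 3]))  # [2, 4, 6]
```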
This function creates a new list, so it uses extra memory proportional to the size of the input — that’s O(n) space.
If you do it in place instead (again, just a sketch):
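```python
def double_in_place(values):
    # Overwrites each slot in the existing list: O(1) extra space.
    for i in range(len(values)):
        values[i] *= 2

nums = [1, 2, 3]
double_in_place(nums)
print(nums)  # [2, 4, 6]
```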
Now you’re not creating a new list; the original list is updated in place. That brings the space complexity down to O(1).
This kind of trade-off between time and memory comes up all the time in programming. Sometimes you store extra data to make things faster later, which uses more memory. Other times you recalculate things to save memory, which takes longer. Every algorithm falls somewhere on that spectrum.
Understanding the Pattern, Not the Time
Big-O notation isn’t about how many milliseconds your program takes to run. It’s about how the program behaves as the amount of data grows.
It’s less about exact numbers and more about patterns of growth.
O(1) stays flat — the time hardly changes no matter how big the input gets.
O(n) grows steadily — if you double the data, the time roughly doubles too.
O(n²) shoots up — doubling the data can make it take four times as long.
O(log n) grows very slowly — even huge increases in data barely affect the time.
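A quick script makes those growth rates concrete (just an illustration):

```python
import math

# Double the input and watch how each pattern responds.
for n in (1_000, 2_000):
    print(f"n={n}:  O(1)=1  O(log n)={math.log2(n):.0f}  "
          f"O(n)={n}  O(n^2)={n**2}")
```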
These patterns can help you make smarter decisions when optimizing code. Sometimes moving from O(n²) to O(n log n) can turn a program that would take hours into one that runs in seconds.
👉 My Python Learning Resources
Here are the best resources I have to offer to get you started with Python no matter your background! Check these out as they’re bound to maximize your growth in the field.
Zero to Knowing: My Python Masterclass Subscription gives you everything you need to go from zero to building real-world projects — with new lessons, challenges, and support every month. Over 1,500+ students have already used this exact system to learn faster, stay motivated, and actually finish what they start.
P.S. — Save 20% off your first month. Use code save20now at checkout!
Code with Josh: This is my YouTube channel where I post videos every week designed to help break things down and help you grow.
My Books: Maybe you’re looking to get a bit more advanced in Python. I’ve written 3 books to help with that, from Data Analytics to SQL, all the way to Machine Learning.
My Favorite Books on Amazon:
Python Crash Course - Here
Automate the Boring Stuff - Here
Data Structures and Algorithms in Python - Here
Python Pocket Reference - Here
Final Thoughts
Learning Big-O isn’t about becoming a mathematician. It’s about learning to think like someone who solves problems — someone who looks at code not just as instructions for a computer, but as systems that operate in the real world.
You start noticing patterns everywhere. A nested loop becomes a warning sign. A recursive function makes you think about how deep the call stack might go. A huge dataset makes you consider memory use and trade-offs.
You begin writing code with the future in mind, not just for what works today.
So the next time you finish a problem, don’t just ask, “Does it work?” Ask one more thing: “How well will it work as it grows?”
Hope you all have an amazing week, nerds ~ Josh (Chief Nerd Officer 🤓)
👉 If you’ve been enjoying these lessons, consider subscribing to the premium version. You’ll get full access to all my past and future articles, all the code examples, extra Python projects, and more.