Day 38 of #100DAYSOFCODE
Why Asymptotic Notations?
Learning and Optimizing Algorithms | Part 2
Intro
There is no fixed path to coming up with a good algorithm. It's not as if you follow a set of best practices or guidelines and a good algorithm falls out.
There are so many different ways to solve a particular problem, and so many more that may not have been discovered yet, which is why algorithm design is still an active field of research.
We can't learn every algorithmic problem we will ever face, and there is no silver-bullet solution to all of them. What we can do is learn how to come up with a good algorithm and what tools already exist to solve algorithmic problems.
One such very basic tool to measure the performance of an algorithm is what we call Asymptotic Notation. With knowledge of asymptotic notation, we can measure and compare different algorithms and decide which one outperforms the other.
Heads up…
We will be using the algorithm for calculating Fibonacci numbers from the previous story. In case the example in this story doesn't make sense, you can supplement your learning with the previous story.
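In case you haven't read it, here is a minimal sketch of the naive recursive Fibonacci, assuming that is roughly the version used in the previous story:

```python
# A minimal sketch of the naive recursive Fibonacci
# (assumed to be roughly the version from the previous story).
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Every call spawns two more calls, so the amount of work
    # roughly doubles each time n increases.
    return fib(n - 1) + fib(n - 2)
```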
What is the Need for a Standard Notation to Measure Performance?
How do we measure the runtime of an algorithm? You could run the algorithm on your computer and calculate the time difference from start to finish, but this approach depends on the computer you are running the algorithm on.
Different computers and different architectures will give different runtimes. Hence we need a standardized notation to measure the performance of our algorithms.
There are many factors that contribute to the final runtime of an algorithm, but most of them change it only by a constant. If your algorithm takes x time on one computer and you run it on a computer that is 100 times faster, it will take (1/100)·x time. What we need to capture is the pattern common to both computers: the x.
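As a quick sketch of that wall-clock approach (reusing the naive recursive Fibonacci above), notice that the number it prints is a property of one particular machine and its current load, not of the algorithm itself:

```python
import time

def fib(n):
    # Naive recursive Fibonacci, as sketched earlier.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Wall-clock timing: the result depends entirely on the machine
# (and whatever else it is doing right now), so it cannot serve as
# a machine-independent measure of the algorithm.
start = time.perf_counter()
fib(30)
elapsed = time.perf_counter() - start
print(f"fib(30) took {elapsed:.4f} seconds on this machine")
```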
Where did ‘Asymptotic’ Notations come from?
Measuring an algorithm through its raw runtime might not actually work: the same algorithm can take 1 second on one computer and 600 seconds on another, which doesn't help because you cannot generalize from it.
Instead, we measure how the runtime scales with increasing input size. The input size n determines how the runtime grows: it can scale with n, with n², or with some other function of n. This is quite promising, because the difference between n and n² is far more meaningful than the difference between 1 second and 600 seconds.
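One way to see this scaling without a stopwatch is to count operations instead of seconds. The helpers below are my own illustration rather than code from the previous story: the recursive call count grows roughly exponentially with n, while the iterative step count grows linearly.

```python
def fib_recursive_calls(n):
    """Return (fib(n), number of recursive calls made)."""
    if n <= 1:
        return n, 1
    a, calls_a = fib_recursive_calls(n - 1)
    b, calls_b = fib_recursive_calls(n - 2)
    return a + b, calls_a + calls_b + 1

def fib_iterative_steps(n):
    """Return (fib(n), number of loop iterations)."""
    a, b, steps = 0, 1, 0
    for _ in range(n):
        a, b = b, a + b
        steps += 1
    return a, steps

# The counts below are properties of the algorithms themselves,
# not of the machine they run on.
for n in (10, 20, 30):
    _, calls = fib_recursive_calls(n)
    _, steps = fib_iterative_steps(n)
    print(n, calls, steps)  # calls explode, steps grow linearly
```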
If we can determine the asymptotic growth rate of each algorithm, we can easily tell which one is better than the other. Plug in a million for n and see how drastically different these growth rates really are.
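For example, here is a rough back-of-the-envelope comparison at n = 1,000,000 (my own illustration of the point):

```python
import math

n = 1_000_000

# How drastically different the common growth rates are at n = 1,000,000.
print(f"log n   ~ {math.log2(n):,.0f}")        # about 20
print(f"n       = {n:,}")                      # 1,000,000
print(f"n log n ~ {n * math.log2(n):,.0f}")    # about 20,000,000
print(f"n^2     = {n**2:,}")                   # 1,000,000,000,000
```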
Understanding Asymptotic Notation
Big-O notation is the most widely used notation for measuring the runtime of algorithms. It has several advantages:
- Clarifies the growth rate — Because asymptotic notation generalizes our understanding of an algorithm's runtime, we don't have to worry about other machine-dependent factors when measuring its performance.
- Cleans up notation — We can simply write O(n²) instead of (4n² + 3n + 2). “The slowest step is the rate-determining step”: we can drop all the constant multiples and lower-order terms and keep one single notation that is easy to understand.
There are disadvantages here too:
- It ignores those constants, so the notation won't differentiate between two algorithms even when one runs 100 times faster than the other; both have the same Big-O.
- In real life, making your algorithm twice as fast is a big deal. Hence, once you have settled on an algorithm, you should look into the other factors that the asymptotic notation ignored.
- Sometimes the algorithm with the better Big-O looks more efficient only on paper; in practice, you will see other algorithms beat it on realistic input sizes, as the sketch below illustrates.
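A contrived sketch of that last point (the operation counts here are made up purely for illustration): an algorithm that performs 1000·n operations has the better Big-O, yet the n² algorithm does less work for every input smaller than n = 1000.

```python
def ops_linear(n):
    return 1000 * n  # O(n), but with a large constant factor

def ops_quadratic(n):
    return n * n     # O(n^2), with a small constant factor

for n in (10, 100, 1_000, 10_000):
    print(n, ops_linear(n), ops_quadratic(n))
# Below n = 1000 the "worse" O(n^2) count is smaller; the two cross
# at n = 1000, and only beyond that does the O(n) algorithm pull ahead.
```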
We will be exploring more on this in the future…
Thanks for reading as always you are awesome ❤