Introduction to Dynamic Programming and Memoization

Hi there 🖐️

My name is Milad, and welcome to my blog. In this post we are going to learn dynamic programming and memoization in a nutshell, so stay with me through this amazing and fun journey.

What dynamic programming and memoization are

Dynamic programming: It is a method for solving complex problems by breaking them down into simpler subproblems. It works by caching the results of subproblems so that they do not have to be re-computed when encountered again. It is often used for optimization problems, and for calculating optimal solutions to problems that can be broken into stages.

Memoization: It is a specific technique for caching intermediate results to speed up calculations. The name comes from the word "memo": a note of a value to remember. It works by maintaining a map from inputs to calculated outputs, so that when a value is needed again it is looked up instead of re-computed. It is especially useful for recursive functions that repeat calculations for the same inputs.

Why they matter

Dynamic programming and memoization are important techniques in computer science and optimization because they can dramatically improve the performance of algorithms. There are a few key reasons why they matter:

  • Improves efficiency - By storing intermediate results instead of recomputing them, dynamic programming avoids unnecessary repeated computation. This can improve an algorithm's running time dramatically, in some cases from exponential to linear.

  • Solves bigger problems - Many complex problems have an optimal substructure that lends itself to a dynamic programming approach. This allows for solving larger instances of problems.

  • Elegant problem solving - Dynamic programming breaks down problems into overlapping subproblems. This modular approach is often cleaner and easier to understand.

  • Optimization capabilities - Dynamic programming is well-suited for finding optimal solutions to problems by considering all possibilities and retaining optimal subsolutions.

  • Trades space for time - By caching results, dynamic programming spends a modest amount of extra memory to eliminate redundant computation, which is almost always a worthwhile trade.

When we can use them

Dynamic programming and memoization can be applied to problems that exhibit two key features:

  1. Optimal substructure - The optimal solution to the problem can be constructed from optimal solutions of its subproblems.

  2. Overlapping subproblems - The problem can be broken down into subproblems which are reused multiple times.

In short, if a problem can be divided into smaller overlapping subproblems, and combining their optimal solutions yields an optimal solution to the overall problem, it can be solved efficiently using dynamic programming.

Understanding the concept with a simple example

Let's say we have a function that takes a long time to run, like the one below (the console.log simply stands in for the expensive work):

function addTo100(n) {
    console.log("Long time");    // Stands in for a heavy computation
    return 100 + n;
}

Now let's run it:

console.log(addTo100(5));
/*
Output:

Long time
105
*/

As you can see, the function returns the result after taking a long time to run. Now, what if we run it 5 times in a row with the same input value:

console.log(addTo100(5));
console.log(addTo100(5));
console.log(addTo100(5));
console.log(addTo100(5));
console.log(addTo100(5));
/*
Output:

Long time
105
Long time
105
Long time
105
Long time
105
Long time
105
*/

As you might guess, our function is not efficient this way: for every call with the same input value, it re-runs a heavy operation that takes a long time. This is where the memoization technique comes into play. By storing a map of inputs to calculated outputs, whenever a value is needed again we simply look it up in the map instead of re-computing it.

// Cache object that maps each input to its computed output
const cache = {};

function memoizedAddTo100(n) {
    // If n is already in the cache, just pick up its value and return it
    if (n in cache) return cache[n];
    // Otherwise pay the long computation time, once
    console.log("Long time");
    // Then put the result in the cache
    cache[n] = 100 + n;
    // And finally return the output value
    return cache[n];
}

This time let's run the memoized function 5 times in a row with the same input value:

console.log(memoizedAddTo100(5));
console.log(memoizedAddTo100(5));
console.log(memoizedAddTo100(5));
console.log(memoizedAddTo100(5));
console.log(memoizedAddTo100(5));

/*
Output:

Long time
105
105
105
105
105
*/

Congratulations, we did it; we just optimized our function using the power of the memoization technique. As you can see in the output, the heavy operation ran only once, on the first call, instead of being re-computed for every subsequent call with the same input.
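We can even take this one step further and extract the pattern into a reusable helper. The sketch below is not part of the original example, just one common way to write such a wrapper in JavaScript for functions of a single argument:

// A generic memoize wrapper: takes a function and returns a cached version of it
function memoize(fn) {
    const cache = new Map();    // Private cache, one per wrapped function
    return function (arg) {
        if (cache.has(arg)) return cache.get(arg);    // Cache hit: skip the work
        const result = fn(arg);                       // Cache miss: compute once
        cache.set(arg, result);
        return result;
    };
}

const fastAddTo100 = memoize(addTo100);
console.log(fastAddTo100(5));    // "Long time", then 105
console.log(fastAddTo100(5));    // 105, straight from the cache

Because the cache lives in a closure, it is no longer a global variable, and every memoized function gets its own private cache.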

A famous example to discuss

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones. The sequence starts with 0 and 1, and looks like this:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987...

The sequence follows the recursive relationship:

F(n) = F(n-1) + F(n-2)

Where F(n) is the nth Fibonacci number. The first few Fibonacci numbers are:

F(0) = 0

F(1) = 1

F(2) = F(1) + F(0) = 1

F(3) = F(2) + F(1) = 2

F(4) = F(3) + F(2) = 3

F(5) = F(4) + F(3) = 5

and so on...

We are going to implement a function that computes the value of the nth Fibonacci number using recursion:

// This variable counts how many additions the function
// performs in order to calculate the nth Fibonacci number
let numCalculationsRegularFib = 0;

function fibonacci(n) {
    // Time complexity: O(2^n)
    if (n < 2) return n;            // Base cases: Fib(0) = 0, Fib(1) = 1
    numCalculationsRegularFib++;    // Count this addition
    // Recursively sum the two preceding Fibonacci numbers
    return fibonacci(n - 1) + fibonacci(n - 2);
}

Let's say we are going to find out the 30th Fibonacci number using the function above:

const fib30 = fibonacci(30);
console.log(`Fib(30): ${fib30}`);
console.log(`${numCalculationsRegularFib} operations have been done for calculating fib(30)`);

/*
Output:

Fib(30): 832040
1346268 operations have been done for calculating fib(30)
*/

Wait a minute, what?! 1346268 operations! Yeah, that's why its time complexity is O(2^n).

O(2^n) is one of the worst time complexities out there (on the classic Big-O complexity chart, only O(n!) grows faster), so we need to optimize the previous algorithm using dynamic programming and memoization techniques.
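If you'd like to see the blow-up for yourself, here is a quick sketch that reuses the fibonacci function and its counter from above for a few values of n:

for (const n of [10, 20, 30]) {
    numCalculationsRegularFib = 0;    // Reset the counter for each run
    fibonacci(n);
    console.log(`fib(${n}) needed ${numCalculationsRegularFib} additions`);
}

/*
Output:

fib(10) needed 88 additions
fib(20) needed 10945 additions
fib(30) needed 1346268 additions
*/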

To see why, picture the recursive call tree of this algorithm: it is full of repeated, identical subtrees. When computing fib(5), for example, the entire subtree for Fib(3) appears both under fib(4) on the left side and again on the right side of the tree. We don't have to re-calculate Fib(3) on the right side if its value was cached while working through the left side.

Let's implement the new Fibonacci algorithm, the memoized one:

// Cache object to store the results of previous calculations
const fibCache = {};
// This variable counts how many additions the function
// performs in order to calculate the nth Fibonacci number
let numCalculationsMemoizedFib = 0;

function memoizedFibonacci(n) {
    // Time complexity: O(n)
    // If we calculated fib(n) before, pick up its value from the cache
    if (n in fibCache) return fibCache[n];
    // Base cases: Fib(0) = 0, Fib(1) = 1
    if (n < 2) return n;
    numCalculationsMemoizedFib++;    // Count this addition
    // Make the recursive calls and store the result in the cache
    fibCache[n] = memoizedFibonacci(n - 1) + memoizedFibonacci(n - 2);
    return fibCache[n];
}

Now let's say we are going to find out the 30th Fibonacci number once again using the new function above:

const memoizedFib30 = memoizedFibonacci(30);
console.log(`Fib(30): ${memoizedFib30}`);
console.log(`${numCalculationsMemoizedFib} operations have been done for calculating fib(30)`);

/*
Output:

Fib(30): 832040
29 operations have been done for calculating fib(30)
*/

Congratulations, we did it; we just optimized our function using the power of the memoization technique and calculated Fib(30) with only 29 operations instead of 1346268. Yeah 🦾. Moreover, we improved the time complexity from O(2^n) to O(n), which is perfectly acceptable.
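By the way, what we built here is the top-down flavor of dynamic programming. The same problem can also be solved bottom-up with a technique called tabulation; here is a minimal sketch for comparison, filling a table from 0 up to n instead of recursing down from n:

function tabulatedFibonacci(n) {
    // Time complexity: O(n)
    if (n < 2) return n;
    const table = [0, 1];    // table[i] holds Fib(i)
    for (let i = 2; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 2];
    }
    return table[n];
}

console.log(tabulatedFibonacci(30));    // 832040

It does the same O(n) amount of work, but without recursion, so there is no call stack to overflow for large n.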

Conclusion

Dynamic programming and memoization are powerful and versatile techniques that can help optimize the performance of algorithms. By storing intermediate results and reusing prior computations, dynamic programming enables solving complex problems that would otherwise be infeasible.

I hope you enjoyed it, see you next time 👋
