💻

TSP Algorithm & Discrete Optimization

Solving the Traveling Salesman Problem with dynamic programming and memoization, and generalizing the approach to discrete optimization.

🕸️

1. Recalling DP: A TSP Example

4-City Graph

graph LR
    0((0)) ---|10| 1((1))
    0 ---|15| 2((2))
    0 ---|20| 3((3))
    1 ---|35| 2
    1 ---|25| 3
    2 ---|30| 3
    style 0 fill:#bfdbfe
    style 1 fill:#fecaca
    style 2 fill:#bbf7d0
    style 3 fill:#fef08a

Tracing a Trajectory

Consider the path: 0 β†’ 1 β†’ 3 β†’ 2 β†’ 0.

  • Cost = cost(0,1) + totalCost(1, {0,1})
  • Next step: cost(1,3) + totalCost(3, {0,1,3})
  • And so on...
In general, each step contributes: Cost = cost[curr][next] + totalCost(next, visited ∪ {next})
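
Using the edge weights from the graph above, this tour costs \( 10 + 25 + 30 + 15 = 80 \). A 4-city instance has only three distinct tours (0-1-2-3-0 costs 95, 0-1-3-2-0 costs 80, 0-2-1-3-0 costs 95), so 80 is the minimum, which is exactly the value the code below computes.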

Memoization

Just as in the Fibonacci example, we memoize results so that overlapping subproblems are computed only once.

For example, the partial tours 0-1-2-3 and 0-2-1-3 both end at city 3 with the visited set {0,1,2,3}. The optimal cost of completing the tour from that state (here, simply the cost of returning from 3 to 0) is identical in both cases, so it should be computed only once.
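
As a tiny illustration (a hypothetical snippet mirroring the memoized state key used in the code below), both orderings produce exactly the same key, so the remaining cost is computed once and reused:

# Both partial tours reach city 3 with visited set {0, 1, 2, 3}
state_via_0123 = (3, (0, 1, 2, 3))   # reached via 0 -> 1 -> 2 -> 3
state_via_0213 = (3, (0, 1, 2, 3))   # reached via 0 -> 2 -> 1 -> 3
assert state_via_0213 == state_via_0123   # identical memo key: one lookup, no recomputation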

🐍

2. DP for TSP in Python

Here is the complete Python implementation using recursion and memoization.

def totalCost(curr, visited, n, cost, memo):
    # Base case: if all cities are visited, return cost to start (0)
    if len(visited) == n:
        return cost[curr][0]

    # If already computed, return memoized value
    state = (curr, tuple(sorted(visited)))  # sorted tuple of visited cities -> hashable, order-independent key
    if state in memo:
        return memo[state]

    ans = float('inf')

    # Try visiting every unvisited city
    for next_city in range(n):
        if next_city not in visited:
            visited.add(next_city)
            ans = min(ans, cost[curr][next_city] + 
                      totalCost(next_city, visited, n, cost, memo))
            visited.remove(next_city)  # Backtrack

    memo[state] = ans
    return ans

def tsp(cost):
    n = len(cost)
    memo = {}
    visited = {0}  # Start with city 0
    return totalCost(0, visited, n, cost, memo)

# Example Cost Matrix
cost = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0]
]
print(f"Minimum Cost: {tsp(cost)}")

⚙️

3. General Discrete Optimization

Problem Formulation

Minimize \( G(u) \) subject to \( u \in U \)

  • Assume solution \( u \) has \( N \) components: \( u = (u_0, \dots, u_{N-1}) \).
  • View components as controls of \( N \) stages.
  • Define state \( x_k = (u_0, \dots, u_{k-1}) \).
  • Start state \( x_0 = s \).
  • Terminal cost is \( G(u) \); all other stage costs are 0.

This formulation allows us to apply Dynamic Programming to a wide range of discrete optimization problems by treating the construction of a solution as a sequential decision process.
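
A minimal sketch of this reduction, assuming each component \( u_k \) is drawn from a small finite set choices[k] and that \( G \) is available as a Python function (both names are hypothetical, used only for illustration):

def solve(G, choices):
    N = len(choices)

    def J(k, prefix):
        # J_k(x_k): optimal cost-to-go from the partial solution x_k = prefix
        if k == N:
            return G(prefix)          # terminal cost G(u); all stage costs are 0
        return min(J(k + 1, prefix + (u,)) for u in choices[k])

    return J(0, ())                   # x_0 = s corresponds to the empty prefix

# Toy example: minimize (u_0 + u_1 + u_2 + u_3 - 2)^2 over binary components
G = lambda u: (sum(u) - 2) ** 2
print(solve(G, [(0, 1)] * 4))         # prints 0 (any u with exactly two 1s)

Without further structure in \( G \), the state \( x_k \) is the entire prefix, so this DP is just an organized exhaustive search; it becomes efficient only when many prefixes can be aggregated into the same effective state, as the (current city, visited set) state does in TSP.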

🧠

4. Test Your Knowledge

1. What is the purpose of the `memo` dictionary in the TSP code?

2. In the general discrete optimization formulation, what represents the "state" \( x_k \)?

3. What is the base case for the recursive TSP function?
