Traveling Salesman Problem
Problem
Given a set of N cities and the distance between every pair of cities, find the shortest possible route that visits every city exactly once and returns to the starting city.
Examples
Example 1

Input:
N = 4
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
Output: 80
Explanation: The minimum cost tour is 1 → 2 → 4 → 3 → 1 (using 1-based city labels; with 0-based indices this is 0 → 1 → 3 → 2 → 0) with cost 10 + 25 + 30 + 15 = 80.
Solution
There are two main types of algorithms to solve TSP: exact algorithms and approximation algorithms.
Hamiltonian Cycle vs TSP
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem asks if there exists a tour that visits every city exactly once. In TSP, we know a Hamiltonian tour exists (since the graph is complete), and the goal is to find the minimum weight Hamiltonian cycle.
TSP is a classic NP-hard problem, and no polynomial-time algorithm for it is known.
Method 1 – Brute Force (Naive)
Intuition
Try all possible orders of visiting the cities and pick the one with the minimum total cost.
Approach
- Fix the starting city.
- Generate all (n-1)! permutations of the remaining cities.
- For each permutation, calculate the total cost of the tour (including return to start).
- Return the minimum cost found.
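For the example above (0-indexed), fixing city 0 leaves 3! = 6 permutations of the remaining cities:
- 0 → 1 → 2 → 3 → 0 costs 10 + 35 + 30 + 20 = 95
- 0 → 1 → 3 → 2 → 0 costs 10 + 25 + 30 + 15 = 80
- 0 → 2 → 1 → 3 → 0 costs 15 + 35 + 25 + 20 = 95
- 0 → 2 → 3 → 1 → 0 costs 15 + 30 + 25 + 10 = 80
- 0 → 3 → 1 → 2 → 0 costs 20 + 25 + 35 + 15 = 95
- 0 → 3 → 2 → 1 → 0 costs 20 + 30 + 35 + 10 = 95
The minimum is 80, matching the expected output.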
Code
C++
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    int tsp(vector<vector<int>>& cost) {
        int n = cost.size(), ans = INT_MAX;
        vector<int> cities(n - 1);
        iota(cities.begin(), cities.end(), 1);   // cities 1..n-1; city 0 is the fixed start
        do {
            int cur = cost[0][cities[0]];
            for (int i = 0; i < n - 2; ++i) cur += cost[cities[i]][cities[i + 1]];
            cur += cost[cities.back()][0];       // return to the start
            ans = min(ans, cur);
        } while (next_permutation(cities.begin(), cities.end()));
        return ans;
    }
};
Java
import java.util.*;

class Solution {
    public int tsp(int[][] cost) {
        int n = cost.length, ans = Integer.MAX_VALUE;
        List<Integer> cities = new ArrayList<>();
        for (int i = 1; i < n; ++i) cities.add(i);   // city 0 is the fixed start
        List<List<Integer>> perms = permute(cities);
        for (List<Integer> perm : perms) {
            int cur = cost[0][perm.get(0)];
            for (int i = 0; i < n - 2; ++i) cur += cost[perm.get(i)][perm.get(i + 1)];
            cur += cost[perm.get(n - 2)][0];         // return to the start
            ans = Math.min(ans, cur);
        }
        return ans;
    }

    private List<List<Integer>> permute(List<Integer> nums) {
        List<List<Integer>> res = new ArrayList<>();
        permuteHelper(nums, 0, res);
        return res;
    }

    private void permuteHelper(List<Integer> nums, int start, List<List<Integer>> res) {
        if (start == nums.size()) { res.add(new ArrayList<>(nums)); return; }
        for (int i = start; i < nums.size(); ++i) {
            Collections.swap(nums, i, start);
            permuteHelper(nums, start + 1, res);
            Collections.swap(nums, i, start);
        }
    }
}
Python
class Solution:
    def tsp(self, cost: list[list[int]]) -> int:
        from itertools import permutations
        n = len(cost)
        ans = float('inf')
        # city 0 is the fixed start; try every order of the remaining cities
        for perm in permutations(range(1, n)):
            cur = cost[0][perm[0]]
            for i in range(n - 2):
                cur += cost[perm[i]][perm[i + 1]]
            cur += cost[perm[-1]][0]  # return to the start
            ans = min(ans, cur)
        return ans
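A quick check of the Python version on the example input (assuming the Solution class above is in scope):

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(Solution().tsp(dist))  # expected output for the example: 80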
Complexity
- ⏰ Time complexity: O(n!). With the start city fixed, (n-1)! permutations are generated and each tour is scored in O(n), giving O(n * (n-1)!) = O(n!).
- 🧺 Space complexity: O(n) for the current permutation in the C++ and Python versions; the Java version above materializes every permutation up front, which costs O(n!) extra space.
Method 2 – Dynamic Programming (Held–Karp)
Intuition
The idea is to compute the minimum-cost path for every subset of cities and every possible endpoint using dynamic programming. Fix city 0 as the start. For any subset S that includes city 0 and any city i in S (i != 0), let C(S, i) be the minimum cost to start at 0, visit every city in S exactly once, and end at i. If we can compute C(S, i) for all S and i, the answer is min_i C(All, i) + dist[i][0]. This is the classic Held–Karp algorithm.
This works because any optimal path to (S, i) must come from some j in S \ {i} by taking an optimal path to (S \ {i}, j) and adding edge (j, i). We reuse previously computed optimal subpaths to build larger ones.
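On the example matrix this plays out as follows: the base cases give C({0,1}, 1) = 10 and C({0,3}, 3) = 20; then C({0,1,3}, 3) = C({0,1}, 1) + dist[1][3] = 10 + 25 = 35 and C({0,1,3}, 1) = C({0,3}, 3) + dist[3][1] = 20 + 25 = 45; next C({0,1,2,3}, 2) = min(C({0,1,3}, 1) + dist[1][2], C({0,1,3}, 3) + dist[3][2]) = min(45 + 35, 35 + 30) = 65; and closing the tour through endpoint 2 gives 65 + dist[2][0] = 80, the optimal cost.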
Approach
- Number the cities 0..n-1 and fix 0 as the start/finish city.
- Use a DP table dp[mask][i], where mask is a bitmask representing a subset S (the mask always includes bit 0) and i is the last city in S (i != 0). dp[mask][i] stores C(S, i).
- Base case: for every city i != 0, dp[(1<<0) | (1<<i)][i] = dist[0][i]. This represents visiting only {0, i} and ending at i.
- Transition: processing masks in increasing order, for every mask and every i in mask (i != 0):
  dp[mask][i] = min over j in mask \ {i, 0} of (dp[mask ^ (1<<i)][j] + dist[j][i])
- Final answer: ans = min over i != 0 of (dp[(1<<n)-1][i] + dist[i][0]).
- Edge cases: n == 1 -> answer 0; handle unreachable/invalid distances by using a large sentinel value (INF) and skipping updates from unreachable states so they cannot overflow.
Implementation notes:
- Iterate masks in increasing order so subproblems are ready when needed.
- Use bit operations to enumerate members of mask efficiently.
- Use INF = large number (e.g., 1e9) to represent impossible states.
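As a tiny illustration of the bit tricks referred to in these notes (a standalone Python sketch with placeholder values, not part of the solution itself):

n = 4
mask = 0b1011                       # the subset {0, 1, 3} encoded as a bitmask
has_city_3 = bool(mask & (1 << 3))  # membership test -> True
prev = mask ^ (1 << 3)              # drop city 3 from the subset -> 0b0011, i.e. {0, 1}
members = [i for i in range(n) if mask & (1 << i)]  # enumerate members -> [0, 1, 3]
# Every proper subset of a mask is a smaller integer, so processing masks in
# increasing numeric order guarantees dp[prev][...] is filled before dp[mask][...].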
Code
C++
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    int tsp(vector<vector<int>>& dist) {
        int n = dist.size();
        if (n == 1) return 0;
        int ALL = (1 << n) - 1;
        const int INF = 1e9;
        vector<vector<int>> dp(1 << n, vector<int>(n, INF));
        // base cases
        for (int i = 1; i < n; ++i) dp[(1 << 0) | (1 << i)][i] = dist[0][i];
        for (int mask = 0; mask <= ALL; ++mask) {
            if (!(mask & 1)) continue; // must include start(0)
            for (int i = 1; i < n; ++i) {
                if (!(mask & (1 << i))) continue;
                int prevMask = mask ^ (1 << i);
                for (int j = 1; j < n; ++j) {
                    if (!(prevMask & (1 << j))) continue;
                    if (dp[prevMask][j] == INF) continue; // skip unreachable states
                    dp[mask][i] = min(dp[mask][i], dp[prevMask][j] + dist[j][i]);
                }
            }
        }
        int ans = INF;
        for (int i = 1; i < n; ++i) ans = min(ans, dp[ALL][i] + dist[i][0]);
        return ans;
    }
};
Java
import java.util.*;

class Solution {
    public int tsp(int[][] dist) {
        int n = dist.length;
        if (n == 1) return 0;
        int ALL = (1 << n) - 1;
        int INF = 1_000_000_000;
        int[][] dp = new int[1 << n][n];
        for (int[] row : dp) Arrays.fill(row, INF);
        for (int i = 1; i < n; ++i) dp[(1 << 0) | (1 << i)][i] = dist[0][i];
        for (int mask = 0; mask <= ALL; ++mask) {
            if ((mask & 1) == 0) continue;               // must include start(0)
            for (int i = 1; i < n; ++i) {
                if ((mask & (1 << i)) == 0) continue;
                int prevMask = mask ^ (1 << i);
                for (int j = 1; j < n; ++j) {
                    if ((prevMask & (1 << j)) == 0) continue;
                    if (dp[prevMask][j] == INF) continue; // skip unreachable states
                    dp[mask][i] = Math.min(dp[mask][i], dp[prevMask][j] + dist[j][i]);
                }
            }
        }
        int ans = INF;
        for (int i = 1; i < n; ++i) ans = Math.min(ans, dp[ALL][i] + dist[i][0]);
        return ans;
    }
}
Python
class Solution:
    def tsp(self, dist: list[list[int]]) -> int:
        n = len(dist)
        if n == 1:
            return 0
        ALL = (1 << n) - 1
        INF = 10**9
        # dp[mask][i] = min cost to start at 0, visit every city in mask once, and end at i
        dp = [[INF] * n for _ in range(1 << n)]
        for i in range(1, n):
            dp[(1 << 0) | (1 << i)][i] = dist[0][i]
        for mask in range(1 << n):
            if (mask & 1) == 0:        # every valid subset contains the start city 0
                continue
            for i in range(1, n):
                if (mask & (1 << i)) == 0:
                    continue
                prev = mask ^ (1 << i)
                for j in range(1, n):
                    if (prev & (1 << j)) == 0:
                        continue
                    if dp[prev][j] == INF:   # skip unreachable states
                        continue
                    dp[mask][i] = min(dp[mask][i], dp[prev][j] + dist[j][i])
        ans = min(dp[ALL][i] + dist[i][0] for i in range(1, n))
        return ans
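The method above returns only the optimal cost. If the tour itself is needed, one common extension is to walk the filled DP table backwards from the best final endpoint. The helper below is a minimal sketch of that idea, assuming the dp table and dist matrix from the Python method above are made available to it (e.g., returned alongside the cost); the name reconstruct_tour and its signature are ours, not part of the original solution.

def reconstruct_tour(dist: list[list[int]], dp: list[list[int]]) -> list[int]:
    # Recover one optimal tour by walking the Held-Karp table backwards.
    n = len(dist)
    ALL = (1 << n) - 1
    # best final endpoint before returning to city 0
    last = min(range(1, n), key=lambda i: dp[ALL][i] + dist[i][0])
    mask, tour = ALL, [0]
    while last != 0:
        tour.append(last)
        prev_mask = mask ^ (1 << last)
        if prev_mask == 1:                  # only the start city remains
            last = 0
        else:
            # any predecessor j that achieves dp[mask][last] works
            last = min((j for j in range(1, n) if prev_mask & (1 << j)),
                       key=lambda j: dp[prev_mask][j] + dist[j][last])
        mask = prev_mask
    tour.append(0)                          # close the cycle at the start
    return tour[::-1]                       # on the example this yields an optimal tour of cost 80

Summing the distances along the recovered tour and comparing against the returned minimum is a cheap sanity check.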
Complexity
- ⏰ Time complexity: O(n^2 * 2^n), because there are 2^n masks and, for each mask, we consider up to n endpoints i and up to n predecessors j.
- 🧺 Space complexity: O(n * 2^n), for the DP table dp[mask][i] storing 2^n * n states.
Method 3 – MST-based 2-Approximation
Intuition
When the distance function satisfies the triangle inequality, a cheap way to get a near-optimal tour is:
- Build a minimum spanning tree (MST) of the complete graph.
- Walk the tree in a preorder (DFS) from the root and record the first time each vertex is visited.
- Shortcut repeated visits using triangle inequality (direct edges are no more expensive than the path through visited nodes).
This simple transformation of an MST gives a tour no more than twice the cost of an optimal tour: the MST weighs no more than the optimal tour (removing one edge from the optimal tour leaves a spanning tree), the full walk around the MST traverses each tree edge twice, and shortcutting never increases the cost under the triangle inequality.
Approach
- Require the distance matrix dist to satisfy the triangle inequality. If distances are missing, treat them as INF and abort if the MST is not spanning.
- Compute an MST of the n vertices (Prim's or Kruskal's), using vertex 0 as the root.
- Perform a preorder traversal (DFS) of the MST and record the visitation order order (each vertex appears once, when it is first visited).
- Form the tour by visiting the vertices in order and then returning to 0. Compute the total cost by summing dist[order[i]][order[i+1]] and finally adding dist[order.back()][0].
- Return the tour cost and, optionally, the tour itself. The triangle inequality guarantees that replacing repeated visits in the full Euler walk with direct edges does not increase the cost.
Edge cases and checks:
- If n == 1, return cost 0 and tour [0].
- If the MST is not spanning (some vertex is unreachable), return INF / indicate impossibility.
- Use long long / a large sentinel when summing weights to avoid overflow.
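As a concrete illustration on the example matrix: Prim's algorithm rooted at vertex 0 selects the MST edges (0,1), (0,2), and (0,3) (total weight 45); with children visited in index order, as in the implementations below, the preorder walk is 0, 1, 2, 3, and the shortcut tour 0 → 1 → 2 → 3 → 0 costs 10 + 35 + 30 + 20 = 95. Note that this particular matrix does not satisfy the triangle inequality (dist[1][2] = 35 > dist[1][0] + dist[0][2] = 25), so the 2-approximation bound is not formally guaranteed for it, although the result 95 is still within twice the optimum of 80.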
Code
C++
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    // dist: n x n matrix (complete graph). Returns tour cost (approx) or large INF if impossible.
    long long tsp_approx(vector<vector<int>>& dist) {
        int n = dist.size();
        if (n == 1) return 0;
        const long long INF = 9e18;
        // Prim's algorithm (dense version) to build MST parent array
        vector<int> parent(n, -1);
        vector<long long> key(n, INF);
        vector<char> inMST(n, 0);
        key[0] = 0; parent[0] = -1;
        for (int count = 0; count < n; ++count) {
            int u = -1;
            long long best = INF;
            for (int v = 0; v < n; ++v)
                if (!inMST[v] && key[v] < best) { best = key[v]; u = v; }
            if (u == -1) return INF; // disconnected
            inMST[u] = 1;
            for (int v = 0; v < n; ++v)
                if (!inMST[v]) {
                    long long w = dist[u][v];
                    if (w < key[v]) { key[v] = w; parent[v] = u; }
                }
        }
        // build adjacency list for MST
        vector<vector<int>> adj(n);
        for (int v = 1; v < n; ++v) {
            int p = parent[v];
            if (p >= 0) { adj[p].push_back(v); adj[v].push_back(p); }
        }
        // preorder traversal (stack iterative)
        vector<int> order; order.reserve(n);
        vector<int> st; st.push_back(0);
        vector<char> vis(n, 0);
        while (!st.empty()) {
            int u = st.back(); st.pop_back();
            if (vis[u]) continue;
            vis[u] = 1; order.push_back(u);
            // push neighbors in reverse to get a deterministic preorder
            for (int i = (int)adj[u].size() - 1; i >= 0; --i) {
                int v = adj[u][i];
                if (!vis[v]) st.push_back(v);
            }
        }
        if ((int)order.size() != n) return INF; // not spanning
        long long ans = 0;
        for (int i = 0; i < n - 1; ++i) ans += dist[order[i]][order[i + 1]];
        ans += dist[order.back()][0];
        return ans;
    }
};
Java
import java.util.*;

class Solution {
    public long tsp_approx(int[][] dist) {
        int n = dist.length;
        if (n == 1) return 0;
        final long INF = Long.MAX_VALUE / 4;
        long[] key = new long[n];
        Arrays.fill(key, INF);
        int[] parent = new int[n];
        Arrays.fill(parent, -1);
        boolean[] inMST = new boolean[n];
        key[0] = 0;
        // Prim's algorithm (dense) to build the MST parent array
        for (int k = 0; k < n; ++k) {
            int u = -1;
            long best = INF;
            for (int i = 0; i < n; ++i)
                if (!inMST[i] && key[i] < best) { best = key[i]; u = i; }
            if (u == -1) return INF;            // disconnected
            inMST[u] = true;
            for (int v = 0; v < n; ++v)
                if (!inMST[v]) {
                    long w = dist[u][v];
                    if (w < key[v]) { key[v] = w; parent[v] = u; }
                }
        }
        // build adjacency lists for the MST
        List<Integer>[] adj = new List[n];
        for (int i = 0; i < n; ++i) adj[i] = new ArrayList<>();
        for (int v = 1; v < n; ++v) {
            int p = parent[v];
            if (p >= 0) { adj[p].add(v); adj[v].add(p); }
        }
        // iterative preorder
        List<Integer> order = new ArrayList<>();
        Deque<Integer> st = new ArrayDeque<>();
        st.push(0);
        boolean[] vis = new boolean[n];
        while (!st.isEmpty()) {
            int u = st.pop();
            if (vis[u]) continue;
            vis[u] = true;
            order.add(u);
            List<Integer> nbrs = adj[u];
            for (int i = nbrs.size() - 1; i >= 0; --i) {
                int v = nbrs.get(i);
                if (!vis[v]) st.push(v);
            }
        }
        if (order.size() != n) return INF;      // not spanning
        long ans = 0;
        for (int i = 0; i < n - 1; ++i) ans += dist[order.get(i)][order.get(i + 1)];
        ans += dist[order.get(n - 1)][0];
        return ans;
    }
}
Python
class Solution:
    def tsp_approx(self, dist: list[list[int]]) -> int:
        n = len(dist)
        if n == 1:
            return 0
        INF = 10**18
        # Prim's algorithm (dense) to build the MST parent array
        key = [INF] * n
        parent = [-1] * n
        inMST = [False] * n
        key[0] = 0
        for _ in range(n):
            u = -1
            best = INF
            for i in range(n):
                if not inMST[i] and key[i] < best:
                    best = key[i]
                    u = i
            if u == -1:
                return INF              # disconnected
            inMST[u] = True
            for v in range(n):
                if not inMST[v] and dist[u][v] < key[v]:
                    key[v] = dist[u][v]
                    parent[v] = u
        # adjacency lists of the MST
        adj = [[] for _ in range(n)]
        for v in range(1, n):
            p = parent[v]
            if p >= 0:
                adj[p].append(v)
                adj[v].append(p)
        # iterative preorder traversal from the root 0
        order = []
        st = [0]
        vis = [False] * n
        while st:
            u = st.pop()
            if vis[u]:
                continue
            vis[u] = True
            order.append(u)
            for v in reversed(adj[u]):
                if not vis[v]:
                    st.append(v)
        if len(order) != n:
            return INF                  # not spanning
        # shortcut tour: visit vertices in preorder, then return to 0
        ans = 0
        for i in range(n - 1):
            ans += dist[order[i]][order[i + 1]]
        ans += dist[order[-1]][0]
        return ans
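Running the Python version on the example input (assuming the Solution class above is in scope) reproduces the tour cost discussed after the approach; recall that this particular matrix violates the triangle inequality, so the 2x bound is not formally guaranteed here:

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(Solution().tsp_approx(dist))  # prints 95 here, versus the optimal 80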
Complexity
- ⏰ Time complexity: O(n^2), because the dense Prim's implementation takes O(n^2) on an n x n distance matrix and the preorder DFS over the MST is O(n); the total is dominated by O(n^2).
- 🧺 Space complexity: O(n^2) if the input distance matrix is counted; otherwise the auxiliary space is O(n) for the MST structures and traversal (keys, parents, adjacency lists, stack).
Notes
- This method requires the triangle inequality to guarantee the 2-approximation factor. If triangle inequality does not hold, the shortcutting step can increase cost.
- Christofides' algorithm improves the guarantee to 1.5 for metric TSP (triangle inequality) but is more complex to implement (requires minimum-weight perfect matching on odd-degree vertices). See: http://en.wikipedia.org/wiki/Christofides_algorithm