Maximum Sum Submatrix
Problem
Given an m x n integer matrix, find the sub-rectangle (contiguous rows and columns) with the maximum possible sum and return that sum. Optionally return the rectangle coordinates (top, left) and (bottom, right).
Examples
Example 1
Input:
mat = [
[1, -2, 0, -15],
[2, 3, -1, 4],
[-1, 2, 5, -3]
]
Output: 11
Explanation: The maximum-sum rectangle is the 2x4 sub-rectangle covering rows 1..2 and columns 0..3 (0-indexed), i.e. the bottom two rows: (2 + 3 - 1 + 4) + (-1 + 2 + 5 - 3) = 8 + 3 = 11.
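As a quick sanity check, this short Python snippet (illustrative only, not part of the solutions below) sums that rectangle directly:

Python
mat = [
    [1, -2, 0, -15],
    [2, 3, -1, 4],
    [-1, 2, 5, -3],
]
# Rows 1..2 and columns 0..3, 0-indexed: the bottom two rows.
print(sum(mat[r][c] for r in range(1, 3) for c in range(4)))  # 11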
Solution
We show two standard approaches: a brute-force enumeration and an efficient reduction to 1D Kadane's algorithm per column pair.
Method 1 — Brute-force enumeration
Intuition
Try every rectangle and compute its sum. Straightforward but expensive.
Approach
- Enumerate top and bottom row indices.
- For each top..bottom range, enumerate left and right column indices.
- Compute the rectangle sum (use 2D prefix sums to compute each rectangle in O(1); see the identity below).
- Track the maximum sum and coordinates.
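With PS[i+1][j+1] defined as the sum of A[0..i][0..j], any rectangle sum follows by inclusion-exclusion:

sum(top..bottom, left..right) = PS[bottom+1][right+1] - PS[top][right+1] - PS[bottom+1][left] + PS[top][left]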
Code
C++
// Brute-force with 2D prefix sums
#include <algorithm>
#include <climits>
#include <vector>

class Solution {
public:
    long long maxSumSubmatrix(const std::vector<std::vector<int>>& A) {
        int rows = A.size();
        if (rows == 0) return 0;
        int cols = A[0].size();
        if (cols == 0) return 0;
        // Build prefix sums: PS[i+1][j+1] = sum of A[0..i][0..j].
        std::vector<std::vector<long long>> PS(rows + 1, std::vector<long long>(cols + 1, 0));
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < cols; ++j)
                PS[i+1][j+1] = PS[i+1][j] + PS[i][j+1] - PS[i][j] + A[i][j];
        long long best = LLONG_MIN;
        for (int top = 0; top < rows; ++top) {
            for (int bottom = top; bottom < rows; ++bottom) {
                for (int left = 0; left < cols; ++left) {
                    for (int right = left; right < cols; ++right) {
                        // O(1) rectangle sum by inclusion-exclusion.
                        long long sum = PS[bottom+1][right+1] - PS[top][right+1]
                                      - PS[bottom+1][left] + PS[top][left];
                        best = std::max(best, sum);
                    }
                }
            }
        }
        return best;
    }
};
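For readers following along in Python, here is a minimal sketch of the same brute force (an unofficial port of the C++ above; the function name max_sum_submatrix_bruteforce is ours, for illustration):

Python
from typing import List

def max_sum_submatrix_bruteforce(A: List[List[int]]) -> int:
    rows = len(A)
    if rows == 0:
        return 0
    cols = len(A[0])
    # PS[i+1][j+1] = sum of A[0..i][0..j]
    PS = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            PS[i+1][j+1] = PS[i+1][j] + PS[i][j+1] - PS[i][j] + A[i][j]
    best = float('-inf')
    for top in range(rows):
        for bottom in range(top, rows):
            for left in range(cols):
                for right in range(left, cols):
                    # O(1) rectangle sum by inclusion-exclusion.
                    s = (PS[bottom+1][right+1] - PS[top][right+1]
                         - PS[bottom+1][left] + PS[top][left])
                    best = max(best, s)
    return best

# max_sum_submatrix_bruteforce(mat) == 11 on the example matrix above.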
Complexity
- ⏰ Time complexity: O(rows^2 * cols^2). Four nested loops over the rectangle boundaries; with prefix sums each rectangle sum is O(1).
- 🧺 Space complexity: O(rows * cols) for the prefix-sum table.
Method 2 — Fix columns and run Kadane on rows (recommended)
Intuition
Fix two column boundaries left and right. Collapse the columns between them into a 1D array tmp[row] holding the row-wise sums between left and right. The maximum-sum rectangle that spans those columns is the maximum subarray of tmp (Kadane). Iterate all column pairs.
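For example, on the matrix above with left = 0 and right = 3, the collapsed array is tmp = [1-2+0-15, 2+3-1+4, -1+2+5-3] = [-16, 8, 3], and Kadane on tmp returns 8 + 3 = 11, the answer. A minimal standalone Kadane in Python (for illustration; the full listings below inline it):

Python
def kadane(a):
    # Maximum sum of a non-empty contiguous subarray.
    cur = best = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the current run or restart at x
        best = max(best, cur)
    return best

print(kadane([-16, 8, 3]))  # 11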
Approach
- For each left from 0..cols-1:
  - Initialize tmp as a zero array of length rows.
  - For each right from left..cols-1:
    - For each row r, do tmp[r] += A[r][right].
    - Run Kadane on tmp to get the maximum contiguous row-range sum and (optionally) its top/bottom indices.
    - Update the global maximum and store coordinates if needed (a coordinate-tracking sketch follows the code listings).
Edge cases: an empty matrix, a single row or column, and all-negative elements. Kadane handles the all-negative case by returning the maximum single element; for example, on a collapsed array [-5, -2, -9] it returns -2.
Code
C++
#include <algorithm>
#include <climits>
#include <vector>

class Solution {
public:
    long long maxSumSubmatrix(const std::vector<std::vector<int>>& A) {
        int rows = A.size();
        if (rows == 0) return 0;
        int cols = A[0].size();
        if (cols == 0) return 0;
        long long best = LLONG_MIN;
        for (int left = 0; left < cols; ++left) {
            std::vector<long long> tmp(rows, 0);
            for (int right = left; right < cols; ++right) {
                // Extend the column strip by one column.
                for (int r = 0; r < rows; ++r) tmp[r] += A[r][right];
                // Kadane on tmp: best contiguous row range for this column pair.
                long long cur = tmp[0], curBest = tmp[0];
                for (int i = 1; i < rows; ++i) {
                    cur = std::max(tmp[i], cur + tmp[i]);
                    curBest = std::max(curBest, cur);
                }
                best = std::max(best, curBest);
            }
        }
        return best;
    }
};
Go
package problems

import "math"

func MaxSumSubmatrix(A [][]int) int {
    rows := len(A)
    if rows == 0 {
        return 0
    }
    cols := len(A[0])
    if cols == 0 {
        return 0
    }
    best := math.MinInt // portable, unlike -1 << 63 on 32-bit int
    for left := 0; left < cols; left++ {
        tmp := make([]int, rows)
        for right := left; right < cols; right++ {
            // Extend the column strip by one column.
            for r := 0; r < rows; r++ {
                tmp[r] += A[r][right]
            }
            // Kadane on tmp.
            cur, curBest := tmp[0], tmp[0]
            for i := 1; i < rows; i++ {
                if cur+tmp[i] > tmp[i] {
                    cur += tmp[i]
                } else {
                    cur = tmp[i]
                }
                if cur > curBest {
                    curBest = cur
                }
            }
            if curBest > best {
                best = curBest
            }
        }
    }
    return best
}
Java
class Solution {
    public long maxSumSubmatrix(int[][] A) {
        int rows = A.length;
        if (rows == 0) return 0;
        int cols = A[0].length;
        if (cols == 0) return 0;
        long best = Long.MIN_VALUE;
        for (int left = 0; left < cols; left++) {
            long[] tmp = new long[rows];
            for (int right = left; right < cols; right++) {
                // Extend the column strip by one column.
                for (int r = 0; r < rows; r++) tmp[r] += A[r][right];
                // Kadane on tmp.
                long cur = tmp[0], curBest = tmp[0];
                for (int i = 1; i < rows; i++) {
                    cur = Math.max(tmp[i], cur + tmp[i]);
                    curBest = Math.max(curBest, cur);
                }
                best = Math.max(best, curBest);
            }
        }
        return best;
    }
}
Python
from typing import List

class Solution:
    def max_sum_submatrix(self, A: List[List[int]]) -> int:
        rows = len(A)
        if rows == 0:
            return 0
        cols = len(A[0])
        if cols == 0:
            return 0
        best = float('-inf')
        for left in range(cols):
            tmp = [0] * rows
            for right in range(left, cols):
                # Extend the column strip by one column.
                for r in range(rows):
                    tmp[r] += A[r][right]
                # Kadane on tmp.
                cur = tmp[0]
                cur_best = tmp[0]
                for x in tmp[1:]:
                    cur = max(x, cur + x)
                    cur_best = max(cur_best, cur)
                best = max(best, cur_best)
        return best
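The listings above return only the sum. The sketch below extends Method 2 to also report (top, left, bottom, right); the function name and tuple layout are our illustrative choices, not from the original, and it assumes a non-empty matrix:

Python
from typing import List, Tuple

def max_sum_submatrix_with_coords(A: List[List[int]]) -> Tuple[int, int, int, int, int]:
    # Illustrative variant of Method 2; assumes a non-empty matrix.
    rows, cols = len(A), len(A[0])
    best = float('-inf')
    coords = (0, 0, 0, 0)
    for left in range(cols):
        tmp = [0] * rows
        for right in range(left, cols):
            for r in range(rows):
                tmp[r] += A[r][right]
            # Kadane with start/end tracking over rows.
            cur, start = tmp[0], 0
            cur_best, top, bottom = tmp[0], 0, 0
            for i in range(1, rows):
                if tmp[i] > cur + tmp[i]:   # restart the run at row i
                    cur, start = tmp[i], i
                else:
                    cur += tmp[i]
                if cur > cur_best:
                    cur_best, top, bottom = cur, start, i
            if cur_best > best:
                best = cur_best
                coords = (top, left, bottom, right)
    return (best,) + coords

# On the example matrix: (11, 1, 0, 2, 3), i.e. sum 11 over rows 1..2, columns 0..3.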
Complexity
- ⏰ Time complexity: O(cols^2 * rows). We iterate over all column pairs (O(cols^2)) and run Kadane over the rows for each pair.
- 🧺 Space complexity: O(rows) for the temporary array tmp.