
Solving the Target Sum problem with dynamic programming and more

Fabian Terh
Jul 22, 2019 · 7 min read

Previously, I wrote about solving the 0–1 Knapsack Problem using dynamic programming. Today, I want to discuss a similar problem: the Target Sum problem (link to LeetCode problem — read this before continuing the rest of this article!).

The Target Sum problem is similar to the 0–1 Knapsack Problem, but with one key difference: instead of deciding whether to include an item or not, we now must decide whether to add or subtract an item. This doesn’t seem like a significant difference, but it actually makes building our dynamic programming table a lot trickier.

I’ll first briefly cover several more inefficient recursive solutions, before turning to the dynamic programming solutions in greater detail.

Solution 1: Recursion (brute force)


The most straightforward (and least efficient) solution is to explore every possibility using a backtracking algorithm.

public class Solution {

    public int findTargetSumWays(int[] nums, int s) {
        return backtrack(nums, 0, 0, s);
    }

    private int backtrack(int[] nums, int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }
        return backtrack(nums, index + 1, sum + nums[index], s)
             + backtrack(nums, index + 1, sum - nums[index], s);
    }
}

377ms. Since each number can be either added or subtracted, this explores all 2^n sign assignments, so there’s definitely room for improvement!

Solution 2: Recursion with memoization


Notice how in the previous solution we end up re-computing the
solutions to sub-problems. We could optimize this by caching our
solutions using memoization.

Notice how the twist in the problem — forcing us to either add or subtract a number — has already complicated the solution. We are forced to either use an offset of +1000 on the array index (the question stipulates that the sum of elements in the array ≤ 1000) to account for negative values (since we can’t have negative indices in an array), or use a HashMap (Java)/dictionary (Python) instead.

I went with the former, because the overhead of using a HashMap significantly increased the runtime (you can try both and compare!).

import java.util.Arrays;

public class Solution {

    public int findTargetSumWays(int[] nums, int s) {
        int[][] memo = new int[nums.length + 1][2001];
        for (int[] row : memo) {
            Arrays.fill(row, Integer.MIN_VALUE);
        }

        return backtrack(memo, nums, 0, 0, s);
    }

    private int backtrack(int[][] memo, int[] nums, int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }

        // Offset the running sum by +1000 so it can be used as an array index
        if (memo[index][sum + 1000] != Integer.MIN_VALUE) {
            return memo[index][sum + 1000];
        }

        int ans = backtrack(memo, nums, index + 1, sum + nums[index], s)
                + backtrack(memo, nums, index + 1, sum - nums[index], s);
        memo[index][sum + 1000] = ans;
        return ans;
    }
}
68.82nd percentile. Not great, not terrible either.
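
For comparison, the HashMap-based alternative mentioned earlier might look roughly like the sketch below. This is my own illustration rather than the code I submitted: it keys the memo on the (index, sum) pair encoded as a string, which avoids the index offset entirely, but the HashMap overhead tends to make it slower in practice.

import java.util.HashMap;
import java.util.Map;

public class Solution {

    public int findTargetSumWays(int[] nums, int s) {
        return backtrack(new HashMap<>(), nums, 0, 0, s);
    }

    private int backtrack(Map<String, Integer> memo, int[] nums,
                          int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }

        // Key the memo on the (index, sum) pair, so no index offset is needed
        String key = index + "," + sum;
        if (memo.containsKey(key)) {
            return memo.get(key);
        }

        int ans = backtrack(memo, nums, index + 1, sum + nums[index], s)
                + backtrack(memo, nums, index + 1, sum - nums[index], s);
        memo.put(key, ans);
        return ans;
    }
}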
Solution 3: Dynamic programming
Finally, we turn to the dynamic programming solutions.

As with all dynamic programming solutions, we solve for the base case first, and reuse our previous solutions to smaller sub-problems to solve larger ones.

Our base case: when we have no values to add/subtract, we have a sum of 0, and therefore there is only one way to arrive at S = 0.

The key idea in this dynamic programming solution is to only branch out from reachable target sums.

At the first iteration (i.e. the outer for-loop), assume that we are on value x in our nums array. Therefore, we know intuitively that there is only one way to reach the target sum of +x and -x. (Explanation: you start with 0, and you either +x or -x.) Programmatically, we can express this logic by setting ways to reach x = ways to reach 0 and ways to reach -x = ways to reach 0.

In subsequent iterations, we only care about starting from target sums that are already reachable. Assume that the second value is y. We know intuitively that after 2 iterations, we can only arrive at 4 possible target sums: x + y, x - y, -x + y, and -x - y. Note: These 4 values may not be distinct! (For instance, if x = y, then x - y and -x + y both equal 0, so the ways to reach 0 accumulate.)


class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        int sum = 0;
        for (int num : nums) sum += num;
        if (S > sum || -S < -sum) return 0;

        int[] dp = new int[2 * sum + 1];
        dp[sum] = 1;

        for (int num : nums) {
            int[] next = new int[2 * sum + 1];
            for (int i = 0; i < dp.length; i++) {
                // Only branch out from reachable target sums
                if (dp[i] == 0) continue;

                next[i + num] += dp[i];
                next[i - num] += dp[i];
            }
            dp = next;
        }

        return dp[sum + S];
    }
}

This solution uses an array and skips unreachable target sums at each iteration (through the use of the continue statement).

You can also use a HashMap and only iterate through all existing keys, which is theoretically faster but slower in practice. It is theoretically more efficient because you don’t have to iterate through all 2 * sum + 1 indices in the array at each iteration of the nums array; however, it seems that the overhead of using the HashMap data structure far outweighs the cost of performing that iteration.
83.83rd percentile: we’re finally getting somewhere!
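
For the curious, a HashMap-based version of this dynamic programming solution might look roughly like the sketch below (again, my own illustration for comparison, not the submitted code). It keeps a map from each reachable running sum to the number of ways of reaching it, and only iterates over existing keys.

import java.util.HashMap;
import java.util.Map;

class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        // Map from each reachable running sum to the number of ways to reach it
        Map<Integer, Integer> dp = new HashMap<>();
        dp.put(0, 1);

        for (int num : nums) {
            Map<Integer, Integer> next = new HashMap<>();
            for (Map.Entry<Integer, Integer> entry : dp.entrySet()) {
                int reachable = entry.getKey();
                int ways = entry.getValue();
                // Branch out only from sums that are already reachable
                next.merge(reachable + num, ways, Integer::sum);
                next.merge(reachable - num, ways, Integer::sum);
            }
            dp = next;
        }

        return dp.getOrDefault(S, 0);
    }
}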

Solution 4: Dynamic programming (simplified and optimized)
We can further optimize our solution in several ways.

Firstly, we can trivially conclude that there are 0 ways to attain a target sum if the target sum exceeds the sum of all the values in the array.

Secondly, we can optimize the space complexity of our algorithm by using only a single array — see my previous post on space-optimizing the dynamic programming solution. This will most likely also improve the runtime of the algorithm, as we don’t need to allocate memory to create a new array at each iteration.

But more importantly, we can simplify this problem into a 0–1 Knapsack Problem. Credits to yuxiangmusic who posted this solution in LeetCode’s discussion forum — it’s absolutely genius! I’ll try to paraphrase his idea here:

The original problem statement is equivalent to the following: find the number of ways to split nums into a subset P (the values we add) and the remaining values N (the values we subtract), such that sum(P) - sum(N) = target.

Therefore…

sum(P) - sum(N) = target
sum(P) - sum(N) + sum(P) + sum(N) = target + sum(P) + sum(N)
2 * sum(P) = target + sum(nums)
sum(P) = (target + sum(nums)) / 2

… the original problem statement may be converted into the following subset sum problem: find the number of ways to gather a subset of nums such that its sum is equal to (target + sum(nums)) / 2.
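
To make that concrete: for nums = [1, 1, 1, 1, 1] and target = 3 (the example from the LeetCode problem), sum(nums) = 5, so sum(P) = (3 + 5) / 2 = 4. We are therefore counting the subsets that sum to 4, and there are exactly 5 of them (leave out any one of the five 1s), which matches the expected answer of 5.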

We can also add another trivial check to our algorithm: if target + sum(nums) is not divisible by 2, we can return 0 because there is no solution (this follows from the last line in our derivation above).

Therefore, the crux of this reformulated problem statement is (in pseudo-code):

for the set of values up to some value V in nums:
    for some target sum S:
        number of ways to attain S =
            number of ways to attain S without V
            + number of ways to attain S-V with V

number of ways to attain S without V is the value in the same column, but 1 row above, in our dynamic programming table.

number of ways to attain S-V with V is the value in the column V columns to the left, but 1 row above, in our dynamic programming table.
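
To make that table concrete, here is a rough sketch of the 2D version of this subset sum solution (my own illustration of the idea, before applying the single-array optimization):

class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        int sum = 0;
        for (int num : nums) sum += num;
        // No solution if the target is out of range or (S + sum) is odd
        if (S > sum || S < -sum || (S + sum) % 2 != 0) return 0;

        int target = (S + sum) / 2;

        // dp[row][col] = number of ways to pick a subset of the first `row`
        // values whose sum is exactly `col`
        int[][] dp = new int[nums.length + 1][target + 1];
        dp[0][0] = 1; // one way to reach 0 with no values: the empty subset

        for (int row = 1; row <= nums.length; row++) {
            int num = nums[row - 1];
            for (int col = 0; col <= target; col++) {
                dp[row][col] = dp[row - 1][col];            // ways to attain col without num
                if (col >= num) {
                    dp[row][col] += dp[row - 1][col - num]; // ways to attain col with num
                }
            }
        }

        return dp[nums.length][target];
    }
}

Collapsing the rows into a single array gives the space-optimized version: by iterating the columns from right to left, each cell still reads values from the previous row before they are overwritten.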

Therefore, in code, our final solution looks like this:

class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        int sum = 0;
        for (int num : nums) sum += num;
        if (S > sum || -S < -sum || (S + sum) % 2 == 1) return 0;

        int[] dp = new int[(S + sum) / 2 + 1];
        dp[0] = 1;

        for (int num : nums) {
            for (int i = dp.length - 1; i >= num; i--) {
                dp[i] += dp[i - num]; // Crux
            }
        }

        return dp[dp.length - 1];
    }
}
100th percentile. Perfection.
Honestly, this solution took me a while to wrap my head around. The reformulated problem statement so faintly resembled the original one that I had trouble convincing myself of the correctness of the solution (i.e. that solving the reformulated problem solves the original one). I found that the best way to truly understand the solution is to step through it line by line, and really understand what each step does.

. . .

This article has been a blast to write, and I think it’s amazing how we were able to bring our runtime down from 377ms to 1ms by optimizing our solution. Even simply caching our solutions (through memoization) reduced our runtime to 8ms, which is an extremely significant improvement.

I hope you had as much fun digesting these solutions as I did writing them!