3180. Maximum Total Reward Using Operations I
Description
You are given an integer array `rewardValues` of length `n`, representing the values of rewards.
Initially, your total reward `x` is 0, and all indices are unmarked. You are allowed to perform the following operation any number of times:
- Choose an unmarked index `i` from the range `[0, n - 1]`.
- If `rewardValues[i]` is greater than your current total reward `x`, then add `rewardValues[i]` to `x` (i.e., `x = x + rewardValues[i]`), and mark the index `i`.
Return an integer denoting the maximum total reward you can collect by performing the operations optimally.
Example 1:
Input: rewardValues = [1,1,3,3]
Output: 4
Explanation:
During the operations, we can choose to mark the indices 0 and 2 in order, and the total reward will be 4, which is the maximum.
Example 2:
Input: rewardValues = [1,6,4,3,2]
Output: 11
Explanation:
Mark the indices 0, 2, and 1 in order. The total reward will then be 11, which is the maximum.
Constraints:
1 <= rewardValues.length <= 2000
1 <= rewardValues[i] <= 2000
Solutions
Solution 1: Sorting + Memoization + Binary Search
We can sort the `rewardValues` array and then use memoization to compute the maximum total reward.
We define a function \(\textit{dfs}(x)\), representing the maximum total reward that can be obtained when the current total reward is \(x\). Thus, the answer is \(\textit{dfs}(0)\).
The execution process of the function \(\textit{dfs}(x)\) is as follows:
- Perform a binary search in the `rewardValues` array for the index \(i\) of the first element greater than \(x\).
- Iterate over the elements of the `rewardValues` array starting from index \(i\); for each such element \(v\), compute \(v + \textit{dfs}(x + v)\) and keep the maximum.
- Return the maximum as the result.
To avoid repeated calculations, we use a memoization array \(f\) to record the results that have already been computed.
The time complexity is \(O(n \times (\log n + M))\), and the space complexity is \(O(M)\), where \(n\) is the length of the `rewardValues` array and \(M\) is twice the maximum value in the `rewardValues` array.
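A short Python sketch of this approach could look like the following; the `Solution.maxTotalReward` signature mirrors the usual LeetCode template, and the variable names are illustrative:

```python
from bisect import bisect_right
from typing import List


class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        rewardValues.sort()
        m = rewardValues[-1] * 2  # the total reward always stays below twice the maximum value
        f = [-1] * m  # memoization array: f[x] caches dfs(x); -1 means "not computed yet"

        def dfs(x: int) -> int:
            if f[x] != -1:
                return f[x]
            # Index of the first reward value strictly greater than x.
            i = bisect_right(rewardValues, x)
            best = 0
            for v in rewardValues[i:]:
                best = max(best, v + dfs(x + v))
            f[x] = best
            return best

        return dfs(0)
```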
Solution 2: Dynamic Programming
We define \(f[i][j]\) as whether it is possible to obtain a total reward of \(j\) using the first \(i\) reward values. Initially, \(f[0][0] = \textit{True}\), and all other values are \(\textit{False}\).
We consider the \(i\)-th reward value \(v\). If we do not choose it, then \(f[i][j] = f[i - 1][j]\). If we choose it, then \(f[i][j] = f[i - 1][j - v]\), which requires \(0 \leq j - v < v\). Thus, the state transition equation is:

$$
f[i][j] = f[i - 1][j] \vee f[i - 1][j - v], \quad 0 \leq j - v < v
$$
The final answer is \(\max\{j \mid f[n][j] = \textit{True}\}\).
Since \(f[i][j]\) only depends on \(f[i - 1][j]\) and \(f[i - 1][j - v]\), we can optimize away the first dimension and use only a one-dimensional array for state transitions.
The time complexity is \(O(n \times M)\), and the space complexity is \(O(M)\), where \(n\) is the length of the `rewardValues` array and \(M\) is twice the maximum value in the `rewardValues` array.
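A possible one-dimensional Python implementation of this DP is sketched below; dropping duplicate values is an extra (optional) observation, since a value can never be taken twice once the total reward has reached it:

```python
from typing import List


class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        nums = sorted(set(rewardValues))  # process values in increasing order; duplicates are useless
        m = nums[-1] * 2
        f = [False] * m  # f[j]: can a total reward of exactly j be reached?
        f[0] = True
        for v in nums:
            # Taking v requires the previous total j - v to satisfy 0 <= j - v < v,
            # so only totals j in [v, 2v) can become newly reachable.
            for j in range(v, 2 * v):
                f[j] = f[j] or f[j - v]
        return max(j for j in range(m) if f[j])
```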
Solution 3: Dynamic Programming + Bit Manipulation
We can optimize Solution 2 by defining a binary number \(f\) to save the current state, where the \(i\)-th bit of \(f\) being \(1\) indicates that a total reward of \(i\) is reachable.
Looking at the state transition equation from Solution 2, \(f[j] = f[j] \vee f[j - v]\) with \(0 \leq j - v < v\), updating all states for a value \(v\) is equivalent to taking the lower \(v\) bits of \(f\), shifting them left by \(v\) bits, and then OR-ing the result into the original \(f\).
Thus, the answer is the position of the highest set bit in \(f\).
The time complexity is \(O(n \times M / w)\), and the space complexity is \(O(n + M / w)\), where \(n\) is the length of the `rewardValues` array, \(M\) is twice the maximum value in the `rewardValues` array, and \(w\) is the machine word size (32 or 64).
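In Python, the bitset \(f\) can be held in an arbitrary-precision integer; a sketch along the lines described above:

```python
from typing import List


class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        f = 1  # bit i of f is 1 iff a total reward of i is reachable; bit 0 marks x = 0
        for v in sorted(set(rewardValues)):
            # Keep only the totals below v (the lower v bits of f), add v to each
            # of them (shift left by v), and merge them back into f.
            f |= (f & ((1 << v) - 1)) << v
        return f.bit_length() - 1  # position of the highest set bit, i.e., the maximum reachable total
```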