Consider the knapsack problem:

Maximize

Subject to

for all *i*

If we ignore the constraint, we are left with a very simple problem:

Maximize

Subject to

for all *i*

Clearly the optimal solution to this problem is to set every variable to one, which gives an objective value of 26. This is obviously a relaxation of the original knapsack problem, so no feasible knapsack solution can have value more than 26 (not a very insightful bound). Now look at the following problem for some fixed nonnegative value *λ* (called the Lagrange multiplier):

Maximize

Subject to

for all *i*

I claim that this problem is also a relaxation of the knapsack problem for any nonnegative value of *λ*. For any feasible knapsack solution, the relaxed objective for that solution is certainly no less than the original objective, since we are adding a nonnegative value to it. Therefore, the maximum of the relaxed problem is at least the optimal knapsack value.
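The coefficients of the knapsack above are not reproduced in this text, so the bound argument can only be checked on a made-up instance; everything below (objective coefficients, weights, capacity) is purely illustrative. A brute-force sketch:

```python
from itertools import product

# Hypothetical data standing in for the knapsack in the text (the
# original coefficients are not available here).
c = [7, 2, 4, 5]   # objective coefficients
a = [5, 1, 3, 4]   # knapsack weights
b = 8              # capacity

def knapsack_opt():
    """Brute-force optimum of: max c.x subject to a.x <= b, x binary."""
    return max(sum(ci * xi for ci, xi in zip(c, x))
               for x in product([0, 1], repeat=len(c))
               if sum(ai * xi for ai, xi in zip(a, x)) <= b)

def relaxed_opt(lam):
    """Optimum of the relaxation: max c.x + lam*(b - a.x) over binary x.
    It separates by variable: take x_i = 1 exactly when c_i - lam*a_i > 0."""
    return lam * b + sum(max(0.0, ci - lam * ai) for ci, ai in zip(c, a))

# For every nonnegative lam, the relaxation upper-bounds the true optimum.
assert all(relaxed_opt(lam) >= knapsack_opt() for lam in [0, 0.5, 1, 1.5, 2])
```

Note that the relaxed problem needs no search at all: once *λ* is fixed, each variable can be set independently by the sign of its reduced cost.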

Note that the relaxed problem is very easy to solve for any fixed *λ*. For instance, for *λ* equal to 1, we get:

Maximize

Subject to

for all *i*

The solution to this is to set all of the variables to 0, which gives objective value 42.

If we graph the optimal value of the relaxed problem as a function of *λ*, we get the following graph:

The lowest value of this function is 19.5. Since we are solving a relaxation, we want the smallest upper bound possible, so the "best" relaxation value is 19.5: no knapsack solution can have value more than 19.5.
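The shape of such a graph can be illustrated on a hypothetical instance (the instance from the text is not reproduced here, so the data below is invented). For a 0-1 knapsack with one relaxed constraint, the relaxation's optimal value is convex and piecewise linear in *λ*, bending only at the ratios c_i/a_i, so its minimum can be found by checking those breakpoints:

```python
# Hypothetical 0-1 knapsack data (illustration only).
c = [7, 2, 4, 5]
a = [5, 1, 3, 4]
b = 8

def L(lam):
    """Optimal relaxation value: lam*b + sum_i max(0, c_i - lam*a_i)."""
    return lam * b + sum(max(0.0, ci - lam * ai) for ci, ai in zip(c, a))

# Each max(0, c_i - lam*a_i) term bends at lam = c_i / a_i, so L is convex
# piecewise linear; its minimum over lam >= 0 is at 0 or at a breakpoint.
candidates = [0.0] + sorted(ci / ai for ci, ai in zip(c, a))
best_lam = min(candidates, key=L)
```

For this made-up instance the minimum lands at *λ* = 4/3 with a bound of about 11.67, strictly better than the bound *L*(0) = 18 obtained by dropping the constraint entirely.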

This approach can be used for larger, more difficult problems. In general, each constraint that is to be placed into the objective function gets a separate multiplier, or *λ* value. Constraints are placed into the objective until the remaining constraints form an "easily solved" problem. For instance, we might place all of the constraints into the objective except for the bound constraints; that certainly results in an easy problem. In the Navy example, all of the asset constraints are placed into the objective, leaving an easy-to-solve assignment problem.

The problem of finding the *λ* values that minimize the optimal value of the relaxation (which is itself a maximization problem) is the Lagrangian dual problem. In general, this can be a rather tedious thing to solve, but here is a fairly straightforward approach:

Suppose the problem is a maximization, and all of the relaxed constraints are ≤ constraints.

1. Begin with each *λ* at 0. Let the step size be some problem-dependent value *k*.
2. Solve the relaxed problem to get the current solution *x*.
3. For every constraint violated by *x*, increase the corresponding *λ* by *k*.
4. For every constraint with positive slack relative to *x*, decrease the corresponding *λ* by *k* (never going below 0).
5. If *m* iterations have passed since the best relaxation value last decreased, cut *k* in half.
6. Go to 2.
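The steps above can be sketched in code. This is a sketch only: the relaxation solver assumes binary variables with ≤ constraints relaxed into the objective, and the iteration limit stands in for a real stopping rule; any data passed in is hypothetical, since the text's own coefficients are not reproduced here.

```python
def solve_relaxation(c, A, b, lam):
    """max c.x + sum_j lam[j]*(b[j] - A[j].x) over binary x.
    Separates by variable: x_i = 1 exactly when its reduced cost
    c_i - sum_j lam[j]*A[j][i] is positive."""
    reduced = [c[i] - sum(lam[j] * A[j][i] for j in range(len(A)))
               for i in range(len(c))]
    x = [1 if r > 0 else 0 for r in reduced]
    value = (sum(lam[j] * b[j] for j in range(len(A)))
             + sum(max(r, 0.0) for r in reduced))
    return value, x

def subgradient(c, A, b, k=0.5, m=5, iters=100):
    lam = [0.0] * len(A)          # step 1: all multipliers start at 0
    best = float("inf")           # best (smallest) bound seen so far
    stall = 0
    for _ in range(iters):
        value, x = solve_relaxation(c, A, b, lam)    # step 2
        if value < best - 1e-9:
            best, stall = value, 0
        else:
            stall += 1
        for j in range(len(A)):
            slack = b[j] - sum(A[j][i] * x[i] for i in range(len(c)))
            if slack < 0:                  # step 3: violated, raise the price
                lam[j] += k
            elif slack > 0:                # step 4: slack, lower the price
                lam[j] = max(0.0, lam[j] - k)
        if stall >= m:                     # step 5: no recent progress
            k, stall = k / 2, 0
    return best
```

For instance, `subgradient([7, 2, 4, 5], [[5, 1, 3, 4]], [8])` drives the bound for that made-up single-constraint knapsack down toward its dual optimum.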

Let's do an example:

Maximize

Subject to

We can form a Lagrangian relaxation by placing both constraints in the objective, leaving just the bound constraints:

Maximize

Subject to

This is easy to solve for any choice of *λ1* and *λ2*. We can begin with *λ1* = *λ2* = 0 and step size 0.5. The resulting problem is

Maximize

Subject to

The solution to this sets all variables to 1 and has value 22. This solution violates the first constraint and satisfies the second, so we can let *λ1* = 0.5 and *λ2* = 0. This gives the problem:

Maximize

Subject to

Again the solution is to set all variables to 1. The bound itself is better, though, at 20: no knapsack solution has value more than 20. Again the first constraint is violated but the second is not. This gives *λ1* = 1 and *λ2* = 0. The problem becomes:

Maximize

Subject to

Same solution, bound of 18. Now *λ1* = 1.5 and *λ2* = 0. The problem becomes:

Maximize

Subject to

Same solution, bound of 16. Now *λ1* = 2 and *λ2* = 0. The problem becomes:

Maximize

Subject to

Now an optimal solution has some variables at 1 and the rest at 0, for a bound of 15. This gives slack to constraint 1 and satisfies constraint 2, so we go back to *λ1* = 1.5. We will bounce between the last two solutions for a while, not improving the bound, until we decide to decrease the step size. We would decrease the step size to 0.25 and solve the problem with *λ1* = 1.75 and *λ2* = 0 to get the problem:

Maximize

Subject to

Now a solution gives a bound of 15. This satisfies the first constraint exactly but not the second, so we change the prices to get *λ1* = 1.75 and *λ2* = 0.25. We continue this approach until our step size is so small that it no longer has any effect. The optimal multipliers give a bound of 14.67.
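The coefficients of this two-constraint example are not reproduced in the text, so the full bounce-and-halve run can only be demonstrated on a made-up instance; all data below (coefficients, right-hand sides) is hypothetical. The run below follows exactly the price-update rules used above, and checks at the end that the best bound found does dominate the true optimum:

```python
from itertools import product

c = [6, 4, 3, 2]                      # hypothetical objective
A = [[4, 3, 2, 1], [1, 2, 3, 4]]      # two hypothetical <= constraints
b = [5, 6]                            # hypothetical right-hand sides

def relax(lam):
    """Solve the relaxation max c.x + sum_j lam[j]*(b[j] - A[j].x), x binary."""
    red = [c[i] - sum(l * row[i] for l, row in zip(lam, A))
           for i in range(len(c))]
    x = [1 if r > 0 else 0 for r in red]
    return sum(l * bj for l, bj in zip(lam, b)) + sum(max(r, 0.0) for r in red), x

lam, k, best, stall = [0.0, 0.0], 0.5, float("inf"), 0
for _ in range(200):
    val, x = relax(lam)
    best, stall = (val, 0) if val < best - 1e-9 else (best, stall + 1)
    for j in range(2):
        slack = b[j] - sum(A[j][i] * x[i] for i in range(len(c)))
        if slack < 0:
            lam[j] += k                       # violated: raise the price
        elif slack > 0:
            lam[j] = max(0.0, lam[j] - k)     # slack: lower the price
    if stall >= 5:
        k, stall = k / 2, 0                   # halve the step when stuck

# Every relaxation value is an upper bound, so `best` dominates the optimum.
true_opt = max(sum(ci * xi for ci, xi in zip(c, x))
               for x in product([0, 1], repeat=len(c))
               if all(sum(r * xi for r, xi in zip(row, x)) <= bj
                      for row, bj in zip(A, b)))
assert best >= true_opt
```

As in the worked example, the multipliers overshoot, fall back, and oscillate until the shrinking step size pins them down near the dual optimum.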

Mon Nov 11 15:16:52 EST 1996