Algorithms (Part 51)


for y:=1 to V do
  for x:=1 to V do
    if a[x,y]<maxint div 2 then
      for j:=1 to V do
        if a[x,j]>a[x,y]+a[y,j]
          then a[x,j]:=a[x,y]+a[y,j];
The value maxint div 2 is used as a sentinel in matrix positions corresponding
to edges not present in the graph. This eliminates the need to test explicitly
in the inner loop whether there is an edge from x to j or from y to j. A "small"
sentinel value is used so that there will be no overflow.
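The same relaxation can be sketched in C (a translation of the Pascal program above, not the book's code; V and the zero-based matrix layout are assumptions). Because every matrix entry is at most INT_MAX/2, the sum a[x][y] + a[y][j] cannot overflow, which is exactly the point of the "small" sentinel:

```c
#include <limits.h>

#define V 4
#define UNSEEN (INT_MAX / 2)   /* "no edge" sentinel: small enough that sums cannot overflow */

/* All-pairs shortest paths by successive relaxation, a direct
   transcription of the Pascal program above. */
void allshortest(int a[V][V]) {
    for (int y = 0; y < V; y++)
        for (int x = 0; x < V; x++)
            if (a[x][y] < UNSEEN)                    /* only try y if x can reach it */
                for (int j = 0; j < V; j++)
                    if (a[x][j] > a[x][y] + a[y][j]) /* both terms <= UNSEEN, so no overflow */
                        a[x][j] = a[x][y] + a[y][j];
}
```

Absent edges stay at UNSEEN, so a single comparison against UNSEEN after the run tells whether any path exists at all.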
This is virtually the same program that we used to compute the transitive
closure of a directed graph: logical operations have been replaced by arithmetic
operations. The following table shows the adjacency matrix before and after
this algorithm is run on the directed graph example of Chapter 32, with all edge
weights set to 1:
[The matrix over the vertices A through M did not survive extraction. Its pattern of finite entries corresponds to the bit matrix produced by the transitive closure algorithm of Chapter 32.]
From a dynamic programming standpoint, note that the amount of information saved about small subproblems is nearly the same as the amount of information to be output, so little space is wasted.
CHAPTER 37
One advantage of this algorithm over the shortest paths algorithm of
Chapter 31 is that it works properly even if negative edge weights are allowed,
as long as there are no cycles of negative weight in the graph (in which case
the shortest paths connecting nodes on the cycle are not defined). If a cycle
of negative weight is present in the graph, then the algorithm can detect that
fact, because in that case a[i, i] will become negative for some i at some point
during the algorithm.
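This diagonal test can be demonstrated directly. The sketch below (C, with an assumed three-vertex layout and our own function name) runs the same relaxation as above and then scans the diagonal; any a[i][i] below zero means vertex i lies on a negative-weight cycle:

```c
#include <limits.h>
#include <stdbool.h>

#define V 3
#define UNSEEN (INT_MAX / 2)  /* "no edge" sentinel, as in the text */

/* Run the relaxation, then report whether any diagonal entry went
   negative -- the signature of a negative-weight cycle. */
bool has_negative_cycle(int a[V][V]) {
    for (int y = 0; y < V; y++)
        for (int x = 0; x < V; x++)
            if (a[x][y] < UNSEEN)
                for (int j = 0; j < V; j++)
                    if (a[x][j] > a[x][y] + a[y][j])
                        a[x][j] = a[x][y] + a[y][j];
    for (int i = 0; i < V; i++)
        if (a[i][i] < 0)      /* a "shorter" path from i back to i */
            return true;
    return false;
}
```

With only nonnegative weights no diagonal entry can drop below its initial value of zero, so the test fires exactly when a negative cycle is present.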
Time and Space Requirements
The above examples demonstrate that dynamic programming applications can
have quite different time and space requirements depending on the amount of
information about small subproblems that must be saved. For the shortest
paths algorithm, no extra space was required; for the knapsack problem,
space proportional to the size of the knapsack was needed; and for the other
problems, space proportional to N^2 was needed. For each problem, the time
required was a factor of N greater than the space required.
The range of possible applicability of dynamic programming is far larger
than covered in the examples. From a dynamic programming point of view,
divide-and-conquer recursion could be thought of as a special case in which
a minimal amount of information about small cases must be computed and
stored, and exhaustive search (which we’ll examine in Chapter 39) could be
thought of as a special case in which a maximal amount of information about
small cases must be computed and stored. Dynamic programming is a natural
design technique that appears in many guises to solve problems throughout
this range.
DYNAMIC PROGRAMMING

9. Why not solve the knapsack problem in the same way as the matrix chain
and optimum binary search tree problems, by minimizing, for k from 1
to M, the sum of the best value achievable for a knapsack of size k and
the best value achievable for a knapsack of size M-k?
10. Extend the program for the shortest paths problem to include a procedure
taking arguments (i, j: integer) that will fill an array path with the
shortest path from i to j. This procedure should take time proportional
to the length of the path each time it is called, using an auxiliary data
structure built up by a modified version of the program given in the text.
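One auxiliary structure that meets the stated time bound (a sketch in C, not necessarily the intended solution; the names via and emit are ours) records, alongside each relaxation, the intermediate vertex that produced the improvement, then expands paths recursively:

```c
#include <limits.h>

#define V 4
#define UNSEEN (INT_MAX / 2)   /* "no edge" sentinel, as in the text */

static int a[V][V];    /* distance matrix, relaxed in place */
static int via[V][V];  /* via[x][j]: vertex that last improved the x-to-j path, or -1 */

/* The relaxation from the text, extended to record an intermediate
   vertex for every improved pair. */
void allshortest(void) {
    for (int y = 0; y < V; y++)
        for (int x = 0; x < V; x++)
            if (a[x][y] < UNSEEN)
                for (int j = 0; j < V; j++)
                    if (a[x][j] > a[x][y] + a[y][j]) {
                        a[x][j] = a[x][y] + a[y][j];
                        via[x][j] = y;
                    }
}

/* Write the vertices after i on the best i-to-j path into path[],
   starting at index n; returns the new length.  Each path vertex
   costs O(1) calls, so a query takes time proportional to the
   length of the path, as the exercise asks. */
int emit(int i, int j, int path[], int n) {
    if (via[i][j] == -1) { path[n++] = j; return n; }
    n = emit(i, via[i][j], path, n);
    return emit(via[i][j], j, path, n);
}
```

A caller stores the start vertex in path[0] and then calls emit(i, j, path, 1); the recursion splits each pair at its recorded intermediate vertex until only direct edges remain.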

38. Linear Programming
Many practical problems involve complicated interactions between a
number of varying quantities. One example of this is the network flow
problem discussed in Chapter 33: the flows in the various pipes in the network
must obey physical laws over a rather complicated network. Another example
is scheduling various tasks in (say) a manufacturing process in the face of
deadlines, priorities, etc. Very often it is possible to develop a precise
mathematical formulation which captures the interactions involved and reduces
the problem at hand to a more straightforward mathematical problem. This
process of deriving a set of mathematical equations whose solution implies the
solution of a given practical problem is called mathematical programming. In
this section, we consider a fundamental variant of mathematical programming,
linear programming, and an efficient algorithm for solving linear programs, the
simplex method.
Linear programming and the simplex method are of fundamental impor-
tance because a wide variety of important problems are amenable to formula-
tion as linear programs and solution by the simplex method. Better
algorithms are known for some specific problems, but few problem-solving
techniques are as widely applicable as the process of first formulating the

