# Dynamic Programming

Dynamic programming (DP) is a powerful paradigm for algorithm design. When greedy or divide-and-conquer approaches fail, it often yields elegant and efficient algorithms. DP also decomposes a problem into subproblems, but the subproblems are not independent. DP tabulates the solutions of the subproblems so that no subproblem is solved twice. DP is typically applied to optimization problems: there are many feasible solutions, and we want one of optimal value. The key is the principle of optimality: an optimal solution is composed of optimal solutions to its subproblems.

1. Dynamic Programming
   - A powerful paradigm for algorithm design.
   - Often leads to elegant and efficient algorithms when greedy or divide-and-conquer don't work.
   - DP also breaks a problem into subproblems, but the subproblems are not independent.
   - DP tabulates the solutions of subproblems to avoid solving them again.

   Subhash Suri, UC Santa Barbara

2. Dynamic Programming
   - Typically applied to optimization problems: many feasible solutions exist; find one of optimal value.
   - The key is the principle of optimality: the solution is composed of optimal subproblem solutions.
   - Example: Matrix Chain Product.
   - A sequence M1, M2, ..., Mn of n matrices is to be multiplied.
   - Adjacent matrices must agree on their shared dimension.

3. Matrix Product

   ```
   Matrix-Multiply(A, B)
     Let A be p × q; let B be q × r.
     If the inner dimensions of A and B don't agree, error.
     for i = 1 to p
       for j = 1 to r
         C[i, j] = 0
         for k = 1 to q
           C[i, j] += A[i, k] × B[k, j]
     return C
   ```

   - The cost of multiplying these matrices is p × q × r scalar multiplications.
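The pseudocode above transcribes directly into Python; the `cost` counter (my addition, not on the slide) confirms that exactly p·q·r scalar multiplications are performed:

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B, returning (C, cost),
    where cost counts the scalar multiplications performed (p * q * r)."""
    p, q = len(A), len(A[0])
    q2, r = len(B), len(B[0])
    if q != q2:
        raise ValueError("inner dimensions do not agree")
    C = [[0] * r for _ in range(p)]
    cost = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                cost += 1
    return C, cost
```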

4. Matrix Chain
   - Consider 4 matrices: M1, M2, M3, M4.
   - We can compute the product in many different ways, depending on how we parenthesize:
     (M1(M2(M3M4)))
     (M1((M2M3)M4))
     ((M1(M2M3))M4)
     ((M1M2)(M3M4))
     (((M1M2)M3)M4)
   - Different multiplication orders can lead to very different total costs.

5. Matrix Chain
   - Example: M1 = 10 × 100, M2 = 100 × 5, M3 = 5 × 50.
   - The parenthesization ((M1M2)M3) has cost 10·100·5 + 10·5·50 = 7500.
   - The parenthesization (M1(M2M3)) has cost 100·5·50 + 10·100·50 = 75,000!
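A two-line check of that arithmetic:

```python
# M1 is 10x100, M2 is 100x5, M3 is 5x50 (the example from the slide).
cost_left = 10 * 100 * 5 + 10 * 5 * 50     # ((M1 M2) M3)
cost_right = 100 * 5 * 50 + 10 * 100 * 50  # (M1 (M2 M3))
print(cost_left, cost_right)  # 7500 75000
```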

6. Matrix Chain
   - Input: a chain M1, M2, ..., Mn of n matrices.
   - Matrix Mi has size p_{i−1} × p_i, for i = 1, 2, ..., n.
   - Find the optimal parenthesization order to minimize the cost of chain-multiplying the Mi's.
   - Checking all possible ways of parenthesizing is infeasible.
   - The number of parenthesizations is a Catalan number, roughly C(2n, n)/(n + 1), which grows on the order of 4^n!
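The exact count behind that estimate is the Catalan number; a small sketch (function name is mine):

```python
from math import comb

def parenthesizations(n):
    """Ways to fully parenthesize a chain of n matrices: the Catalan
    number C(n-1) = comb(2(n-1), n-1) // n, which grows like 4**n / n**1.5."""
    return comb(2 * (n - 1), n - 1) // n
```

For n = 2, 3, 4, 5 this gives 1, 2, 5, 14 orders, matching the five parenthesizations listed for four matrices.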

7. Principle of Optimality
   - Consider computing M1 × M2 × ... × Mn.
   - Compute M_{1,k} = M1 × ... × Mk, in some order.
   - Compute M_{k+1,n} = M_{k+1} × ... × Mn, in some order.
   - Finally, compute M_{1,n} = M_{1,k} × M_{k+1,n}.
   - Principle of Optimality: to optimize M_{1,n}, we must optimize M_{1,k} and M_{k+1,n} too.

8. Recursive Solution
   - A subproblem is a subchain Mi, M_{i+1}, ..., Mj.
   - m[i, j] = the optimal cost to multiply Mi, ..., Mj.
   - Use the principle of optimality to determine m[i, j] recursively.
   - Clearly, m[i, i] = 0, for all i.
   - If an algorithm computes Mi, ..., Mj as (Mi × ... × Mk) × (M_{k+1} × ... × Mj), then
     m[i, j] = m[i, k] + m[k+1, j] + p_{i−1} p_k p_j

9. Recursive Solution
   - m[i, j] = m[i, k] + m[k+1, j] + p_{i−1} p_k p_j
   - We don't know which k the optimal algorithm will use.
   - But k must lie between i and j − 1.
   - Thus, we can write:
     m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i−1} p_k p_j }

10. The DP Approach
   - Thus, we wish to solve:
     m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i−1} p_k p_j }
   - A direct recursive solution is exponential: brute-force checking of all parenthesization orders.
   - What is the recurrence? What does it solve to?
   - DP's insight: only a small number of distinct subproblems actually occur, one per choice of i, j.

11. The DP Approach
   - Naive recursion is exponential because it solves the same subproblem over and over again in different branches of the recursion.
   - DP avoids this wasted computation by organizing the subproblems differently: bottom up.
   - Start with m[i, i] = 0, for all i.
   - Next, determine m[i, i+1], then m[i, i+2], and so on.

12. The Algorithm
   - Input: [p0, p1, ..., pn], the dimension vector of the matrix chain.
   - Output: m[i, j], the optimal cost of multiplying each subchain Mi × ... × Mj.
   - The array s[i, j] stores the optimal split point k for each subchain.

13. The Algorithm

   ```
   Matrix-Chain-Multiply(p)
     1. Set m[i, i] = 0, for i = 1, 2, ..., n.
     2. Set d = 1.
     3. For all i, j such that j − i = d, compute
          m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i−1} p_k p_j }
        and set s[i, j] = k*, where k* is the choice of k attaining the minimum.
     4. If d < n − 1, increment d and repeat Step 3.
   ```
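A bottom-up Python sketch of Matrix-Chain-Multiply (function and variable names are mine; indices are 1-based to match the slides, so row/column 0 of each table is unused):

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain DP.
    p: dimension vector [p0, p1, ..., pn]; matrix Mi is p[i-1] x p[i].
    Returns (m, s): m[i][j] is the optimal cost of multiplying Mi..Mj,
    s[i][j] the split point k achieving it."""
    n = len(p) - 1
    INF = float("inf")
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                  # diagonal: d = j - i
        for i in range(1, n - d + 1):
            j = i + d
            m[i][j] = INF
            for k in range(i, j):          # try every split Mi..Mk | Mk+1..Mj
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s
```

On the three-matrix example from slide 5 this returns cost 7500 with split k = 2, i.e. ((M1M2)M3).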

14. Illustration
   - Matrix sizes: M1 = 30×35, M2 = 35×15, M3 = 15×5, M4 = 5×10, M5 = 10×20, M6 = 20×25.
   - The m array (entry m[i, j] sits in the row for diagonal d = j − i, above the columns for Mi..Mj):

   ```
   d=5:                    15,125
   d=4:               11,875  10,500
   d=3:            9,375   7,125   5,375
   d=2:         7,875   4,375   2,500   3,500
   d=1:     15,750   2,625     750   1,000   5,000
   d=0:          0       0       0       0       0       0
             M1      M2      M3      M4      M5      M6
   ```

15. Illustration
   - Computing m[2, 5] from the m array above:

   ```
   m[2, 5] = min of:
     m[2, 2] + m[3, 5] + p1·p2·p5 = 0    + 2500 + 35·15·20 = 13,000
     m[2, 3] + m[4, 5] + p1·p3·p5 = 2625 + 1000 + 35·5·20  =  7,125   <- minimum
     m[2, 4] + m[5, 5] + p1·p4·p5 = 4375 + 0    + 35·10·20 = 11,375
   ```

16. Finishing Up
   - The algorithm clearly takes O(n^3) time.
   - The m matrix only gives the optimal cost.
   - The parenthesization order is recovered from the s matrix:

   ```
   Matrix-Chain(M, s, i, j)
     if j > i then
       X ← Matrix-Chain(M, s, i, s[i, j])
       Y ← Matrix-Chain(M, s, s[i, j] + 1, j)
       return X × Y
     else return Mi
   ```
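In Python, the same traversal can return the parenthesization as a string instead of performing the products. This sketch assumes a 1-based split table `s` as produced by the matrix-chain DP described on the slides:

```python
def parenthesization(s, i, j):
    """Recover the optimal parenthesization of Mi..Mj from the
    split table s (s[i][j] = best k), using 1-based indices."""
    if i == j:
        return f"M{i}"
    k = s[i][j]
    return f"({parenthesization(s, i, k)}{parenthesization(s, k + 1, j)})"
```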

17. Longest Common Subsequence
   - Consider a string of characters: X = ABCBDAB.
   - A subsequence is obtained by deleting some (possibly no) characters of X.
   - E.g. ABBB is a subsequence of X, as is ABD. But AABB is not a subsequence.
   - Let X = (x1, x2, ..., xm) be a sequence.
   - Z = (z1, z2, ..., zk) is a subsequence of X if there is a strictly increasing index sequence (i1, ..., ik) such that zj = x_{ij}, for j = 1, ..., k.
   - The index sequence for ABBB is (1, 2, 4, 7).

18. Longest Common Subsequence
   - Given two sequences X and Y, find their longest common subsequence (LCS).
   - If X = (A, B, C, B, D, A, B) and Y = (B, D, C, A, B, A), then (B, C, A) is a common subsequence, but not an LCS.
   - (B, D, A, B) is an LCS.
   - How do we find an LCS?
   - Can some form of greedy work? Suggestions?

19. Trial Ideas
   - Greedy-1: Scan X. Find the first letter matching y1; take it and continue.
   - Problem: it only matches prefix substrings of Y.
   - Greedy-2: Find the most frequent letters of X, or sort the letters by their frequency, and try to match in frequency order.
   - Problem: frequency can be irrelevant. E.g. suppose all letters of X are distinct.

20. Properties
   - X has 2^m subsequences.
   - LCS obeys the principle of optimality.
   - Let Xi = (x1, x2, ..., xi) be the i-long prefix of X.
   - Examples: if X = (A, B, C, B, D, A, B), then X2 = (A, B) and X5 = (A, B, C, B, D).

21. LCS Structure
   - Suppose Z = (z1, z2, ..., zk) is an LCS of X and Y. Then:
     1. If xm = yn, then zk = xm = yn and Z_{k−1} = LCS(X_{m−1}, Y_{n−1}).
     2. If xm ≠ yn, then zk ≠ xm implies Z = LCS(X_{m−1}, Y).
     3. If xm ≠ yn, then zk ≠ yn implies Z = LCS(X, Y_{n−1}).

22. Recursive Solution
   - Let c[i, j] = |LCS(Xi, Yj)| be the optimal solution for Xi, Yj.
   - Obviously, c[i, j] = 0 if either i = 0 or j = 0.
   - In general, we have the recurrence:

   ```
   c[i, j] = 0                               if i = 0 or j = 0
   c[i, j] = c[i−1, j−1] + 1                 if xi = yj
   c[i, j] = max{ c[i, j−1], c[i−1, j] }     if xi ≠ yj
   ```
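A top-down sketch of this recurrence, with memoization added so each (i, j) pair is solved once (names are mine):

```python
from functools import lru_cache

def lcs_length(X, Y):
    """Evaluate the LCS recurrence top-down; lru_cache memoizes each
    subproblem c(i, j) so the work is O(len(X) * len(Y))."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if X[i - 1] == Y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i, j - 1), c(i - 1, j))
    return c(len(X), len(Y))
```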

23. Algorithm
   - A direct recursive solution is exponential: T(n) = 2T(n−1) + 1, which solves to 2^n.
   - DP builds a table of subproblem solutions, bottom up.
   - Starting from c[i, 0] and c[0, j], we compute c[1, j], then c[2, j], and so on.

24. Algorithm

   ```
   LCS-Length(X, Y)
     c[i, 0] ← 0, c[0, j] ← 0, for all i, j
     for i = 1 to m do
       for j = 1 to n do
         if xi = yj then
           c[i, j] ← c[i−1, j−1] + 1;  b[i, j] ← D
         else if c[i−1, j] ≥ c[i, j−1] then
           c[i, j] ← c[i−1, j];  b[i, j] ← U
         else
           c[i, j] ← c[i, j−1];  b[i, j] ← L
     return b, c
   ```
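A bottom-up Python sketch of LCS-Length, following the pseudocode line by line (the direction letters 'D', 'U', 'L' stand for diagonal, up, left):

```python
def lcs_table(X, Y):
    """Bottom-up LCS table. Returns (b, c): c[i][j] is |LCS(X[:i], Y[:j])|;
    b[i][j] in {'D', 'U', 'L'} records which case produced c[i][j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "D"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "U"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "L"
    return b, c
```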

25. LCS Algorithm
   - LCS-Length(X, Y) only computes the length of the longest common subsequence.
   - By keeping track of the matches xi = yj, the LCS itself can be constructed.

26. LCS Algorithm

   ```
   PRINT-LCS(b, X, i, j)
     if i = 0 or j = 0 then return
     if b[i, j] = D then
       PRINT-LCS(b, X, i−1, j−1)
       print xi
     else if b[i, j] = U then
       PRINT-LCS(b, X, i−1, j)
     else
       PRINT-LCS(b, X, i, j−1)
   ```

   - The initial call is PRINT-LCS(b, X, |X|, |Y|).
   - By inspection, the overall algorithm runs in O(mn) time.
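The same traceback in Python, returning the subsequence as a string instead of printing it. It assumes a direction table `b` with entries 'D', 'U', 'L' as produced by the LCS DP above:

```python
def recover_lcs(b, X, i, j):
    """Walk the direction table b backwards from (i, j) to rebuild
    one LCS of X[:i] and the corresponding prefix of Y."""
    if i == 0 or j == 0:
        return ""
    if b[i][j] == "D":                      # xi matched yj: keep it
        return recover_lcs(b, X, i - 1, j - 1) + X[i - 1]
    if b[i][j] == "U":                      # came from c[i-1][j]
        return recover_lcs(b, X, i - 1, j)
    return recover_lcs(b, X, i, j - 1)      # came from c[i][j-1]
```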

27. Optimal Polygon Triangulation
   - A polygon is a piecewise-linear closed curve.
   - Only consecutive edges intersect, and they do so at vertices.
   - P is convex if the line segment xy lies inside P whenever x and y are inside P.

   (Figure: a convex hexagon with vertices v0, v1, ..., v5.)

28. Optimal Polygon Triangulation
   - List the vertices in counter-clockwise order: v0, v1, ..., v_{n−1}. The edges are v_i v_{i+1}, with v_n = v0.
   - A chord v_i v_j joins two non-adjacent vertices.
   - A triangulation is a set of chords that divides P into non-overlapping triangles.

   (Figure: two different triangulations of the hexagon.)

29. Triangulation Problem
   - Given a convex polygon P = (v0, ..., v_{n−1}) and a weight function w on triangles, find a triangulation minimizing the total weight.
   - Every triangulation of an n-gon has n − 2 triangles and n − 3 chords.
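The slides stop at the problem statement, but the DP has the same shape as the matrix-chain recurrence: for the subpolygon v_i..v_j, pick the apex v_k of the triangle on edge v_i v_j exactly as the chain DP picks a split point. A sketch under that assumption, with a caller-supplied triangle weight `w(i, k, j)` (all names are mine):

```python
def min_triangulation(n, w):
    """Minimum-weight triangulation of a convex n-gon v0..v_{n-1}.
    t[i][j] = cheapest triangulation of the subpolygon v_i, ..., v_j;
    w(i, k, j) is the weight of triangle (v_i, v_k, v_j)."""
    t = [[0.0] * n for _ in range(n)]
    for d in range(2, n):                  # subpolygon span j - i
        for i in range(n - d):
            j = i + d
            t[i][j] = min(
                t[i][k] + t[k][j] + w(i, k, j)
                for k in range(i + 1, j)
            )
    return t[0][n - 1]
```

With the constant weight w ≡ 1 the answer is just the number of triangles, n − 2, which matches the count stated on the slide.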
