Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
129 views
in Technique by (71.8m points)

How is dynamic programming different from greedy algorithms?

In the book I am using, Introduction to the Design & Analysis of Algorithms, dynamic programming is said to focus on the Principle of Optimality: "An optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances."

The greedy technique, by contrast, expands a partially constructed solution until a complete solution is reached, where each expansion must be "the best local choice among all feasible choices available on that step".

Since both involve local optimality, isn't one a subset of the other?

question from:https://stackoverflow.com/questions/13713572/how-is-dynamic-programming-different-from-greedy-algorithms


1 Reply

0 votes
by (71.8m points)

Dynamic programming is applicable to problems exhibiting the properties of:

  • overlapping subproblems, and
  • optimal substructure.

Optimal substructure means that an optimal solution to the problem can be constructed by combining optimal solutions to its subproblems. The difference between dynamic programming and greedy algorithms is that with dynamic programming, there are overlapping subproblems, and those subproblems are solved using memoization. "Memoization" is the technique whereby the solution to each subproblem is cached the first time it is computed, so it can be reused to solve other subproblems more quickly.

This answer has gotten some attention, so I'll give some examples.

Consider the problem "Making change with dollars, nickels, and pennies." This is a greedy problem. It exhibits optimal substructure because you can solve for the number of dollars. Then, solve for the number of nickels. Then the number of pennies. You can then combine the solutions to these subproblems efficiently. It does not really exhibit overlapping subproblems since solving each subproblem doesn't help much with the others (maybe a little bit).
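The change-making idea above can be sketched as a few lines of Python. (The function name and the dict-of-counts return shape are my own choices for illustration; the key point is that each step takes as many of the largest remaining coin as possible and never revisits that decision.)

```python
def make_change(amount_cents):
    """Greedy change-making with dollars (100c), nickels (5c), pennies (1c).

    Because each denomination evenly divides the next larger one, the
    locally best choice (take as many large coins as fit) is also
    globally optimal, so no decision ever needs to be reconsidered.
    """
    counts = {}
    for coin in (100, 5, 1):
        # take as many of this coin as possible, keep the remainder
        counts[coin], amount_cents = divmod(amount_cents, coin)
    return counts


# 237 cents -> 2 dollars, 7 nickels, 2 pennies
print(make_change(237))  # {100: 2, 5: 7, 1: 2}
```

Note that this greedy strategy is only correct for coin systems like this one; with denominations such as {1, 3, 4}, greedy fails (6 = 3+3, but greedy picks 4+1+1) and dynamic programming is needed.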

Consider the problem "Fibonacci numbers." It exhibits optimal substructure because you can solve F(10) from F(9) and F(8) efficiently (by addition). These subproblems overlap because they both share F(7). If you memoize the result of F(7) when you're solving F(8), you can solve F(9) more quickly.
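A minimal sketch of that memoization in Python (the explicit `memo` dictionary is just one way to implement the cache; `functools.lru_cache` would do the same job):

```python
def fib(n, memo=None):
    """Fibonacci with memoization: each overlapping subproblem
    (e.g. F(7), needed by both F(8) and F(9)) is computed only once."""
    if memo is None:
        memo = {0: 0, 1: 1}  # base cases
    if n not in memo:
        # solve the subproblems, reusing any cached results
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]


print(fib(10))  # 55
```

Without the cache, the same recursion takes exponential time because F(7) and friends are recomputed over and over; with it, each subproblem is solved once, giving linear time.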

In response to the comment about dynamic programming having to do with "reconsidering decisions": This is obviously not true for any linear dynamic programming algorithm like the maximum subarray problem or the Fibonacci problem above.

Essentially, imagine a problem having optimal substructure as a directed acyclic graph whose nodes represent subproblems (wherein the whole problem is represented by a node whose indegree is zero), and whose directed edges represent dependencies between subproblems. Then, a greedy problem is a tree (all nodes except the root have unit indegree). A dynamic programming problem has some nodes with indegree greater than one. This illustrates the overlapping subproblems.
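The DAG picture above can be made concrete for the Fibonacci example. The sketch below (my own illustration, not from the original answer) builds the dependency edges F(k) → F(k-1) and F(k) → F(k-2) and counts indegrees; any node with indegree greater than one is an overlapping subproblem.

```python
from collections import Counter


def subproblem_indegrees(n):
    """Indegree of each node in the Fibonacci subproblem DAG for F(n).

    Each subproblem F(k), k >= 2, depends on F(k-1) and F(k-2), so it
    contributes one incoming edge to each of those nodes. Indegree > 1
    marks an overlapping subproblem; the root F(n) has indegree 0.
    """
    indeg = Counter()
    for k in range(n, 1, -1):
        indeg[k - 1] += 1
        indeg[k - 2] += 1
    return indeg


deg = subproblem_indegrees(5)
overlapping = [k for k in range(6) if deg[k] > 1]
print(overlapping)  # [1, 2, 3] — shared by two parents each
```

For a purely greedy problem the same construction would yield a tree: every non-root node would have indegree exactly one, and memoization would buy nothing.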
