Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
algorithm - How can I better understand the one-comparison-per-iteration binary search?

What is the point of the one-comparison-per-iteration binary search? And can you explain how it works?



1 Reply


There are two reasons to binary search with one comparison per iteration. The less important is performance. Detecting an exact match early using two comparisons per iteration saves an average of one iteration of the loop, whereas (assuming comparisons involve significant work) binary searching with one comparison per iteration almost halves the work done per iteration.

Binary searching an array of integers, it probably makes little difference either way. Even with a fairly expensive comparison, asymptotically the performance is the same, and the half-rather-than-minus-one probably isn't worth pursuing in most cases. Besides, expensive comparisons are often coded as functions that return negative, zero or positive for <, == or >, so you can get both comparisons for pretty much the price of one anyway.

The important reason to do binary searches with one comparison per iteration is because you can get more useful results than just some-equal-match. The main searches you can do are...

  • First key > goal
  • First key >= goal
  • First key == goal
  • Last key < goal
  • Last key <= goal
  • Last key == goal

These all reduce to the same basic algorithm. Understanding this well enough that you can code all the variants easily isn't that difficult, but I've not really seen a good explanation - only pseudocode and mathematical proofs. This is my attempt at an explanation.

There are games where the idea is to get as close as possible to a target without overshooting. Change that to "undershooting", and that's what "Find First >" does. Consider the ranges at some stage during the search...

| lower bound     | goal                    | upper bound
+-----------------+-------------------------+--------------
|         Illegal | better            worse |
+-----------------+-------------------------+--------------

The range between the current upper and lower bound still needs to be searched. Our goal is (normally) in there somewhere, but we don't yet know where. The interesting point about items above the upper bound is that they are legal in the sense that they are greater than the goal. We can say that the item just above the current upper bound is our best-so-far solution. We can even say this at the very start, even though there is probably no item at that position - in a sense, if there is no valid in-range solution, the best solution that hasn't been disproved is just past the upper bound.

At each iteration, we pick an item to compare between the upper and lower bound. For binary search, that's a rounded half-way item. For binary tree search, it's dictated by the structure of the tree. The principle is the same either way.

As we are searching for an item greater than our goal, we compare the test item using item[testpos] > goal. If the result is false, the test item is not greater than the goal - we have undershot - so we keep our existing best-so-far solution and adjust our lower bound upwards. If the result is true, we have found a new best-so-far solution, so we adjust the upper bound down to reflect that.

Either way, we never want to compare that test item again, so we adjust our bound to eliminate (only just) the test item from the range to search. Being careless with this usually results in infinite loops.

Normally, half-open ranges are used - an inclusive lower bound and an exclusive upper bound. Using this system, the item at the upper bound index is not in the search range (at least not now), but it is the best-so-far solution. When you move the lower bound up, you move it to testpos+1 (to exclude the item you just tested from the range). When you move the upper bound down, you move it to testpos (the upper bound is exclusive anyway).

if (item[testpos] > goal)
{
  //  new best-so-far
  upperbound = testpos;
}
else
{
  lowerbound = testpos + 1;
}

When the range between the lower and upper bounds is empty (using half-open, when both have the same index), your result is your most recent best-so-far solution, just above your upper bound (ie at the upper bound index for half-open).

So the full algorithm is...

while (upperbound > lowerbound)
{
  testpos = lowerbound + ((upperbound-lowerbound) / 2);

  if (item[testpos] > goal)
  {
    //  new best-so-far
    upperbound = testpos;
  }
  else
  {
    lowerbound = testpos + 1;
  }
}

To change from first key > goal to first key >= goal, you literally switch the comparison operator in the if line. The relational operator and goal could be replaced by a single parameter - a predicate function that returns true if (and only if) its parameter is on the greater-than side of the goal.

That gives you "first >" and "first >=". To get "first ==", use "first >=" and add an equality check after the loop exits.

For "last <" etc, the principle is the same as above, but the range is reflected. This just means you swap over the bound-adjustments (but not the comment) as well as changing the operator. But before doing that, consider the following...

a >  b  ==  !(a <= b)
a >= b  ==  !(a <  b)

Also...

  • position (last key < goal) = position (first key >= goal) - 1
  • position (last key <= goal) = position (first key > goal ) - 1

When we move our bounds during the search, both sides are being moved towards the goal until they meet at the goal. And there is a special item just below the lower bound, just as there is just above the upper bound...

while (upperbound > lowerbound)
{
  testpos = lowerbound + ((upperbound-lowerbound) / 2);

  if (item[testpos] > goal)
  {
    //  new best-so-far for first key > goal at [upperbound]
    upperbound = testpos;
  }
  else
  {
    //  new best-so-far for last key <= goal at [lowerbound - 1]
    lowerbound = testpos + 1;
  }
}

So in a way, we have two complementary searches running at once. When the upperbound and lowerbound meet, we have a useful search result on each side of that single boundary.

For all cases, there's the chance that an original "imaginary" out-of-bounds best-so-far position was your final result (there was no match in the search range). This needs to be checked before doing a final == check for the first == and last == cases. It might be useful behaviour, as well - e.g. if you're searching for the position to insert your goal item, adding it after the end of your existing items is the right thing to do if all the existing items are smaller than your goal item.

A couple of notes on the selection of the testpos...

testpos = lowerbound + ((upperbound-lowerbound) / 2);

First off, this will never overflow, unlike the more obvious ((lowerbound + upperbound)/2). It also works with pointers as well as integer indexes.

Second, the division is assumed to round down. In C, rounding down is only guaranteed for non-negative values, but that's fine here - the difference between the bounds is always non-negative anyway.

This is one aspect that may need care if you use non-half-open ranges, though - make sure the test position is inside the search range, and not just outside (on one of the already-found best-so-far positions).

Finally, in a binary tree search, the moving of bounds is implicit and the choice of testpos is built into the structure of the tree (which may be unbalanced), yet the same principles apply for what the search is doing. In this case, we choose our child node to shrink the implicit ranges. For first match cases, either we've found a new smaller best match (go to the lower child in hopes of finding an even smaller and better one) or we've overshot (go to the higher child in hopes of recovering). Again, the four main cases can be handled by switching the comparison operator.

BTW - there are more possible operators to use for that template parameter. Consider an array sorted by year then month. Maybe you want to find the first item for a particular year. To do this, write a comparison function that compares the year and ignores the month - the goal compares as equal if the year is equal, but the goal value may be a different type to the key that doesn't even have a month value to compare. I think of this as a "partial key comparison", and plug that into your binary search template and you get what I think of as a "partial key search".

EDIT The paragraph below used to say "31 Dec 1999 to be equal to 1 Feb 2000". That wouldn't work unless the whole range in between was also considered equal. The point is that all three parts of the begin- and end-of-range dates differ, so you're not dealing with a "partial" key, but the keys considered equivalent for the search must form a contiguous block in the container, which will normally imply a contiguous block in the ordered set of possible keys.

It's not strictly just "partial" keys, either. Your custom comparison might consider 31 Dec 1999 to be equal to 1 Jan 2000, yet all other dates different. The point is the custom comparison must agree with the original key about the ordering, but it might not be so picky about considering all different values different - it can treat a range of keys as an "equivalence class".


An extra note about bounds that I really should have included before, but I may not have thought about it this way at the time.

One way of thinking about bounds is that they aren't item indexes at all. A bound is the boundary line between two items, so you can number the boundary lines as easily as you can number the items...

|     |     |     |     |     |     |     |     |
| +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ |
| |0| | |1| | |2| | |3| | |4| | |5| | |6| | |7| |
| +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ |
|     |     |     |     |     |     |     |     |
0     1     2     3     4     5     6     7     8

Obviously the numbering of bounds is related to the numbering of the items. As long as you number your bounds left-to-right and the same way you number your items (in this case starting from zero) the result is effectively the same as the common half-open convention.

It would be possible to select a middle bound to bisect the range precisely into two, but that's not what a binary search does. For binary search, you select an item to test - not a bound. That item will be tested in this iteration and must never be tested again, so it's excluded from both subranges.

|     |     |     |     |     |     |     |     |
| +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ |
| |0| | |1| | |2| | |3| | |4| | |5| | |6| | |7| |
| +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ | +-+ |
|     |     |     |     |     |     |     |     |
0     1     2     3     4     5     6     7     8
                           ^
      |<-------------------|------------->|
                           |
      |<--------------->|  |  |<--------->|
          low range        i     hi range

So the testpos and testpos+1 in the algorithm are the two cases of translating the item index into the bound index. Of course if the two bounds are equal, there are no items in that range to choose, so the loop cannot continue, and the only possible result is that one bound value.

The ranges shown above are the low and high subranges that remain to be searched after testing item i - neither includes the test item itself.
