
objective c - Why can't we use a dispatch_sync on the current queue?

I ran into a scenario where I had a delegate callback which could occur on either the main thread or another thread, and I wouldn't know which until runtime (using StoreKit.framework).

I also had UI code that needed to be updated in that callback, and that update had to happen before the rest of the function executed, so my initial thought was to have a function like this:

-(void) someDelegateCallback:(id) sender
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        // ui update code here
    });

    // code here that depends upon the UI getting updated
}

That works great when it is executed on a background thread. However, when executed on the main thread, the program deadlocks.

That alone seems interesting to me: if I read the docs for dispatch_sync right, I would expect it to just execute the block outright, without worrying about scheduling it into the run loop, as stated here:

As an optimization, this function invokes the block on the current thread when possible.

But that's not too big of a deal; it simply means a bit more typing, which led me to this approach:

-(void) someDelegateCallBack:(id) sender
{
    dispatch_block_t onMain = ^{
        // update UI code here
    };

    if (dispatch_get_current_queue() == dispatch_get_main_queue())
       onMain();
    else
       dispatch_sync(dispatch_get_main_queue(), onMain);
}
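
A rough sketch of the same check keyed off the thread rather than the queue (assuming the main queue is the only queue I ever need to target) would be something like:

-(void) someDelegateCallBack:(id) sender
{
    dispatch_block_t onMain = ^{
        // update UI code here
    };

    // +[NSThread isMainThread] only answers "am I on the main thread?",
    // so this check does not generalize to arbitrary queues.
    if ([NSThread isMainThread])
        onMain();
    else
        dispatch_sync(dispatch_get_main_queue(), onMain);
}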

However, this seems a bit backwards. Is this a bug in the design of GCD, or is there something that I am missing in the docs?


1 Reply


dispatch_sync does two things:

  1. queues a block
  2. blocks the current thread until the block has finished running

Given that the main queue is a serial queue (which means it uses only one thread), if you run the following statement on the main queue:

dispatch_sync(dispatch_get_main_queue(), ^(){/*...*/});

the following events will happen:

  1. dispatch_sync queues the block in the main queue.
  2. dispatch_sync blocks the thread of the main queue until the block finishes executing.
  3. dispatch_sync waits forever because the thread where the block is supposed to run is blocked.

The key to understanding this issue is that dispatch_sync does not execute blocks itself; it only queues them. Execution will happen on a future iteration of the run loop.
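
To illustrate, a minimal sketch: the async variant of the same call merely queues the block and returns, so it cannot deadlock; the block runs on a later iteration of the main run loop.

// Called from the main thread:
dispatch_async(dispatch_get_main_queue(), ^{
    // Queued and executed on a later iteration of the main run loop;
    // dispatch_async returns immediately, so nothing is blocked.
    NSLog(@"no deadlock here");
});

// Whereas this, called from the main thread, never returns: the main
// thread is blocked waiting for a block that can only run on that
// same (blocked) main thread.
// dispatch_sync(dispatch_get_main_queue(), ^{ NSLog(@"never runs"); });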

The following approach:

if (queueA == dispatch_get_current_queue()){
    block();
} else {
    dispatch_sync(queueA, block);
}

is perfectly fine, but be aware that it won't protect you from complex scenarios involving a hierarchy of queues. In such a case, the current queue may be different from a previously blocked queue to which you are trying to send your block. Example:

dispatch_sync(queueA, ^{
    dispatch_sync(queueB, ^{
        // dispatch_get_current_queue() is B, but A is blocked, 
        // so a dispatch_sync(A,b) will deadlock.
        dispatch_sync(queueA, ^{
            // some task
        });
    });
});

For complex cases, read/write key-value data associated with the dispatch queue:

dispatch_queue_t workerQ = dispatch_queue_create("com.meh.sometask", NULL);
dispatch_queue_t funnelQ = dispatch_queue_create("com.meh.funnel", NULL);
dispatch_set_target_queue(workerQ,funnelQ);

static int kKey;
 
// saves string "funnel" in funnelQ
CFStringRef tag = CFSTR("funnel");
dispatch_queue_set_specific(funnelQ, 
                            &kKey,
                            (void*)tag,
                            (dispatch_function_t)CFRelease);

dispatch_sync(workerQ, ^{
    // is funnelQ in the hierarchy of workerQ?
    CFStringRef tag = dispatch_get_specific(&kKey);
    if (tag){
        dispatch_sync(funnelQ, ^{
            // some task
        });
    } else {
        // some task
    }
});

Explanation:

  • I create a workerQ queue that points to a funnelQ queue. In real code this is useful if you have several “worker” queues and you want to suspend/resume them all at once (which is achieved by suspending/resuming their target funnelQ queue).
  • I may funnel my worker queues at any point in time, so to know if they are funneled or not, I tag funnelQ with the word "funnel".
  • Down the road I dispatch_sync something to workerQ, and for whatever reason I want to dispatch_sync to funnelQ, while avoiding a dispatch_sync to the current queue, so I check for the tag and act accordingly. Because the get walks up the target hierarchy, the value won't be found in workerQ, but it will be found in funnelQ. This is a way of finding out whether any queue in the hierarchy is the one where we stored the value, and therefore a way to prevent a dispatch_sync to the current queue.

If you are wondering about the functions that read/write context data, there are three:

  • dispatch_queue_set_specific: Write to a queue.
  • dispatch_queue_get_specific: Read from a queue.
  • dispatch_get_specific: Convenience function to read from the current queue.

The key is compared by pointer and never dereferenced. The last parameter in the setter is a destructor to release the value (not the key).
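
As a tiny sketch of those three calls (the queue label, key name, and tag string here are just placeholders):

static int kTagKey;   // only this variable's address matters; it is never dereferenced

dispatch_queue_t q = dispatch_queue_create("com.example.some-queue", NULL);

// Write: attach a value to q. NULL destructor because the value is a
// string literal that never needs releasing.
dispatch_queue_set_specific(q, &kTagKey, (void *)"tagged", NULL);

// Read from a particular queue, from anywhere:
const char *fromQueue = dispatch_queue_get_specific(q, &kTagKey);   // "tagged"

// Read from the current queue (walks up the target-queue hierarchy):
dispatch_sync(q, ^{
    const char *fromCurrent = dispatch_get_specific(&kTagKey);      // "tagged"
    NSLog(@"%s %s", fromQueue, fromCurrent);
});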

If you are wondering about “pointing one queue to another”, it means exactly that (setting a target queue). For example, I can point queue A to the main queue, and it will cause all blocks submitted to queue A to run on the main queue (usually this is done for UI updates).
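
A minimal sketch of that idea (queue name and variable are placeholders):

dispatch_queue_t uiQ = dispatch_queue_create("com.example.ui-updates", NULL);

// Point uiQ at the main queue: every block submitted to uiQ now
// executes on the main thread, serialized with other main-queue work.
dispatch_set_target_queue(uiQ, dispatch_get_main_queue());

dispatch_async(uiQ, ^{
    // Runs on the main thread even though it was submitted to uiQ.
    NSLog(@"on main thread? %d", [NSThread isMainThread]);
});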

