
Distributed tensorflow: the difference between In-graph replication and Between-graph replication

I got confused about the two concepts, In-graph replication and Between-graph replication, when reading the Replicated training section in TensorFlow's official How-to.

  1. It's said in the above link that

    In-graph replication. In this approach, the client builds a single tf.Graph that contains one set of parameters (in tf.Variable nodes pinned to /job:ps); ...

    Does this mean there are multiple tf.Graphs in the Between-graph replication approach? If yes, where is the corresponding code in the provided examples?

  2. While there is already a Between-graph replication example in the above link, could anyone provide an In-graph replication implementation (pseudocode is fine) and highlight its main differences from Between-graph replication?

    Thanks in advance!


Edit_1: more questions

Thanks a lot for your detailed explanations and gist code @mrry @YaroslavBulatov! After reading your responses, I have the following two questions:

  1. There is the following statement in Replicated training:

    Between-graph replication. In this approach, there is a separate client for each /job:worker task, typically in the same process as the worker task. Each client builds a similar graph containing the parameters (pinned to /job:ps as before using tf.train.replica_device_setter() to map them deterministically to the same tasks); and a single copy of the compute-intensive part of the model, pinned to the local task in /job:worker.

    I have two sub-questions related to the words in bold above.

    (A) Why do we say each client builds a similar graph, rather than the same graph? I think the graph built in each client in the Replicated training example should be the same, because the graph-construction code below is shared among all workers:

    # Build model...
    loss = ...
    global_step = tf.Variable(0)

    (B) Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers?

  2. Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs? If not, can we simultaneously use In-graph replication to support training on multiple GPUs within each machine and Between-graph replication for cross-machine training? I ask this question because @mrry indicated that In-graph replication is essentially the same as the approach used in the CIFAR-10 example model for multiple GPUs.


1 Reply


First of all, for some historical context, "in-graph replication" is the first approach that we tried in TensorFlow, and it did not achieve the performance that many users required, so the more complicated "between-graph" approach is the current recommended way to perform distributed training. Higher-level libraries such as tf.learn use the "between-graph" approach for distributed training.

To answer your specific questions:

  1. Does this mean there are multiple tf.Graphs in the between-graph replication approach? If yes, where are the corresponding codes in the provided examples?

    Yes. The typical between-graph replication setup will use a separate TensorFlow process for each worker replica, and each of these will build a separate tf.Graph for the model. Usually each process uses the global default graph (accessible through tf.get_default_graph()), which is not created explicitly.

    (In principle, you could use a single TensorFlow process with the same tf.Graph and multiple tf.Session objects that share the same underlying graph, as long as you configured the tf.ConfigProto.device_filters option for each session differently, but this is an uncommon setup.)
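
    To make this concrete, the script that every worker process runs under between-graph replication might look roughly like the following sketch. This is not code from the how-to: build_model(), the FLAGS values and the host names are placeholders.

    import tensorflow as tf

    # Hedged sketch of between-graph replication (TF 1.x API). Every worker
    # process runs this same script, and each process builds its OWN default
    # tf.Graph. FLAGS, build_model() and the host names are placeholders.
    cluster = tf.train.ClusterSpec({
        "ps":     ["ps0.example.com:2222"],
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    })
    server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                             task_index=FLAGS.task_index)

    if FLAGS.job_name == "ps":
        server.join()  # parameter-server processes just host the variables
    else:
        # Variables are mapped to /job:ps; compute is pinned to this task only.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % FLAGS.task_index,
                cluster=cluster)):
            loss = build_model()  # placeholder for the model-building code
            global_step = tf.train.get_or_create_global_step()
            train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
                loss, global_step=global_step)

        # One session per process, talking to the local server; the graphs
        # built in the different worker processes never reference each other.
        with tf.train.MonitoredTrainingSession(
                master=server.target,
                is_chief=(FLAGS.task_index == 0),
                hooks=[tf.train.StopAtStepHook(last_step=10000)]) as sess:
            while not sess.should_stop():
                sess.run(train_op)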

  2. While there is already a between-graph replication example in above link, could anyone provide an in-graph replication implementation (pseudocode is fine) and highlight its main differences from between-graph replication?

    For historical reasons, there are not many examples of in-graph replication (Yaroslav's gist is one exception). A program using in-graph replication will typically include a loop that creates the same graph structure for each worker (e.g. the loop on line 74 of the gist), and use variable sharing between the workers.

    The one place where in-graph replication persists is for using multiple devices in a single process (e.g. multiple GPUs). The CIFAR-10 example model for multiple GPUs is an example of this pattern (see the loop over GPU devices here).
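
    As a rough illustration of the in-graph pattern (a sketch in the spirit of Yaroslav's gist, not the gist itself), a single client could build one graph containing a copy of the compute for every worker device. read_input_batch() and the addresses are placeholders.

    import tensorflow as tf

    # Hedged sketch of in-graph replication (TF 1.x API): ONE client builds ONE
    # tf.Graph containing a compute tower for every worker device, all sharing
    # a single set of parameters pinned to the parameter server.
    worker_devices = ["/job:worker/task:0", "/job:worker/task:1"]

    with tf.device("/job:ps/task:0"):
        w = tf.get_variable("w", shape=[784, 10])
        b = tf.get_variable("b", shape=[10])

    tower_losses = []
    for device in worker_devices:
        with tf.device(device):
            # Each worker device gets its own copy of the compute-intensive
            # part, reading the same shared variables w and b.
            x, y = read_input_batch()  # placeholder for the input pipeline
            logits = tf.matmul(x, w) + b
            tower_losses.append(
                tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits))

    # The per-worker losses are combined in the same graph, and a single
    # session (pointed at one of the workers) drives the whole computation.
    loss = tf.add_n(tower_losses) / len(worker_devices)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    with tf.Session("grpc://worker0.example.com:2222") as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(train_op)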

(In my opinion, the inconsistency between how multiple workers and multiple devices in a single worker are treated is unfortunate. In-graph replication is simpler to understand than between-graph replication, because it doesn't rely on implicit sharing between the replicas. Higher-level libraries, such as tf.learn and TF-Slim, hide some of these issues, and offer hope that we can offer a better replication scheme in the future.)

  1. Why do we say each client builds a similar graph, but not the same graph?

    Because they aren't required to be identical (and there is no integrity check that enforces this). In particular, each worker might create a graph with different explicit device assignments ("/job:worker/task:0", "/job:worker/task:1", etc.). The chief worker might create additional operations that are not created on (or used by) the non-chief workers. However, in most cases, the graphs are logically (i.e. modulo device assignments) the same; the sketch at the end of this item illustrates the typical differences.

    Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers?

    Typically, each worker has a separate graph that contains a single copy of the compute-intensive part of the model. The graph for worker i does not contain the nodes for worker j (assuming i ≠ j). (An exception would be the case where you're using between-graph replication for distributed training, and in-graph replication for using multiple GPUs in each worker. In that case, the graph for a worker would typically contain N copies of the compute-intensive part of the graph, where N is the number of GPUs in that worker.)
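
    To make the "similar, but not necessarily identical" point concrete, here is a hedged sketch of the kind of per-worker differences that arise even though every worker runs the same script (FLAGS, cluster and build_model() are placeholders, as in the earlier sketch):

    # The device string baked into the graph differs with the task index.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % FLAGS.task_index,
            cluster=cluster)):
        loss = build_model()  # placeholder
        global_step = tf.train.get_or_create_global_step()
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)

    if FLAGS.task_index == 0:
        # Only the chief worker's graph contains these bookkeeping ops.
        tf.summary.scalar("loss", loss)
        summary_op = tf.summary.merge_all()
        saver = tf.train.Saver()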

  2. Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs?

    The example code only covers training on multiple machines, and says nothing about how to train on multiple GPUs in each machine. However, the techniques compose easily. In this part of the example:

    # Build model...
    loss = ...
    

    ...you could add a loop over the GPUs in the local machine, to achieve distributed training across multiple workers, each with multiple GPUs.
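
    For instance, that loop might look roughly like the following hedged sketch, where each worker process still builds its own graph (between-graph replication across machines), but that graph now contains one compute tower per local GPU (in-graph replication within the worker). num_gpus and build_tower_loss() are placeholders.

    num_gpus = 4  # assumed number of GPUs on each worker machine
    tower_losses = []
    for gpu in range(num_gpus):
        # Variables still land on /job:ps and are shared by all towers; only
        # the compute is pinned to this worker's gpu:<i> devices, so ONE worker
        # graph ends up holding num_gpus copies of the compute-intensive part.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d/gpu:%d"
                              % (FLAGS.task_index, gpu),
                cluster=cluster)):
            with tf.variable_scope("model", reuse=(gpu > 0)):
                tower_losses.append(build_tower_loss())  # placeholder helper

    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % FLAGS.task_index,
            cluster=cluster)):
        loss = tf.add_n(tower_losses) / num_gpus
        global_step = tf.train.get_or_create_global_step()
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)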

