
tensorflow - Training and Predicting with instance keys

I am able to train my model and use ML Engine for prediction, but my results don't include any identifying information. This works fine when submitting one row at a time, but when submitting multiple rows I have no way of connecting each prediction back to its original input row. The GCP documentation discusses using instance keys, but I can't find any example code that trains and predicts with an instance key. Taking the GCP census example, how would I update the input functions to pass a unique ID through the graph, ignore it during training, and yet return that unique ID with the predictions? Alternatively, if anyone knows of a different example that already uses keys, that would help as well.

From the Census Estimator sample:

import multiprocessing

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils


def serving_input_fn():
    # One placeholder per input column; each accepts a batch of scalars.
    feature_placeholders = {
      column.name: tf.placeholder(column.dtype, [None])
      for column in INPUT_COLUMNS
    }

    # The model expects rank-2 tensors, so add a feature dimension.
    features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in feature_placeholders.items()
    }

    return input_fn_utils.InputFnOps(
      features,
      None,  # no labels at serving time
      feature_placeholders
    )


def generate_input_fn(filenames,
                      num_epochs=None,
                      shuffle=True,
                      skip_header_lines=0,
                      batch_size=40):

    def _input_fn():
        # Expand any glob patterns into a single list of files.
        files = tf.concat([
          tf.train.match_filenames_once(filename)
          for filename in filenames
        ], axis=0)

        filename_queue = tf.train.string_input_producer(
          files, num_epochs=num_epochs, shuffle=shuffle)
        reader = tf.TextLineReader(skip_header_lines=skip_header_lines)

        _, rows = reader.read_up_to(filename_queue, num_records=batch_size)

        # decode_csv expects each record as its own row.
        row_columns = tf.expand_dims(rows, -1)
        columns = tf.decode_csv(row_columns, record_defaults=CSV_COLUMN_DEFAULTS)
        features = dict(zip(CSV_COLUMNS, columns))

        # Remove unused columns
        for col in UNUSED_COLUMNS:
            features.pop(col)

        if shuffle:
            features = tf.train.shuffle_batch(
              features,
              batch_size,
              capacity=batch_size * 10,
              min_after_dequeue=batch_size * 2 + 1,
              num_threads=multiprocessing.cpu_count(),
              enqueue_many=True,
              allow_smaller_final_batch=True
            )
        label_tensor = parse_label_column(features.pop(LABEL_COLUMN))
        return features, label_tensor

    return _input_fn
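
To illustrate what I'm after, here is a rough sketch of the kind of change I mean (the KEY constant, the 'key' column, and the base_model_fn wrapper are made up for illustration; they are not part of the census sample):

KEY = 'key'  # hypothetical instance-key column name


def serving_input_fn_with_key():
    feature_placeholders = {
      column.name: tf.placeholder(column.dtype, [None])
      for column in INPUT_COLUMNS
    }
    # Accept an instance key alongside the real features at serving time.
    feature_placeholders[KEY] = tf.placeholder(tf.string, [None])

    features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in feature_placeholders.items()
    }
    return input_fn_utils.InputFnOps(features, None, feature_placeholders)


def model_fn_with_key(features, labels, mode):
    # Pop the key first so the model never sees it during training...
    key_tensor = features.pop(KEY)
    model_fn_ops = base_model_fn(features, labels, mode)  # hypothetical wrapped model_fn
    # ...then attach it to the predictions so it comes back with each row.
    model_fn_ops.predictions[KEY] = key_tensor
    return model_fn_ops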

Update: I was able to use the suggested code from the answer below; I just needed to alter it slightly to update the output alternatives in the model_fn_ops instead of just the prediction dict. However, this only works if my serving input function is coded for JSON inputs similar to this (sketched below). My serving input function was previously modeled after the CSV serving input function in the Census Core Sample.
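
For reference, the JSON-style serving input function looks roughly like this, with my key placeholder added, plus the small alteration to the answer's code (the exact output_alternatives structure here is from memory and may need adjusting):

def json_serving_input_fn():
    # One placeholder per column, so the exported inputs dict has
    # length > 1 (this matters for the signature issue described below).
    inputs = {
      column.name: tf.placeholder(column.dtype, [None])
      for column in INPUT_COLUMNS
    }
    inputs[KEY] = tf.placeholder(tf.string, [None])
    features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in inputs.items()
    }
    return input_fn_utils.InputFnOps(features, None, inputs)


# The slight alteration: write the key into output_alternatives as well,
# not just the predictions dict, so it survives export. In ModelFnOps,
# output_alternatives maps a head name to (problem_type, outputs_dict).
if model_fn_ops.output_alternatives:
    for _, (_, outputs) in model_fn_ops.output_alternatives.items():
        outputs[KEY] = key_tensor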

I think my problem is coming from the build_standardized_signature_def function, and more specifically from the is_classification_problem function that it calls. With the CSV serving function the input dict has length 1, so this logic ends up using classification_signature_def, which only exposes the scores (which turn out to actually be the probabilities). With the JSON serving input function the input dict has length greater than 1, so predict_signature_def is used instead, which includes all of the outputs.
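
In other words, the dispatch seems to boil down to roughly this (my paraphrase using the public signature builders, not the actual contrib source):

def pick_signature(inputs, outputs):
    # Rough paraphrase of build_standardized_signature_def's dispatch.
    if len(inputs) == 1:
        # Single input (the CSV case): a classification signature is built,
        # which keeps only classes/scores and drops any forwarded key.
        (_, examples), = inputs.items()
        return tf.saved_model.signature_def_utils.classification_signature_def(
            examples=examples,
            classes=outputs.get('classes'),
            scores=outputs.get('scores'))
    # Multiple inputs (the JSON case): a predict signature is built,
    # which keeps every tensor in `outputs`, including the instance key.
    return tf.saved_model.signature_def_utils.predict_signature_def(
        inputs=inputs, outputs=outputs)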



1 Reply

Waiting for answers
