InfeedEnqueueTuple issue when trying to restore updated BERT model checkpoint using Cloud TPU
I'd appreciate any help on the below; thank you in advance. I made a copy of Google's BERT fine-tuning notebook and trained it on the SQuAD dataset using a Cloud TPU and a GCS bucket. The predictions on the dev set look fine, so I downloaded the checkpoint files (model.ckpt.meta, model.ckpt.index and model.ckpt.data) locally and tried to restore with:



sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(META_FILE) # META_FILE being path to .meta
saver.restore(sess, 'model.ckpt')


However, I got the error:



    op_def = op_dict[node.op]
KeyError: 'InfeedEnqueueTuple'


I assume the op is part of the Cloud TPU tooling and that I should therefore continue on the Cloud TPU, so I tried the code below (reference):



# code from cells before includes
...
tf.contrib.cloud.configure_gcs(session, credentials=auth_info)
...
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)
run_config = tf.contrib.tpu.RunConfig(
    cluster=tpu_cluster_resolver,
    model_dir=OUTPUT_DIR,
    save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
    tpu_config=tf.contrib.tpu.TPUConfig(
        iterations_per_loop=ITERATIONS_PER_LOOP,
        num_shards=NUM_TPU_CORES,
        per_host_input_for_training=tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))
...


Problem cell:



"""
# not valid checkpoint error. <bucket> placeholder for cloud bucket name
sess = tf.Session()
META_FILE = "gs://<bucket>/bert/models/bertsquad/model.ckpt-10949.meta"
CKPT_FILE = "gs://<bucket>/bert/models/bertsquad/model.ckpt"
saver = tf.train.import_meta_graph(META_FILE)
saver.restore(sess, CKPT_FILE)
"""

from google.cloud import storage
from tensorflow import MetaGraphDef

client = storage.Client(project="agent-helper-4a014")
bucket = client.get_bucket(<bucket>)
metafile = "bert/models/bertsquad/model.ckpt-10949.meta"
# using full path gs://<bucket>/bert/models/bertsquad doesn't work

blob = bucket.get_blob(metafile)
#blob = bucket.blob(metafile)
#model_graph = blob.download_to_filename("model.ckpt")
model_graph = blob.download_as_string()

mgd = MetaGraphDef()
mgd.ParseFromString(model_graph)

with tf.Session() as sess:
saver = tf.train.import_meta_graph(mgd, clear_devices=True)
init_checkpoint = saver.restore(sess, 'model.ckpt')


That in turn gave the following error:



InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

No OpKernel was registered to support Op 'InfeedEnqueueTuple' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
<no registered kernels>

[[node input_pipeline_task0/while/InfeedQueue/enqueue/0 (defined at <ipython-input-67-e4b52b7b5944>:21) = InfeedEnqueueTuple[_class=["loc:@input_pipeline_task0/while/IteratorGetNext"], device_ordinal=0, dtypes=[DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32], shapes=[[2], [2,384], [2,384], [2,384], [2], [2]], _device="/job:worker/task:0/device:CPU:0"](input_pipeline_task0/while/IteratorGetNext, input_pipeline_task0/while/IteratorGetNext:1, input_pipeline_task0/while/IteratorGetNext:2, input_pipeline_task0/while/IteratorGetNext:3, input_pipeline_task0/while/IteratorGetNext:4, input_pipeline_task0/while/IteratorGetNext:5)]]
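On a side note, the reason the full gs:// path doesn't work with get_bucket()/get_blob() in the cell above is that the google-cloud-storage client takes the bucket name and the object path separately. A small helper (hypothetical, not from the notebook) that does the split:

```python
# Hypothetical helper (plain Python): split a gs:// URL into the
# (bucket_name, blob_name) pair that storage.Client.get_bucket() and
# bucket.get_blob() expect.
def split_gcs_url(url):
    if not url.startswith("gs://"):
        raise ValueError("not a gs:// URL: %r" % url)
    bucket_name, _, blob_name = url[len("gs://"):].partition("/")
    return bucket_name, blob_name

print(split_gcs_url("gs://my-bucket/bert/models/bertsquad/model.ckpt-10949.meta"))
# → ('my-bucket', 'bert/models/bertsquad/model.ckpt-10949.meta')
```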

      python tensorflow google-colaboratory google-cloud-tpu
asked Nov 16 '18 at 8:07 by tofucat
1 Answer
If your goal is prediction, just point model_dir at the location (it must be a GCS bucket) where the checkpoints and meta file are saved. The code will not train again, because a checkpoint already exists for the configured number of training steps and the model graph is unchanged; it will jump straight to prediction.



But if your use case really is to save the checkpoints and restore them only for inference, then follow these steps:




• Recreate the model network layer by layer, exactly as in the original model, or use the saved .meta file to recreate it with the tf.train.import_meta_graph() function, like this:


          saver = tf.train.import_meta_graph('<filename>.meta')




• Then restore the checkpoint with: saver.restore(sess, 'model.ckpt')


NOTE: the graph into which the checkpoint is restored must exactly match the original graph from which the checkpoint was saved.
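To illustrate why clear_devices=True alone was not enough in the question: it only blanks recorded device placements, while the failure comes from ops (such as InfeedEnqueueTuple from the error above) that a CPU-only TensorFlow build never registers at all. A plain-Python sketch of the idea (illustrative only; in practice you would filter the GraphDef's node list, or rebuild the graph without the TPU input pipeline, before importing):

```python
# Illustrative only (no TensorFlow required): a CPU-only build has no kernel
# for TPU infeed/outfeed ops, so nodes using them must be stripped (or the
# graph rebuilt without them) before the meta graph can be imported locally.
TPU_ONLY_OPS = {"InfeedEnqueueTuple", "OutfeedDequeueTuple"}

def strip_tpu_nodes(nodes):
    """Drop nodes whose op is TPU-only; 'nodes' mimics GraphDef.node entries."""
    return [n for n in nodes if n["op"] not in TPU_ONLY_OPS]

graph_nodes = [
    {"name": "input_pipeline_task0/while/InfeedQueue/enqueue/0",
     "op": "InfeedEnqueueTuple"},
    {"name": "bert/embeddings/word_embeddings", "op": "VariableV2"},
]
print([n["name"] for n in strip_tpu_nodes(graph_nodes)])
# → ['bert/embeddings/word_embeddings']
```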



          Hope this solves your issue.
• Thank you, I still can't get it to work but perhaps I need to read up a bit more and refer to this after. – tofucat, Nov 23 '18 at 6:17
answered Nov 20 '18 at 20:10 by aman2930