What's the difference between a name scope and a variable scope in TensorFlow?



























What's the difference between these functions?




tf.variable_op_scope(values, name, default_name, initializer=None)



Returns a context manager for defining an op that creates variables.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.







tf.op_scope(values, name, default_name=None)



Returns a context manager for use when defining a Python op.
This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.







tf.name_scope(name)



Wrapper for Graph.name_scope() using the default graph.
See Graph.name_scope() for more details.







tf.variable_scope(name_or_scope, reuse=None, initializer=None)



Returns a context for variable scope.
Variable scope allows you to create new variables and to share already created ones, while providing checks against accidental creation or sharing. For details, see the Variable Scope How To; here we present only a few basic examples.































  • 10





    Can you just accept the answer which you think is the best?

    – nbro
    Jan 4 '18 at 3:19
















tensorflow






edited Nov 11 '17 at 17:56 – nbro

asked Mar 10 '16 at 14:19 – user2886263
7 Answers
































Let's begin with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows you to share variables accessed in different parts of the code without passing references to the variable around.



The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with that name or retrieve the one that was created before. This is different from using the tf.Variable constructor, which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with that name already exists).



It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.



As a result, we end up having two different types of scopes:





  • name scope, created using tf.name_scope


  • variable scope, created using tf.variable_scope


Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.



However, name scope is ignored by tf.get_variable. We can see that in the following example:



with tf.name_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0


The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:



with tf.variable_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # my_scope/var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0


This allows us to easily share variables across different parts of the program, even within different name scopes:



with tf.name_scope("foo"):
    with tf.variable_scope("var_scope"):
        v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
    with tf.variable_scope("var_scope", reuse=True):
        v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name)   # var_scope/var:0
print(v1.name)  # var_scope/var:0




UPDATE



As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.







  • 26





    Thanks for the clear explanation. Naturally, a follow up question would be "Why does Tensorflow have both of these confusingly similar mechanisms? Why not replace them with just one scope method which effectively does a variable_scope?"

    – John
    Feb 25 '17 at 18:02






  • 6





    I don't think I understand conceptually why the distinction between variable_scope vs name_scope is even needed. If one creates a variable (in any way, with tf.Variable or tf.get_variable), it seems more natural to me that we should always be able to get it if we specify the scope or its full name. I don't understand why one ignores the scope name while the other doesn't. Do you understand the rationale for this weird behaviour?

    – Charlie Parker
    Mar 20 '17 at 19:31






  • 17





    The reason is that with variable scope, one can define separate scopes for re-usable variables that are not affected by the current name scope used to define operations.

    – Andrzej Pronobis
    Mar 21 '17 at 5:23








  • 1





    What I understand is that name_scope is mainly useful when you name two variables with the same name but in different operations.

    – DINESHKUMAR MURUGAN
    Jul 14 '17 at 17:45






  • 4





    Hello, can you explain why the variable name in a variable_scope always ends with a :0? Does this mean there may be variable names ending with :1, :2, etc.? How can this happen?

    – James Fan
    Aug 31 '17 at 8:49

































Both variable_op_scope and op_scope are now deprecated and should not be used at all.



Regarding the other two, I also had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:



import tensorflow as tf


def scoping(fn, scope1, scope2, vals):
    with fn(scope1):
        a = tf.Variable(vals[0], name='a')
        b = tf.get_variable('b', initializer=vals[1])
        c = tf.constant(vals[2], name='c')

        with fn(scope2):
            d = tf.add(a * b, c, name='res')

        print('\n  '.join([scope1, a.name, b.name, c.name, d.name]), '\n')
    return d

d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])
d2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    sess.run(tf.global_variables_initializer())
    print(sess.run([d1, d2]))
    writer.close()


Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function, I also print the names of all the variables. After that, I execute the graph to get the resulting values and save the event files to investigate them in TensorBoard. If you run this, you will get the following:



scope_vars
scope_vars/a:0
scope_vars/b:0
scope_vars/c:0
scope_vars/res/res:0

scope_name
scope_name/a:0
b:0
scope_name/c:0
scope_name/res/res:0


You see a similar pattern if you open TensorBoard: b sits outside the scope_name rectangle in the graph visualization.





This gives you the answer:



Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops, and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable(), because it assumes that you know which variable, and in which scope, you want to use.
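That rule can be condensed into a small pure-Python helper (an invented sketch, not part of the TF API), mirroring the names printed by the scoping() example:

```python
def full_name(scope, scope_kind, node_name, via_get_variable=False):
    """Sketch of the TF 1.x prefixing rule: a variable scope prefixes
    everything; a name scope prefixes everything except variables
    created with tf.get_variable."""
    if scope_kind == "name_scope" and via_get_variable:
        return node_name  # name scope is ignored by tf.get_variable
    return "%s/%s" % (scope, node_name)

print(full_name("scope_vars", "variable_scope", "b", via_get_variable=True))  # scope_vars/b
print(full_name("scope_name", "name_scope", "a"))                             # scope_name/a
print(full_name("scope_name", "name_scope", "b", via_get_variable=True))      # b
```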



The documentation on Sharing Variables tells you that




tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().




The same documentation provides more details on how variable scope works and when it is useful.







  • 2





    Fabulous answer with the example and the visuals, let's get this answer upvoted folks!

    – David Parks
    Mar 14 '18 at 18:12

































A namespace is a way to organize names for variables and operators in a hierarchical manner (e.g. "scopeA/scopeB/scopeC/op1").





  • tf.name_scope creates a namespace for operators in the default graph.

  • tf.variable_scope creates a namespace for both variables and operators in the default graph.


  • tf.op_scope: the same as tf.name_scope, but for the graph in which the specified variables were created.


  • tf.variable_op_scope: the same as tf.variable_scope, but for the graph in which the specified variables were created.



Links to the sources above help to disambiguate this documentation issue.



This example shows that all types of scopes define namespaces for both variables and operators, with the following differences:




  1. scopes defined by tf.variable_op_scope or tf.variable_scope are compatible with tf.get_variable (it ignores the two other scopes)


  2. tf.op_scope and tf.variable_op_scope just select a graph from a list of specified variables to create a scope for. Other than that, their behavior is equal to tf.name_scope and tf.variable_scope respectively


  3. tf.variable_scope and variable_op_scope add a specified or default initializer.
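Point 3 can be sketched with a toy scope object (an invented class, not the real TF API): variables created inside the scope pick up the scope's default initializer unless they pass their own.

```python
class ToyVariableScope:
    """Toy model: a variable scope that supplies a default initializer
    to variables created inside it, unless they specify their own."""

    def __init__(self, name, initializer=None):
        self.name = name
        self.initializer = initializer

    def get_variable(self, var_name, initializer=None):
        # Fall back to the scope-level initializer, like tf.variable_scope does.
        init = initializer if initializer is not None else self.initializer
        value = init() if init is not None else 0.0
        return ("%s/%s" % (self.name, var_name), value)

scope = ToyVariableScope("vs", initializer=lambda: 1.0)
print(scope.get_variable("a"))                           # ('vs/a', 1.0) scope default
print(scope.get_variable("b", initializer=lambda: 2.0))  # ('vs/b', 2.0) own initializer
```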







  • "For the graph in which the specified variables were created"? Does this mean, as in the example above by fabrizioM with tf.variable_op_scope([a,b],name,"mysum2") as scope, that the parameters a and b are not affected by this function, while variables defined in this scope are?

    – user2886263
    Mar 11 '16 at 1:54













  • The answer to both questions is yes: the graph is the one in which the specified variables were created, and they are not modified.

    – Alexander Gorban
    Mar 11 '16 at 4:56











  • Does this mean that tf.name_scope and tf.variable_scope can only be used in the default graph, and that when you explicitly define and construct a graph using tf.Graph(), the other two functions tf.op_scope and tf.variable_op_scope cannot be used in that graph?

    – user2886263
    Mar 12 '16 at 11:21


































As of API r0.11, op_scope and variable_op_scope are both deprecated.
name_scope and variable_scope can be nested:



with tf.name_scope('ns'):
    with tf.variable_scope('vs'):  # scope creation
        v1 = tf.get_variable("v1", [1.0])   # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')  # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                        # v3.name = 'ns/vs/add:0'





You can think of them as two groups: variable_op_scope and op_scope take a set of variables as input and are designed to create operations. The difference is in how they affect the creation of variables with tf.get_variable:



def mysum(a, b, name=None):
    with tf.op_scope([a, b], name, "mysum") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert v2.name == "mysum/v2:0", v2.name
        return tf.add(a, b)

def mysum2(a, b, name=None):
    with tf.variable_op_scope([a, b], name, "mysum2") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "mysum2/v:0", v.name
        assert v2.name == "mysum2/v2:0", v2.name
        return tf.add(a, b)

with tf.Graph().as_default():
    op = mysum(tf.Variable(1), tf.Variable(2))
    op2 = mysum2(tf.Variable(1), tf.Variable(2))
    assert op.name == 'mysum/Add:0', op.name
    assert op2.name == 'mysum2/Add:0', op2.name


Notice the name of the variable v in the two examples.



The same holds for tf.name_scope and tf.variable_scope:



with tf.Graph().as_default():
    with tf.name_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name

with tf.Graph().as_default():
    with tf.variable_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "name_scope/v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name


You can read more about variable scope in the tutorial.
A similar question was asked before on Stack Overflow.






    share|improve this answer

































      6














Let's make it simple: just use tf.variable_scope. Quoting a TF developer:




      Currently, we recommend everyone to use variable_scope and not use name_scope except for internal code and libraries.




Besides the fact that variable_scope's functionality basically extends that of name_scope, consider how the two do not play so nicely together:



with tf.name_scope('foo'):
    with tf.variable_scope('bar'):
        x = tf.get_variable('x', shape=())
        x2 = tf.square(x**2, name='x2')

print(x.name)
# bar/x:0
print(x2.name)
# foo/bar/x2:0


By sticking to variable_scope only, you avoid some headaches due to this kind of incompatibility.






From the last section of this page of the TensorFlow documentation: Names of ops in tf.variable_scope()


[...] when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name"). For example:


with tf.variable_scope("foo"):
    x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"


Name scopes can be opened in addition to a variable scope, and then they will only affect the names of the ops, but not of variables.


with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"


When opening a variable scope using a captured object instead of a string, we do not alter the current name scope for ops.







          7 Answers
          7






          active

          oldest

          votes








          7 Answers
          7






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          307














          Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.



          The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).



          It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.



          As a result, we end up having two different types of scopes:





          • name scope, created using tf.name_scope


          • variable scope, created using tf.variable_scope


          Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.



          However, name scope is ignored by tf.get_variable. We can see that in the following example:



          with tf.name_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:



          with tf.variable_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # my_scope/var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          This allows us to easily share variables across different parts of the program, even within different name scopes:



          with tf.name_scope("foo"):
          with tf.variable_scope("var_scope"):
          v = tf.get_variable("var", [1])
          with tf.name_scope("bar"):
          with tf.variable_scope("var_scope", reuse=True):
          v1 = tf.get_variable("var", [1])
          assert v1 == v
          print(v.name) # var_scope/var:0
          print(v1.name) # var_scope/var:0




          UPDATE



          As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.






          share|improve this answer





















          • 26





            Thanks for the clear explanation. Naturally, a follow up question would be "Why does Tensorflow have both of these confusingly similar mechanisms? Why not replace them with just one scope method which effectively does a variable_scope?"

            – John
            Feb 25 '17 at 18:02






          • 6





            I don't think I understand conceptually why the distinction between variable_scope vs name_scope is even needed. If one creates a variable (in any way with tf.Variable or tf.get_variable), it seems more natural to me that we should always be able to get it if we specify the scope or its full name. I don't understand why one ignores the scope name thing while the other doesn't. Do you understand the rational for this weird behaviour?

            – Charlie Parker
            Mar 20 '17 at 19:31






          • 17





            The reason is that with variable scope, one can define separate scopes for re-usable variables that are not affected by the current name scope used to define operations.

            – Andrzej Pronobis
            Mar 21 '17 at 5:23








          • 1





            What I understand is name_scope is mainly useful when you name two variables with same name but in different operations.

            – DINESHKUMAR MURUGAN
            Jul 14 '17 at 17:45






          • 4





            Hello, can you explain why the variable name in a variable_scope always ends with a :0? Does this means there may be variable names ends with :1, :2, etc, so how can this happens?

            – James Fan
            Aug 31 '17 at 8:49
















          307














          Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.



          The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).



          It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.



          As a result, we end up having two different types of scopes:





          • name scope, created using tf.name_scope


          • variable scope, created using tf.variable_scope


          Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.



          However, name scope is ignored by tf.get_variable. We can see that in the following example:



          with tf.name_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:



          with tf.variable_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # my_scope/var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          This allows us to easily share variables across different parts of the program, even within different name scopes:



          with tf.name_scope("foo"):
          with tf.variable_scope("var_scope"):
          v = tf.get_variable("var", [1])
          with tf.name_scope("bar"):
          with tf.variable_scope("var_scope", reuse=True):
          v1 = tf.get_variable("var", [1])
          assert v1 == v
          print(v.name) # var_scope/var:0
          print(v1.name) # var_scope/var:0




          UPDATE



          As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.






          share|improve this answer





















          • 26





            Thanks for the clear explanation. Naturally, a follow up question would be "Why does Tensorflow have both of these confusingly similar mechanisms? Why not replace them with just one scope method which effectively does a variable_scope?"

            – John
            Feb 25 '17 at 18:02






          • 6





            I don't think I understand conceptually why the distinction between variable_scope vs name_scope is even needed. If one creates a variable (in any way with tf.Variable or tf.get_variable), it seems more natural to me that we should always be able to get it if we specify the scope or its full name. I don't understand why one ignores the scope name thing while the other doesn't. Do you understand the rational for this weird behaviour?

            – Charlie Parker
            Mar 20 '17 at 19:31






          • 17





            The reason is that with variable scope, one can define separate scopes for re-usable variables that are not affected by the current name scope used to define operations.

            – Andrzej Pronobis
            Mar 21 '17 at 5:23








          • 1





            What I understand is name_scope is mainly useful when you name two variables with same name but in different operations.

            – DINESHKUMAR MURUGAN
            Jul 14 '17 at 17:45






          • 4





            Hello, can you explain why the variable name in a variable_scope always ends with a :0? Does this means there may be variable names ends with :1, :2, etc, so how can this happens?

            – James Fan
            Aug 31 '17 at 8:49














          307












          307








          307







          Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.



          The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).



          It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.



          As a result, we end up having two different types of scopes:





          • name scope, created using tf.name_scope


          • variable scope, created using tf.variable_scope


          Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.



          However, name scope is ignored by tf.get_variable. We can see that in the following example:



          with tf.name_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:



          with tf.variable_scope("my_scope"):
          v1 = tf.get_variable("var1", [1], dtype=tf.float32)
          v2 = tf.Variable(1, name="var2", dtype=tf.float32)
          a = tf.add(v1, v2)

          print(v1.name) # my_scope/var1:0
          print(v2.name) # my_scope/var2:0
          print(a.name) # my_scope/Add:0


          This allows us to easily share variables across different parts of the program, even within different name scopes:



with tf.name_scope("foo"):
    with tf.variable_scope("var_scope"):
        v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
    with tf.variable_scope("var_scope", reuse=True):
        v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name)   # var_scope/var:0
print(v1.name)  # var_scope/var:0




          UPDATE



          As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.






answered May 30 '16 at 23:09 by Andrzej Pronobis; edited May 2 '18 at 16:12 by pradeep
          • Thanks for the clear explanation. Naturally, a follow-up question would be "Why does Tensorflow have both of these confusingly similar mechanisms? Why not replace them with just one scope method which effectively does a variable_scope?" – John, Feb 25 '17 at 18:02

          • I don't think I understand conceptually why the distinction between variable_scope vs name_scope is even needed. If one creates a variable (in any way, with tf.Variable or tf.get_variable), it seems more natural to me that we should always be able to get it if we specify the scope or its full name. I don't understand why one ignores the scope name while the other doesn't. Do you understand the rationale for this weird behaviour? – Charlie Parker, Mar 20 '17 at 19:31

          • The reason is that with variable scope, one can define separate scopes for re-usable variables that are not affected by the current name scope used to define operations. – Andrzej Pronobis, Mar 21 '17 at 5:23

          • What I understand is that name_scope is mainly useful when you name two variables with the same name but in different operations. – DINESHKUMAR MURUGAN, Jul 14 '17 at 17:45

          • Hello, can you explain why the variable name in a variable_scope always ends with a :0? Does this mean there may be variable names ending with :1, :2, etc., and how can this happen? – James Fan, Aug 31 '17 at 8:49






















          Both variable_op_scope and op_scope are now deprecated and should not be used at all.



          Regarding the other two, I also had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:



import tensorflow as tf

def scoping(fn, scope1, scope2, vals):
    with fn(scope1):
        a = tf.Variable(vals[0], name='a')
        b = tf.get_variable('b', initializer=vals[1])
        c = tf.constant(vals[2], name='c')

        with fn(scope2):
            d = tf.add(a * b, c, name='res')

        print('\n  '.join([scope1, a.name, b.name, c.name, d.name]), '\n')
    return d

d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])
d2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    sess.run(tf.global_variables_initializer())
    print(sess.run([d1, d2]))
    writer.close()


Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function, I also print the names of all the variables. After that, I execute the graph to get the resulting values and save event files so they can be investigated in TensorBoard. If you run this, you will get the following:



          scope_vars
          scope_vars/a:0
          scope_vars/b:0
          scope_vars/c:0
          scope_vars/res/res:0

          scope_name
          scope_name/a:0
          b:0
          scope_name/c:0
          scope_name/res/res:0


You see a similar pattern if you open TensorBoard (note that b is outside the scope_name rectangle):
          [TensorBoard graph screenshot]





          This gives you the answer:



Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops, and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable() because it assumes that you know which variable, and in which scope, you want to use.



The documentation on Sharing variables tells you that




          tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().




The same documentation provides more details on how variable scope works and when it is useful.






answered Apr 24 '17 at 6:48 by Salvador Dali; edited Nov 11 '17 at 18:01 by nbro
          • Fabulous answer with the example and the visuals, let's get this answer upvoted folks! – David Parks, Mar 14 '18 at 18:12






























Namespaces are a way to organize names for variables and operators in a hierarchical manner (e.g. "scopeA/scopeB/scopeC/op1").





          • tf.name_scope creates a namespace for operators in the default graph.

          • tf.variable_scope creates a namespace for both variables and operators in the default graph.

          • tf.op_scope: same as tf.name_scope, but for the graph in which the specified variables were created.

          • tf.variable_op_scope: same as tf.variable_scope, but for the graph in which the specified variables were created.



          Links to the sources above help to disambiguate this documentation issue.



This example shows that all types of scopes define namespaces for both variables and operators, with the following differences:




          1. Scopes defined by tf.variable_op_scope or tf.variable_scope are compatible with tf.get_variable (which ignores the other two scope types).

          2. tf.op_scope and tf.variable_op_scope just select a graph from a list of specified variables to create a scope for. Otherwise their behavior is equal to tf.name_scope and tf.variable_scope, respectively.

          3. tf.variable_scope and tf.variable_op_scope add a specified or default initializer.
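The core distinction above — operators pick up every enclosing scope, while a get_variable-style lookup only sees variable scopes — can be sketched with a toy scope stack (plain Python, not TensorFlow; the helper names are made up for illustration):

```python
from contextlib import contextmanager

name_stack, var_stack = [], []  # prefixes for ops vs. for get_variable

@contextmanager
def name_scope(name):
    # A name scope only affects operator names.
    name_stack.append(name)
    try:
        yield
    finally:
        name_stack.pop()

@contextmanager
def variable_scope(name):
    # A variable scope pushes onto *both* stacks.
    name_stack.append(name)
    var_stack.append(name)
    try:
        yield
    finally:
        name_stack.pop()
        var_stack.pop()

def op_name(name):        # ops see every enclosing scope
    return "/".join(name_stack + [name])

def variable_name(name):  # a get_variable-style lookup sees only variable scopes
    return "/".join(var_stack + [name])

with name_scope("ns"):
    with variable_scope("vs"):
        assert op_name("add") == "ns/vs/add"
        assert variable_name("v1") == "vs/v1"  # the name scope "ns" is ignored
```

Under these assumptions, the sketch reproduces the naming shown in the examples above (var1 created under a pure name scope gets no prefix, while under a variable scope it does).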






answered Mar 10 '16 at 23:37 by Alexander Gorban; edited Nov 11 '17 at 18:03 by nbro
          • "For the graph in which specified variables were created"? Does this mean, as in the example above by fabrizioM with tf.variable_op_scope([a,b],name,"mysum2") as scope, that the parameters a and b are not affected by this function while variables defined in this scope are? – user2886263, Mar 11 '16 at 1:54

          • The answer for both questions is yes: the graph in which the specified variables were created, and they are not modified. – Alexander Gorban, Mar 11 '16 at 4:56

          • Does this mean that tf.name_scope and tf.variable_scope can only be used in the default graph, and that when you explicitly define and construct a graph using tf.Graph(), the other two functions tf.op_scope and tf.variable_op_scope cannot be used in that graph? – user2886263, Mar 12 '16 at 11:21
















As of API r0.11, op_scope and variable_op_scope are both deprecated.
name_scope and variable_scope can be nested:

with tf.name_scope('ns'):
    with tf.variable_scope('vs'):           # scope creation
        v1 = tf.get_variable("v1", [1])     # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')  # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                        # v3.name = 'ns/vs/add:0'





answered Jan 16 '17 at 18:20 by sgu (edited Jan 1 '18 at 20:30 by Uzair Ahmed)

                  7














You can think of them as two groups: variable_op_scope and op_scope take a set of tensors as input and are designed for creating operations. The difference is in how they affect the creation of variables with tf.get_variable:

def mysum(a, b, name=None):
    with tf.op_scope([a, b], name, "mysum") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert v2.name == "mysum/v2:0", v2.name
        return tf.add(a, b)

def mysum2(a, b, name=None):
    with tf.variable_op_scope([a, b], name, "mysum2") as scope:
        v = tf.get_variable("v", 1)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "mysum2/v:0", v.name
        assert v2.name == "mysum2/v2:0", v2.name
        return tf.add(a, b)

with tf.Graph().as_default():
    op = mysum(tf.Variable(1), tf.Variable(2))
    op2 = mysum2(tf.Variable(1), tf.Variable(2))
    assert op.name == 'mysum/Add:0', op.name
    assert op2.name == 'mysum2/Add:0', op2.name

Notice the name of the variable v in the two examples.

The same holds for tf.name_scope and tf.variable_scope:

with tf.Graph().as_default():
    with tf.name_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name

with tf.Graph().as_default():
    with tf.variable_scope("name_scope") as scope:
        v = tf.get_variable("v", [1])
        op = tf.add(v, v)
        v2 = tf.Variable([0], name="v2")
        assert v.name == "name_scope/v:0", v.name
        assert op.name == "name_scope/Add:0", op.name
        assert v2.name == "name_scope/v2:0", v2.name

You can read more about variable scope in the tutorial.
A similar question was asked before on Stack Overflow.
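One way to internalize these results is to picture two independent prefix stacks: tf.name_scope pushes only onto the stack used for op names (and tf.Variable names), while tf.variable_scope pushes onto that stack and onto a second one that tf.get_variable reads. The toy model below is plain Python, not TensorFlow's actual implementation, but it reproduces the v / v2 names from the second pair of examples:

```python
from contextlib import contextmanager

# Toy model of TF 1.x scoping: two independent prefix stacks.
# Not TensorFlow's real code -- just a sketch of the observed behaviour.
_name_stack = []      # read by ops and tf.Variable
_variable_stack = []  # read by tf.get_variable

@contextmanager
def name_scope(name):
    _name_stack.append(name)        # affects op names only
    try:
        yield
    finally:
        _name_stack.pop()

@contextmanager
def variable_scope(name):
    _variable_stack.append(name)    # variable_scope pushes both stacks
    _name_stack.append(name)
    try:
        yield
    finally:
        _name_stack.pop()
        _variable_stack.pop()

def get_variable(name):             # ignores name_scope, like tf.get_variable
    return "/".join(_variable_stack + [name]) + ":0"

def op_name(name):                  # ops see the full name-scope prefix
    return "/".join(_name_stack + [name]) + ":0"

with name_scope("name_scope"):
    assert get_variable("v") == "v:0"             # name_scope is ignored
    assert op_name("v2") == "name_scope/v2:0"     # ops are prefixed

with variable_scope("name_scope"):
    assert get_variable("v") == "name_scope/v:0"  # variable_scope applies
    assert op_name("v2") == "name_scope/v2:0"
```

The same model also explains the nested-scope names: a name_scope opened inside a variable_scope lengthens only the op prefix, never the variable prefix.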






answered Mar 10 '16 at 21:42 by fabrizioM (edited Jul 12 '17 at 6:51 by GoingMyWay)

                          6














Let's make it simple: just use tf.variable_scope. Quoting a TF developer:

Currently, we recommend everyone to use variable_scope and not use name_scope except for internal code and libraries.

Besides the fact that variable_scope's functionality basically extends that of name_scope, consider how the two do not play so nicely together:

with tf.name_scope('foo'):
    with tf.variable_scope('bar'):
        x = tf.get_variable('x', shape=())
        x2 = tf.square(x**2, name='x2')
print(x.name)
# bar/x:0
print(x2.name)
# foo/bar/x2:0

By sticking to variable_scope only, you avoid some headaches due to this kind of incompatibility.
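The sharing part, which is the reason variable_scope plus tf.get_variable exists at all, can be sketched with a registry keyed by the scoped name. This is a conceptual model with made-up helper names, not TensorFlow's API; it only mirrors the create-vs-reuse behaviour of tf.get_variable:

```python
from contextlib import contextmanager

# Conceptual model of variable sharing -- the helpers are invented,
# only the behaviour mirrors tf.get_variable under a reusing scope.
_registry = {}    # scoped name -> variable object
_scope = []       # current variable-scope path
_reuse = [False]

@contextmanager
def variable_scope(name, reuse=False):
    _scope.append(name)
    _reuse.append(reuse or _reuse[-1])  # reuse is inherited by inner scopes
    try:
        yield
    finally:
        _reuse.pop()
        _scope.pop()

def get_variable(name):
    full = "/".join(_scope + [name])
    if _reuse[-1]:
        # reuse=True: the variable must already exist
        return _registry[full]
    if full in _registry:
        raise ValueError("Variable %s already exists" % full)
    _registry[full] = object()          # stand-in for a real tf.Variable
    return _registry[full]

with variable_scope("layer"):
    w1 = get_variable("w")              # creates layer/w
with variable_scope("layer", reuse=True):
    w2 = get_variable("w")              # returns the same object
assert w1 is w2
```

This also shows why get_variable raises when you create the same name twice without reuse: the check is what provides the "not create or share by accident" guarantee the documentation mentions.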






answered May 17 '18 at 8:43 by P-Gn (edited May 27 '18 at 16:50)

                                  2














From the last section of this page of the tensorflow documentation: Names of ops in tf.variable_scope()

[...] when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name"). For example:

with tf.variable_scope("foo"):
    x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"

Name scopes can be opened in addition to a variable scope, and then they will only affect the names of the ops, but not of variables.

with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"

When opening a variable scope using a captured object instead of a string, we do not alter the current name scope for ops.







answered Mar 7 '17 at 21:49 by Guillermo González de Garibay (edited Jul 12 '17 at 6:50 by GoingMyWay)



