How to build a TensorFlow op with Bazel with additional include directories



























I have the TensorFlow binaries (already compiled), and I have added two files to the TensorFlow source:

tensorflow/core/user_ops/icp_op_kernel.cc - contains:
https://github.com/tensorflow/models/blob/master/research/vid2depth/ops/icp_op_kernel.cc

tensorflow/core/user_ops/BUILD - contains:



load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")

tf_custom_op_library(
    name = "icp_op_kernel.so",
    srcs = ["icp_op_kernel.cc"],
)


I am trying to build with:



bazel build --config opt //tensorflow/core/user_ops:icp_op_kernel.so


And I get:



tensorflow/core/user_ops/icp_op_kernel.cc(16): fatal error C1083: Cannot open include file: 'pcl/point_types.h': No such file or directory


This is because Bazel doesn't know where the PCL include files are. I have installed PCL, and the include directory is:



C:\Program Files\PCL 1.6.0\include\pcl-1.6


How do I tell Bazel to also include this directory?



Also, I will probably need to add C:\Program Files\PCL 1.6.0\lib to the linker path. How do I do that?
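For reference, the usual way to expose an external, pre-installed library such as PCL to Bazel is a new_local_repository rule in the WORKSPACE file. The sketch below is untested on this setup; the glob pattern and paths are assumptions that must match the actual PCL layout (Bazel expects forward slashes even on Windows):

```
# WORKSPACE (sketch, untested; paths are assumptions)
new_local_repository(
    name = "pcl",
    path = "C:/Program Files/PCL 1.6.0",
    build_file_content = """
cc_library(
    name = "pcl",
    hdrs = glob(["include/pcl-1.6/**"]),
    includes = ["include/pcl-1.6"],
    visibility = ["//visibility:public"],
)
""",
)
```

The op target could then depend on it via deps = ["@pcl"] in tf_custom_op_library; the import libraries under the lib directory would be listed in the same cc_library to cover the linker side.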





































tensorflow bazel
asked Nov 14 '18 at 15:17
Dor Peretz

1 Answer














You don't need Bazel to build custom ops; if the Bazel build fails, you can compile them directly with your compiler.



I have implemented custom ops on both CPU and GPU, basically following the two TensorFlow tutorials.



For CPU ops, follow the TensorFlow tutorial on building the op library:



TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2


Note on gcc version >= 5: gcc uses the new C++ ABI since version 5. The binary pip packages available on the TensorFlow website are built with gcc 4, which uses the older ABI. If you compile your op library with gcc >= 5, add -D_GLIBCXX_USE_CXX11_ABI=0 to the command line to make the library compatible with the older ABI.



For GPU ops, check the current official build instructions in the TensorFlow guide on adding GPU support for an op:



nvcc -std=c++11 -c -o cuda_op_kernel.cu.o cuda_op_kernel.cu.cc \
  ${TF_CFLAGS[@]} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC

g++ -std=c++11 -shared -o cuda_op_kernel.so cuda_op_kernel.cc \
  cuda_op_kernel.cu.o ${TF_CFLAGS[@]} -fPIC -lcudart ${TF_LFLAGS[@]}


As the tutorial notes, if your CUDA libraries are not installed in /usr/local/lib64, you'll need to specify the path explicitly in the second (g++) command above. For example, add -L /usr/local/cuda-8.0/lib64/ if your CUDA is installed in /usr/local/cuda-8.0.



Also note that in some Linux setups, additional options are needed in the nvcc compile step: add -D_MWAITXINTRIN_H_INCLUDED to the nvcc command line to avoid errors from mwaitxintrin.h.
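Since the original question targets Windows with MSVC, the CPU build line above would have to be translated to cl.exe. The following is a rough, untested sketch: the TensorFlow include path is a placeholder, and the exact flags reported by tf.sysconfig.get_compile_flags() on your installation should be used instead:

```
REM Sketch only, untested; replace the include paths with your actual locations
cl /LD /EHsc /MD icp_op_kernel.cc ^
   /I "C:\path\to\tensorflow\include" ^
   /I "C:\Program Files\PCL 1.6.0\include\pcl-1.6" ^
   /link /LIBPATH:"C:\Program Files\PCL 1.6.0\lib" /OUT:icp_op_kernel.dll
```

Here /LD produces a DLL, /I adds include directories, and /link /LIBPATH adds the linker search path the question asks about.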
































• I am actually trying to build it on Windows; I will try to translate what you proposed.

  – Dor Peretz
  Nov 15 '18 at 13:20











• The following fails: nvcc -std=c++11 -c -o cuda_op_kernel.cu.o cuda_op_kernel.cu.cc -IC:\Users\dorp\Desktop\tensorflow-1.11.0 -D_GLIBCXX_USE_CXX11_ABI=0 -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC Error: C:/Users/dorp/Desktop/tensorflow-1.11.0/third_party/eigen3/unsupported/Eigen/CXX11/Tensor(1): fatal error C1083: Cannot open include file: 'unsupported/Eigen/CXX11/Tensor': No such file or directory

  – Dor Peretz
  Nov 20 '18 at 9:15













edited Nov 14 '18 at 17:59
answered Nov 14 '18 at 17:53









Panfeng Li
