How to use a custom op to build a TensorFlow graph in C++?
From the TensorFlow documentation, the following can be done to build a graph using built-in ops:
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"
int main() {
using namespace tensorflow;
using namespace tensorflow::ops;
Scope root = Scope::NewRootScope();
// Matrix A = [3 2; -1 0]
auto A = Const(root, { {3.f, 2.f}, {-1.f, 0.f} });
// Vector b = [3 5]
auto b = Const(root, { {3.f, 5.f} });
// v = Ab^T
auto v = MatMul(root.WithOpName("v"), A, b, MatMul::TransposeB(true));
std::vector<Tensor> outputs;
ClientSession session(root);
// Run and fetch v
TF_CHECK_OK(session.Run({v}, &outputs));
// Expect outputs[0] == [19; -3]
LOG(INFO) << outputs[0].matrix<float>();
return 0;
}
It seems that the MatMul class is auto-generated, since there is no tensorflow/cc/ops/math_ops.h in the GitHub source tree.
How can I do the same thing for a custom op, such as the ZeroOut op from here?
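For reference, the ZeroOut op from that guide is defined roughly like this (my sketch of the tutorial code; the exact code there may differ slightly):

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();
    });

class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // Grab the input tensor and allocate an output of the same shape.
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();
    Tensor* output_tensor = nullptr;
    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                     &output_tensor));
    auto output_flat = output_tensor->flat<int32>();

    // Keep the first element of the flattened input and zero the rest.
    const int N = input.size();
    for (int i = 1; i < N; i++) output_flat(i) = 0;
    if (N > 0) output_flat(0) = input(0);
  }
};

REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);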
c++ tensorflow machine-learning
asked Nov 20 at 0:14
tianyapiaozi
1 Answer
Taking ZeroOut from here as an example, you have to do the following:
class ZeroOut {
 public:
  ZeroOut(const ::tensorflow::Scope& scope, ::tensorflow::Input x);
  operator ::tensorflow::Output() const { return y; }
  operator ::tensorflow::Input() const { return y; }
  ::tensorflow::Node* node() const { return y.node(); }

  ::tensorflow::Output y;
};

ZeroOut::ZeroOut(const ::tensorflow::Scope& scope, ::tensorflow::Input x) {
  if (!scope.ok()) return;
  auto _x = ::tensorflow::ops::AsNodeOut(scope, x);
  if (!scope.ok()) return;
  ::tensorflow::Node* ret;
  const auto unique_name = scope.GetUniqueNameForOp("ZeroOut");
  auto builder = ::tensorflow::NodeBuilder(unique_name, "ZeroOut")
                     .Input(_x);
  scope.UpdateBuilder(&builder);
  scope.UpdateStatus(builder.Finalize(scope.graph(), &ret));
  if (!scope.ok()) return;
  scope.UpdateStatus(scope.DoShapeInference(ret));
  this->y = Output(ret, 0);
}
Then you can use it to build a graph:
Scope root = Scope::NewRootScope();
// Matrix A = [3 2; -1 0]
auto A = Const(root, { {3, 2}, {-1, 0} });
auto v = ZeroOut(root.WithOpName("v"), A);
std::vector<Tensor> outputs;
ClientSession session(root);
// Run and fetch v
TF_CHECK_OK(session.Run({v}, &outputs));
LOG(INFO) << outputs[0].matrix<int>();
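Note that the REGISTER_OP and kernel code for ZeroOut (the zero_out.cc from the guide) still has to be compiled and linked into the same binary; otherwise the op registry lookup fails and the scope status reports an error along the lines of "Op type not registered 'ZeroOut'". Given ZeroOut's semantics (keep the first element of the flattened input, zero the rest), outputs[0] here should be [3 0; 0 0].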
Note: for TensorFlow's built-in ops, wrapper code like the ZeroOut class above is auto-generated by a Bazel rule. We can imitate that generated code (e.g. tensorflow/cc/ops/math_ops.h) and hand-write our own wrapper classes if we only have a few custom ops.
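The same hand-written pattern also extends to ops with attributes: call Attr on the NodeBuilder before finalizing. A minimal sketch, assuming a hypothetical op "ZeroOutV2" with an int attribute preserve_index (not part of the tutorial):

class ZeroOutV2 {
 public:
  ZeroOutV2(const ::tensorflow::Scope& scope, ::tensorflow::Input x,
            ::tensorflow::int64 preserve_index) {
    if (!scope.ok()) return;
    auto _x = ::tensorflow::ops::AsNodeOut(scope, x);
    if (!scope.ok()) return;
    ::tensorflow::Node* ret;
    const auto unique_name = scope.GetUniqueNameForOp("ZeroOutV2");
    auto builder = ::tensorflow::NodeBuilder(unique_name, "ZeroOutV2")
                       .Input(_x)
                       .Attr("preserve_index", preserve_index);  // hypothetical attr
    scope.UpdateBuilder(&builder);
    scope.UpdateStatus(builder.Finalize(scope.graph(), &ret));
    if (!scope.ok()) return;
    scope.UpdateStatus(scope.DoShapeInference(ret));
    y = ::tensorflow::Output(ret, 0);
  }

  operator ::tensorflow::Output() const { return y; }
  ::tensorflow::Output y;
};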
answered Nov 25 at 10:40
tianyapiaozi