Speeding up TFRecords feed into Keras model on CloudML for GPU
I would like to feed TFRecords into my model at a very fast rate. However, my GPU (a single K80 on GCP) currently sits at 0% load, which makes training on CloudML very slow.
I have TFRecords in GCS: train_directory = gs://bucket/train/*.tfrecord (around 100 files of 30 MB to 800 MB each), but for some reason the pipeline struggles to feed data into the model fast enough to keep the GPU busy.
Interestingly, loading the data into memory and feeding NumPy arrays via fit_generator() is 7x faster; there I can specify multiprocessing and multiple workers.
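For reference, the in-memory path looks roughly like this (a sketch only; batch_generator, x_train, and y_train are placeholder names, not my exact code):

# Sketch of the 7x-faster NumPy baseline (names are hypothetical).
# x_train/y_train are NumPy arrays loaded from the TFRecords up front;
# batch_generator yields (x_batch, y_batch) tuples indefinitely.
model.fit_generator(
    generator=batch_generator(x_train, y_train, BATCH_SIZE),
    steps_per_epoch=STEPS_PER_EPOCH,
    epochs=EPOCHS,
    workers=4,                  # parallel generator workers
    use_multiprocessing=True)   # worker processes instead of threads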
My current setup parses the TFRecords and builds an infinite tf.data.Dataset. Ideally, the solution would prefetch some batches in memory for the GPU to use on demand.
def _parse_func(record):
    """Parses a single serialized tf.Example record."""
    keys_to_features = {}
    for key in feature_list:  # 300 features, e.g. ['height', 'weights', 'salary']
        keys_to_features[key] = tf.FixedLenFeature([TIME_STEPS], tf.float32)
    parsed = tf.parse_single_example(record, keys_to_features)
    # Reshape each feature to [TIME_STEPS, 1], then concatenate along axis 1.
    t = [tf.reshape(parsed[key], [-1, 1]) for key in feature_list]
    numeric_tensor = tf.concat(values=t, axis=1)
    x = {'numeric': numeric_tensor}
    y = ...
    w = ...
    return x, y, w
def input_fn(file_pattern, b=BATCH_SIZE):
    """
    :param file_pattern: GCS file pattern to read from
    :param b: Batch size, defaults to BATCH_SIZE in hparams.py
    :return: An infinitely repeating tf.data.Dataset built from the TFRecords
    """
    files = tf.data.Dataset.list_files(file_pattern=file_pattern)
    # Read several TFRecord files in parallel, interleaving their records.
    d = files.apply(
        tf.data.experimental.parallel_interleave(
            lambda filename: tf.data.TFRecordDataset(filename),
            cycle_length=4,
            block_length=16,
            buffer_output_elements=16,
            prefetch_input_elements=16,
            sloppy=True))
    # Parse and batch in one fused step.
    d = d.apply(tf.data.experimental.map_and_batch(
        map_func=_parse_func, batch_size=b,
        num_parallel_batches=4))
    d = d.cache()
    d = d.repeat()
    d = d.prefetch(1)
    return d
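Side note: for the "batches ready for the GPU on demand" goal, one thing I am considering is staging batches on the device itself. A minimal sketch, assuming tf.data.experimental.prefetch_to_device is available in TF 1.12 (it moved there from tf.contrib.data) and that it plays well with the Keras iterator path, would replace the final prefetch(1) above:

# Sketch: keep a small buffer of batches on the GPU instead of the host.
# prefetch_to_device must be the last transformation in the pipeline.
d = d.apply(tf.data.experimental.prefetch_to_device('/gpu:0', buffer_size=2))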
Get train data
# get files from GCS bucket and load them into dataset
train_data = input_fn(train_directory, b=BATCH_SIZE)
Fit the model
model.fit(x=train_data.make_one_shot_iterator())
I am running this on CloudML, so the connection between GCS and the training VM should be fast.
CloudML CPU usage (screenshot): the CPU is at 70% and memory never climbs past 10%. So what does dataset.cache() actually do?
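My understanding of cache() so far, as a standalone toy sketch (no filename argument means an in-memory cache that fills during the first full pass; cache('/some/path') would spill to local disk instead):

# Minimal sketch of cache() semantics (toy example, not my pipeline).
import tensorflow as tf

ds = tf.data.Dataset.range(3)
ds = ds.map(lambda x: x * 2)  # upstream work runs only during the first pass
ds = ds.cache()               # no filename -> elements buffered in host RAM
ds = ds.repeat()              # later epochs read from the cache, not the map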
GPU metrics in the CloudML logs (screenshot): the GPU appears to be off and its memory is at 0 MB, with no processes running on it. Where is the cache stored?
Edit:
It seems that, indeed, there are no processes running on the GPU. I tried to state the device placement explicitly:
tf.keras.backend.set_session(tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True,
    log_device_placement=True)))

train_data = input_fn(file_pattern=train_directory, b=BATCH_SIZE)
model = create_model()

with tf.device('/gpu:0'):
    model.fit(x=train_data.make_one_shot_iterator(),
              epochs=EPOCHS,
              steps_per_epoch=STEPS_PER_EPOCH,
              validation_data=test_data.make_one_shot_iterator(),
              validation_steps=VALIDATION_STEPS)
but everything still utilises the CPU!
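A quick diagnostic I can run to see which devices TensorFlow actually detects (if no /device:GPU:0 appears in the output, the installed build itself is CPU-only):

# List the devices visible to TensorFlow.
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())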
python tensorflow keras google-cloud-ml
edited Nov 12 at 18:32
asked Nov 9 at 18:54
GRS
1 Answer
In my case, I was using a custom setup.py file that installed a CPU-only TensorFlow build. I am kicking myself: install tensorflow-gpu instead.
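For anyone else hitting this, the fix is a one-line dependency change. A hypothetical minimal setup.py (the package name and version pin are illustrative, not my exact file):

# Hypothetical minimal setup.py: depend on the GPU build explicitly.
from setuptools import find_packages, setup

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    install_requires=['tensorflow-gpu==1.12.0'],  # not plain 'tensorflow'
)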
Wait. If you were running in CPU mode, the bottleneck should be the back-prop stage, not data I/O. How did loading the data into memory give you a 7x speedup?
– Tay2510
Nov 12 at 19:29
The 7x was on the default runtime, v1.10, which comes preconfigured with GPU support. When I pinned TensorFlow 1.12 in setup.py, it was a CPU-only build, which is where the problem occurred. Feeding sample weights directly from a tf.data.Dataset is only possible in TensorFlow 1.12, which is why I had to install it. Even so, GPU usage now peaks at 70% while the CPU sits at 30% instead of the previous 70%, so there is still room for improvement.
– GRS
Nov 12 at 20:00
If you are using Cloud ML Engine, the runtime currently only supports up to TF 1.10. Which scale tier are you using? You should be using BASIC_GPU. cloud.google.com/ml-engine/docs/tensorflow/machine-types
– spicyramen
Nov 19 at 23:17