TF 1.x method
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two Config options on the Session to control this.
The first is the allow_growth option, which attempts to allocate only as much GPU memory as is needed by runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, the GPU memory region used by the TensorFlow process is extended. Note that memory is never released, since that can lead to even worse memory fragmentation. To turn this option on, set the option in the ConfigProto:
import tensorflow as tf

# Grow the allocation on demand instead of grabbing all GPU memory up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. For example, you can tell TensorFlow to only allocate 40% of the total memory of each GPU by:
# Hard-cap the process at 40% of each visible GPU's total memory.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
This is useful if you want to truly bound the amount of GPU memory
available to the TensorFlow process.
Left unconfigured, TensorFlow by default grabs almost all of the GPU memory, which makes poor use of the resource.
The first method grows the memory allocation as demand increases.
The second method directly sets a hard cap on how much GPU memory can be used.
TF 2.x method
In TF 2.x there is no Session or ConfigProto (they survive only under tf.compat.v1); memory growth is instead enabled per physical device with:
tf.config.experimental.set_memory_growth(device, enable)
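A minimal usage sketch: list the physical GPUs and turn growth on for each. Note that set_memory_growth must be called before any GPU has been initialized, or it raises a RuntimeError.

import tensorflow as tf

# Enable on-demand allocation for every visible GPU. This must run
# before any op initializes the GPUs, or TensorFlow raises RuntimeError.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)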
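TF 2.x also offers a counterpart to the second method: configuring a virtual device with an explicit memory_limit, given in MB. A sketch, assuming a single GPU and a 4096 MB cap; unlike per_process_gpu_memory_fraction, the limit here is an absolute size rather than a fraction.

# Expose gpus[0] as one logical device capped at 4096 MB.
tf.config.experimental.set_virtual_device_configuration(
    gpus[0],
    [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])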