Quick answer: code example
The configuration value can be set as an environment variable. The exact syntax is documented, but in short:

> The behavior of the caching allocator can be controlled via the environment variable `PYTORCH_CUDA_ALLOC_CONF`. The format is `PYTORCH_CUDA_ALLOC_CONF=<option>:<value>,<option2>:<value2>`. Available options:
>
> `max_split_size_mb` prevents the allocator from splitting blocks larger than this size (in MB). This can help prevent fragmentation and may allow some borderline workloads to complete without running out of memory. Performance cost can range from 'zero' to 'substantial' depending on allocation patterns. Default value is unlimited, i.e. all blocks can be split. The `memory_stats()` and `memory_summary()` methods are useful for tuning. This option should be used as a last resort for a workload that is aborting due to 'out of memory' and showing a large amount of inactive split blocks.

How you set the variable depends on what OS you're using - in your case, for Google Colab, you might find this question helpful.

Windows (cmd, where `set` is used; quoting the assignment would put the quotes into the variable name):

    set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

Linux:

    export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
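In a notebook environment like Colab, where you can't export a variable before the interpreter starts, one common workaround is to set it from Python itself. This is only a sketch: the key assumption is that the variable must be set before `torch` initializes CUDA, so the assignment has to run before the first `import torch` (or at least before any CUDA call).

```python
import os

# Must happen before torch initializes CUDA; setting it afterwards has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# Only import torch after the variable is set:
# import torch

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:512
```

In a notebook you would put this in the very first cell, before any cell that imports `torch`; if CUDA has already been initialized, restart the runtime first.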