Description
When using thinc (e.g. via spaCy with PyTorch-backed components), PyTorch emits a FutureWarning because thinc still uses the deprecated torch.cuda.amp.autocast() API.
Warning:
FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast(...)` instead.
Location: thinc/shims/pytorch.py
- Line 114:
with torch.cuda.amp.autocast(self._mixed_precision): (in predict)
- Line 128:
with torch.cuda.amp.autocast(self._mixed_precision): (in begin_update)
Expected behavior
No deprecation warning. PyTorch recommends the device-agnostic API:
- Old (deprecated):
torch.cuda.amp.autocast(...)
- New:
torch.amp.autocast(device_type="cuda", ...) or torch.autocast("cuda", ...)
See PyTorch amp docs.
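For illustration, a minimal sketch of the replacement forms side by side (`enabled=False` here only so the snippet runs on CPU-only machines; the commented-out call is the deprecated one):

```python
import torch

x = torch.ones(2, 2)

# Deprecated form -- emits the FutureWarning on recent PyTorch releases:
# with torch.cuda.amp.autocast(enabled=False):
#     y = x @ x

# Device-agnostic replacements; the two spellings are equivalent:
with torch.amp.autocast(device_type="cuda", enabled=False):
    y = x @ x
with torch.autocast("cuda", enabled=False):
    z = x @ x
```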
Suggested fix
Replace:
with torch.cuda.amp.autocast(self._mixed_precision):
with the new API, e.g. (depending on PyTorch version compatibility):
with torch.amp.autocast(device_type="cuda", enabled=self._mixed_precision):
Note that in the current call, self._mixed_precision is passed positionally as the enabled flag (it is a bool), so the replacement should pass it as enabled=, not dtype=; the default autocast dtype (torch.float16 on CUDA) then applies when mixed precision is enabled, preserving the current behaviour.
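If compatibility with older PyTorch releases that lack the device-agnostic API is a concern, one option is a small helper along these lines (a sketch; the name `autocast_cuda` is hypothetical, not part of thinc):

```python
import torch

def autocast_cuda(enabled: bool):
    # Hypothetical helper: prefer the device-agnostic API when available
    # (torch.autocast exists since PyTorch 1.10), and fall back to the
    # deprecated torch.cuda.amp.autocast on older releases.
    if hasattr(torch, "autocast"):
        return torch.autocast("cuda", enabled=enabled)
    return torch.cuda.amp.autocast(enabled)
```

The shim could then use `with autocast_cuda(self._mixed_precision):` at both call sites (predict and begin_update).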
Environment
- thinc: 8.2.5
- spacy: 3.7.5
- PyTorch: 2.x (emits the deprecation)
Triggered when running pipelines that use spaCy with PyTorch (e.g. NER, transformers-backed components). Cosmetic only (no functional impact), but clutters logs and CI output.