
Large scale dataset training #29

@Ruazzm

Description


Hi,
I have run into an issue where the dataset I pass in is too large to be read, and if it is particularly large, the process is Killed. For example:

Loading extension module split_decision...
Using /root/.cache/torch_extensions/py38_cu118 as PyTorch extensions root...
No modifications detected for re-loaded extension module split_decision, skipping build step...
Loading extension module split_decision...
Killed

How can I solve this problem? Does PGBM support batch training?
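For context, this is the kind of batched loading I have in mind, as a minimal sketch using a NumPy memory map rather than any actual PGBM API (the file name `toy_dataset.npy` and the training loop are placeholders):

```python
import numpy as np

# Create a small on-disk dataset so the sketch is self-contained.
# In practice this would be the large training file that does not fit in RAM.
data = np.random.rand(1000, 10).astype(np.float32)
np.save("toy_dataset.npy", data)

# mmap_mode="r" maps the file instead of reading it all into memory,
# so only the rows actually accessed get paged in.
mm = np.load("toy_dataset.npy", mmap_mode="r")

batch_size = 256
for start in range(0, mm.shape[0], batch_size):
    # Copy just one batch into RAM at a time.
    batch = np.asarray(mm[start:start + batch_size])
    # ...train on `batch` here...
```

If PGBM exposed something like this internally, the full dataset would never need to be resident in memory at once.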
Thanks
