Automatically scale memory based on sample size #3

@andrew-slater

Description

Past evidence indicates that when a single analysis/group contains more than 400 subjects, the memory requirements of qmake chunk jobs can exceed the 3 GB currently hard-coded in tableinput.R. Ideally, tableinput.R would automatically scale the memory setting based on the number of subjects in the largest analysis/group (or perhaps the gtx package is the better place for this, since it has more knowledge of the analyses/groups). As a simple fix, add a config key where the analyst can specify the number of subjects in the largest analysis/group, and have tableinput.R recognize it and scale the memory setting accordingly. Something like `-l mt=NG`, where N is X / 100 and X is the number of subjects. Unsure whether N can be a float or should be rounded.
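A minimal sketch of what the scaling could look like, not the actual tableinput.R implementation. The function name `scale_memory_request` and the idea of reading the subject count from a config key are assumptions; the formula N = X / 100 is from the description above, rounded up (since float support in the scheduler is uncertain) and floored at the current 3 GB default:

```r
## Hypothetical helper: turn the subject count of the largest
## analysis/group into a qmake/grid-engine memory request string.
scale_memory_request <- function(max_subjects, floor_gb = 3) {
  stopifnot(is.numeric(max_subjects), max_subjects > 0)
  n <- ceiling(max_subjects / 100)  # N = X / 100, rounded up to be safe
  n <- max(n, floor_gb)             # never request less than today's 3 GB
  sprintf("-l mt=%dG", n)
}

scale_memory_request(250)  # "-l mt=3G"  (floor applies)
scale_memory_request(450)  # "-l mt=5G"
```

Rounding up and keeping the 3 GB floor means the change is strictly conservative: small analyses keep the current behavior, and only large ones request more.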
