GPU shared library home in a Spark release (no CUDA-MLlib)? #12

@a-roberts

Description

Hi, the shared library currently sits in the CUDA-MLlib/als directory. We want to include it in a "standalone" Spark release on our developerWorks page on IBM (i.e. under the SparkGPU project itself).

I personally think it should live under a new folder structure (com/ibm/gpu), perhaps inside the Spark mllib jar (under the jars folder). I haven't tried this approach yet, so I'll run a few experiments to see how library loading works (I know that Xerial's Snappy native library is automatically picked up from the Snappy jar, for example).
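For reference, the usual pattern (which Xerial's Snappy follows) is to extract the native library from the jar to a temporary file and load it from there, since System.load cannot read directly out of a jar. Here is a minimal sketch of that approach; the class name, resource path, and library file name are hypothetical placeholders for whatever layout we settle on:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical loader: finds a native library bundled on the classpath
// (e.g. at /com/ibm/gpu/libjnials.so inside the mllib jar), copies it to
// a temp file, and loads it via System.load.
public final class GpuLibLoader {

    public static void loadFromClasspath(String resourcePath) throws IOException {
        try (InputStream in = GpuLibLoader.class.getResourceAsStream(resourcePath)) {
            if (in == null) {
                throw new IOException("Native library not found on classpath: " + resourcePath);
            }
            // Keep a .so suffix so the dynamic linker accepts the temp file.
            Path tmp = Files.createTempFile("gpu-native-", ".so");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
```

If the library ships inside the mllib jar under com/ibm/gpu, user code would only need something like `GpuLibLoader.loadFromClasspath("/com/ibm/gpu/libjnials.so")` before the first JNI call; a real implementation would also want to pick the right file per OS/architecture, the way Snappy's loader does.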

What do other members of this community think? We'll want to be consistent as we add more algorithms, and moving the library around later will only cause confusion, so getting this right for our users at this early stage is important.

@bherta especially, thoughts?

Metadata

Assignees: No one assigned
Labels: No labels
Type: No type
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests
