
Twitter dataset runs out of memory #6

@sgpyc


Hi, the Frog technical report indicates that it can process the Twitter graph (41.7M vertices and 1.47B edges) on a K20m with 6 GB of memory.

However, when I tested it on a K40c with 12 GB of memory, it ran out of memory.
Looking at bfs.cu lines 180-181, the pair of CudaBufferFill calls essentially allocates two ints (4 bytes each) for each edge in the graph, and moves the source and destination vertex indices from CPU to GPU.

Here is my question: 1.47B edges * 2 vertices/edge * 4 bytes/vertex = 11.76 x 10^9 bytes. How can that fit into the K20m's 6 GB of memory without streaming? It is even larger than the K40c's usable memory, which is 11.439 x 10^9 bytes.

~/Projects/Frog/src/exp ./twitter_rv.net.bin
Reading File ... 46193.47 ms 
Begin Experiments on Graph (V=41652230 E=1468365181 File='./twitter_rv.net.bin')
-------------------------------------------------------------------
Partitioning ... 18949.75 ms ... Get partitions ... 12962.69 ms
    Time    Total   Tips
    36374.11    BFS on CPU  Step=14 Visited=35016137
GPU Memory Allocation Failed in File 'bfs.cu' at Line 180!
    INFO : out of memory
