
Data

Download

The files mentioned below can also be downloaded via OpenDataLab. It is recommended to use the provided command line interface (CLI) for faster downloads.

🔥 For the CVPR 2024 AGC Track Mapless Driving, please download:

  • Map Element Bucket info of subset_A
  • All images of subset_A
  • SDMap of subset_A (optional)
| Subset | Split | Google Drive | Baidu Yun | md5 | Size |
| :--- | :--- | :--- | :--- | :--- | :--- |
| sample | OpenLane-V2 | sample | sample | 21c607fa5a1930275b7f1409b25042a0 | ~300M |
| subset_A | OpenLane-V2 | info | info | 95bf28ccf22583d20434d75800be065d | ~8.8G |
| | Map Element Bucket | info | info | 1c1f9d49ecd47d6bc5bf093f38fb68c9 | ~240M |
| | Image (train) | image_0 | image_0 | 8ade7daeec1b64f8ab91a50c81d812f6 | ~14.0G |
| | | image_1 | image_1 | c78e776f79e2394d2d5d95b7b5985e0f | ~14.3G |
| | | image_2 | image_2 | 4bf09079144aa54cb4dcd5ff6e00cf79 | ~14.2G |
| | | image_3 | image_3 | fd9e64345445975f462213b209632aee | ~14.4G |
| | | image_4 | image_4 | ae07e48c88ea2c3f6afbdf5ff71e9821 | ~14.5G |
| | | image_5 | image_5 | df62c1f6e6b3fb2a2a0868c78ab19c92 | ~14.2G |
| | | image_6 | image_6 | 7bff1ce30329235f8e0f25f6f6653b8f | ~14.4G |
| | Image (val) | image_7 | image_7 | c73af4a7aef2692b96e4e00795120504 | ~21.0G |
| | Image (test) | image_8 | image_8 | fb2f61e7309e0b48e2697e085a66a259 | ~21.2G |
| | SD Map | sdmap | sdmap | de22c7be880b667f1b3373ff665aac2e | ~7M |
| subset_B | OpenLane-V2 | info | info | 27696b1ed1d99b1f70fdb68f439dc87d | ~7.7G |
| | Image (train) | image_0 | image_0 | 0876c6b2381bacedeb3be16e57c7d59b | ~3.4G |
| | | image_1 | image_1 | ecdec8ff8c72525af322032a312aad10 | ~3.3G |
| | | image_2 | image_2 | b720bf7fdf0ebd44b71beffc84722359 | ~3.3G |
| | | image_3 | image_3 | ac3bc9400ade6c47c396af4b12bbd0e0 | ~3.4G |
| | | image_4 | image_4 | fa4c4a04b5ad3eac817e6368047d0d89 | ~3.5G |
| | | image_5 | image_5 | 19d2cc92514e65270779e405d3a93c61 | ~3.6G |
| | | image_6 | image_6 | d4f56c562f11a6bcc918f2d20441c42c | ~3.3G |
| | Image (val) | image_7 | image_7 | 443045d7a3faf5998af27e2302d3503e | ~5.0G |
| | Image (test) | image_8 | image_8 | 6ecb7a9e866e29ed73d335c2d897f50e | ~5.4G |

  • OpenLane-V2 contains annotations for the initial task of OpenLane Topology.
  • Map Element Bucket contains annotations for the task of Driving Scene Topology.
  • Image and SD Map serve as sensor inputs.

For files on Google Drive, you can use the following command, replacing [FILE_ID] and [FILE_NAME] accordingly:

wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=[FILE_ID]' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=[FILE_ID]" -O [FILE_NAME]

Preprocess

The dataset is preprocessed into pickle files representing different collections, which are then used for training models or for evaluation:

cd data
python OpenLane-V2/preprocess.py 

To include the SD Map, specify the with_sd_map parameter; to preprocess the Map Element Bucket, run python OpenLane-V2/preprocess-ls.py instead.

Hierarchy

The hierarchy of folder OpenLane-V2/ is described below:

└── OpenLane-V2
    ├── train
    |   ├── [segment_id]
    |   |   ├── image                        (Image)
    |   |   |   ├── [camera]
    |   |   |   |   ├── [timestamp].jpg
    |   |   |   |   └── ...
    |   |   |   └── ...
    |   |   ├── sdmap.json                   (SD Map)
    |   |   └── info
    |   |       ├── [timestamp].json         (OpenLane-V2)
    |   |       ├── [timestamp]-ls.json      (Map Element Bucket)
    |   |       └── ...
    |   └── ...
    ├── val
    |   └── ...
    ├── test
    |   └── ...
    ├── data_dict_example.json
    ├── data_dict_subset_A.json
    ├── data_dict_subset_B.json
    ├── openlanev2.md5
    ├── preprocess.py
    └── preprocess-ls.py

  • [segment_id] specifies a sequence of frames, and [timestamp] specifies a single frame in a sequence.
  • image/ contains images captured by various cameras, and info/ contains metadata and annotations of a single frame.
  • data_dict_[xxx].json records the train / val / test split of the corresponding subset of data.
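
The split files can be used to enumerate frames and build paths into the hierarchy above. A minimal sketch, assuming data_dict_[xxx].json maps each split to a segment_id → list-of-timestamps mapping (verify the key layout against the downloaded file); the fragment below is made up for illustration:

```python
# Hypothetical fragment of a data_dict_[xxx].json file; the real layout
# should be checked against the downloaded JSON.
data_dict = {"train": {"segment-xyz": ["315967376899927209"]}}

def info_paths(root, data_dict, split):
    """Build the path of every per-frame info/ JSON listed under a split."""
    return [
        f"{root}/{split}/{segment_id}/info/{ts}.json"
        for segment_id, timestamps in data_dict[split].items()
        for ts in timestamps
    ]

print(info_paths("OpenLane-V2", data_dict, "train"))
# → ['OpenLane-V2/train/segment-xyz/info/315967376899927209.json']
```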

SD Map

The sdmap.json comprises three types of SD map elements that can be used as sensor inputs.

[
    {
        'points':                           <list> -- list of 2D points in the BEV space
        'category':                         <str> -- type of the SD map element
                                                'road',
                                                'cross_walk',
                                                'side_walk',
    },
    ...
]
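
As a quick sanity check, the list of SD map elements can be loaded with the standard json module. A minimal sketch; the element values below are made up to match the schema above:

```python
import json

# Made-up sdmap.json payload following the schema above:
# each element has 2D BEV 'points' and a 'category'.
sdmap_json = json.dumps([
    {"points": [[0.0, 0.0], [5.0, 0.0], [10.0, 0.5]], "category": "road"},
    {"points": [[2.0, -1.0], [2.0, 1.0]], "category": "cross_walk"},
])

def count_by_category(raw):
    """Count SD map elements per category."""
    counts = {}
    for element in json.loads(raw):
        counts[element["category"]] = counts.get(element["category"], 0) + 1
    return counts

print(count_by_category(sdmap_json))  # → {'road': 1, 'cross_walk': 1}
```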

Meta Data

The json files under the info/ folder contain metadata and annotations for each frame. Each file is formatted as follows:

{
    'version':                              <str> -- version
    'segment_id':                           <str> -- segment_id
    'meta_data': {
        'source':                           <str> -- name of the original dataset
        'source_id':                        <str> -- original identifier of the segment
    }
    'timestamp':                            <int> -- timestamp of the frame
    'sensor': {
        [camera]: {                         <str> -- name of the camera
            'image_path':                   <str> -- image path
            'extrinsic':                    <dict> -- extrinsic parameters of the camera, transformation from camera frame to vehicle frame
            'intrinsic':                    <dict> -- intrinsic parameters of the camera
        },
        ...
    }                              
    'pose':                                 <dict> -- ego pose
    'annotation':                           <dict> -- annotations for the current frame
}
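
Reading a frame record then amounts to indexing into this structure. A minimal sketch; the record and the camera name ring_front_center below are placeholders, not real dataset content:

```python
# Placeholder frame record following the metadata schema above;
# all values are invented for illustration.
frame = {
    "version": "1.0",
    "segment_id": "segment-xyz",
    "meta_data": {"source": "source_dataset", "source_id": "original-id"},
    "timestamp": 315967376899927209,
    "sensor": {
        "ring_front_center": {  # hypothetical camera name
            "image_path": "train/segment-xyz/image/ring_front_center/315967376899927209.jpg",
            "extrinsic": {"rotation": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                          "translation": [0.0, 0.0, 0.0]},
            "intrinsic": {"K": [[2000, 0, 960], [0, 2000, 600], [0, 0, 1]]},
        },
    },
    "pose": {},
    "annotation": {},
}

def image_paths(frame):
    """Collect the image path of every camera in a frame record."""
    return {name: cam["image_path"] for name, cam in frame["sensor"].items()}

print(image_paths(frame))
# → {'ring_front_center': 'train/segment-xyz/image/ring_front_center/315967376899927209.jpg'}
```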

Annotations

For a single frame, annotations are formatted as follows:

{
    'lane_centerline': [                    (n lane centerlines in the current frame)
        {   
            'id':                           <int> -- unique ID in the current frame
            'points':                       <float> [n, 3] -- 3D coordinates
            'confidence':                   <float> -- confidence, only for prediction
        },
        ...
    ],
    'traffic_element': [                    (k traffic elements in the current frame)
        {   
            'id':                           <int> -- unique ID in the current frame
            'category':                     <int> -- traffic element category
                                                1: 'traffic_light',
                                                2: 'road_sign',
            'attribute':                    <int> -- attribute of traffic element
                                                0:  'unknown',
                                                1:  'red',
                                                2:  'green',
                                                3:  'yellow',
                                                4:  'go_straight',
                                                5:  'turn_left',
                                                6:  'turn_right',
                                                7:  'no_left_turn',
                                                8:  'no_right_turn',
                                                9:  'u_turn',
                                                10: 'no_u_turn',
                                                11: 'slight_left',
                                                12: 'slight_right',
            'points':                       <float> [2, 2] -- top-left and bottom-right corners of the 2D bounding box
            'confidence':                   <float> -- confidence, only for prediction
        },
        ...
    ],
    'topology_lclc':                        <float> [n, n] -- adjacency matrix among lane centerlines
    'topology_lcte':                        <float> [n, k] -- adjacency matrix between lane centerlines and traffic elements
}
  • id is the identifier of a lane centerline or traffic element and is consistent across a sequence. For predictions, it can be randomly assigned but must be unique within a single frame.
  • topology_lclc and topology_lcte are adjacency matrices whose rows and columns follow the order of the lists lane_centerline and traffic_element. This ordering MUST be preserved for correct evaluation. For ground truth, each matrix element is a boolean value of 0 or 1. For predictions, each value ranges from 0 to 1, representing the confidence of the predicted relationship.
  • The numbers of lane centerlines and traffic elements are not required to match between ground truth and predictions; during evaluation, a matching between ground truth and predictions is determined.
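
The ordering invariant above implies fixed matrix shapes. A minimal sketch that checks them on a toy annotation with made-up values:

```python
# Toy annotation following the schema above (values invented):
# 2 lane centerlines, 1 traffic element.
annotation = {
    "lane_centerline": [{"id": 10}, {"id": 11}],
    "traffic_element": [{"id": 20}],
    "topology_lclc": [[0, 1], [0, 0]],  # n x n
    "topology_lcte": [[1], [0]],        # n x k
}

def check_topology_shapes(ann):
    """Verify topology matrices match the list lengths, in list order."""
    n = len(ann["lane_centerline"])
    k = len(ann["traffic_element"])
    assert len(ann["topology_lclc"]) == n and all(len(r) == n for r in ann["topology_lclc"])
    assert len(ann["topology_lcte"]) == n and all(len(r) == k for r in ann["topology_lcte"])
    return n, k

print(check_topology_shapes(annotation))  # → (2, 1)
```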

Map Element Bucket

In the Map Element Bucket, we reformulate the annotation files to include additional labels.

{
    'lane_segment': [                       (i lane segments in the current frame)
        {   
            'id':                           <int> -- unique ID in the current frame
            'centerline':                   <float> [n, 3] -- 3D coordinates
            'left_laneline':                <float> [n, 3] -- 3D coordinates
            'left_laneline_type':           <int> -- type of the left laneline
                                                0: 'none',
                                                1: 'solid',
                                                2: 'dash',
            'right_laneline':               <float> [n, 3] -- 3D coordinates
            'right_laneline_type':          <int> -- type of the right laneline
            'is_intersection_or_connector': <bool> -- whether the lane segment is in an intersection or connector
            'confidence':                   <float> -- confidence, only for prediction
        },
        ...
    ],
    'traffic_element': [                    (j traffic elements in the current frame)
        {   
            'id':                           <int> -- unique ID in the current frame
            'category':                     <int> -- traffic element category
                                                1: 'traffic_light',
                                                2: 'road_sign',
            'attribute':                    <int> -- attribute of traffic element
                                                0:  'unknown',
                                                1:  'red',
                                                2:  'green',
                                                3:  'yellow',
                                                4:  'go_straight',
                                                5:  'turn_left',
                                                6:  'turn_right',
                                                7:  'no_left_turn',
                                                8:  'no_right_turn',
                                                9:  'u_turn',
                                                10: 'no_u_turn',
                                                11: 'slight_left',
                                                12: 'slight_right',
            'points':                       <float> [2, 2] -- top-left and bottom-right corners of the 2D bounding box
            'confidence':                   <float> -- confidence, only for prediction
        },
        ...
    ],
    'area': [                               (k areas in the current frame)
        {   
            'id':                           <int> -- unique ID in the current frame
            'category':                     <int> -- area category
                                                1: 'pedestrian_crossing',
                                                2: 'road_boundary',
            'points':                       <float> [n, 3] -- 3D coordinates
            'confidence':                   <float> -- confidence, only for prediction
        },
        ...
    ],
    'topology_lsls':                        <float> [i, i] -- adjacency matrix among lane segments
    'topology_lste':                        <float> [i, j] -- adjacency matrix between lane segments and traffic elements
}
  • id is the identifier of a lane segment, traffic element, or area and is consistent across a sequence. For predictions, it can be randomly assigned but must be unique within a single frame.
  • topology_lsls and topology_lste are adjacency matrices whose rows and columns follow the order of the lists lane_segment and traffic_element. This ordering MUST be preserved for correct evaluation. For ground truth, each matrix element is a boolean value of 0 or 1. For predictions, each value ranges from 0 to 1, representing the confidence of the predicted relationship.
  • The numbers of lane segments and traffic elements are not required to match between ground truth and predictions; during evaluation, a matching between ground truth and predictions is determined.
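
Since predicted topology entries are confidences in [0, 1] while ground truth uses hard 0/1 values, downstream code sometimes needs to binarize a predicted matrix. A minimal sketch; the 0.5 threshold is an arbitrary illustrative choice, not a value prescribed by the benchmark:

```python
# Made-up predicted topology_lsls matrix with confidence values in [0, 1].
predicted_lsls = [[0.9, 0.2], [0.6, 0.1]]

def binarize(matrix, threshold=0.5):
    """Turn a confidence matrix into hard 0/1 decisions at a threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in matrix]

print(binarize(predicted_lsls))  # → [[1, 0], [1, 0]]
```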