Description
After labeling subtask boundaries and per-frame stage progress in the parquets, I updated the config, but I ran into the problem below. How can I fix it?

Debug Log:
```
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_setup.py:_flush():70] Current SDK version is 0.19.11
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_setup.py:_flush():70] Configure stats pid to 1562838
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_setup.py:_flush():70] Loading settings from /home/user/.config/wandb/settings
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_setup.py:_flush():70] Loading settings from /home/user/Data_arx/kai0/wandb/settings
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_setup.py:_flush():70] Loading settings from environment variables
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:setup_run_log_directory():724] Logging user logs to /home/user/Data_arx/kai0/wandb/run-20260330_142228-ac953z0x/logs/debug.log
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:setup_run_log_directory():725] Logging internal logs to /home/user/Data_arx/kai0/wandb/run-20260330_142228-ac953z0x/logs/debug-internal.log
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:init():852] calling init triggers
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:init():857] wandb.init called with sweep_config: {}
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:init():893] starting backend
2026-03-30 14:22:28,414 INFO MainThread:1562838 [wandb_init.py:init():897] sending inform_init request
2026-03-30 14:22:28,415 INFO MainThread:1562838 [backend.py:_multiprocessing_setup():101] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
2026-03-30 14:22:28,415 INFO MainThread:1562838 [wandb_init.py:init():907] backend started and connected
2026-03-30 14:22:28,416 INFO MainThread:1562838 [wandb_init.py:init():1005] updated telemetry
2026-03-30 14:22:28,418 INFO MainThread:1562838 [wandb_init.py:init():1029] communicating run to backend with 90.0 second timeout
2026-03-30 14:22:29,663 INFO MainThread:1562838 [wandb_init.py:init():1104] starting run threads in backend
2026-03-30 14:22:29,757 INFO MainThread:1562838 [wandb_run.py:_console_start():2573] atexit reg
2026-03-30 14:22:29,757 INFO MainThread:1562838 [wandb_run.py:_redirect():2421] redirect: wrap_raw
2026-03-30 14:22:29,757 INFO MainThread:1562838 [wandb_run.py:_redirect():2490] Wrapping output streams.
2026-03-30 14:22:29,757 INFO MainThread:1562838 [wandb_run.py:_redirect():2513] Redirects installed.
2026-03-30 14:22:29,758 INFO MainThread:1562838 [wandb_init.py:init():1150] run started, returning control to user process
2026-03-30 14:22:54,767 INFO MsgRouterThr:1562838 [mailbox.py:close():129] [no run ID] Closing mailbox, abandoning 1 handles.
```
Config:
```python
TrainConfig(
    name="ADVANTAGE_ADD_C_TEST",
    advantage_estimator=True,
    model=pi0_config.AdvantageEstimatorConfig(
        pi05=True,
        loss_value_weight=1.,
        loss_action_weight=0.,
        discrete_state_input=False,
    ),
    data=LerobotAgilexDataConfig(
        repo_id="/home/user/Data_arx/demo_openlib_test/base",
        assets=AssetsConfig(
            assets_dir="Path/to/your/advantage/dataset/assets",
            asset_id="Your_advantage_dataset_name",
        ),
        default_prompt="Flatten and fold the cloth.",  # * why removing "prompt" here will lead to an error in transforms.py
        repack_transforms=_transforms.Group(
            inputs=[
                _transforms.RepackTransform(
                    {
                        "images": {
                            "top_head": "observation.images.top_head",
                            "hand_left": "observation.images.hand_left",
                            "hand_right": "observation.images.hand_right",
                            "his_-100_top_head": "his_-100_observation.images.top_head",
                            "his_-100_hand_left": "his_-100_observation.images.hand_left",
                            "his_-100_hand_right": "his_-100_observation.images.hand_right",
                        },
                        "state": "observation.state",
                        "actions": "action",
                        # "prompt": "prompt",  # ! Not adding this for default prompt.
                        "episode_length": "episode_length",
                        "frame_index": "frame_index",
                        "episode_index": "episode_index",
                        "progress_gt": "progress_gt",
                        "stage_progress_gt": "stage_progress_gt",
                        "progress": "progress",
                        # "is_suboptimal": "is_suboptimal",
                    }
                )
            ]
        ),
    ),
    weight_loader=weight_loaders.CheckpointWeightLoader("gs://openpi-assets/checkpoints/pi05_base/params"),
    num_train_steps=100_000,
    keep_period=10000,
    save_interval=10000,
    num_workers=8,
    batch_size=16,  # * 1 gpu
    # batch_size=128,  # * 8 gpus
    skip_norm_stats=True,  # * No norm stats used.
)
```
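For context on the `RepackTransform` mapping above: the dict's leaves are source keys looked up in each raw sample, and the dict's shape becomes the repacked output. The sketch below is an illustration of that remapping idea only (the `repack` function and sample data are my own, not openpi's actual implementation), showing why a key listed in the mapping but absent from the data raises an error, while an omitted `"prompt"` entry can be backfilled downstream from `default_prompt`.

```python
# Minimal sketch of a repack-style transform: rebuild the target structure,
# replacing each leaf (a source key string) with the matching sample value.
# Hypothetical helper for illustration -- not openpi's RepackTransform.

def repack(sample: dict, structure):
    if isinstance(structure, dict):
        return {k: repack(sample, v) for k, v in structure.items()}
    # A leaf is a source key; a missing key raises KeyError here,
    # which is the kind of failure a stale mapping entry produces.
    return sample[structure]

sample = {
    "observation.state": [0.0, 0.1],
    "action": [1.0],
}
structure = {"state": "observation.state", "actions": "action"}

print(repack(sample, structure))  # {'state': [0.0, 0.1], 'actions': [1.0]}
```

Under this reading, commenting out `"prompt": "prompt"` is safe only if something later in the pipeline injects `default_prompt` when the key is missing; listing a key the parquet does not contain would fail at lookup time.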