A generalized framework for prototyping full-stack cooperative driving automation applications under CARLA+SUMO.

Overview

OpenCDA


OpenCDA is a SIMULATION tool integrated with a prototype cooperative driving automation (CDA; see SAE J3216) pipeline as well as regular automated driving components (e.g., perception, localization, planning, control). The tool integrates automated driving simulation (CARLA), traffic simulation (SUMO), and co-simulation (CARLA + SUMO).

OpenCDA is written entirely in Python. Its purpose is to enable researchers to rapidly prototype, simulate, and test CDA algorithms and functions. With the tool, users can conveniently conduct both task-specific evaluation (e.g., object detection accuracy) and pipeline-level assessment (e.g., traffic safety) of their customized algorithms.

In collaboration with U.S.DOT CDA Research and the FHWA CARMA Program, OpenCDA, as an open-source project, makes a unique contribution from the perspective of initial-stage development and testing using simulation. OpenCDA is designed and built to support initial algorithmic testing for CDA Features. Through collaboration with CARMA Collaborative, this tool provides a unique capability to the CDA research community and will interface with the CARMA XiL tools being developed by U.S.DOT to support more advanced simulation testing of CDA Features.

The key features of OpenCDA are:

  • Integration: OpenCDA supports CARLA and SUMO individually and also integrates them for realistic scene rendering, vehicle modeling, and traffic simulation.
  • Full-stack prototype CDA Platform in Simulation: OpenCDA provides a simple prototype automated driving and cooperative driving platform, all in Python, that contains perception, localization, planning, control, and V2X communication modules.
  • Modularity: OpenCDA is highly modularized, enabling users to conveniently replace any default algorithms or protocols with their own customized designs (see the sketch after this list).
  • Benchmark: OpenCDA offers benchmark testing scenarios, benchmark baseline maps, state-of-the-art benchmark algorithms for ADS and Cooperative ADS functions, and benchmark evaluation metrics.
  • Connectivity and Cooperation: OpenCDA supports various levels and categories of cooperation between CAVs in simulation. This differentiates OpenCDA from other single-vehicle simulation tools.
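
As a rough illustration of the modularity idea, the sketch below shows how a user-defined perception algorithm could be swapped in for a default one via a config-driven factory. The class and function names here (DefaultDetector, CustomDetector, build_perception) are hypothetical and not the actual OpenCDA API; see the OpenCDA documentation for the real customization interface.

    # Hypothetical sketch of swapping a default algorithm for a custom one; these
    # class/function names are illustrative only, not the real OpenCDA API.

    class DefaultDetector:
        """Stand-in for a default perception algorithm."""
        def detect(self, sensor_data):
            return []  # default pipeline output omitted


    class CustomDetector:
        """A user-defined replacement exposing the same interface."""
        def detect(self, sensor_data):
            return [{"class": "vehicle", "bbox": None}]  # plug in your own model here


    def build_perception(config):
        """Choose the perception implementation from a YAML-style config dict."""
        if config.get("perception", "default") == "custom":
            return CustomDetector()
        return DefaultDetector()


    detector = build_perception({"perception": "custom"})
    objects = detector.detect(sensor_data=None)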

Users can refer to the OpenCDA documentation for more details.

Major Components

teaser

OpenCDA consists of three major components: Cooperative Driving System, Co-Simulation Tools, and Scenario Manager.

Check the OpenCDA Introduction for more details.

Citation

If you are using our OpenCDA framework or codes for your development, please cite the following paper:

@inproceedings{xu2021opencda,
  title={OpenCDA: An Open Cooperative Driving Automation Framework Integrated with Co-Simulation},
  author={Runsheng Xu and Yi Guo and Xu Han and Xin Xia and Hao Xiang and Jiaqi Ma},
  booktitle={2021 IEEE Intelligent Transportation Systems Conference (ITSC)},
  year={2021}
}

The arXiv link to the paper: https://arxiv.org/abs/2107.06260

Also, under this LICENSE, OpenCDA is for non-commercial research only. Researchers can modify the source code for their own research only. Contracted work that generates corporate revenues and other general commercial use are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Get Started

teaser

Users Guide

Note: We continuously improve the performance of OpenCDA. Currently, it is mainly tested on our customized maps and the CARLA Town06 map; therefore, we DO NOT guarantee the same level of robustness on other maps.

Developer Guide

Contribution Rule

We welcome your contributions.

  • Please report bugs and improvements by submitting issues.
  • Submit your contributions using pull requests. Please use this template for your pull requests.

In OpenCDA v0.1.0 Release

The current version features the following:

  • OpenCDA v0.1.0 software stack (basic ADS and cooperative ADS platform, benchmark algorithms for platooning, cooperative lane change, merge, and other freeway maneuvers)
  • CARLA-only simulation
  • Co-Simulation function with CARLA + SUMO
  • Scenario manager and scenario database for CDA freeway applications

In Future Releases

Future versions are expected to include the following:

  • OpenCDA v0.2.0 and above software stack (including signalized intersection and corridor applications, cooperative perception and localization, and enhanced scenario generation/management and a scenario database for newly added CDA applications)
  • SUMO-only simulation, which includes a SUMO implementation of all cooperative driving applications using a behavior-based approach (consistent with the CARLA implementation)
  • Software-in-the-loop interfaces with two open-source ADS platforms, i.e., Autoware and CARMA
  • Hardware-in-the-loop interfaces and example projects with a real automated driving vehicle platform and a driving simulator

Contributors

OpenCDA is supported by the UCLA Mobility Lab.

Lab Principal Investigator:

Project Lead:

Team Members:

Comments
  • Spawn a new CAV at a certain simulation time step

    I was wondering if it is possible to generate a new single CAV on the on-ramp, particularly for the scenario "platoon_joining_2lanefree_cosim". I tried to spawn a single CAV on the on-ramp, but when it reached the merging area at about the same time as a mainline platoon (where it should perform a cut-in merge), it did not merge into the platoon.

    Please advise if OpenCDA allows us to do this. My intent is to have the simulation run longer with more CAVs. (Spawning multiple CAVs at the simulation start is possible but is limited by the space of the link.)

    Thank you, Thod

    opened by thuns001 17
  • .py not found ERROR

    I am trying to run OpenCDA on a remote server with Ubuntu 16.04. I had a problem with Open3D before; after I solved that, I got the following error: [error screenshot]. I'm sure I followed the steps in the official documentation. What should I do to fix this error? Thanks! By the way, does OpenCDA support running on a remote server? CARLA: 0.9.11, Driver Version: 418.43, CUDA Version: 10.1

    opened by 6Lackiu 15
  • RuntimeError: opendrive could not be correctly parsed

    Not sure if I missed anything but I cannot get the basic example working.

    OS: Ubuntu 20.04, GPU: RTX 2080

    Carla itself is working fine.

    Command for starting carla server:

    /opt/carla-simulator/CarlaUE4.sh 
    4.24.3-0+++UE4+Release-4.24 518 0
    Disabling core dumps.
    

    command for starting opencda:

    $ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 35, in run_scenario
        cav_world=cav_world)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: opendrive could not be correctly parsed
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 75, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment
    
    question 
    opened by yanghao 12
  •  RuntimeError: time-out of 10000ms while waiting for the simulator

    python opencda.py -t platoon_joining_2lanefree_cosim
    OpenCDA Version: 0.1.0
    load opendrive map '2lane_freeway_simplified.xodr'.
    Traceback (most recent call last):
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 42, in run_scenario
        sumo_file_parent_path=sumo_cfg)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/cosim_api.py", line 64, in __init__
        cav_world)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
        self.world = load_customized_world(xodr_path, self.client)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
        enable_mesh_visibility=True))
    RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 51, in main
        scenario_runner(opt, config_yaml)
      File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 86, in run_scenario
        eval_manager.evaluate()
    UnboundLocalError: local variable 'eval_manager' referenced before assignment

    opened by luckynote 7
  • CARLA installation

    I encountered the following issue when installing CARLA with the command 'make launch' ('make PythonAPI' compiles successfully):

    8 warnings and 18 errors generated.
    5 warnings and 10 errors generated.
    make[1]: *** [Makefile:315: CarlaUE4Editor] Error 6
    make[1]: Leaving directory '/home/admin1/carla/Unreal/CarlaUE4'
    make: *** [Util/BuildTools/Linux.mk:7: launch] Error 2

    Please help me!!! Many thanks.

    opened by bigbird11 4
  • opencda.py: error: unrecognized arguments: -v 0.9.12

    Hi, when I changed my carla version this error occurred. Is there any mistake in my command?

    (opencda) [email protected]_2019:~/OpenCDA$ python opencda.py -t single_2lanefree_carla -v 0.9.12
    usage: opencda.py [-h] -t TEST_SCENARIO [--record] [--apply_ml]
    opencda.py: error: unrecognized arguments: -v 0.9.12
    
    opened by Sei2112 4
  • Can OpenCDA import the INTERACTION dataset and simulate and analyze the vehicle behaviors in its scenarios?

    Hello, I am glad to have learned about OpenCDA. So far I have only read your paper and have not yet studied the specific operation of OpenCDA in depth. I have a few questions:

    1. Can OpenCDA import the INTERACTION dataset and reproduce its scenarios in simulation, for example recreating the map, vehicle driving trajectories, behavior analysis, and so on? Does the data in the INTERACTION dataset need to be converted during import? What about other datasets (e.g., the inD dataset and other datasets of vehicle behaviors and trajectories)?

    2. After simulation, if I want to analyze certain behaviors, or add some algorithms for research (for example, adding an LSTM for trajectory prediction, or using MPC to control the dynamics model), can the resulting data be saved and can such algorithm development be implemented?

    The above could be achieved either with OpenCDA's built-in functionality or by writing my own algorithms (as long as OpenCDA provides the corresponding interfaces). If this is feasible, I will study OpenCDA further in depth.

    Looking forward to your reply.

    opened by ShenZC25 3
  • Is CARLA 0.9.9 supported?

    Huge thanks for this great project, it looks amazing! I have a question about the supported versions of CARLA. I saw on the installation page that both CARLA 0.9.11 and 0.9.12 are supported, but due to our current projects we have to continue using version 0.9.9. Does your project also support CARLA 0.9.9? If not, would you please provide any ideas on how we could modify this great project so that it fits CARLA 0.9.9? Thanks!

    opened by luh-j 3
  • The errors about 'torch.cuda' and  'eval_manager'

    Hello,

    It is really great work! I am interested in co-simulation with SUMO. While running it, I encountered errors. Could you please help me?

    Kind regards, [error screenshot]

    opened by aslirey 3
  • Ubuntu16.04 can NOT run Two-lane highway test

    Hi, Thanks for the great work

    I tried to run single_2lanefree_carla on Ubuntu 16.04, but it failed:


    ~/OpenCDA$ python opencda.py -t single_2lanefree_carla
    OpenCDA Version: 0.1.0
    Traceback (most recent call last):
      File "opencda.py", line 56, in <module>
        main()
      File "opencda.py", line 40, in main
        testing_scenario = importlib.import_module("opencda.scenario_testing.%s" % opt.test_scenario)
      ...
        import open3d as o3d
      File "/home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/__init__.py", line 56, in <module>
        _CDLL(str(next((_Path(__file__).parent / 'cpu').glob('pybind*'))))
      File "/home/anaconda3/envs/opencda/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/cpu/pybind.cpython-37m-x86_64-linux-gnu.so)

    I searched on Google and found that it may be a problem with Open3D, which requires glibc 2.27; Ubuntu 16.04 seems to no longer be supported, since it only ships glibc 2.23.

    https://github.com/isl-org/Open3D/issues/1898

    So do I have to upgrade my Ubuntu to 18.04?

    opened by CharlesWolff6 3
  • Travis CI: Test on the current versions of Ubuntu and Python

    Python 3.10 release candidate 1 should be released next week so perhaps it is time to start testing on current Python.

    If tests pass on both Python 3.7 and 3.9, it is almost certain they will also pass on 3.8.

    opened by cclauss 3
  • Running opencda in docker support

    This is not a real issue, just some notes for those who want to run OpenCDA in a Docker environment.

    1. Base Docker image: I already have a base Docker image (Ubuntu 18.04) with the CARLA client library (0.9.11) installed, i.e., import carla does not generate any error messages.
    2. OpenCDA installation: Get a copy of the source code and mount it into the Docker container based on the image from the previous step using the docker -v option, so you'll have access to the OpenCDA source inside the container.
    3. X11 support: use the docker run options -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY

    Possible errors:

    1. In the container shell, when you try to run a scenario (for example single_2lanefree_carla), you may get messages about missing libraries (libSM.so, libGL.so). To fix those errors, run sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx to install the dependencies.
    2. You may get errors like "X error: BadShmSeg, ...". Setting the environment variable export QT_X11_NO_MITSHM=1 in the container will fix it.

    If you see some other errors, leave a message here, I'll see if I can help.

    opened by jewes 3
Releases (v0.1.2)
  • v0.1.2 (Mar 14, 2022)

    Map manager

     OpenCDA now adds a new component, map_manager, for each CAV. It dynamically loads the road topology, traffic-light information, and dynamic object information around the ego vehicle and saves them into a rasterized map, which can be useful for RL planning, HD-map learning, scene understanding, etc. Key elements in the rasterized map (a rough sketch follows the list below):

    • Drivable space colored by black
    • Lanes
      • Red lane: the lanes that are controlled by red traffic lights
      • Green lane: the lanes that are controlled by green traffic lights
      • Yellow lane: the lanes that are not affected by any traffic light
    • Objects are colored by white and represented as rectangles
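
     As a rough illustration only: assuming the rasterized map is exported as an RGB image (the file name and exact color values below are assumptions, not map_manager's actual output format), a sketch like the following could count pixels per category using the color conventions listed above.

     # Illustrative sketch only: assumes the rasterized map was saved as an RGB image.
     # The file name and exact RGB values are assumptions, not OpenCDA's actual format.
     import numpy as np
     from PIL import Image

     raster = np.asarray(Image.open("rasterized_map.png").convert("RGB"))

     colors = {
         "drivable space (black)": (0, 0, 0),
         "red-light lane": (255, 0, 0),
         "green-light lane": (0, 255, 0),
         "uncontrolled lane (yellow)": (255, 255, 0),
         "objects (white)": (255, 255, 255),
     }

     for name, rgb in colors.items():
         count = int(np.all(raster == rgb, axis=-1).sum())  # pixels matching this class color
         print(f"{name}: {count} pixels")
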
  • v0.1.1 (Oct 9, 2021)

     Check https://opencda-documentation.readthedocs.io/en/latest/md_files/release_history.html to see more visualizations.


    Cooperative Perception

     OpenCDA now supports simultaneous data dumping from multiple CAVs so that V2V perception algorithms can be developed offline. The dumped data includes:

    • LiDAR data
    • RGB camera (4 for each CAV)
    • GPS/IMU
    • Velocity and future planned trajectory of the CAV
    • Surrounding vehicles' bounding box position, velocity

     Run the following command to collect cooperative data: python opencda.py -t cooperception_datadump_town06_carla -v 0.9.12 (or 0.9.11)

     Besides the dumped data above, users can also generate the future trajectory of each vehicle for trajectory prediction purposes. Run python root_of_opencda/scripts/generate_prediction_yaml.py to generate the predictions offline.
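
     As a minimal sketch of consuming such a dump (the directory layout and per-frame file names below are assumptions, not the documented format; check the dumped folder for the actual structure), one could pair each frame's yaml metadata with its LiDAR point cloud like this:

     # Minimal sketch for iterating over dumped frames; the folder layout and the
     # "<frame>.yaml" / "<frame>.pcd" naming are assumptions, not a documented format.
     import glob
     import os

     import open3d as o3d
     import yaml

     dump_dir = "path/to/dumped/cav_data"  # hypothetical path

     for meta_path in sorted(glob.glob(os.path.join(dump_dir, "*.yaml"))):
         frame = os.path.splitext(os.path.basename(meta_path))[0]
         with open(meta_path, "r") as f:
             meta = yaml.safe_load(f)  # e.g., velocity, planned trajectory, boxes

         pcd_path = os.path.join(dump_dir, frame + ".pcd")
         if os.path.exists(pcd_path):
             lidar = o3d.io.read_point_cloud(pcd_path)
             print(frame, len(lidar.points), list(meta.keys()))
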

     This new functionality has already proved helpful: the recent paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication used this feature to collect cooperative data. Check https://mobility-lab.seas.ucla.edu/opv2v/ for more information.

    CARLA 0.9.12 Support

     OpenCDA now supports both CARLA 0.9.12 and 0.9.11. Users need to set the CARLA_VERSION variable before installing OpenCDA. When running opencda.py, the -v argument is required to indicate the CARLA version so that OpenCDA selects the correct API.

    Weather Parameters

     To help estimate the influence of weather on cooperative driving automation, users can now define weather settings in the yaml file to control sunlight, fog, rain, wetness, and other conditions.
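
     For example, a weather block in a scenario yaml could be edited programmatically before a run. This is only a sketch: the yaml path and the key names below (sun_altitude_angle, fog_density, precipitation, wetness) are assumptions based on the conditions listed above, so consult the shipped scenario yaml files for the real schema.

     # Sketch of tweaking the weather section of a scenario yaml before a run.
     # The yaml path and key names are assumptions, not the verified OpenCDA schema.
     import yaml

     with open("path/to/scenario_config.yaml") as f:  # hypothetical config path
         scenario = yaml.safe_load(f)

     scenario.setdefault("world", {})["weather"] = {
         "sun_altitude_angle": 15,  # low sun
         "fog_density": 40,         # heavy fog
         "precipitation": 80,       # strong rain
         "wetness": 60,             # wet road surface
     }

     with open("my_foggy_scenario.yaml", "w") as f:
         yaml.safe_dump(scenario, f)
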

    Bug Fixes

    Some minor bugs in the planning module are fixed.

  • v0.1.0 (Jul 27, 2021)

    The initial release of OpenCDA

     • Integrated with CARLA and SUMO; supports CARLA-only mode and co-simulation mode.
     • Provides a full-stack automated driving and cooperative driving software system that contains perception, localization, planning, control, and V2X communication modules.
     • Default perception, localization, planning, and control algorithms installed.
     • Default platooning and cooperative merge algorithms and protocols installed.
     • V2X features supported, allowing simulation of communication lag and noise.
     • 10+ testing scenarios provided.
     • Customized maps provided for highway testing.
     • Benchmark evaluation metrics provided.