Creating a photorealistic reinforcement learning environment: best ways/practices to do it #1037
Unanswered
umangdighe asked this question in Q&A
Replies: 1 comment
-
Did you find a good solution, @umangdighe?
-
Hi,
I want to train a wheeled robot with differential drive to drive in a photorealistic outdoor environment. The environment will contain multiple objects and obstacles that the robot is not allowed to touch or hit. Since I want to train the robot on multiple tasks (for example: drive only on the marked path while covering the entire area optimally), I am still trying to figure out whether I want to use a manager-based RL environment (https://isaac-sim.github.io/IsaacLab/source/tutorials/03_envs/create_manager_rl_env.html) or a direct-workflow RL environment (https://isaac-sim.github.io/IsaacLab/source/tutorials/03_envs/create_direct_rl_env.html#).
Going through the tutorials, I see examples of importing a USD terrain (https://isaac-sim.github.io/IsaacLab/source/api/lab/omni.isaac.lab.terrains.html#omni.isaac.lab.terrains.TerrainImporter.import_usd).
My question is: is the best practice to create the entire scene, including the objects/obstacles, and import the photorealistic scene as a single USD file, OR to import only the ground with the markings etc. as USD and import the obstacles separately? Is there an example of creating and importing a photorealistic training environment?
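For reference, the second option might look roughly like the sketch below: import the marked ground as USD terrain via `TerrainImporterCfg`, and spawn each obstacle as its own prim so it can later be referenced individually in reward/termination terms. This is an assumption about a reasonable setup, not an official recommendation; all file paths and prim names here are hypothetical placeholders.

```python
# Sketch only: assumes Isaac Lab's TerrainImporterCfg and UsdFileCfg APIs
# as documented; paths and prim names are hypothetical placeholders.
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.terrains import TerrainImporterCfg

# Ground plane with path markings, authored offline as a USD file.
terrain = TerrainImporterCfg(
    prim_path="/World/ground",
    terrain_type="usd",
    usd_path="/path/to/marked_ground.usd",  # hypothetical path
)

# Obstacles spawned as separate prims, so each one can be queried
# individually (e.g., for contact penalties or no-go-zone checks).
obstacle_cfg = sim_utils.UsdFileCfg(usd_path="/path/to/obstacle.usd")
obstacle_cfg.func(
    "/World/obstacles/obstacle_0", obstacle_cfg, translation=(2.0, 1.0, 0.0)
)
```

Keeping obstacles as separate prims (rather than baking everything into one USD) makes it easier to attach contact sensors or randomize obstacle poses per episode.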
The question arises because I need to define the rewards based on no-go zones, not hitting objects, and completing the exploration optimally.
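Regardless of which workflow is chosen, one way to structure those rewards is to keep the no-go zones as simple geometric regions defined alongside the USD scene and compute penalties from the robot's position each step. A minimal, framework-free sketch (the zone coordinates, weights, and grid resolution are made-up illustrations, not values from any Isaac Lab example):

```python
# Minimal reward sketch: penalize entering axis-aligned rectangular no-go
# zones and reward newly visited grid cells (coverage). All numbers here
# are illustrative assumptions.

def in_zone(pos, zone):
    """pos = (x, y); zone = (x_min, y_min, x_max, y_max)."""
    x, y = pos
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

def step_reward(pos, no_go_zones, visited, cell_size=1.0,
                no_go_penalty=-10.0, coverage_bonus=1.0):
    """Return (reward, updated set of visited grid cells) for one step."""
    reward = 0.0
    # No-go penalty: triggered if any zone contains the robot.
    if any(in_zone(pos, z) for z in no_go_zones):
        reward += no_go_penalty
    # Coverage bonus: reward the first visit to each grid cell.
    cell = (int(pos[0] // cell_size), int(pos[1] // cell_size))
    if cell not in visited:
        visited = visited | {cell}
        reward += coverage_bonus
    return reward, visited

# Example: a step into a fresh cell outside all zones.
zones = [(3.0, 3.0, 5.0, 5.0)]
r, seen = step_reward((0.5, 0.5), zones, frozenset())
# r == 1.0 (coverage bonus only, no penalty)
```

In a manager-based environment the same logic would live in reward terms; in a direct-workflow environment it would sit inside `_get_rewards`.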
Thanks! I appreciate any help or pointers.