Replies: 3 comments
-
Thanks for posting this. In Isaac Lab, the …
-
Thanks for the reply. I am using the manager-based RL env; can I modify the framework? I want to call this function after (or before) every step. How can I achieve this?
-
Thanks for following up. I'll move this post to our Discussions section. Here is a summary of what you may try. In Isaac Lab's manager-based RL environment, you can modify the workflow by overriding specific methods in your custom environment class. The framework allows injecting custom logic before or after each step by leveraging its structured workflow. Here's how to achieve this:

**1. Override Core Methods**

The base `ManagerBasedRLEnv` class exposes pre- and post-physics hooks:

```python
import torch

from isaaclab.envs import ManagerBasedRLEnv


class CustomRLEnv(ManagerBasedRLEnv):
    def _pre_physics_step(self, actions: torch.Tensor) -> None:
        """Called before physics simulation steps."""
        # Custom logic before physics steps
        super()._pre_physics_step(actions)

    def _post_physics_step(self) -> None:
        """Called after physics simulation steps."""
        super()._post_physics_step()
        # Custom logic after physics steps (e.g., logging, additional computations)
```

**2. Key Workflow Points**
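To illustrate where these hooks fire relative to the physics stepping, here is a toy, Isaac-Lab-free sketch of the call order inside `step()`. All class and attribute names below are illustrative stand-ins, not Isaac Lab's actual API:

```python
class ToyEnv:
    """Stand-in base env with pre/post physics hooks (illustrative, not Isaac Lab)."""

    def __init__(self):
        self.sim_steps = 0

    def _pre_physics_step(self, action):
        pass  # base class would buffer/process the action here

    def _post_physics_step(self):
        pass  # base class would update internal buffers here

    def step(self, action):
        self._pre_physics_step(action)   # hook fires before physics
        self.sim_steps += 1              # physics stepping would happen here
        self._post_physics_step()        # hook fires after physics


class CustomEnv(ToyEnv):
    def __init__(self):
        super().__init__()
        self.log = []

    def _pre_physics_step(self, action):
        self.log.append(("pre", action))          # custom logic before physics
        super()._pre_physics_step(action)

    def _post_physics_step(self):
        super()._post_physics_step()
        self.log.append(("post", self.sim_steps))  # custom logic after physics


env = CustomEnv()
env.step(0.5)
print(env.log)  # [('pre', 0.5), ('post', 1)]
```

The subclass never re-implements `step()`; it only overrides the hooks, so the base class keeps full control of the simulation loop.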
**3. Decimation Handling**

For environments using decimation, the base `step()` already applies the action and substeps the simulator `cfg.decimation` times per policy action. If you only need logic around each environment step, wrap the base implementation rather than re-running the loop yourself (calling `self.sim.step()` manually *and* `super().step()` would step the physics twice):

```python
def step(self, action: torch.Tensor) -> tuple:
    # Custom pre-step logic
    obs, rew, terminated, truncated, extras = super().step(action)
    # Inside super().step(), roughly:
    #   for _ in range(self.cfg.decimation):
    #       self._apply_action()  # called before each simulation substep
    #       self.sim.step()
    # Custom post-step logic
    return obs, rew, terminated, truncated, extras
```

**4. Full Workflow Customization**

To completely modify the step sequence, override the `step()` method:

```python
def step(self, actions: torch.Tensor) -> tuple:
    # Custom pre-processing
    self.action_manager.process_action(actions)
    # Physics stepping loop
    for _ in range(self.cfg.decimation):
        self.scene.write_data_to_sim()
        self.sim.step()
        self.scene.update(self.physics_dt)
    # Post-step computations
    self.episode_length_buf += 1
    observations = self._get_observations()
    rewards = self._get_rewards()
    dones = self._get_dones()
    # Custom post-processing
    return observations, rewards, dones, {}
```

**Key Changes from IsaacGymEnvs**
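To make the decimation idea concrete, here is a self-contained toy step loop (plain Python, illustrative names and values only): the policy action is held fixed while the simulator substeps `DECIMATION` times, so one control step advances time by `DECIMATION * PHYSICS_DT`:

```python
PHYSICS_DT = 0.005   # simulator substep length (illustrative value)
DECIMATION = 4       # substeps per policy action (illustrative value)


class ToyLoop:
    """Toy fully-custom step loop; not Isaac Lab's actual implementation."""

    def __init__(self):
        self.t = 0.0
        self.pos = 0.0

    def step(self, velocity_cmd):
        # custom pre-processing of the action would go here
        for _ in range(DECIMATION):          # physics stepping loop
            self.pos += velocity_cmd * PHYSICS_DT
            self.t += PHYSICS_DT
        # post-step computations (observations, rewards, dones) would go here
        return self.pos


loop = ToyLoop()
loop.step(1.0)
print(round(loop.t, 3))  # 0.02 -> one control step = DECIMATION * PHYSICS_DT
```

This is why custom logic placed before or after the `for` loop runs once per policy step, while logic inside the loop runs once per physics substep.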
**Example Use Case**

```python
class MonitoringEnv(ManagerBasedRLEnv):
    def _post_physics_step(self):
        super()._post_physics_step()
        # Add custom monitoring after every step
        self._log_custom_metrics()

    def _log_custom_metrics(self):
        cart_pos = self.scene["robot"].data.joint_pos[:, 0]
        print(f"Average cart position: {cart_pos.mean().item():.2f}")
```

This approach maintains compatibility with Isaac Lab's manager system while allowing injection of custom logic at precise points in the simulation loop.
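Applied to the original question of assigning root positions after each step, the same hook can call the articulation's write-to-sim API. Isaac Lab articulations expose `write_root_state_to_sim(root_state, env_ids)`, but verify the exact signature against your installed version. The snippet below mocks that method with plain tensors so the indexing pattern is runnable without Isaac Lab; `MockRobot` is a stand-in for `self.scene["robot"]`:

```python
import torch


class MockRobot:
    """Mock articulation standing in for scene["robot"] (not Isaac Lab itself)."""

    def __init__(self, num_envs):
        # root state layout: pos (3), quat (4), lin vel (3), ang vel (3)
        self.root_state = torch.zeros(num_envs, 13)
        self.root_state[:, 3] = 1.0  # identity quaternion (w-first convention)

    def write_root_state_to_sim(self, root_state, env_ids=None):
        ids = slice(None) if env_ids is None else env_ids
        self.root_state[ids] = root_state


robot = MockRobot(num_envs=4)

# What a _post_physics_step override could do: pin selected envs' root positions.
env_ids = torch.tensor([0, 2])
target = robot.root_state[env_ids].clone()
target[:, :3] = torch.tensor([1.0, 0.0, 0.5])  # desired root position
robot.write_root_state_to_sim(target, env_ids=env_ids)

print(robot.root_state[0, :3].tolist())  # [1.0, 0.0, 0.5]
print(robot.root_state[1, :3].tolist())  # [0.0, 0.0, 0.0]
```

The key point is to write the modified state back through the articulation's write methods rather than mutating the data buffers in place, so the change actually reaches the simulator.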
-
Question
Hi, I am using the manager-based RL env to train my RL policy. I want to assign the root positions of the actors after each step, like `set_root_state` in `post_physics_step` in IsaacGymEnvs. How can I do this in the manager-based RL framework? Thank you.