The Morphological Symmetries (MorphoSymm) repository offers a comprehensive toolkit for the detection, analysis, and exploitation of morphological symmetries in the modeling and control of robotic systems. These symmetries are a common characteristic of both biological and artificial locomotion systems, including legged, swimming, and flying animals, robots, and virtual characters. Essentially, a morphological symmetry is a transformation that connects two equivalent states under the governing dynamics, meaning two states that will evolve identically when subjected to equivalent applied forces. For example, consider these two motion trajectories from the mini-cheetah quadruped robot, which are related by a symmetry transformation.
These symmetries carry significant implications. Notably, they offer a valuable geometric bias: by modeling and controlling the dynamics of one state, we can effectively identify and control the dynamics of all its symmetric counterparts (see our RSS-2023 paper and IJRR-2024 journal paper for details).
To exploit these symmetries, we employ the language of group and representation theory, enabling us to represent these transformations via linear algebra. The set of morphological symmetries of a robotic system forms the system's discrete (or finite) symmetry group. For example, the mini-cheetah robot's symmetry group
As depicted above, each symmetry transformation, represented by a group element
```python
from morpho_symm.utils.robot_utils import load_symmetric_system
from morpho_symm.utils.pybullet_visual_utils import configure_bullet_simulation

robot, G = load_symmetric_system(robot_name='mini_cheetah')
bullet_client = configure_bullet_simulation(gui=True)  # Start pybullet simulation
robot.configure_bullet_simulation(bullet_client, world=None)  # Load robot in pybullet environment

# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, v_js ∈ TqQjs
_, v_js = robot.get_joint_space_state()

# Get the group representation on the space of joint space generalized velocity coordinates
rep_TqQ_js = G.representations['TqQ_js']
for g in G.elements:
    # Transform the observations
    g_v_js = rep_TqQ_js(g) @ v_js  # rep_TqQ_js(g) ∈ R^12x12
```
For more details, follow the getting started guide.

MorphoSymm supports three use cases of morphological symmetries. For details and tutorials, see the getting started guide.
### Data Augmentation
Any data recording of proprioceptive and exteroceptive observations can be augmented using the appropriate group representations. This means that if you collect data from a symmetric robotic system, you can use the system's symmetries to analytically augment/multiply the recorded data.
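As a sketch of the idea, augmenting a batch amounts to stacking its image under the representation of each group element. The 4-DoF robot and the sagittal-reflection representation below are made up for illustration, not loaded from the library:

```python
import numpy as np

# Hypothetical C2 (sagittal reflection) symmetry acting on the joint positions of
# a robot with two legs and two joints per leg: the non-trivial element swaps the
# legs and flips joint signs. (Permutation/signs are illustrative, not a real robot's.)
rep_g = np.array([
    [0, 0, -1, 0],
    [0, 0, 0, -1],
    [-1, 0, 0, 0],
    [0, -1, 0, 0],
])

def augment(batch, reps):
    """Append the symmetric copies g·x of every sample x in the batch."""
    return np.concatenate([batch @ rep.T for rep in reps], axis=0)

batch = np.random.rand(100, 4)   # 100 recorded joint-position samples
reps = [np.eye(4), rep_g]        # representations of e and g
augmented = augment(batch, reps)
print(augmented.shape)           # (200, 4): dataset size doubled analytically
```

With a larger symmetry group, the same loop over all group representations multiplies the dataset size by the group order.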
### Equivariant/Invariant Function Approximation
Given the geometric nature of the data processed in robotics, most applications of machine learning to supervised, unsupervised, and reinforcement learning in robotics rely on the approximation of equivariant/invariant functions. To approximate these functions using deep learning, we should use equivariant/invariant neural networks that respect the symmetries of the learning problem. These networks feature enhanced sample efficiency and generalization, as demonstrated in our paper results. This repository relies on pytorch and the escnn library to offer equivariant and invariant neural network architectures.
### Dynamics Harmonics Analysis
Through the application of abstract harmonic analysis, the symmetries of a robotic system enable us to decompose any motion into a superposition of simpler symmetric synergistic motions. Each of these motions evolves in a lower-dimensional space, known as an isotypic subspace. By projecting entire motion trajectories into these spaces, we can characterize each motion as a superposition of lower-dimensional and synergistic motions. For further details on the use of abstract harmonic analysis in robotics see our paper Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modeling.
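For the simplest case, the reflection group $\mathcal{C}_2$, the projectors onto the two isotypic subspaces can be written in closed form as $P_\pm = (\rho(e) \pm \rho(g))/2$. A minimal numpy sketch with an illustrative 2D representation (not one taken from the library):

```python
import numpy as np

# Illustrative C2 representation: the non-trivial element swaps the two coordinates.
rep_e = np.eye(2)
rep_g = np.array([[0.0, 1.0], [1.0, 0.0]])

P_sym = (rep_e + rep_g) / 2    # projector onto the symmetric (trivial) isotypic subspace
P_anti = (rep_e - rep_g) / 2   # projector onto the antisymmetric isotypic subspace

traj = np.random.rand(50, 2)   # a 50-step, 2D motion trajectory
traj_sym = traj @ P_sym.T      # symmetric component of the motion
traj_anti = traj @ P_anti.T    # antisymmetric component of the motion

# The isotypic components are complementary and sum back to the original trajectory.
assert np.allclose(traj_sym + traj_anti, traj)
```

For larger groups the projectors are built from the characters of the irreducible representations, but the decompose-then-recombine structure is the same.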
Simply clone the repository and install it through pip:

```shell
git clone https://github.com/Danfoa/MorphoSymm.git
cd MorphoSymm
pip install -e .
```
The following is a non-exhaustive and expanding list of dynamical systems with Discrete Morphological Symmetries. Each example can be reproduced in a 3D interactive environment by running:

```shell
python morpho_symm/robot_symmetry_visualization.py robot=<robot> gui=True
```
This script functions as an introductory tutorial showing how we define the representations of Discrete Morphological Symmetries in order to perform symmetry transformations on the robot state, and proprioceptive and exteroceptive observations.
- Cassie, Atlas, KUKA-iiwa, Mini-Cheetah, Bolt, Baxter, Solo, A1, Aliengo, HyQ, Anymal-C, Anymal-B, Go1, Go2, B1, UR-3, UR-5, UR-10
- Trifinger
- Solo, HyQ, Mini-Cheetah, Anymal-C, Anymal-B
- Solo, Mini-Cheetah
If you want a new robot to be introduced in the library, create an issue indicating where to find the URDF of the system.
Each symmetric dynamical system has a configuration file in the folder `morpho_symm/cfg/supervised/robot`. To load one of these systems, simply use the function `load_symmetric_system` as follows:
```python
from morpho_symm.utils.robot_utils import load_symmetric_system

robot_name = 'solo'  # or any of the robots in the library (see `/morpho_symm/cfg/robot`)
robot, G = load_symmetric_system(robot_name=robot_name)
```
The function returns:

- `robot`: an instance of the class `PinBulletWrapper` (a utility class bridging `pybullet` and `pinocchio`).
- `G`: the symmetry group of the system, an instance of `escnn.group.Group`.
The system state is defined as $(q, v)$, with generalized position coordinates $q \in \mathcal{Q}$ and generalized velocity coordinates $v \in T_{q}\mathcal{Q}$:
```python
from morpho_symm.utils.pybullet_visual_utils import configure_bullet_simulation

# Load robot in a pybullet simulation environment. The robot state will be the simulation state.
bullet_client = configure_bullet_simulation(gui=True)  # Start pybullet simulation
robot.configure_bullet_simulation(bullet_client, world=None)  # Load robot in the pybullet environment

# Get the state of the system
q, v = robot.get_state()  # q ∈ Q, v ∈ TqQ
# Get the robot's base configuration XB ∈ E3 as a homogeneous transformation matrix.
XB = robot.get_base_configuration()
# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, v_js ∈ TqQjs
q_js, v_js = robot.get_joint_space_state()
```
The system's symmetry group instance `G` contains the group representations required to transform most proprioceptive and exteroceptive observations (e.g., joint positions/velocities/accelerations, joint forces/torques, contact locations & forces, linear and angular velocities, terrain heightmaps, depth maps, etc.). These are:
- $\rho_{\mathbb{E}_d}: \mathbb{G} \rightarrow \mathbb{E}_d$: Representation mapping symmetry actions to elements of the Euclidean group $\mathbb{E}_d$; essentially, homogeneous transformation matrices describing a rotation/reflection and translation of space (a Euclidean isometry) in $d$ dimensions.
- $\rho_{\mathcal{Q}_{js}}: \mathbb{G} \rightarrow \mathcal{GL}(\mathcal{Q}_{js})$ and $\rho_{\mathcal{T}_{q}\mathcal{Q}_{js}}: \mathbb{G} \rightarrow \mathcal{GL}(\mathcal{T}_{q}\mathcal{Q}_{js})$: Representations mapping symmetry actions to transformation matrices of joint space position $\mathcal{Q}_{js}$ and velocity $\mathcal{T}_{q}\mathcal{Q}_{js}$ coordinates.
- $\rho_{\mathbb{R}^d}: \mathbb{G} \rightarrow \mathcal{GL}(\mathbb{R}^d)$: Representation mapping symmetry actions to invertible transformations in $d$ dimensions; in practice, rotation and reflection matrices.
- $\rho_{reg}: \mathbb{G} \rightarrow \mathbb{R}^{|\mathbb{G}| \times |\mathbb{G}|}$: The group's regular representation.
- $\hat{\rho}_{i}: \mathbb{G} \rightarrow \mathcal{GL}(|\hat{\rho}_{i}|)$: Each of the group's irreducible representations.
In practice, direct products and direct sums (block-diagonal stacking) of these representations are enough to obtain the representations of any proprioceptive and exteroceptive measurements/observations. That is, these representations are sufficient to perform data-augmentation and to construct equivariant/invariant neural networks.
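To illustrate, the representation acting on a stacked observation is simply the block-diagonal direct sum of the representations of its parts. A sketch with made-up matrices for a single group element (not matrices produced by the library):

```python
import numpy as np

def direct_sum(*mats):
    """Block-diagonal stacking ρ_a(g) ⊕ ρ_b(g) of representation matrices."""
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    i = 0
    for m in mats:
        k = m.shape[0]
        out[i:i + k, i:i + k] = m  # place each representation on the diagonal
        i += k
    return out

# Illustrative matrices for one group element g:
rep_Qjs_g = np.array([[0.0, 1.0], [1.0, 0.0]])  # acts on a 2D toy joint space
rep_R3_g = np.diag([1.0, -1.0, 1.0])            # acts on a vector in R^3

# Representation acting on the stacked observation x = [q_js, v] ∈ R^5
rep_x_g = direct_sum(rep_Qjs_g, rep_R3_g)
print(rep_x_g.shape)  # (5, 5)
```

The same construction is what `escnn`'s `FieldType` performs internally when given a list of representations.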
Here, we provide some basic examples of augmenting observations frequently used in robotics, such as the configuration of a rigid body, the joint space position, velocity, and acceleration coordinates, contact forces, joint torques, etc. To gain intuition, focus on the simplest symmetry group, the reflection group.
Any observation from this robot has a symmetric equivalent observation. In this tutorial we will show how to obtain these symmetric observations. You can also check the script robot_symmetry_visualization.py, where we use data-augmentation to generate the animation displayed above. This script works for any robot in the library.
The homogeneous matrix describing the configuration of any rigid body, including the system's base configuration $\mathbf{X}_B$, transforms as follows:
```python
rep_E3 = G.representations['E3']  # rep_E3(g) ∈ R^(3+1)x(3+1) is a homogeneous transformation matrix
# The orbit of the base configuration XB is a map from group elements g ∈ G to base configurations g·XB ∈ E3
orbit_X_B = {g: rep_E3(g) @ XB @ rep_E3(g).T for g in G.elements}
```
Another example of an observation transformed by $\rho_{\mathbb{E}_d}$ is a point in space:
```python
import numpy as np

r = np.random.rand(3)  # Example point in E3, assuming d=3.
r_hom = np.concatenate((r, np.ones(1)))  # Use homogeneous coordinates to represent a point in E3
# The orbit of the point is a map from group elements g ∈ G to the set of symmetric points g·r ∈ R^d
orbit_r = {g: (rep_E3(g) @ r_hom)[:3] for g in G.elements}
```
Joint space observations usually evolve in the spaces of generalized position and velocity coordinates, and transform with $\rho_{\mathcal{Q}_{js}}$ and $\rho_{\mathcal{T}_{q}\mathcal{Q}_{js}}$:
```python
rep_Qjs = G.representations['Q_js']
rep_TqQjs = G.representations['TqQ_js']
# Get joint space position and velocity coordinates (q_js, v_js) | q_js ∈ Qjs, v_js ∈ TqQjs
q_js, v_js = robot.get_joint_space_state()
# The joint space state orbit is a map from group elements g ∈ G to joint space states (g·q_js, g·v_js)
orbit_js_state = {g: (rep_Qjs(g) @ q_js, rep_TqQjs(g) @ v_js) for g in G.elements}
```
Observations evolving in $\mathbb{R}^d$ transform with $\rho_{\mathbb{R}^d}$; pseudo-vectors (e.g., angular velocities) transform with the corresponding pseudo-vector representation:
```python
import numpy as np

rep_R3 = G.representations['R3']  # rep_R3(g) is an orthogonal matrix ∈ R^dxd
rep_R3_pseudo = G.representations['R3_pseudo']

v = np.random.rand(3)  # Example vector in R3, e.g., linear velocity of the base frame.
w = np.random.rand(3)  # Example pseudo-vector in R3, e.g., angular velocity of the base frame.

# The orbit of the vector is a map from group elements g ∈ G to the set of symmetric vectors g·v ∈ R^d
orbit_v = {g: rep_R3(g) @ v for g in G.elements}
# The orbit of the pseudo-vector is a map from group elements g ∈ G to the set of symmetric pseudo-vectors g·w ∈ R^d
orbit_w = {g: rep_R3_pseudo(g) @ w for g in G.elements}
```
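The vector/pseudo-vector distinction matters under reflections: a pseudo-vector such as an angular velocity picks up an extra sign equal to the determinant of the transformation. A self-contained numpy sketch with an illustrative reflection (not a representation loaded from the library):

```python
import numpy as np

# Reflection about the xz-plane (a sagittal-plane symmetry), with det(R) = -1.
R = np.diag([1.0, -1.0, 1.0])

v = np.array([1.0, 2.0, 3.0])  # ordinary vector, e.g., linear velocity
w = np.array([1.0, 2.0, 3.0])  # pseudo-vector, e.g., angular velocity

g_v = R @ v                       # vectors transform with R
g_w = np.linalg.det(R) * (R @ w)  # pseudo-vectors pick up the extra det(R) factor

print(g_v)  # [ 1. -2.  3.]
print(g_w)  # [-1.  2. -3.]
```

Under pure rotations (det = +1) the two transformation rules coincide, which is why the distinction only surfaces for reflection symmetries.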
In robotics, any data-driven application of supervised, unsupervised, and reinforcement learning involves approximating a function. This function maps a set of proprioceptive/exteroceptive observations (e.g., system state, contact state, contact points/forces) to an output set of similar observations (e.g., control position/torque setpoints, linear/angular velocity vectors, energy). If the robotic system possesses a symmetry group $\mathbb{G}$, this function is frequently $\mathbb{G}$-equivariant or $\mathbb{G}$-invariant.
Let's consider the example from (1) of approximating the Center of Mass (CoM) momentum from the joint space state observations of the robot `solo`. That is, we want to use a neural network to approximate the function mapping the joint space state $(q_{js}, v_{js})$ to the CoM linear and angular momentum $(l, k)$.
To construct a G-equivariant architecture, you need to:

- Identify the representations of the observations in your input and output spaces.
- Define the input and output `FieldType` instances using the representations of each geometric object.
- Instantiate an `EMLP` architecture with the input and output `FieldType` instances. This class handles the appropriate parametrization of the hidden-layer group representations and activation functions.
```python
import escnn
from escnn.nn import FieldType

from morpho_symm.nn.EMLP import EMLP
from morpho_symm.utils.robot_utils import load_symmetric_system

# Load robot instance and its symmetry group
robot, G = load_symmetric_system(robot_name='solo-k4')
# We use ESCNN to handle the group/representation-theoretic concepts and the construction of equivariant neural networks.
gspace = escnn.gspaces.no_base_space(G)
# Get the relevant group representations.
rep_Qjs = G.representations["Q_js"]          # Used to transform joint space position coordinates q_js ∈ Q_js
rep_TqQjs = G.representations["TqQ_js"]      # Used to transform joint space velocity coordinates v_js ∈ TqQ_js
rep_R3 = G.representations["R3"]             # Used to transform the linear momentum l ∈ R3
rep_R3_pseudo = G.representations["R3_pseudo"]  # Used to transform the angular momentum k ∈ R3

# Define the input and output FieldTypes using the representations of each geometric object.
# Representation of x := [q_js, v_js] ∈ Q_js x TqQ_js => ρ_X_js(g) := ρ_Q_js(g) ⊕ ρ_TqQ_js(g) | g ∈ G
in_field_type = FieldType(gspace, [rep_Qjs, rep_TqQjs])
# Representation of y := [l, k] ∈ R3 x R3 => ρ_Y_js(g) := ρ_R3(g) ⊕ ρ_R3pseudo(g) | g ∈ G
out_field_type = FieldType(gspace, [rep_R3, rep_R3_pseudo])

# Construct the equivariant MLP
model = EMLP(in_type=in_field_type,
             out_type=out_field_type,
             num_layers=5,              # Input layer + 3 hidden layers + output/head layer
             num_hidden_units=128,      # Number of hidden units per layer
             activation=escnn.nn.ReLU,  # Activations must be `EquivariantModule` instances
             with_bias=True)            # Use bias in the linear layers

print(f"Here is your equivariant MLP:\n{model}")
```
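Independently of `escnn`, equivariance of any map can be checked numerically: for every $g \in \mathbb{G}$, $f(\rho_{in}(g) x)$ must equal $\rho_{out}(g) f(x)$. The toy numpy sketch below (with made-up $\mathcal{C}_2$ representations, unrelated to the robot above) builds an equivariant linear map by averaging a random matrix over the group and verifies this property:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative C2 representations on a 4D input and a 2D output space.
rho_in = {"e": np.eye(4), "g": np.diag([1.0, -1.0, 1.0, -1.0])}
rho_out = {"e": np.eye(2), "g": np.diag([1.0, -1.0])}

# Symmetrizing a random linear map over the group yields a G-equivariant map:
# W_eq = (1/|G|) Σ_g ρ_out(g) W ρ_in(g)^-1
W = rng.standard_normal((2, 4))
W_eq = sum(rho_out[k] @ W @ np.linalg.inv(rho_in[k]) for k in rho_in) / len(rho_in)

def f(x):
    return W_eq @ x

# Equivariance check: f(ρ_in(g) x) == ρ_out(g) f(x) for all g ∈ G.
x = rng.standard_normal(4)
for k in rho_in:
    assert np.allclose(f(rho_in[k] @ x), rho_out[k] @ f(x))
```

The same check, with the library's representations and a trained network, is a useful sanity test that an equivariant model was assembled correctly.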
If you find this repository or any of our papers relevant, please cite us as:
```bibtex
@article{ordonez2024morphological,
  title={Morphological Symmetries in Robotics},
  author={Ordo{\~n}ez-Apraez, Daniel and Turrisi, Giulio and Kostic, Vladimir and Martin, Mario and Agudo, Antonio and Moreno-Noguer, Francesc and Pontil, Massimiliano and Semini, Claudio and Mastalli, Carlos},
  journal={arXiv preprint arXiv:2402.15552},
  year={2024}
}
```
```bibtex
@inproceedings{Ordonez-Apraez-RSS-23,
  author={Daniel F Ordo{\~n}ez-Apraez and Martin, Mario and Antonio Agudo and Francesc Moreno},
  title={{On discrete symmetries of robotics systems: A group-theoretic and data-driven analysis}},
  booktitle={Proceedings of Robotics: Science and Systems},
  year={2023},
  address={Daegu, Republic of Korea},
  month={July},
  doi={10.15607/RSS.2023.XIX.053}
}
```
Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modeling (L4DC-2024)
```bibtex
@article{ordonez2023dynamics,
  title={Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modelling},
  author={Ordo{\~n}ez-Apraez, Daniel and Kostic, Vladimir and Turrisi, Giulio and Novelli, Pietro and Mastalli, Carlos and Semini, Claudio and Pontil, Massimiliano},
  journal={arXiv preprint arXiv:2312.07457},
  year={2023}
}
```
If you have any doubts or ideas, create an issue or contact us. We are happy to help and collaborate.

If you want to contribute, thanks for being awesome! Please contact us to see how we can assist you.
The repository focuses on robotics and uses the URDF (Unified Robot Description Format) to integrate new systems. It utilizes the robot_descriptions.py package to simplify the integration of new URDF descriptions and their usage in third-party robotics packages. This package provides a convenient interface for loading URDF files into GUI visualization tools, robot dynamics packages (such as Pinocchio), and physics simulators. To add a new robotic system to our library
- The system URDF must be contributed to robot_descriptions.py.
- A robot configuration file, describing the system's symmetry group and the joint space representation generators, must be added to `cfg/supervised/robot/`.
In summary, we support:

- Loading of URDF files in `pybullet` and `pinocchio` through `robot_descriptions.py`.
- Visualization of robot Discrete Morphological Symmetries in `pybullet`. Other physics simulators and visualization tools will come soon.
- Utility functions to define symmetry representations of proprioceptive and exteroceptive observations.
- Construction of equivariant neural networks processing proprioceptive and exteroceptive observations, using the `escnn` library.
- Use of abstract harmonic analysis to decompose motion trajectories and recorded proprioceptive and exteroceptive observations into isotypic components.