This is a PyTorch implementation of Mentor (📖 Modeling Teams Performance Using Deep Representational Learning on Graphs)
Authors: Pietro Foini, Francesco Carli, Nicolò Gozzi, Nicola Perra, Rossano Schifanella
We suggest using PyCharm Community for steps 2-7 below.
- Install Python 3.9: make sure Python is installed on your system. You can download it from the official Python website (https://www.python.org/) and follow the installation instructions for your operating system;
- Clone the repository;
- Create a virtual environment;
- Activate the virtual environment;
- Mark the `src` folder as the root directory;
- Install the project dependencies: `pip install -r requirements.txt`
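The steps above can be sketched from the command line as follows (a hedged example: the repository URL and environment name are placeholders, and the activation command differs between Unix and Windows):

```shell
# Sketch of the clone/venv/install steps; <repo-url> is a placeholder.
git clone <repo-url> mentor && cd mentor    # clone the repository
python -m venv .venv                        # create a virtual environment
source .venv/bin/activate                   # activate it (Windows: .venv\Scripts\activate)
pip install -r requirements.txt             # install project dependencies
```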
- Look in https://download.pytorch.org/whl/torch_stable.html and https://data.pyg.org/whl/torch-1.8.0%2Bcu111.html for the torch wheels you want to install (torch version, Python version, CUDA version, OS, etc.). Add a URL dependency to your `pyproject.toml` file. For example, the current `.toml` file has torch 1.8.0 working with GPU on a Windows system;
- Run the following commands to install the remaining project dependencies:

```shell
poetry lock --no-update
poetry install
```
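For illustration, a URL dependency in `pyproject.toml` can look like the snippet below. This is only a sketch: the exact wheel filename depends on your torch/Python/CUDA/OS combination, and this example assumes torch 1.8.0 with CUDA 11.1, Python 3.9, on 64-bit Windows.

```toml
[tool.poetry.dependencies]
python = "^3.9"
# Hypothetical wheel URL; pick the one matching your platform from the index above.
torch = { url = "https://download.pytorch.org/whl/cu111/torch-1.8.0%2Bcu111-cp39-cp39-win_amd64.whl" }
```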
Now you're all set! 🎉 Happy coding! 😄✨
Git hooks are scripts that automatically run before or after specific Git actions, such as committing code or pushing changes. They act as your code quality guardians, ensuring consistency and preventing messy commits.
To install hooks using pre-commit, follow these steps:
- 📥 Make sure you have installed the repo using Poetry. Also check that pre-commit is installed with: `pre-commit --version`
- 📂 Navigate to your project's root directory using the command line.
- ✍️ Check for a file named `.pre-commit-config.yaml` in the project's root directory. This file contains the configuration for your pre-commit hooks.
- 💾 If everything checks out, run `pre-commit install` in your terminal. This command will install the hooks and set them up to run automatically before each commit.
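For reference, a `.pre-commit-config.yaml` generally looks like the following. This is a hypothetical example of the format only; the hooks actually configured in this repository may differ.

```yaml
# Hypothetical pre-commit configuration (not necessarily this repo's hooks)
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
```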
Now you're all set! 🎉 Your hooks will work their magic, keeping your codebase clean and your commits error-free.
The datasets used are divided into two main categories: synthetic and real-world datasets. Synthetic data has been generated in such a way as to systematically validate the theoretical assumptions regarding the key contributions of the three effects: topology, centrality, and position. Real-world datasets, on the other hand, have been employed to assess the effectiveness of the models thus developed.
For more details on both types of datasets, we refer you to the respective folder where the analyses related to them have been included.
NB: The contextual channel described in the paper corresponds to the positional channel here.
The synthetic datasets were created to isolate the three core effects of the current methodology. They served as a foundation during the construction of Mentor, ensuring that these behaviors were effectively captured and leveraged. The synthetic datasets are as follows:
- Position: the teams' label is ruled by their position in the graph
- Centrality: the teams' label is ruled by the in/out-degree centrality of the nodes outside the team
  - In-degree;
  - Out-degree;
- Topology: the teams' label is ruled by the internal connection structure of the teams
  - Topology (v1);
  - Topology (v2);
  - Topology (v3);
- Position and Topology: the teams' label is ruled by a combination of topological and positional effects
- Attribute: the teams' label is ruled by the attributes of the nodes
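As a toy illustration of the centrality setting (not the repository's actual data generator; the edge list, team, and threshold below are made up), a team's binary label can be ruled by the in-degree of the nodes outside the team:

```python
# Toy example: binary team label ruled by the mean in-degree of nodes
# OUTSIDE the team, mirroring the "Centrality: In-degree" synthetic setting.
# The directed edge list, team membership, and threshold are illustrative only.
edges = [(0, 1), (2, 1), (3, 1), (1, 4), (4, 5), (5, 3)]

# count in-degrees from the edge list
in_deg = {}
for _, v in edges:
    in_deg[v] = in_deg.get(v, 0) + 1

nodes = {n for edge in edges for n in edge}
team = {0, 1}                      # hypothetical team
outside = nodes - team

# label = 1 if the average in-degree outside the team exceeds a threshold
avg_in = sum(in_deg.get(n, 0) for n in outside) / len(outside)
label = int(avg_in > 1.0)          # arbitrary threshold
print(avg_in, label)               # 0.75 0
```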
*The visualization tool used is Graphia.
Freely accessible, copyright-free data concerning team composition and performance are scarce. The three datasets we have focused on are as follows:
For each of these datasets, a subset was selected.
We propose a graph neural network model designed to predict a team’s performance while identifying the drivers that determine such outcome. 💡🧐
In particular, the model is based on three architectural channels: topological, centrality, and positional, which capture different factors potentially shaping teams' success. We endow the model with two attention mechanisms to boost model performance and allow interpretability. The first mechanism pinpoints key members inside the team. The second quantifies the contributions of the three driver effects in determining the outcome performance.
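The channel-level attention can be sketched as follows. This is a minimal plain-Python illustration, not the actual Mentor code: the channel embeddings and attention scores below are made up, and the real model learns them end to end. The point is that a softmax over per-channel scores yields weights that both combine the channel embeddings and expose each driver's contribution.

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 2-d embeddings produced by the three channels
channels = {
    "topological": [0.2, 0.4],
    "centrality":  [0.1, 0.9],
    "positional":  [0.5, 0.3],
}
scores = [1.0, 0.5, 0.2]   # hypothetical learned attention scores
weights = softmax(scores)  # one weight per channel, summing to 1

# Weighted sum of channel embeddings -> team representation
names = list(channels)
team_repr = [
    sum(w * channels[n][d] for w, n in zip(weights, names))
    for d in range(2)
]
# The weights themselves quantify each channel's contribution to the outcome.
```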
MIT
@article{Carli2024,
author = {Carli, Francesco and Foini, Pietro and Gozzi, Nicolò and Perra, Nicola and Schifanella, Rossano},
title = {Modeling teams performance using deep representational learning on graphs},
journal = {EPJ Data Science},
volume = {13},
number = {1},
pages = {7},
year = {2024},
month = {January},
doi = {10.1140/epjds/s13688-023-00442-1},
url = {https://doi.org/10.1140/epjds/s13688-023-00442-1},
}
Please open an issue or contact [email protected] with any questions. 🙂