@@ -14,9 +14,23 @@ You can easily install the model via pip:
 pip install code2seq
 ```
 
-## Usage
+## Available checkpoints
 
-Minimal code example to run the model:
+### Method name prediction
+| Dataset (with link) | Checkpoint | # epochs | F1-score | Precision | Recall | ChrF |
+| --- | --- | --- | --- | --- | --- | --- |
+| [Java-small](https://s3.eu-west-1.amazonaws.com/datasets.ml.labs.aws.intellij.net/java-paths-methods/java-small.tar.gz) | [link](https://s3.eu-west-1.amazonaws.com/datasets.ml.labs.aws.intellij.net/checkpoints/code2seq_java_small.ckpt) | 11 | 41.49 | 54.26 | 33.59 | 30.21 |
+| [Java-med](https://s3.eu-west-1.amazonaws.com/datasets.ml.labs.aws.intellij.net/java-paths-methods/java-med.tar.gz) | [link](https://s3.eu-west-1.amazonaws.com/datasets.ml.labs.aws.intellij.net/checkpoints/code2seq_java_med.ckpt) | 10 | 48.17 | 58.87 | 40.76 | 42.32 |
+
+## Configuration
+
+The model is fully configurable via a standalone YAML file.
+Navigate to the [config](config) directory to see example configs.
+
+## Examples
+
+Model training can be run with the PyTorch Lightning trainer.
+See its [documentation](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html) for more information.
 
 ```python
 from argparse import ArgumentParser
@@ -29,20 +43,21 @@ from code2seq.model import Code2Seq
 
 
 def train(config: DictConfig):
-    # Load data module
+    # Define data module
     data_module = PathContextDataModule(config.data_folder, config.data)
-    data_module.prepare_data()
-    data_module.setup()
 
-    # Load model
+    # Define model
     model = Code2Seq(
         config.model,
         config.optimizer,
         data_module.vocabulary,
         config.train.teacher_forcing
     )
 
-    trainer = Trainer(max_epochs=config.hyper_parameters.n_epochs)
+    # Define hyper parameters
+    trainer = Trainer(max_epochs=config.train.n_epochs)
+
+    # Train model
     trainer.fit(model, datamodule=data_module)
 
 
@@ -54,6 +69,3 @@ if __name__ == "__main__":
     __config = OmegaConf.load(__args.config)
     train(__config)
 ```
-
-Navigate to [config](config) directory to see examples of configs.
-If you have any questions, then feel free to open the issue.
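The "Configuration" section added in this diff says the model is driven by a standalone YAML file, and the training script reads `config.data_folder`, `config.data`, `config.model`, `config.optimizer`, `config.train.teacher_forcing`, and `config.train.n_epochs`. A minimal sketch of what such a config could look like — only the top-level keys above are taken from the script; every nested key and value here is an assumption, see the repository's `config` directory for real examples:

```yaml
# Hypothetical code2seq config sketch.
# Top-level keys mirror what train() accesses; nested keys are assumed.
data_folder: ../data/java-small

data:            # passed to PathContextDataModule; fields are assumptions
  batch_size: 512
  max_context: 200

model:           # passed to Code2Seq; fields are assumptions
  embedding_size: 128
  decoder_size: 320

optimizer:       # passed to Code2Seq; fields are assumptions
  lr: 0.01

train:
  n_epochs: 10         # read as config.train.n_epochs
  teacher_forcing: 1.0 # read as config.train.teacher_forcing
```

Loading it with `OmegaConf.load`, as the script does, yields a `DictConfig` whose attributes match these keys.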