Home
Model evaluation is part of WP7, Task 1. The goal of this task is to identify the modelling languages, tools and platforms that could fit the requirements associated with the specific needs of ETCS design and railway norms. Depending on the recommendations of WP2 on these needs, several languages and tools may be necessary to handle the different levels of abstraction of the whole design process. For each candidate, a small subset of the ERTMS specification will be modelled.
This task is subdivided into three subtasks:
- T7.1.1: Identify and define the potential means of description
- T7.1.2: Identify and compare existing modelling tools
- T7.1.3: Identify the tool platform
These subtasks are closely linked: tools are usually developed for a specific language and are supported by a given tool platform.
Inputs to these subtasks are expected from the WP2 work package:
- List of suitable languages (based on the State of the Art analysis)
- Small, representative subset of the ERTMS requirements
- Those WP2 requirements that are sufficient to evaluate a target language
The expected results are:
- Formal model representing the sample specification, one for each candidate
- Documentation of the changes to each language used (if any)
- Evaluation of the models against the WP2 requirements
- Evaluation of tool and model for each prototype
- Documentation of the changes to each tool (if any)
- Evaluation of the tools against the WP2 requirements
- Evaluation of each tool platform against the WP2 requirements, independently of the target tool
- Evaluation of each tool platform in the context of specific target tools
The deliverables expected by the other work packages are:
- Decision on the final means of description
- Report on the candidate languages (sample model, evaluation against requirements and evolution needed)
- Report on the final language choice(s)
- Decision on the final tool choice(s)
- Report on the final choice(s) for the primary tool chain (means of description, tool and platform)
- Selection of Tool Platform (and reasoning)
The task is planned to run from November 2012 to June 2013.
A schedule and a description of the model evaluation benchmark may be found here.