README.md (12 additions, 2 deletions)
@@ -1,5 +1,15 @@
# DACKAR

- *Digital Analytics, Causal Knowledge Acquisition and Reasoning for Technical Language Processing*
+ *Digital Analytics, Causal Knowledge Acquisition and Reasoning*

A Knowledge Management and Discovery Tool for Equipment Reliability Data

To improve the performance and reliability of highly dependable technological systems such as nuclear power plants, advanced monitoring and health management systems are employed to inform system engineers about observed degradation processes and anomalous behaviors of assets and components. This information is captured as a large amount of data that can be heterogeneous in nature (e.g., numeric, textual). Such a large volume of data poses challenges when system engineers must parse and analyze it to track the historical reliability performance of assets and components. DACKAR tackles this challenge by providing the means to organize equipment reliability data in the form of a knowledge graph. DACKAR distinguishes itself from current knowledge graph-based methods in that model-based systems engineering (MBSE) models are used to capture system architecture as well as health and performance data. MBSE models serve as the skeleton of the knowledge graph; numeric and textual data elements, once processed, are associated with MBSE model elements. This feature opens the door to new data analytics methods designed to identify causal relations between observed phenomena.
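
As a rough illustration of this idea (a minimal sketch only: the node names, attributes, and relation labels below are hypothetical placeholders, not DACKAR's actual schema or API), MBSE model elements can act as the graph skeleton while processed anomalies and text-derived events attach to them:

```python
# Hypothetical sketch: MBSE model elements form the knowledge graph skeleton,
# and processed numeric/textual data elements are attached to them.
# All names and relation labels are illustrative, not DACKAR's actual schema.
import networkx as nx

kg = nx.MultiDiGraph()

# Skeleton: system architecture taken from an MBSE model.
kg.add_edge("CoolingSystem", "Pump-P101", relation="hasComponent")
kg.add_edge("Pump-P101", "Seal-S01", relation="hasPart")

# Processed data elements associated with MBSE model elements.
kg.add_node("Anomaly-0042", kind="anomaly", signal="vibration", severity="medium")
kg.add_edge("Anomaly-0042", "Pump-P101", relation="observedOn")

kg.add_node("Event-2023-117", kind="text_event",
            text="Seal leak reported during inspection.")
kg.add_edge("Event-2023-117", "Seal-S01", relation="mentions")

# Candidate causal relation identified by downstream analytics.
kg.add_edge("Anomaly-0042", "Event-2023-117", relation="possiblyCausedBy")

print(kg.number_of_nodes(), kg.number_of_edges())
```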

DACKAR is structured as a set of workflows, where each workflow is designed to process raw data elements (i.e., anomalies, events reported in textual form, MBSE models) and to construct or update a knowledge graph. For each workflow, the user can specify the sequence of pipelines that perform specific processing actions on the raw data or on data already processed within the same workflow. Specific guidelines on the formats of the raw data are provided. In addition, a specific data object is defined within each workflow; each pipeline is tasked either to process a portion of that data object or to create knowledge graph data. The available workflows are listed below, followed by a sketch of how such a workflow might be composed:
* mbse_workflow: Workflow to process system and equipment MBSE models
* anomaly_workflow: Workflow to process numeric data and anomalies
* tlp_workflow: Workflow to process textual data
* kg_workflow: Workflow to construct and update knowledge graphs
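
The sketch below illustrates the workflow/pipeline structure described above. It is a minimal example under stated assumptions: the `Workflow` and `Pipeline` classes and the pipeline names are hypothetical placeholders, not DACKAR's actual API.

```python
# Minimal, hypothetical sketch of a workflow as an ordered sequence of pipelines,
# each reading and updating a shared data object. Not DACKAR's actual API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Pipeline:
    """One processing step that reads and updates the workflow's shared data object."""
    name: str
    step: Callable[[dict], dict]


@dataclass
class Workflow:
    """A named sequence of pipelines applied in order to a shared data object."""
    name: str
    pipelines: list[Pipeline] = field(default_factory=list)

    def run(self, data_object: dict) -> dict:
        for pipeline in self.pipelines:
            data_object = pipeline.step(data_object)
        return data_object


# Illustrative "tlp_workflow": each pipeline processes part of the data object
# or produces knowledge graph data from it.
tlp_workflow = Workflow(
    name="tlp_workflow",
    pipelines=[
        Pipeline("load_reports", lambda d: {**d, "reports": ["Pump P-101 seal leak observed."]}),
        Pipeline("extract_entities", lambda d: {**d, "entities": [("Pump P-101", "component")]}),
        Pipeline("emit_kg_data", lambda d: {**d, "triples": [("Pump P-101", "hasAnomaly", "seal leak")]}),
    ],
)

result = tlp_workflow.run({})
print(result["triples"])
```
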
## Installation
@@ -41,7 +51,7 @@ and ``jupyterlab`` is used to execute notebook examples under ``./examples/`` fo
## Test

- ### Test functions with ```__pytest__```
+ ### Test functions with ```pytest```

- Run the following command in your command line to install pytest:
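
  Presumably this is the standard pip installation command (the exact command in the README may differ):

  ```
  pip install pytest
  ```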