# User Interface Overview
This section offers a detailed overview of the UI, including its layout, design elements, and functionality. It provides insights into how users can navigate the interface, interact with various components, and utilize its features effectively to achieve their goals.
Additionally, it highlights key elements that enhance user experience, such as responsiveness, accessibility, and ease of use.
## Overview
The BrainKB UI (user interface), accessible at [beta.brainkb.org](https://beta.brainkb.org), is a user-centric interface designed to interact with the BrainKB knowledge graph infrastructure. It enables neuroscientists, researchers, and practitioners to explore, search, analyze, and visualize neuroscience knowledge effectively. The platform integrates a range of tools and features that facilitate evidence-based decision-making, making it an essential resource for advancing neuroscience research.
# Ingestion service
This section provides information about the ingestion service, one of the service components of BrainKB.
## Overview
{numref}`brainkb_intestion_architecture_figure` illustrates the architecture of the ingestion service, which follows the producer-consumer pattern and leverages RabbitMQ for scalable data ingestion. The service is composed of two main components: (i) the producer and (ii) the consumer.
The producer component exposes API endpoints (see {numref}`brainkb_ingestion_service_api_endpoints`) that allow clients or users to ingest data. Currently, it supports the ingestion of KGs represented in JSON-LD and Turtle formats. Users can ingest raw JSON-LD data as well as upload files, either individually or in batches. At present, the ingestion of other file types, such as PDF, text, and JSON, has been disabled due to the incomplete implementation of the required functionalities.
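
As an illustration of how a client might call these ingestion endpoints, the short sketch below submits a small Turtle payload with Python's `requests` library. The endpoint path, payload fields, and authorization header are assumptions for illustration only; the actual endpoints are listed in {numref}`brainkb_ingestion_service_api_endpoints`.

```python
# Hypothetical client-side call to one of the producer's ingestion endpoints.
# The endpoint path, payload fields, and credentials are illustrative assumptions.
import requests

BASE_URL = "https://ingest.example.org"  # placeholder for the ingestion service host
TTL_DATA = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<https://example.org/gene/1> rdfs:label "LOC106504536" .
"""

response = requests.post(
    f"{BASE_URL}/ingest/raw",  # assumed endpoint path
    json={
        "named_graph_iri": "https://example.org/graphs/genes-v1",  # target named graph
        "format": "ttl",
        "data": TTL_DATA,
    },
    headers={"Authorization": "Bearer <token>"},  # placeholder credentials
    timeout=30,
)
response.raise_for_status()
print(response.json())
```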
The consumer retrieves ingested data from RabbitMQ, processes it, and forwards it to the query service via API endpoints. The query service then inserts the processed data into the graph database.
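
To make the final hand-off concrete, the sketch below shows how the consumer side might forward processed data to a query-service endpoint that writes it into a named graph. The query-service URL, endpoint path, and payload shape are assumptions rather than the query service's documented API.

```python
# Hypothetical consumer-to-query-service hand-off; URL, path, and fields are assumptions.
import requests

QUERY_SERVICE_URL = "https://query.example.org"  # placeholder host


def forward_to_query_service(ttl_data: str, named_graph_iri: str) -> None:
    """Send processed triples to the query service, which inserts them into the graph database."""
    response = requests.post(
        f"{QUERY_SERVICE_URL}/insert",  # assumed endpoint
        json={"named_graph_iri": named_graph_iri, "format": "ttl", "data": ttl_data},
        timeout=30,
    )
    response.raise_for_status()
```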
*(Figure: Currently Enabled API Endpoints.)*
## Sequence Diagram
This sequence diagram illustrates the data ingestion pipeline, detailing how a client submits data, which is then validated, processed, and stored in a graph database.
### **Producer Workflow Overview**
The diagram below highlights the producer in this pipeline; each step is described in the subsections that follow.
```{mermaid}
sequenceDiagram
%% Client/User on the left
%% ...
box Thistle Producer
participant API as Producer API
participant Validator as Shared
participant Publisher as RabbitMQ Publisher
end
box LightGoldenRodYellow RabbitMQ
participant RabbitMQ as RabbitMQ Queue
end
%% Client submits data
%% ...
deactivate API
```
#### **Receiving Data from the Client**
- The **client** initiates the ingestion process by submitting a `POST` request to the **Producer API**.
- The request contains structured data, typically in formats like **JSON, JSON-LD, or TTL (Turtle)**.
_Note:_ Support for additional formats, such as PDF and text, will be enabled once the necessary functionalities are fully developed (see {ref}`table_sourcecodes`) and integrated.
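
For orientation, a minimal sketch of what such an intake endpoint could look like is shown below, assuming a FastAPI-style producer. The route, request model, and field names are illustrative and not necessarily those of the actual producer API.

```python
# Minimal, illustrative intake endpoint; the real producer API may be structured differently.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class IngestRequest(BaseModel):
    named_graph_iri: str  # target named graph (must be registered or already exist)
    format: str           # "jsonld" or "ttl"
    data: str             # raw serialized KG payload


@app.post("/ingest/raw")  # assumed route
async def ingest_raw(payload: IngestRequest):
    if payload.format not in {"jsonld", "ttl"}:
        # Other formats (PDF, text, JSON) are not yet enabled.
        raise HTTPException(status_code=400, detail="Unsupported format")
    # Validation and publishing to RabbitMQ happen in the next steps.
    return {"status": "accepted", "named_graph": payload.named_graph_iri}
```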
#### **Validation & Preprocessing**
- The **Producer API** passes the received data to the **Shared** module (`shared.py`, which implements shared functionality), which performs essential validation checks:
  - Ensuring that the target **named graph** exists in the database and that the ingested data is in the correct format, e.g., valid JSON-LD.
  - Note that, to proceed with ingestion, the client must either register a new named graph IRI (using the query service API endpoint) or select an existing one. This approach enables versioning, ensuring efficient data management and traceability.
- If validation **fails**, the system returns a `400 Bad Request` to the client.
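
A minimal sketch of these checks is given below, assuming `rdflib` for format validation and a hypothetical `is_registered` helper standing in for the named-graph lookup against the query service.

```python
# Illustrative validation helpers, not the actual shared.py implementation.
from rdflib import Graph

RDFLIB_FORMATS = {"ttl": "turtle", "jsonld": "json-ld"}


def is_registered(named_graph_iri: str) -> bool:
    # Placeholder: in practice this would ask the query service whether the
    # named graph IRI has already been registered.
    return named_graph_iri.startswith("https://")


def validate(data: str, data_format: str, named_graph_iri: str) -> None:
    if data_format not in RDFLIB_FORMATS:
        raise ValueError("Unsupported format")  # surfaced to the client as 400 Bad Request
    if not is_registered(named_graph_iri):
        raise ValueError("Named graph IRI is not registered")
    # Parsing fails on malformed Turtle/JSON-LD, which is the desired behaviour here.
    Graph().parse(data=data, format=RDFLIB_FORMATS[data_format])
```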
#### **Publishing Data to RabbitMQ**
- Once validated, the **Producer RabbitMQ Publisher** formats the data for ingestion.
- The formatted data is published to **RabbitMQ**, which acts as a message broker to decouple producers and consumers.
- A successful message publication triggers a **publish confirmation**, which is sent back to the **Producer API**.
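
The sketch below shows one way such a publisher could be written with `pika`, with publisher confirms enabled so that each publish is acknowledged by the broker. The queue name and connection settings are assumptions, not the service's actual configuration.

```python
# Illustrative RabbitMQ publisher with publish confirmations (pika).
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="ingestion", durable=True)  # assumed queue name
channel.confirm_delivery()  # broker confirms each publish

message = {
    "named_graph_iri": "https://example.org/graphs/genes-v1",
    "format": "ttl",
    "data": "<https://example.org/gene/1> a <https://example.org/Gene> .",
}
channel.basic_publish(
    exchange="",
    routing_key="ingestion",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```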
### **Consumer Workflow Overview**
The diagram below highlights the consumer in this pipeline; each step is described in the subsections that follow.
```{mermaid}
sequenceDiagram
%% Client/User on the left
%% ...
box HoneyDew Consumer
participant Consumer as Listener
participant Processor as Shared
end
box AliceBlue Query Service
participant QueryService as Query Service
end
box Wheat Graph Database
participant GraphDB as Oxigraph
end
%% ...
deactivate Processor
Consumer-->>RabbitMQ: 6. Acknowledge message
deactivate Consumer
```
#### **Message Consumption from RabbitMQ**
- The **RabbitMQ Queue** holds messages published by the **Producer**.
- The **Consumer Listener** picks up an available message from the RabbitMQ queue.
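
A minimal listener along these lines could look like the sketch below, again using `pika`; the queue name and the `process` hand-off are assumptions for illustration.

```python
# Illustrative RabbitMQ consumer listener (pika).
import json

import pika


def process(message: dict) -> None:
    # Placeholder for the shared processing code: provenance attachment and
    # forwarding to the query service.
    ...


def handle_message(channel, method, properties, body):
    message = json.loads(body)
    process(message)
    channel.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge only after processing


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="ingestion", durable=True)  # assumed queue name
channel.basic_qos(prefetch_count=1)  # one unacknowledged message at a time
channel.basic_consume(queue="ingestion", on_message_callback=handle_message)
channel.start_consuming()
```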
#### **Adding Provenance Metadata**
- The **Consumer** forwards the message to the **Shared** module (`shared.py`, which implements shared functionality) for further handling, e.g., adding the provenance information.
- Example: Consider the following ingested data in TTL representation.

A new property, `prov:wasInformedBy`, is added to the initial TTL data, establishing a link to the triple that contains the provenance information, as illustrated below. Please note that the provenance is attached to all the triples; for example, if you upload a TTL file that contains 30 triples, then all 30 triples will have provenance information attached.

```
bican:000015fd3d6a449b47e75651210a6cc74fca918255232c8af9e46d077034c84d a bican:GeneAnnotation ;
    rdfs:label "LOC106504536";
    schema1:identifier "106504536";
    prov:wasInformedBy <https://identifiers.org/brain-bican/vocab/provenance/e4db1e0b-98ff-497c-88b1-afb4a6d7ee14>; #this links to the new provenance information

#added new provenance information regarding the ingestion activity. Might have to update the <https://identifiers.org/brain-bican/vocab/ingestionActivity/e4db1e0b-98ff-497c-88b1-afb4a6d7ee14> pattern, to be discussed and done later
<https://identifiers.org/brain-bican/vocab/ingestionActivity/e4db1e0b-98ff-497c-88b1-afb4a6d7ee14> a prov:Activity,
    ...
```
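
A sketch of how such provenance could be attached programmatically is shown below, using `rdflib`. The IRI patterns mirror the example above, while the helper name and the recorded activity metadata are assumptions rather than the actual `shared.py` implementation.

```python
# Illustrative provenance attachment; IRI patterns follow the example above.
import uuid
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
BASE = "https://identifiers.org/brain-bican/vocab/"


def attach_provenance(data_ttl: str) -> Graph:
    graph = Graph()
    graph.parse(data=data_ttl, format="turtle")

    ingestion_id = str(uuid.uuid4())
    provenance_iri = URIRef(f"{BASE}provenance/{ingestion_id}")
    activity_iri = URIRef(f"{BASE}ingestionActivity/{ingestion_id}")

    # Link every subject in the ingested data to the provenance resource.
    for subject in set(graph.subjects()):
        graph.add((subject, PROV.wasInformedBy, provenance_iri))

    # Record the ingestion activity itself (metadata here is illustrative).
    graph.add((activity_iri, RDF.type, PROV.Activity))
    graph.add((activity_iri, PROV.startedAtTime,
               Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))
    return graph
```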