FAQ
Common questions about Nebula Graph and more.
Frequently Asked Questions (FAQ)
Troubleshooting
- graphd Config File Doesn't Register to Meta Server
- Errors Thrown When Inserting Data After Tag or Edge is Created
- Errors Thrown When Executing Command in Docker
- Check Logs
- Check Configs
- Check Runtime Configs
- Connection Refused
- Process Crash
- Could not create logging file:... Too many open files
- Storaged Service Cannot Start Normally
- General Information
Troubleshooting
The Troubleshooting section lists common operation errors in Nebula Graph.
graphd Config File Doesn't Register to Meta Server
When starting Nebula Graph services with the nebula.service script, the graphd, metad, and storaged processes may start so quickly that the graphd config file fails to register with the meta server. The same problem may also occur when restarting.
If you are using the beta version, start the metad service first, then storaged and graphd, to avoid this problem. We will resolve this in a future release.
Start metad first:
nebula> scripts/nebula.service start metad
[INFO] Starting nebula-metad...
[INFO] Done
Then start storaged and graphd:
nebula> scripts/nebula.service start storaged
[INFO] Starting nebula-storaged...
[INFO] Done
nebula> scripts/nebula.service start graphd
[INFO] Starting nebula-graphd...
[INFO] Done
Errors Thrown When Inserting Data After Tag or Edge is Created
This is likely caused by the load_data_interval_secs value, which controls how often the graph and storage services fetch schema data from the meta server. Take the following steps to resolve it:
If meta has registered, check the load_data_interval_secs value in the console with the following commands.
nebula> GET CONFIGS storage:load_data_interval_secs
nebula> GET CONFIGS graph:load_data_interval_secs
If the value is large, change it to 1s with the following commands.
nebula> UPDATE CONFIGS storage:load_data_interval_secs=1
nebula> UPDATE CONFIGS graph:load_data_interval_secs=1
Note that the changes take effect in the next fetch period.
Errors Thrown When Executing Command in Docker
This is likely caused by an inconsistency between the docker IP and the default listening address (172.17.0.2), so the latter needs to be changed.
- First run ifconfig in the container to check your container IP. Here we assume your IP is 172.17.0.3.
- In the directory /usr/local/nebula/etc, locate all the config files containing the old IP address with the command grep "172.17.0.2" . -r.
- Change all the IPs you find in step 2 to your container IP 172.17.0.3, as in the sketch after this list.
- Restart all the services.
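A minimal sketch of steps 2 and 3, assuming the container IPs above and that the config files use the .conf suffix; back up the files before editing them in place.
$ cd /usr/local/nebula/etc
# Find every config file that still points at the default address
$ grep "172.17.0.2" . -r
# Replace it with the container IP found via ifconfig (172.17.0.3 here)
$ sed -i 's/172\.17\.0\.2/172.17.0.3/g' *.conf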
Check Logs
Logs are stored under /usr/local/nebula/logs/ by default.
For log details, please refer to logs.
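For example, to see the most recent errors from the graph service, you can tail its error log (a sketch; the glog-style file name is an assumption and may vary by release).
# Show the latest error entries from the graph service log
$ tail -n 50 /usr/local/nebula/logs/nebula-graphd.ERROR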
Check Configs
Configuration files are stored under /usr/local/nebula/etc/ by default.
Check Runtime Configs
In the Nebula console, run:
nebula> SHOW CONFIGS;
For configuration details, please see here.
Connection Refused
E1121 04:49:34.563858 256 GraphClient.cpp:54] Thrift rpc call failed: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused
Check the service status with:
$ /usr/local/nebula/scripts/nebula.service status all
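If the services report as running, you can also verify that graphd is actually listening for client connections (a sketch using ss; netstat -tlnp works as well).
# Confirm the graph service has an open listening socket
$ ss -tlnp | grep nebula-graphd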
Process Crash
- Check the disk space with df -h.
- Check the memory usage with free -h.
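If memory pressure is suspected, the kernel log shows whether the OOM killer terminated the process (a sketch; usually requires root).
# Look for out-of-memory kills in the kernel log
$ dmesg | grep -i "out of memory"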
Could not create logging file:... Too many open files
- Check your disk space with df -h.
- Check the log directory /usr/local/nebula/logs/.
- Reset your max open files with ulimit -n 65536.
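ulimit -n only affects the current shell session. A sketch of making the limit persistent via /etc/security/limits.conf, assuming a setup that honors this file; "nebula" is a placeholder for the account that runs the Nebula services.
# /etc/security/limits.conf (example entries; "nebula" is a placeholder user)
nebula  soft  nofile  65536
nebula  hard  nofile  65536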
Storaged Service Cannot Start Normally
When the same host is used for both a single-host test and a cluster test, the storaged service cannot start normally, and the listening port of the storaged service is shown in red in the console.
Check the logs of the storaged service. If you find the "wrong cluster" error message, the likely cause is that the cluster ids generated by Nebula Graph for the single-host test and the cluster test are inconsistent. You need to delete the cluster.id file and the data directory, then restart the service.
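A minimal sketch of the cleanup, assuming the default installation path; the exact locations of cluster.id and the data directory depend on your storaged config, so confirm them before deleting anything.
$ /usr/local/nebula/scripts/nebula.service stop all
# Remove the stale cluster id and old data (paths assumed from a default install)
$ rm -f /usr/local/nebula/cluster.id
$ rm -rf /usr/local/nebula/data
$ /usr/local/nebula/scripts/nebula.service start all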
General Information
The General Information section lists conceptual questions about Nebula Graph.
What do the two numbers in Time spent mean? Take the following query as an example:
nebula> GO FROM 101 OVER follow
===============
| follow._dst |
===============
| 100 |
---------------
| 102 |
---------------
| 125 |
---------------
Got 3 rows (Time spent: 7431/10406 us)
Taking the above query as an example, the first number is the time spent on the server side, from the query engine receiving the statement from the console and passing it to the storage engine, through the whole series of calculations on the link; the second number is the total time from sending the request to receiving the response and printing the result in the console.
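In this example, the server-side work took about 7.4 ms (7431 us) and the whole round trip about 10.4 ms (10406 us), so roughly 3 ms went to network transfer and console output.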