
Frequently Asked Questions (FAQ)

Common questions about Nebula Graph and more.

Troubleshooting

The Troubleshooting section lists common operation errors in Nebula Graph.

graphd Config File Doesn't Register with the Meta Server

When starting Nebula Graph services with the nebula.service script, the graphd, metad and storaged processes start so quickly that the graphd config file may not be registered with the meta server. The same problem may also occur when restarting.

If you are using the beta version, start the metad service first, then storaged and graphd, to avoid this problem. We will resolve this problem in the next release.

Start metad first:

nebula> scripts/nebula.service start metad
[INFO] Starting nebula-metad...
[INFO] Done

Then start storaged and graphd:

nebula> scripts/nebula.service start storaged
[INFO] Starting nebula-storaged...
[INFO] Done

nebula> scripts/nebula.service start graphd
[INFO] Starting nebula-graphd...
[INFO] Done
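
After all three services are started, you can verify that they are running with the status subcommand of the same script:

nebula> scripts/nebula.service status all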

Errors Thrown When Inserting Data After a Tag or Edge Is Created

This is likely caused by the load_data_interval_secs setting, which controls how often the services fetch schema data from the meta server; a newly created tag or edge is not usable until the next fetch. Take the following steps to resolve it:

If the config has been registered with the meta server, check the load_data_interval_secs value in the console with the following commands.

nebula> GET CONFIGS storage:load_data_interval_secs
nebula> GET CONFIGS graph:load_data_interval_secs

If the value is large, change it to 1 second with the following commands.

nebula> UPDATE CONFIGS storage:load_data_interval_secs=1
nebula> UPDATE CONFIGS graph:load_data_interval_secs=1

Note that the changes take effect in the next fetch period.
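
As a concrete illustration (the tag name and property below are made up for this example), a schema created just now only becomes available for inserts after the next fetch:

nebula> CREATE TAG person(name string)

Wait at least load_data_interval_secs seconds for the new schema to be fetched, then:

nebula> INSERT VERTEX person(name) VALUES 100:("Tom")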

Errors Thrown When Executing Commands in Docker

This is likely caused by an inconsistency between the container IP and the default listening address (172.17.0.2), so the latter needs to be changed.

  1. First, run ifconfig in the container to check its IP address; here we assume it is 172.17.0.3.
  2. In the directory /usr/local/nebula/etc, find all configured occurrences of the old IP address with the command grep "172.17.0.2" . -r.
  3. Change all the IPs found in step 2 to your container IP 172.17.0.3.
  4. Restart all the services (see the sketch after this list).
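
A minimal shell sketch of the steps above, assuming the container IP is 172.17.0.3 and a default installation under /usr/local/nebula (the sed pattern and the restart subcommand are assumptions; adjust them to your setup, or stop and start the services as shown earlier):

$ ifconfig                                              # step 1: find the container IP
$ cd /usr/local/nebula/etc
$ grep "172.17.0.2" . -r                                # step 2: locate the configured addresses
$ sed -i 's/172\.17\.0\.2/172.17.0.3/g' *.conf          # step 3: replace them with the container IP
$ /usr/local/nebula/scripts/nebula.service restart all  # step 4: restart all services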

Check Logs

Logs are stored under /usr/local/nebula/logs/ by default.

For log details, please refer to logs.
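
For example, to follow the graph service log in real time (a sketch assuming the default glog-style file names; check the actual names in the log directory first):

$ ls /usr/local/nebula/logs/
$ tail -f /usr/local/nebula/logs/nebula-graphd.INFO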

Check Configs

Configuration files are stored under /usr/local/nebula/etc/ by default.
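
For example, with a default installation the per-service config files can be listed as follows (the file names nebula-graphd.conf, nebula-metad.conf and nebula-storaged.conf are typical but may differ by version):

$ ls /usr/local/nebula/etc/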

Check Runtime Configs

In Nebula console, run

nebula> SHOW CONFIGS;

For configuration details, please see here.

Connection Refused

E1121 04:49:34.563858   256 GraphClient.cpp:54] Thrift rpc call failed: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused

Check the service status with:

$ /usr/local/nebula/scripts/nebula.service status all
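
If a service is reported as not running, you can also check whether its port is listening (a sketch assuming the default graphd port 3699; substitute the port of the service you are diagnosing):

$ netstat -plntu | grep 3699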

Process Crash

  1. Check disk space with df -h.
  2. Check memory usage with free -h.

Could not create logging file:... Too many open files

  1. Check your disk space with df -h.
  2. Check the log directory /usr/local/nebula/logs/.
  3. Reset your max open files with ulimit -n 65536 (see the example after this list).
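
For example (the limits.conf step is an assumption about a typical Linux setup, not specific to Nebula Graph):

$ ulimit -n            # check the current limit
$ ulimit -n 65536      # raise it for the current shell only

To make the change permanent, add a corresponding nofile entry in /etc/security/limits.conf and log in again.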

Storaged Service Cannot Start Normally

When the same host is used for both a single-host test and a cluster test, the storaged service cannot start normally, and the listening port of the storaged service is shown in red in the console.

Check the logs of the storaged service. If you find the "wrong cluster" error message, the possible cause is that the cluster IDs generated by Nebula Graph during the single-host test and the cluster test are inconsistent. You need to delete the cluster.id file and the data directory, then restart the service (see the sketch below).
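
A minimal sketch of the cleanup, assuming a default installation with cluster.id and the data directory under /usr/local/nebula (verify the paths in your deployment before deleting anything, as this removes stored data):

$ /usr/local/nebula/scripts/nebula.service stop all
$ rm -f /usr/local/nebula/cluster.id
$ rm -rf /usr/local/nebula/data
$ /usr/local/nebula/scripts/nebula.service start all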

General Information

The General Information section lists conceptual questions about Nebula Graph.

Explanation of the Time Returned in Queries

nebula> GO FROM 101 OVER follow
===============
| follow._dst |
===============
| 100         |
---------------
| 102         |
---------------
| 125         |
---------------
Got 3 rows (Time spent: 7431/10406 us)

Taking the above query as an example, the first number (7431 us) is the time the query engine spends from receiving the request from the console, passing it to the storage engine, and performing the full series of calculations along the whole path; the second number (10406 us) is the total time from sending the request in the console to receiving the response and printing the results. The difference between the two is mostly network and client-side overhead.
