---
published: true
title: FAQ
layout: html_page
---
<div class="druid-header">
<div class="container">
<h1 class='index' id='frequently_asked_questions'>Frequently Asked Questions</h1>
<h4>Don't see your question here? <a href='/community.html#converse'>Ask us</a>.</h4>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-md-10 col-md-offset-1">
<h2 id='memory'>Is Druid in-memory?</h2>
<p>Given that it is impossible for a modern processor to process data without first loading it into memory, yes, Druid is in-memory. The earliest iterations of Druid could not page data in from or out to disk, so we often called it an “in-memory” system. However, we quickly realized that RAM has not become cheap enough to store all data in memory and still sell a product at a price point customers are willing to pay. Since the summer of 2011, we have used memory-mapping to page data between disk and memory, which extends the amount of data a single node can serve up to the size of its disks.</p>
<p>That said, as we made that shift, we did not want to give up the ability to configure the system so that everything is essentially in memory. To this end, individual Historical nodes can be configured with the maximum amount of data they should be given. Combine that with the Coordinator’s ability to assign data to different “tiers” based on differing query requirements, and Druid becomes a system that can be configured across the whole spectrum of performance requirements: all data held in memory, data heavily over-committed relative to the available memory, or something in between, such as the most recent month in memory while everything else is over-committed.</p>
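<p>As a rough illustration, a Historical node advertises its tier and capacity through its runtime properties, and the Coordinator’s load rules then decide which intervals of data land on which tier. The sketch below uses hypothetical paths and sizes for a “hot” tier node sized so that its segments fit in memory.</p>
<pre><code># Hypothetical Historical node in a "hot" tier, sized so its segments stay memory-mapped in RAM
druid.server.tier=hot
druid.server.maxSize=130000000000
druid.segmentCache.locations=[{"path": "/mnt/druid/segment-cache", "maxSize": 130000000000}]
</code></pre>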
<h2 id='external'>What external dependencies does Druid have?</h2>
<p>Druid has three external dependencies that must be running in order for the Druid cluster to operate (a minimal configuration sketch follows the list):</p>
<div class="text-indent">
<ol>
<li>Deep Storage</li>
<li>Metadata Storage (MySQL or Postgres)</li>
<li>ZooKeeper</li>
</ol>
</div>
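<p>As an illustration, all three dependencies are referenced from the common runtime properties. The host names, bucket, and credentials below are hypothetical, and the <code>druid.storage.type</code> and <code>druid.metadata.storage.type</code> you choose depend on your deployment.</p>
<pre><code># Deep storage (S3 shown here; local disk and HDFS are other common choices)
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments

# Metadata storage (MySQL shown here; PostgreSQL is also supported)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://metadata.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=changeme

# ZooKeeper
druid.zk.service.host=zk1.example.com,zk2.example.com,zk3.example.com
</code></pre>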
<h2 id='what_is_deep_storage'>What is “deep storage”?</h2>
<p>Simply put, deep storage is the storage infrastructure that Druid depends on for data availability. If this infrastructure goes down, Druid cannot load new data into the system, nor can it recover from failures in the cluster. As long as this infrastructure is operational, Druid cannot permanently lose data, even if Historical nodes fail.</p>
<p>A more technical description of deep storage and the available options can be found in our docs: <a href='/docs/latest/dependencies/deep-storage.html'>Deep Storage</a></p>
<h2 id='non-structured'>Can Druid accept non-structured data?</h2>
<p>Druid requires some structure in the data it ingests. In general, each record should consist of a timestamp, dimensions, and metrics. These are discussed in a bit more detail in our original blog post: <a href='blog/2011/04/30/introducing-druid.html'>Introducing Druid: Real-Time Analytics at a Billion Rows Per Second</a></p>
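<p>A hypothetical event illustrating that shape: one timestamp, string-valued dimensions (<code>page</code>, <code>language</code>), and numeric metrics (<code>added</code>, <code>deleted</code>). The field names are made up for this example; any flat record with this kind of structure can be ingested.</p>
<pre><code>{
  "timestamp": "2013-08-31T12:41:27Z",
  "page": "Druid_(open-source_data_store)",
  "language": "en",
  "added": 57,
  "deleted": 13
}
</code></pre>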
<h2 id='time-series'>Does Druid only accept data with a timestamp?</h2>
<p>Yes. Data that is ingested into Druid <em>must</em> have a timestamp.</p>
<h2 id='zookeeper'>Isn’t ZooKeeper a single point of failure?</h2>
<p>No, ZooKeeper can be deployed to withstand a configurable number of individual node failures. Also, if ZooKeeper goes down, the cluster will continue to operate.</p>
<p>Losing ZooKeeper does, however, mean that the cluster cannot add new data segments, nor can it effectively react to the loss of a node. So, while it is safe to lose access to ZooKeeper, it is definitely a degraded state.</p>
<h2 id='master'>Isn’t the Coordinator a single point of failure?</h2>
<p>No, the “Coordinator” node is merely for coordination. It is <strong><em>not</em></strong> involved in the query path. Losing all coordinators means that new segments will not be loaded by the Historical nodes, i.e. no new data will enter the cluster, and segment balancing decisions will not be made, so the cluster cannot effectively scale up and down. Just like with ZooKeeper, losing the coordinators puts the cluster in a degraded state, but it keeps operating just fine.</p>
<p>Also, multiple coordinator nodes can be deployed; the extras act as “hot spares” in case the active coordinator is lost.</p>
<h2 id='mas-queries'>Do all queries go through the coordinator?</h2>
<p>Queries <strong><em>never</em></strong> touch the coordinator. Ever.</p>
<h2 id='mas-dies'>Can I query Druid even after the coordinator dies?</h2>
<p>Yes, the coordinator has no impact on queries.</p>
<h2 id='update'>How do I update data?</h2>
<p>Updating data means re-generating the segments for the time period you need to update. This is done by reindexing the data for that period. Once the indexing job finishes, the Druid cluster swaps the new segments in and stops serving the old ones.</p>
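<p>Concretely, the interval listed in the reindexing task’s <code>granularitySpec</code> determines which segments are rebuilt and swapped in. The skeleton below is only a sketch: the datasource name and interval are hypothetical, and the <code>parser</code>, <code>ioConfig</code>, and <code>tuningConfig</code> sections, which vary by ingestion method and Druid version, are omitted. See the ingestion docs for a complete spec.</p>
<pre><code>{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "intervals": ["2015-09-12/2015-09-13"]
      }
    }
  }
}
</code></pre>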
<h2 id='ind-serv'>What is the Indexing Service, do I need that?</h2>
<p>The Indexing Service is a job scheduling service whose tasks operate on segments. Long-term, we intend to front all indexing activities with this service.</p>
<p>It is not a requirement yet, but will become one as time goes on.</p>
<p>More information can be found in the docs:</p>
<p><a href="/docs/latest/design/indexing-service.html">Indexing Service</a></p>
<h2 id='elasticsearch'>How does Druid compare to Elasticsearch?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-elasticsearch.html'>Druid vs Elasticsearch</a></p>
<h2 id='key-value'>How does Druid compare to Key/Value Stores (HBase/Cassandra)?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-key-value.html'>Druid vs Key/Value Stores</a></p>
<h2 id='kudu'>How does Druid compare to Kudu?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-kudu.html'>Druid vs Kudu</a></p>
<h2 id='redshift'>How does Druid compare to Redshift?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-redshift.html'>Druid vs Redshift</a></p>
<h2 id='spark'>How does Druid compare to Spark?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-spark.html'>Druid vs Spark</a></p>
<h2 id='sql-on-hadoop'>How does Druid compare to SQL-on-Hadoop (Impala/Drill/Spark SQL/Presto)?</h2>
<p><a href='/docs/latest/comparisons/druid-vs-sql-on-hadoop.html'>Druid vs SQL-on-Hadoop</a></p>
</div>
</div>
</div>