---
title: Using PCF Log Search
owner: PCF Metrics
---
This topic describes how to get started with Pivotal Cloud Foundry (PCF) Log Search. This topic focuses on Kibana, which is the front-end component of PCF Log Search.
The Kibana web application lets you search and filter system logs, design visualizations of saved searches, and create dashboards.
##<a id="login"></a> Log in to Kibana
<p class="note"><strong>Note</strong>: Log Search supports only one set of access credentials, which PCF admin users can view in Ops Manager. Creating additional users is not supported.</p>
1. From the **Installation Dashboard** in Ops Manager, click on the **Log Search** tile.
![Tile](tile.png)
1. Select the **Credentials** tab.
1. Click **Link to Credentials** to view and record the **Kibana Credentials**.
![Credentials](creds.png)
1. Navigate to `https://logsearch.YOUR-SYSTEM-DOMAIN.example.com` and log in to Kibana using the credentials that you recorded in the previous step.
1. If prompted to configure an index pattern, enter `logstash-*` for the **Index name or pattern** and `@timestamp` for the **Time-field name**.
![Kibana Index pattern config screenshot](images/index-pattern-config.png)
##<a id="get-started"></a> Get Started with Kibana
The PCF Log Search tile provides tags to standardize the data it receives from multiple tiles. The following section explains how PCF Log Search tags work; understanding them helps you use Kibana effectively.
###<a id="tags"></a> Understand Log Search Tags
PCF Log Search receives data in JSON format from other tiles. PCF Log Search organizes this data into searchable fields based on the JSON keys, and also aggregates fields under custom tags. Log Search attaches these tags to data when it recognizes that different tile logs use different keys to refer to the same type of data. For instance, one tile may specify the timestamp under a `Timestamp` field, while another specifies this value under a `T` field. Log Search recognizes both of these values as a timestamp and attaches the `@timestamp` tag. You can use the common **@timestamp** tag in Kibana to search for timestamp data across all tiles.
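As an illustration only, this kind of normalization can be sketched as a Logstash filter. The field names `Timestamp` and `T` come from the example above; the actual filter configuration shipped with the tile may differ:

```
filter {
  # Hypothetical sketch: map tile-specific timestamp keys
  # onto the common @timestamp field. The date filter writes
  # its parsed value to @timestamp by default.
  if [Timestamp] {
    date { match => ["Timestamp", "ISO8601"] }
  } else if [T] {
    date { match => ["T", "ISO8601"] }
  }
}
```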
Log Search attaches tags to other kinds of data as well. See the [Log Search Tags Dictionary](./search-guide.html) topic for the full list of tags generated by Log Search.
###<a id="searches"></a> Filter, Search, and Visualize
The following list describes what you can do with the Kibana component of PCF Log Search:
* [Filter log data by field](https://www.elastic.co/guide/en/kibana/current/discover.html#field-filter): You can filter log data based on tags generated by Log Search or any keys within the JSON logs themselves. The **Available Fields** list on the left side of the **Discover** page lists the Log Search tags first, followed by the parsed log keys.
* Change the [time scale](https://www.elastic.co/guide/en/kibana/current/discover.html#set-time-filter): By default, the time scale is set to the last **15 minutes**.
* Change the [refresh interval](https://www.elastic.co/guide/en/kibana/current/discover.html#auto-refresh): By default, auto-refresh is set to **Off**.
* [Search log data](https://www.elastic.co/guide/en/kibana/current/discover.html#search): You can further refine your results from any filter or time span using the search bar at the top of the **Discover** page. You can also search against any field by entering your query in the following format: `FIELD:VALUE`. For example: `@source.ip:192.0.2.21`.
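    For example, a few query-string searches that follow the `FIELD:VALUE` pattern. The field names are Log Search tags described in this topic; the IP address is a placeholder:

    ```
    @source.ip:192.0.2.21
    @source.program:uaa
    @timestamp:[now-1h TO now]
    ```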
* [Design data visualizations](https://www.elastic.co/guide/en/kibana/current/visualize.html): You can create visualizations such as **Data Table**, **Line Chart**, and **Vertical Bar Chart**. You can also [customize your visualizations](https://www.elastic.co/guide/en/kibana/current/visualize.html#aggregation-builder). For example, you can tailor the x and y axis of a Vertical Bar Chart using bucket aggregations and metric aggregations.
* [Create a dashboard](https://www.elastic.co/guide/en/kibana/current/dashboard.html): You can create a dashboard to display multiple visualizations and saved searches. You can also apply filters to your dashboard, which affects all the displayed panes. For instance, using the time filter applies your time changes to every displayed visualization and saved search.
For more information, view the [Kibana documentation](https://www.elastic.co/guide/en/kibana/current/index.html).
##<a id="splunk"></a> Forward Data to Splunk
You can configure PCF Log Search to forward some or all of the data it receives, in JSON format, to an external service such as Splunk.
### Step 1: Configure Splunk
<p class="note"><strong>Note</strong>: Pivotal recommends using UDP. Network communication problems with the Splunk network input can prevent Log Search from indexing data.</p>
Follow the instructions for configuring a network input in the [Get data from TCP and UDP ports](http://docs.splunk.com/Documentation/Splunk/latest/Data/Monitornetworkports) topic of the Splunk documentation. When prompted, choose `_json` as the **Source Type**.
### Step 2: Configure Log Search
1. From Ops Manager, click the **Log Search** tile, and then select the **Experimental** section.
1. For **Custom Logstash Outputs**, enter the Splunk UDP network input that you configured in the previous step. See the following example configuration:
```
if [@source][program] == "uaa" {
udp {
host => "SPLUNK-IP-OR-DNS"
port => "SPLUNK-UDP-PORT-NUMBER"
}
}
```
You can add a conditional statement to filter the data sent to Splunk, such as `if [@source][program] == "uaa" {` in the example above.
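To forward all data rather than a subset, you can omit the conditional entirely. This is a sketch; substitute the host and port of your own Splunk network input:

```
udp {
  # Forward every event Log Search receives to Splunk over UDP.
  host => "SPLUNK-IP-OR-DNS"
  port => "SPLUNK-UDP-PORT-NUMBER"
}
```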
### Step 3: Configure Firewall Rules
To send data from Log Search to Splunk using UDP on the port you specified in Step 1, configure your firewall to allow the following:
* Outgoing traffic from the Log Search Log Parser VMs on the configured port. You can view the IP addresses for the Log Parser VMs in Ops Manager under the **Status** tab of the Log Search tile.
* Incoming traffic to the Splunk installation on the configured port.
### Step 4: Verify Your Forwarding Configuration
Check that your data appears in both Log Search and Splunk:
1. Using the example configuration from Step 2, search for `@source.program:uaa` in Kibana.
![UAA log in Log Search > Kibana screenshot](images/UAA-log-data-in-Kibana.png)
1. Using the example configuration from Step 2, search for `sourcetype="_json"` in Splunk.
![UAA log in Splunk screenshot](images/UAA-log-data-in-Splunk.png)