Adding Splunk as a New Ingestion Source #203
Replies: 5 comments 1 reply
-
Yes, developing a Splunk ingestion source could be very useful. I'm not very familiar with Splunk's APIs, so what approach do you intend to use? In our Elasticsearch implementation we use two functions:
If possible, a similar approach should also be used for Splunk.
-
Yes, the approach mirrors Elasticsearch's two-step process:
Example
Additionally, we have the option of using the Splunk SDK, which can be fast and easy to configure. Please let me know what you think. I also have some doubts regarding the schema and normalization, and would love to hear your thoughts on that. Thank you!
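To make the two-step export flow concrete: when the export endpoint is called with `output_mode=json`, the response arrives as a stream of one JSON object per line, each wrapping the event fields. A minimal sketch of parsing such a stream (the function name and wrapper shape shown here are assumptions to verify against a real response):

```python
import json
from typing import Iterable, Iterator

def parse_export_stream(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one event dict per non-empty line of an export response
    requested with output_mode=json; each line is assumed to be a JSON
    object whose "result" key holds the event fields."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        if "result" in obj:
            yield obj["result"]

# Usage with a two-line sample stream:
sample = [
    '{"preview": false, "result": {"user": "alice", "src_ip": "10.0.0.1"}}',
    '{"preview": false, "result": {"user": "bob", "src_ip": "10.0.0.2"}}',
]
events = list(parse_export_stream(sample))
```

Parsing line by line keeps memory flat even for large exports, which is the main advantage of the streaming endpoint over a persistent search job.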
-
I was doing some research on Splunk ingestion and worked on this as part of my learning process, so this isn't PR-ready code yet. I've implemented the test case and the Splunk ingestion logic here: commit link. I'm a bit confused: do we need to use ECS for Splunk ingestion, or is there another approach? If ECS is required, I tried to align Splunk's flat key-value schema with the dot-nested structure used in Elasticsearch, and came up with a workaround for it in the commit above. Could you review it and let me know if this approach makes sense, or if there's a better way to handle the normalization? Looking forward to your feedback! Thanks!
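For reference, aligning Splunk's flat key-value schema with ECS-style nesting can be done generically by splitting on dots. A minimal sketch (the sample field names are illustrative, not a confirmed ECS mapping):

```python
def flat_to_nested(event: dict) -> dict:
    """Expand flat dotted keys (e.g. "user.name") into the nested
    object structure used by ECS-style documents."""
    nested: dict = {}
    for key, value in event.items():
        parts = key.split(".")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

nested = flat_to_nested({"user.name": "alice", "source.ip": "10.0.0.1", "index": "main"})
# -> {"user": {"name": "alice"}, "source": {"ip": "10.0.0.1"}, "index": "main"}
```

Note this sketch assumes no key collisions (e.g. both `"user"` and `"user.name"` present in one event); a production version would need a policy for that case.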
-
Hi @drona-gyawali, you can open an issue using the
-
Discussion closed, because the related issue has been opened: #220
-
Hi @Lorygold, @ManofWax,
While reviewing ingestion.json, I noticed a reference to Splunk. Given its widespread use, I believe adding it as a new ingestion source in BuffaLogs would be valuable. This would allow users to pull logs directly from Splunk and integrate them into BuffaLogs for further analysis.
Approach: Using Splunk's REST API (/services/search/jobs/export)
Instead of relying on ad-hoc queries, I propose implementing an ingestion mechanism using Splunk's REST API, specifically the /services/search/jobs/export endpoint. This method lets us fetch logs efficiently without managing long-running search jobs.
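A minimal sketch of constructing that request using only the standard library; the host, credentials, and query are placeholders, while `search`, `output_mode`, `earliest_time`, and `latest_time` are standard parameters of the export endpoint:

```python
import base64
from urllib import parse, request

def build_export_request(base_url: str, username: str, password: str,
                         query: str, earliest: str, latest: str) -> request.Request:
    """Build a POST to /services/search/jobs/export, which streams results
    back instead of creating a persistent search job."""
    body = parse.urlencode({
        # Export queries must start with the "search" command.
        "search": query if query.lstrip().startswith("search") else f"search {query}",
        "output_mode": "json",
        "earliest_time": earliest,
        "latest_time": latest,
    }).encode()
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return request.Request(
        f"{base_url}/services/search/jobs/export",
        data=body,
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

# Placeholder host/credentials; port 8089 is Splunk's default management port.
req = build_export_request("https://splunk.example.com:8089", "admin", "changeme",
                           "index=main sourcetype=auth", "-15m", "now")
```

In practice a session token or stored credential would replace basic auth, but the request shape stays the same.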
Steps to Implement:
Follow the BaseIngestionConnector Structure – Since BuffaLogs has a standardized ingestion framework, the Splunk connector will extend BaseIngestionConnector, ensuring consistency.
API Integration – Implement a REST client that queries the /services/search/jobs/export endpoint and transforms the streamed results into BuffaLogs' schema.
Write Unit Tests – Cover API interactions and data transformation logic.
Develop Documentation – Provide configuration instructions for users.
This approach keeps the ingestion process in line with BuffaLogs' existing architecture. If you have any suggestions or considerations that I should keep in mind, I’d love to hear your thoughts!
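To illustrate the shape this could take: the sketch below follows the BaseIngestionConnector pattern described above, but since the base class's actual interface isn't shown in this thread, the method names and field mapping here are assumptions. The fetch callable is injected so the export call can be stubbed in unit tests:

```python
from typing import Callable, Iterable, Iterator

class SplunkIngestionConnector:
    """Sketch of a Splunk connector; in BuffaLogs this would extend
    BaseIngestionConnector (interface assumed, names hypothetical)."""

    def __init__(self, query: str, fetch: Callable[[str], Iterable[dict]]):
        self.query = query
        self.fetch = fetch  # injected: calls the export endpoint in production

    def get_users_logins(self) -> Iterator[dict]:
        # Pull raw events and normalize each one for downstream detection.
        for raw in self.fetch(self.query):
            yield self.normalize(raw)

    @staticmethod
    def normalize(raw: dict) -> dict:
        # Left side: hypothetical BuffaLogs login fields; right side:
        # common Splunk field names. The mapping is an assumption to confirm.
        return {
            "username": raw.get("user"),
            "ip": raw.get("src_ip"),
            "timestamp": raw.get("_time"),
        }

# Unit-test style usage with a stubbed fetcher (no Splunk instance needed):
stub = lambda q: [{"user": "alice", "src_ip": "10.0.0.1",
                   "_time": "2024-01-01T00:00:00Z"}]
connector = SplunkIngestionConnector("index=main action=login", stub)
logins = list(connector.get_users_logins())
```

Injecting the fetcher keeps the unit tests from step 3 free of network access: the same connector runs against the real export endpoint in production and a canned list of events in tests.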