When working with large data sets, you can send events from the Splunk platform directly to the Kafka service on Splunk UBA. In Splunk UBA versions prior to 4.2, the indexers sent both results and events to the search head, which was solely responsible for processing the events and tracking the number of events processed. Sending data directly to Kafka offloads event processing from the search head to the indexers; the search head still tracks the total number of events processed.
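The division of labor can be pictured as follows. This is an illustrative sketch only; the `Indexer` and `SearchHead` classes and their methods are assumptions for the example, not part of the app, and a plain list stands in for the Kafka topic:

```python
class Indexer:
    """Forwards its events straight to Kafka; only a count goes upstream."""
    def __init__(self, events):
        self.events = events

    def forward(self, kafka_topic):
        kafka_topic.extend(self.events)  # stand-in for a real Kafka producer
        return len(self.events)          # the search head sees only the count

class SearchHead:
    """Tracks the total number of events processed, not the events themselves."""
    def __init__(self):
        self.total = 0

    def record(self, count):
        self.total += count

topic = []
head = SearchHead()
for indexer in (Indexer(["e1", "e2"]), Indexer(["e3", "e4", "e5"])):
    head.record(indexer.forward(topic))

print(head.total)   # 5 events processed
print(len(topic))   # all 5 events landed in Kafka, none on the search head
```

The point of the sketch is the data path: event payloads never pass through the search head, which now holds only an aggregate counter.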
This method of data ingestion improves the performance of a single data source connection while also making ingestion more reliable. This in turn improves the overall quality of the data in UBA, leading to higher-fidelity detections.
Kafka ingestion does not require UBA to run real-time indexed search queries on Splunk. Instead, it uses micro-batched queries: a series of short searches over fixed time windows rather than one long-running real-time search.
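Conceptually, micro-batching splits the ingestion time range into fixed windows and issues an ordinary (non-real-time) search per window. The sketch below illustrates that windowing idea only; the function name and the 5-minute batch size are assumptions for the example, not UBA internals:

```python
from datetime import datetime, timedelta

def micro_batch_windows(start, end, batch=timedelta(minutes=5)):
    """Split [start, end) into fixed-size windows, each of which would
    be covered by one short, ordinary search instead of a real-time one."""
    windows = []
    cursor = start
    while cursor < end:
        upper = min(cursor + batch, end)  # last window may be shorter
        windows.append((cursor, upper))
        cursor = upper
    return windows

# One hour of data becomes twelve 5-minute searches.
start = datetime(2023, 1, 1, 0, 0)
windows = micro_batch_windows(start, start + timedelta(hours=1))
print(len(windows))  # 12
```

Because each window is a bounded historical search, a failed batch can simply be retried for its window, which is part of what makes this approach more reliable than a single real-time query.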
This app is installed on the Splunk search head. If your environment includes multiple search heads, install the Splunk UBA Kafka Ingestion App on each search head. If you have a clustered Splunk environment, install this app on the search head cluster.
Add-ons installed on the search heads are automatically replicated to the indexers in the knowledge bundle when a search is dispatched from the search head, as is the case for Kafka ingestion. This means all field extractions are also pushed to the indexers, so there is no need to install the add-on on the indexers.
For installation information and release notes, see http://docs.splunk.com/Documentation/UBAKafkaApp