Getting Data In

How to index and use a huge volume of unstructured data - Splunk HWF and SH cluster?

jincy_18
Path Finder

Hi All,

We are working in a clustered environment where Splunk fetches logs from various servers. On the source servers we have set up Splunk heavy forwarders, which forward the data to a load-balanced HWF layer and then on to the indexers.
The issue we face is that our logs are in a nested JSON / unstructured format and the volume is huge. This makes searches very slow, and some of them crash.
We have tried index-time extractions, but those are also slow because of the volume.
Could you please suggest a workaround for this?
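
For reference, the index-time extraction we tried follows the usual props.conf pattern on the first forwarder that parses the data, roughly like this (the sourcetype name is just a placeholder):

[our_json_sourcetype]
# parse the JSON fields at index time
INDEXED_EXTRACTIONS = json
# events arrive as single-line JSON objects
SHOULD_LINEMERGE = false
# do not truncate long nested objects
TRUNCATE = 0

(For comparison, the search-time equivalent would be KV_MODE = json on the search heads instead.)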

TIA


vliggio
Communicator

What do you mean by "huge volumes"? How large are your JSON objects (i.e., how many characters per object and how many levels deep are they nested), and are you sure they are fully compliant JSON objects?

Why do you have heavy forwarders on your source servers, another load-balanced HWF layer, and then indexers? What do you mean by "index time extractions but that is also slower due to the volume"? Are you saying the searches are slower?
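
If you want a quick way to answer the size question, a search along these lines will show the raw event lengths (index and sourcetype names here are placeholders):

index=your_index sourcetype=your_json_sourcetype
| eval raw_len=len(_raw)
| stats count avg(raw_len) AS avg_chars max(raw_len) AS max_chars

If max_chars is huge compared to avg_chars, a handful of very large nested objects may be what is hurting your searches.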
