Reporting

How to export very large datasets from Splunk?

_gkollias
SplunkTrust

I’m trying to find a way to export a very large data set without bringing down any search heads (which I’ve already learned the hard way). Even 10 days of data produces around 10M rows, and Splunk isn’t able to handle an export of that size.

When I use the outputcsv command, I’m finding that the large output file gets replicated in that search head’s bundles to its search peers. This results in end users not being able to pull up any data when running searches on that search head. I thought I could simply export the file and move it to /tmp/ before anything squirrely occurred.

Do you have any ideas on how else I can export large data sets from Splunk? Here is the search:

(index=1) OR (index=2) OR (index=3)
| table various field names

Thanks in Advance

1 Solution

sloshburch
Splunk Employee

It sounds like you're trying to use outputcsv. If this is a one-time thing, then theoretically you can simply run the search that produces the table, then go to the dispatch directory to find the results and download them. Be sure to do it quickly, since the job may only persist for ten minutes.
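To make the dispatch step concrete, here is a minimal sketch of where the results file typically lives on a default Linux install. The SPLUNK_HOME fallback and the SID below are placeholders, not real values; the actual SID comes from the Job Inspector for your search:

```python
import os

# Locate a finished job's results file in the dispatch directory.
# The SPLUNK_HOME default and the SID are hypothetical placeholders.
splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
sid = "1710000000.12345"  # search job ID, visible in the Job Inspector
src = os.path.join(splunk_home, "var", "run", "splunk", "dispatch",
                   sid, "results.csv.gz")
print(src)

# Copy it out of dispatch before the job's TTL expires (often ~10 minutes):
# import shutil; shutil.copy(src, "/tmp/")
```

Adjust the SID and paths for your deployment; the point is only that the finished job's results.csv.gz sits under the dispatch directory and must be copied out before the job expires.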

Additionally, you can improve the performance of the search by running stats instead of table:

(index=1) OR (index=2) OR (index=3)
| stats count by various field names
| fields - count

I believe table is a streaming command and therefore returns all raw results to the search head for processing. That's a HUGE memory footprint. stats, by contrast, tells the indexers to pre-aggregate and send only the fields of concern back to the search head. You can run sistats to get an idea of what is returned; you'll notice it's much less data than the full payload of every event.

I believe both the stats usage and fetching from the dispatch should address your issue.
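For very large one-off exports there is also Splunk's streaming REST endpoint, /services/search/jobs/export, which streams results back as they are produced instead of materializing a large job artifact. A minimal sketch in Python, assuming a reachable search head on the management port; the host, port, and token below are placeholders:

```python
import urllib.parse
import urllib.request

def build_export_request(base_url, query, token, output_mode="csv"):
    """Build a request for Splunk's streaming export endpoint,
    /services/search/jobs/export.  Results stream back as they are
    produced, so no large results file accumulates on the search head."""
    if not query.lstrip().startswith("search"):
        query = "search " + query  # the endpoint expects a leading 'search'
    data = urllib.parse.urlencode({
        "search": query,
        "output_mode": output_mode,
    }).encode()
    return urllib.request.Request(
        base_url + "/services/search/jobs/export",
        data=data,
        headers={"Authorization": "Bearer " + token},
    )

# Usage (placeholder host and token -- adjust for your deployment):
req = build_export_request(
    "https://splunk.example.com:8089",
    "(index=1) OR (index=2) OR (index=3) | stats count by various_fields"
    " | fields - count",
    "YOUR_TOKEN",
)
# Stream the response to disk in chunks rather than loading it in memory:
# with urllib.request.urlopen(req) as resp, open("/tmp/export.csv", "wb") as out:
#     while chunk := resp.read(1 << 20):
#         out.write(chunk)
```

Streaming in chunks keeps memory flat on both the client and the search head, which is the main failure mode the original question describes.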

_gkollias
SplunkTrust

This technique worked very well. I am also able to do a normal export without having to fetch data manually from /dispatch. Thank you, Burch!
