Getting Data In

Splunk search of indexed CSV file does not pull out all the fields

TomJordan
Explorer

Hi,
Splunk newbie here... I am trying to get a csv file of performance metrics into Splunk. Briefly, there are about 1700 fields in the data, with the first line of the file being the header containing the field names. There are about 5500 rows, each row representing a log event every fifteen seconds.
When I first tried to load the data into Splunk using the autodetected sourcetype of 'csv', the data got truncated, so I set TRUNCATE to 1 MB (1048576 bytes) on the 'adjust timestamp and event break settings' > 'Advanced mode' page of the UI. This resulted in the following 'Currently Applied Settings':

NO_BINARY_CHECK=1

TRUNCATE=1048576

CHECK_FOR_HEADER=true

KV_MODE=none

SHOULD_LINEMERGE=false

pulldown-type=true

I then saved this as a new sourcetype, which updated props.conf. This seemed to fix the truncation, but now there is a new problem: when I run a simple search on this sourcetype, it does not pull out all ~1700 fields, only about 500 of them (after nicely renaming them).
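For reference, the saved sourcetype stanza in props.conf would look roughly like this (the stanza name 'perf_csv' is just a placeholder for whatever the new sourcetype was called):

```ini
# $SPLUNK_HOME/etc/system/local/props.conf (or an app's local directory)
[perf_csv]
NO_BINARY_CHECK = 1
TRUNCATE = 1048576
CHECK_FOR_HEADER = true
KV_MODE = none
SHOULD_LINEMERGE = false
pulldown_type = true
```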

I have had a look through search.log and cannot see anything obviously wrong. It looks like there may be some lexicographic ordering involved in the 'choice' of which fields have been omitted.
I would very much appreciate any pointers as to what is going on here and how I might be able to fix this.
(I have managed to load similar csv files with far fewer fields, and see all the fields without any problem, using the autodetected sourcetype.)

Thanks.


lguinn2
Legend

The maximum number of fields that Splunk will extract, by default, is 512. This value is specified in limits.conf.

So, create a file named limits.conf in $SPLUNK_HOME/etc/system/local and put the following in it:

[kv]
# When non-zero, the point at which kv should stop creating new fields.
# Defaults to 512.
maxcols = 1800

HTH! And good thinking on the TRUNCATE setting, too. That was the first thing I thought of.
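As a quick sanity check after restarting Splunk (limits.conf changes generally require a restart), a search like the following should report roughly the expected number of extracted fields; 'perf_csv' is a placeholder for the actual sourcetype name:

```
sourcetype=perf_csv | fieldsummary | stats count AS extracted_fields
```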



TomJordan
Explorer

Thanks, that's fixed it. Much appreciated!

It took me a while to get the TRUNCATE setting to work, actually. The preview's 'adjust timestamp and event break settings' > 'Advanced mode' page seemed to accept 'TRUNCATE = 1048576' (the little warnings disappeared), but when I ran the search, the rows were still truncated. Only when I entered it without spaces around the equals sign, i.e. 'TRUNCATE=1048576', did it work. At least I think that is what happened!
