Getting Data In

How do I configure data inputs for .csv files with dynamic field headers, where each line is a new event?

fox
Path Finder

Running 4.2.1, we are monitoring many CSV files that differ in their listed fields. We have Splunk configured to dynamically read the header row for field names (props.conf: CHECK_FOR_HEADER=TRUE), and this works brilliantly! However, the events are not being split correctly: Splunk is indexing 256 rows into a single event, even though the .csv file has a clear newline separation between events...
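For reference, the setup described above would correspond to a props.conf stanza roughly like the following (the sourcetype name here is a placeholder; use whatever sourcetype your monitor input assigns):

```ini
# props.conf -- hypothetical sourcetype name, adjust to your inputs.conf
[my_csv_sourcetype]
# Read the first row of each file as field names
CHECK_FOR_HEADER = TRUE
```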

Has anyone else done this successfully?

Any ideas?


gkanapathy
Splunk Employee

Most likely, Splunk is not detecting a timestamp in your rows. By default, Splunk merges lines together (SHOULD_LINEMERGE = true) and splits them whenever it detects a date (BREAK_ONLY_BEFORE_DATE = true). The easiest and best way to break on newlines is simply to set SHOULD_LINEMERGE = false. If there are dates in your data and Splunk isn't finding them, you should also set TIME_FORMAT and TIME_PREFIX, and possibly MAX_TIMESTAMP_LOOKAHEAD.
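As a hedged sketch, the props.conf above might look like this (the sourcetype name and timestamp format are illustrative placeholders; match them to your actual data):

```ini
# props.conf -- sourcetype name and time settings are examples only
[my_csv_sourcetype]
CHECK_FOR_HEADER = TRUE
# Break every newline into its own event instead of merging lines
SHOULD_LINEMERGE = false

# Only needed if Splunk misreads timestamps in your rows, e.g. for
# rows beginning with a timestamp like "2011-05-04 13:22:01,...":
# TIME_PREFIX = ^
# TIME_FORMAT = %Y-%m-%d %H:%M:%S
# MAX_TIMESTAMP_LOOKAHEAD = 19
```

With SHOULD_LINEMERGE = false, each line of the CSV becomes one event regardless of whether a timestamp is detected, which is usually what you want for one-event-per-row files.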


Ayn
Legend

By default Splunk will merge lines in incoming logs and then break them up according to certain rules. This behavior is controlled by the SHOULD_LINEMERGE directive in props.conf (default is true). Setting SHOULD_LINEMERGE to false will tell Splunk not to combine several lines into a single event, which will give you the behavior you want.
