I currently have a universal forwarder and an indexer.
The universal forwarder reads a number of CSV files and then ships them off to the indexer.
I also have a props.conf on both that reads:
[csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER=9
TIMESTAMP_FIELDS=date
FIELD_DELIMITER=,
I have checked the source type (csv), made sure the correct header field is set, and confirmed that the files are forwarded from the forwarder to the indexer. However, the files are not being parsed.
What am I missing that isn't allowing the fields to be parsed properly?
In order for this to work, you must have an inputs.conf on your forwarder with a stanza something like [monitor:///.../*.csv] that sets sourcetype = csv beneath it. All of these pieces need to work together, and you need to restart the Splunk instance after you update these configuration files.
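As a sketch, the forwarder-side inputs.conf stanza would look something like the following. The monitor path and index name here are placeholders; substitute the actual directory your CSV files land in:

```ini
# inputs.conf on the universal forwarder
# (path and index are illustrative, not from the original post)
[monitor:///opt/data/reports/*.csv]
disabled = false
sourcetype = csv
index = main
```

The sourcetype name here must match the props.conf stanza exactly, since that is the link that applies the INDEXED_EXTRACTIONS settings to these files.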
Have you tried testing an upload of the CSV via the GUI? There you can use your props settings to check that they are doing what you expect.
I just ran this on a forwarder:
$ ./splunk cmd btool props list csv
[csv]
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
HEADER_MODE =
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = False
TRANSFORMS =
TRUNCATE = 10000
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
detect_trailing_nulls = false
maxDist = 100
priority =
pulldown_type = true
sourcetype =
You can run the btool command and see the combined results for the csv sourcetype, or create your own sourcetype as my_csv, for example, so that you have no dependencies on the built-in one.
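If the combined output is surprising, btool's --debug flag is useful: it prefixes each setting with the path of the .conf file it came from, which helps spot a props.conf stanza that is being overridden by another app:

```
$ ./splunk cmd btool props list csv --debug
```

Running this on the forwarder (not just the indexer) matters here, because with INDEXED_EXTRACTIONS the structured parsing happens on the forwarder.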
I'm not sure how this helps. For the most part, everything looks similar. That said, I did update my props.conf file to this:
[csv_test]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER=9
FIELD_DELIMITER=,
TZ = US/Pacific
My hope is that it will look at line 9 and see that it is the header. Great!
These should define the fields that I can search through in Splunk.
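As a sanity check on the file layout itself, here is a minimal Python sketch (with invented sample data) of what HEADER_FIELD_LINE_NUMBER=9 and FIELD_DELIMITER=, ask Splunk to do: skip the first eight lines, take line 9 as the header, and map each later row's values onto those header names. Running the same logic over a real CSV file confirms the header really sits on line 9.

```python
import csv

# Invented sample: 8 preamble lines, header on line 9, then data rows.
SAMPLE = "\n".join(
    ["preamble line %d" % i for i in range(1, 9)]
    + ["date,host,status",
       "2024-01-01,web01,200",
       "2024-01-02,web02,404"]
)

def parse_with_header_line(text, header_line=9, delimiter=","):
    """Mimic HEADER_FIELD_LINE_NUMBER / FIELD_DELIMITER: the header row
    is at line `header_line`; everything before it is ignored."""
    lines = text.splitlines()
    reader = csv.reader(lines[header_line - 1:], delimiter=delimiter)
    header = next(reader)
    return [dict(zip(header, row)) for row in reader]

events = parse_with_header_line(SAMPLE)
print(events[0])  # fields keyed by the line-9 header names
```

If the dictionaries come out keyed by the wrong names, the header is not actually on line 9 of the file, and HEADER_FIELD_LINE_NUMBER needs to change to match.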