Getting Data In

How come my Splunk Universal Forwarder and props.conf are not parsing our CSV files properly?

TitanAE
New Member

I currently have a universal forwarder and an indexer.

The universal forwarder reads a number of CSV files and then ships them off to the indexer.

I also have a props.conf on both the forwarder and the indexer that reads:

[csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER=9
TIMESTAMP_FIELDS=date
FIELD_DELIMITER=,
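
For context, HEADER_FIELD_LINE_NUMBER = 9 tells structured parsing that lines 1-8 are preamble and line 9 is the header row; a hypothetical file matching these settings would look roughly like:

    Report Export                   (line 1: preamble, skipped)
    ...                             (lines 2-8: more preamble)
    date,host,value                 (line 9: header row)
    2024-03-01 12:00:00,web01,42    (data rows follow)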

I have checked the source type (CSV), made sure that the correct header field is set, and made sure the files can be forwarded from the forwarder to the indexer. However, the files are not being parsed.

What am I missing that is preventing the fields from being parsed properly?


woodcock
Esteemed Legend

In order for this to work, you must be running an inputs.conf on your forwarder that has a stanza something like [monitor:///.../*.csv] with sourcetype = csv below it. You need all of it working together, and you need to restart the Splunk instance after you update these configuration files.
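
A minimal sketch of such a stanza (the monitor path and index are hypothetical placeholders, not values from this thread). Note that INDEXED_EXTRACTIONS parsing happens on the universal forwarder itself, so the props.conf stanza must be deployed there alongside this inputs.conf:

    # inputs.conf on the universal forwarder
    # /opt/data and main are hypothetical; substitute your own path and index
    [monitor:///opt/data/*.csv]
    sourcetype = csv
    index = main
    disabled = false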


laurie_gellatly
Communicator

Have you tried testing the upload of the CSV via the GUI?

Use your props settings there to check that they are doing what you expect.
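
If you would rather test from the command line, a one-shot upload exercises the same parsing path. A sketch, assuming a full Splunk instance (not a universal forwarder) and a hypothetical test file and index:

    # index a single test file once; file path and index are hypothetical
    $SPLUNK_HOME/bin/splunk add oneshot /tmp/sample.csv -sourcetype csv -index main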


ddrillic
Ultra Champion

I just ran this on a forwarder -

    $ ./splunk cmd btool props list csv
    [csv]
    ANNOTATE_PUNCT = True
    AUTO_KV_JSON = true
    BREAK_ONLY_BEFORE = 
    BREAK_ONLY_BEFORE_DATE = True
    CHARSET = UTF-8
    DATETIME_CONFIG = /etc/datetime.xml
    HEADER_MODE = 
    INDEXED_EXTRACTIONS = csv
    KV_MODE = none
    LEARN_SOURCETYPE = true
    LINE_BREAKER_LOOKBEHIND = 100
    MAX_DAYS_AGO = 2000
    MAX_DAYS_HENCE = 2
    MAX_DIFF_SECS_AGO = 3600
    MAX_DIFF_SECS_HENCE = 604800
    MAX_EVENTS = 256
    MAX_TIMESTAMP_LOOKAHEAD = 128
    MUST_BREAK_AFTER = 
    MUST_NOT_BREAK_AFTER = 
    MUST_NOT_BREAK_BEFORE = 
    SEGMENTATION = indexing
    SEGMENTATION-all = full
    SEGMENTATION-inner = inner
    SEGMENTATION-outer = outer
    SEGMENTATION-raw = none
    SEGMENTATION-standard = standard
    SHOULD_LINEMERGE = False
    TRANSFORMS = 
    TRUNCATE = 10000
    category = Structured
    description = Comma-separated value format. Set header and other settings in "Delimited Settings"
    detect_trailing_nulls = false
    maxDist = 100
    priority = 
    pulldown_type = true
    sourcetype = 

You can run the btool command to see the combined results for the csv sourcetype, or create your own sourcetype, my_csv for example, so that you have no dependencies on the built-in one.
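
A sketch of that second approach, reusing the settings from the question under a custom sourcetype name (my_csv and the monitor path are hypothetical):

    # props.conf on the forwarder
    [my_csv]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 9
    TIMESTAMP_FIELDS = date
    FIELD_DELIMITER = ,

    # inputs.conf on the forwarder; the path is a placeholder
    [monitor:///opt/data/*.csv]
    sourcetype = my_csv

Because my_csv does not collide with the built-in csv sourcetype, none of the shipped defaults for csv can override or confuse your settings.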


TitanAE
New Member

I'm not sure how this helps; for the most part, everything looks similar. That said, I did update my props.conf file to this:

[csv_test]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER=9
FIELD_DELIMITER=,
TZ = US/Pacific

My hope is that it will look at line 9 and see that that is the header. Great!

These settings should define the fields that I can search through in Splunk.
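
One way to confirm the new csv_test stanza is actually being picked up on the forwarder is btool, followed by a restart; a sketch assuming a default $SPLUNK_HOME (remember that inputs.conf must also assign sourcetype = csv_test):

    # show the merged settings and which file each one comes from
    $SPLUNK_HOME/bin/splunk cmd btool props list csv_test --debug
    # configuration changes take effect after a restart
    $SPLUNK_HOME/bin/splunk restart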
