Hi,
I have a logfile which is single-line and well structured, but the feed doesn't always write out entire lines. I might get 10 complete lines (with CR/LF) and then half a line until the next batch of data comes in. As a result, I sometimes get events containing only half of the data. How do I tell the forwarder to wait for a CR/LF, and not process the line until one arrives?
If the problem is partially written lines, check the time_before_close option in inputs.conf and increase the number of seconds Splunk waits after reaching end-of-file before it considers the file, and therefore its last line, finished.
http://docs.splunk.com/Documentation/Splunk/6.0/admin/Inputsconf
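A minimal sketch of that change, assuming a monitor stanza for your file (the path, sourcetype name, and the 30-second value are placeholders, not taken from your setup):

```ini
# inputs.conf -- hypothetical monitor stanza; path and sourcetype are placeholders
[monitor:///var/log/snmpinfo.log]
sourcetype = snmpinfo
# Keep the file handle open this many seconds after reaching EOF,
# giving the writer time to finish a partially written last line.
time_before_close = 30
```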
OK, it works, but it almost seems like the forwarder goes to sleep and then wakes up and reads the file; that does work, but it leaves me a batch of entries behind. Is there a way to tell it to break before the beginning of every line, but only if the line starts with a timestamp? The file is very well structured: timestamp|field|value|field|value
Here's a sample:
1385823300000|132210|NormalizedMemoryInfo|Free|287534508|Memory|testdevice|Enhanced-MemoryPool: Processor 3012.1
1385823300000|132210|NormalizedMemoryInfo|Utilization|23.67089250345821|Memory|testdevice2|Enhanced-MemoryPool: Processor 3012.1
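Based on the sample above, one way to sketch this (assuming the [snmpinfo] sourcetype from this thread, and assuming every event starts with a 13-digit epoch-milliseconds timestamp followed by a pipe) is a LINE_BREAKER whose capture group is followed by a lookahead, so Splunk only breaks an event where the next line begins with a timestamp; an incomplete trailing fragment stays buffered:

```ini
# props.conf -- sketch; sourcetype name and 13-digit assumption from the samples above
[snmpinfo]
# Events are single lines, so skip line merging entirely.
SHOULD_LINEMERGE = false
# Break only where the newline run is followed by a 13-digit epoch-ms
# timestamp and a pipe, as in the sample data.
LINE_BREAKER = ([\r\n]+)(?=\d{13}\|)
# Epoch seconds with milliseconds, read from the start of the event.
TIME_PREFIX = ^
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```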
Please move that setting into the affected stanza in inputs.conf and restart the indexer.
Tried it, but it was flagged as a possible typo when the indexer started up.
Here's the props.conf entry:
[snmpinfo]
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = 1
TIME_PREFIX = ^
TIME_FORMAT = %s%3N
time_before_close = 310
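For reference, here is a sketch of how that stanza could be split: the startup warning is most likely because time_before_close is an inputs.conf setting, so it is not a recognized key in props.conf. The monitor path below is a placeholder, not taken from your setup:

```ini
# props.conf -- event-parsing settings stay here
[snmpinfo]
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = 1
TIME_PREFIX = ^
TIME_FORMAT = %s%3N

# inputs.conf -- file-monitoring settings go here; path is a placeholder
[monitor:///var/log/snmpinfo.log]
sourcetype = snmpinfo
time_before_close = 310
```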