Hi,
Please suggest how I can replay the event below without the eventgen app.
Event #3:"2010-07-17 11:43:54.425"| P0_11=700810|I0_111=000000000000|P0_114=R| P0_12=150606193548| P0_14=1607| P0_18=5499| I0_19=840| M0_2=6011000000008714| M0_22=810| I0_24=001| I0_25=0059| I0_3=000000| I0_37= 82689954| I0_38=00673Q| I0_39=000| I0_4=000000000100| I0_41=00000001| I0_42=000445127207998| I0_44=5| I0_49=840| I0_50=840| I0_63=5| I0_7=0607193548| I1_13= | I1_54=1001300000| I1_6=41| I1_79=Y| I1_80=085136461484009| I1_85=000000000100|I2_114=000000|I2_119=C|I2_121=000|I2_127=USA| I2_13=00673Q | I2_14=00003| I2_15=000| I2_37=07| I2_4= | I2_40=N| I2_42=0| I2_44= | I2_62=0| I2_63=OPS@PARTAN
Thanks,
Madan
oneshot is not sufficient for me.
cat historical.log | replayThrottleScript > /tmp/replay.log
use a Splunk monitor input to tail /tmp/replay.log and write the data to index=replay
use a dashboard backed by an (indexed) real-time search
the Linux pv command can throttle the rate: pv -l -L n limits output to n lines per second (without -l, -L limits bytes per second)
you could use a Perl one-liner to filter historical.log and insert a null character every time the timestamp jumps by more than one second, then apply pv -L
if you need fast-forward/rewind at 4x/8x/16x, parse the timestamp in Perl/Python and scale the delay from the delta between the event time and the system time
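The steps above could be sketched roughly like this in Python. This is a minimal, hypothetical version of the replayThrottleScript mentioned earlier (the name, the timestamp format, and the speed flag are assumptions, not an existing tool): it reads historical events, sleeps for the inter-event gap divided by a speed factor, and emits the lines for a monitor input to pick up.

```python
#!/usr/bin/env python3
"""Hypothetical replayThrottleScript: replay a historical log in
(scaled) real time by sleeping for the gap between event timestamps."""
import re
import sys
import time
from datetime import datetime

# Assumed timestamp format from the sample event, e.g. "2010-07-17 11:43:54.425"
TS_RE = re.compile(r'"?(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})"?')

def parse_ts(line):
    """Return the event's datetime, or None if no timestamp is found."""
    m = TS_RE.search(line)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f") if m else None

def replay(lines, speed=1.0, sleep=time.sleep, out=sys.stdout):
    """Emit lines, pausing dt/speed between events (speed=4.0 -> 4x fast-forward)."""
    prev = None
    for line in lines:
        ts = parse_ts(line)
        if prev is not None and ts is not None:
            dt = (ts - prev).total_seconds()
            if dt > 0:
                sleep(dt / speed)
        if ts is not None:
            prev = ts
        out.write(line)

if __name__ == "__main__":
    # cat historical.log | ./replayThrottleScript 4 > /tmp/replay.log
    replay(sys.stdin, speed=float(sys.argv[1]) if len(sys.argv) > 1 else 1.0)
```

Injecting the sleep function makes the pacing testable without waiting in real time; swapping `time.sleep` back in gives the live behavior.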
Case 193187 - "Replay" command
https://wiki.splunk.com/Community:ERs
I have also tried creating a custom throttle command and adding it to commands.conf, but that didn't work because Splunk often flushes the pipeline between search commands every 50k lines. My data rate was less than 10 lines/sec.
Note that replaying data whose surrogate keys were generated by a database sequence repeats those keys and produces duplicates.
Put it in a text file and run ./splunk add oneshot myfile.txt -index myindex -sourcetype mysourcetype ..