SNMP traps not being indexed by SNMP Modular Input

robert_a
Engager

The env:

Win2k8r2

Splunk 6.1 Ent

SNMP Modular Input 1.2.4

The problem:

No traps are being indexed

The stanza

[snmp://iDRAC]
communitystring = public
do_bulk_get = 0
index = myindex
ipv6 = 0
snmp_mode = traps
snmp_version = 1
sourcetype = snmp_traps
split_bulk_output = 0
listen_traps = 1
trap_host = localhost
trap_port = 162
v3_authProtocol = usmHMACMD5AuthProtocol
v3_privProtocol = usmDESPrivProtocol

Note: the stanza was generated by Splunk Web. The dynamically generated entry did NOT contain "listen_traps = 1", which appeared to be needed according to the readme, so I added that line manually. This did not change the behavior.

Steps taken:

  • I installed Wireshark and verified that the trap was indeed making it to the box Splunk runs on, and that it was arriving on the correct port.
  • I tested with Windows Firewall enabled and disabled, which made no difference.
  • I have a rule in Windows Firewall explicitly allowing UDP 162 for SNMP traps.
  • I ran netstat and verified that the host is listening on UDP 162 (example commands below).
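
For reference, the checks above can be run from an elevated command prompt along these lines (a sketch for Windows; the PID value is a placeholder taken from the netstat output). While at it, it is worth checking whether the native Windows "SNMP Trap" service is running, since it also binds UDP 162 and could keep another listener from receiving anything:

REM see which process owns UDP 162 (last column of the netstat output is the PID)
netstat -ano | findstr ":162"

REM map that PID back to a process name (replace 1234 with the PID from netstat)
tasklist /FI "PID eq 1234"

REM check whether the built-in SNMP Trap service is running
sc query SNMPTRAP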

So, I can show that the trap is received by the host. I don't believe the firewall is interfering, and testing with it disabled yielded the same results. I believe that leaves either the SNMP daemon that snmp_ta is using, some other misconfiguration on my part, or something else on the host causing some sort of conflict.

Splunk is the only real utility running on the host, as that is its sole purpose. But I'm not 100% sure whether there is anything native to Windows that could interfere with SNMP handling.

Has anyone hit this? Any suggestions? Is there any other info that would be helpful?

0 Karma

neerajsafenet
New Member

The env:

Linux rhel 6.2
Splunk 6.2 Ent
SNMP Modular Input v1.2.7

The problem:
No traps are being captured

The stanza
[snmp://snmp_ta]
do_bulk_get = 0
do_get_subtree = 0
host = 10.164.12.25
index = new_test
ipv6 = 0
mib_names = SAFENET-APPLIANCE-MIB, SAFENET-GLOBAL-MIB, SAFENET-HSM-MIB, SNMPv2-SMI
snmp_mode = traps
snmp_version = 3
sourcetype = snmp_ta
split_bulk_output = 0
trap_host = 10.164.12.25
trap_port = 162
trap_rdns = 1
v3_authKey = 87654321
v3_authProtocol = usmHMACSHAAuthProtocol
v3_privKey = 12345678
v3_privProtocol = usmAesCfb128Protocol
v3_securityName = pete
listen_traps = 1
disabled = 0

Source IP for my trap: 10.164.64.49
Splunk IP: 10.164.12.25

I have also tried configuring the Splunk modular input with SNMP version 3, using the actual host IP as well as localhost in 'TRAP listener host', but I am not able to get the traps on my Splunk server. I also ran tcpdump to capture the packets. When the modular input (snmp.py) is running, I see only the incoming trap packets from the source IP; no acknowledgement packet is sent from the Splunk server end. When I then search for the traps in Splunk using 'index=new_test ExecProcessor error snmp.py' and 'sourcetype=snmp_ta', I get nothing. The tcpdump output is given below:
18:13:37.300227 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]05_b6_62_8b_e3_2a_25_83_ac_5c_61_f6_0e_e2_49_79_2f_8c_7d_1e_4b_55_26_84_27_fd_8e_cf_9b_fc_01_4d_52_74_7a_3f_64_aa_ab_5b_4b_01_83_07_46_0e_60_a9_7d_95_d7_9e_21_cd_43_e5_0c_45_f3_35_21_e9_e1_4c_ab_fc_05_74_b5_c1_3a_14_de_f6_a7_7d_0d_7b_73_0f_20_83_fb_38_14_bd_7a_ca_63_e9_c6_2a_82_47_86_92_6f_9d_d3_85_d1_f9_16_dd_f4_e1_40_30_b5_09_02_a2_7f_79_94_a7_7a_76_b7_f9_25_92_13_a1_1e_8e_ec_be_55_7a_65_0a_68_d2_f3_bc_de_dd_39_37_4d_3d_a2_ab_a8_64_0b_58_8a_e5_f9_79_46_13_6e_55_db_97_aa_90_4c_a8_4e_6a_b7_a7_f6_4b_74_23_37_42_70_83_1a_a1_fc_13_5e_ed_03_92_34_1b
18:13:38.300835 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]31_bc_1d_40_f2_3f_d4_0c_d1_6a_7b_59_fd_8f_b5_c0_5e_88_7d_3b_5b_6f_12_b6_7e_77_81_9e_d6_e2_c9_18_70_75_3f_33_a5_9d_26_eb_7b_a9_05_1b_58_db_8f_88_ef_00_4f_90_fd_7f_08_ac_cf_c8_fe_76_87_bd_c8_78_10_ea_96_7e_00_52_58_77_2b_47_e2_99_60_39_4f_54_88_cb_90_6f_ef_cf_d7_be_28_e8_0d_48_2e_08_f4_6e_b3_2e_12_5d_74_21_b7_d6_d0_9b_9f_77_55_0e_05_a7_e0_e4_74_23_2a_c2_2b_35_11_7b_9c_09_92_99_46_46_b7_fc_24_f1_60_81_46_63_bb_49_14_99_a3_05_a7_92_d9_e2_fb_8a_24_25_1a_42_15_0a_1c_b5_0c_b0_78_16_45_a6_97_82_d2_4e_13_79_65_d1_9e_6b_cf_8a_0c_08_a2_46_df_9c_28_0c_14_c7
18:13:39.301335 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]da_2d_ec_3f_87_26_ba_e0_7d_00_d4_3b_a6_c3_0a_3c_59_12_70_01_f2_21_27_76_fb_0d_39_2d_8d_6c_a5_20_e0_39_0e_07_9a_23_3d_6e_80_9c_a0_a1_0a_78_1b_3f_17_b9_e0_db_16_39_fd_16_18_a1_de_c7_69_21_45_9c_b8_8a_00_8c_dc_2b_7c_9b_b8_6e_7a_79_a2_80_f0_1b_34_5d_4c_eb_b1_68_e3_0d_86_b6_53_89_f8_a7_21_9f_2a_32_dc_76_9e_b3_ac_4e_79_5d_d2_52_c9_dc_92_92_c8_0f_85_a9_5d_9c_c5_f7_2e_83_c7_09_c7_17_1a_2c_1c_ee_c8_6f_d5_d5_a6_0b_6e_5c_f3_a3_c5_79_e5_ee_19_54_da_2c_af_14_92_b7_51_23_a8_de_21_f7_a0_0e_cf_25_1c_29_fc_bd_f9_bd_6d_ea_c1_4a_43_5e_77_16_4a_34_e2_36_ea_40_be_c6
18:13:40.302181 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]d5_63_97_1c_6f_29_68_ba_c7_63_b9_0a_d5_57_9c_d1_e9_74_59_28_0c_26_d2_28_7e_19_57_bc_2f_67_67_19_0d_12_5f_fb_3f_0b_1e_0e_9a_50_32_86_ae_61_c8_ab_c2_99_6b_7d_9e_89_c2_7c_93_19_c3_ef_71_d9_af_8b_ae_da_bf_97_45_b6_f1_05_c3_8c_ea_ee_80_1d_0b_f5_93_b2_9b_64_ff_70_34_c0_f8_92_88_4f_7d_b8_f3_3e_e4_28_30_8c_da_94_4b_ec_cf_c9_7d_82_e1_93_7d_b2_ed_9d_7d_55_3d_71_d2_c5_eb_02_f1_09_89_34_1c_52_fd_7d_2c_73_26_9e_8a_dd_87_8f_15_12_cb_c5_c5_b4_0f_16_59_56_79_0e_45_b6_43_a0_32_f9_31_79_a8_8b_90_ba_37_89_1b_72_09_c6_a7_5c_f6_d4_1b_1c_68_03_84_78_43_80_21_85_e0_43
18:13:41.303037 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]b8_ad_69_f4_75_f3_f4_49_70_20_4d_ae_40_cf_8b_fd_63_52_80_e5_62_09_7c_46_f3_54_4c_7e_c8_46_5c_64_84_e2_0b_83_c7_a8_98_e7_07_5d_ee_ca_34_95_2f_c5_c1_04_6c_ff_4f_e2_4b_5d_1f_15_7a_cd_d5_46_6e_43_bf_eb_b7_ed_91_2f_b8_59_04_26_da_fb_82_f5_5b_08_2c_a3_0e_6d_4e_d1_a9_04_25_4a_f8_4e_e9_68_86_ac_24_28_b7_25_a0_ad_31_26_74_47_91_af_3d_c0_59_27_fe_d1_26_ea_70_5f_97_57_62_16_31_1d_12_33_04_48_97_2a_c7_1f_20_03_ba_ef_58_af_8b_77_60_ff_35_1c_d4_92_1d_54_95_74_94_bb_f6_13_58_64_bb_b1_74_cc_be_97_e2_ea_a8_d1_f5_a1_c0_a4_f6_86_3d_5d_92_f8_0a_e7_19_9f_2f_d4_59_3f
18:13:42.303968 IP 10.164.64.49.46271 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]af_8c_70_51_ca_c3_91_86_73_33_52_18_62_2f_62_ee_6c_b2_98_17_9f_41_0f_40_d1_92_82_11_5f_dd_74_6a_ab_0c_b5_89_59_55_0a_0b_74_b8_ef_d8_90_81_4b_5b_2b_75_b4_12_af_19_32_b0_53_11_01_22_16_c9_23_aa_a4_a9_58_8b_f8_b9_1c_10_99_7b_06_bc_31_3a_dd_9d_b3_c1_b9_81_51_b7_d7_06_50_28_2d_6f_55_cb_55_2f_3d_fa_69_72_ed_df_d6_3a_8d_d7_f2_ad_36_64_4a_36_6e_ab_6c_df_62_6e_cd_c1_cc_d5_eb_f2_cf_cf_9e_53_6b_91_e8_43_25_43_56_e6_56_08_5f_9d_6b_d9_cf_a1_72_15_f0_d7_79_91_5b_08_dd_24_c0_88_cf_66_41_d1_b2_ab_99_85_bb_d0_5c_f2_90_ba_5e_36_0c_01_4f_1b_48_81_75_09_6d_20_29_5e
18:13:43.311487 IP 10.164.64.49.53068 > 10.164.12.25.palace-4: Flags [P.], seq 419:478, ack 1, win 46, length 59
18:13:43.311518 IP 10.164.12.25.palace-4 > 10.164.64.49.53068: Flags [.], ack 478, win 4230, length 0

But whenever I stop the Splunk snmp.py process and run snmptrapd independently on the console using 'snmptrapd -Dusm -f -Le', I do get the traps on my machine (though not in Splunk), and in tcpdump my machine sends acknowledgement packets. Please help me resolve this case. The tcpdump output is given below:
22:21:39.031577 IP 10.164.64.49.46532 > 10.164.12.25.snmptrap: F=apr U=pete [!scoped PDU]b5_04_f9_65_88_7e_c2_bb_bb_aa_14_9c_bc_22_d0_70_a1_5a_cb_63_e5_2b_ea_68_fe_31_a4_2e_99_6a_bb_ad_ec_25_27_a6_df_ba_20_ae_34_c3_ed_45_79_2a_b1_bf_29_f5_96_3c_4e_19_54_c3_34_e2_1f_de_4a_e3_87_a2_2c_d6_22_31_c5_a6_84_a2_ca_34_5d_30_3d_1f_ec_15_08_34_a3_c3_7c_bb_98_7f_d5_4f_3c_98_a1_e1_70_c1_fc_7b_df_08_ce_f9_5b_a0_9f_6a_6f_de_48_45_a0_00_81_4f_76_45_af_96_16_e9_c3_16_34_54_d6_cd_0c_15_e9_b5_2e_32_5a_d6_34_f1_d3_66_92_01_64_b7_58_22_b0_b0_70_9a_8a_6a_19_57_69_24_1b_65_2b_4a_99_88_15_46_ed_77_78_15_5c_ce_17_e4_ad_d2_e4_e4_73_10_ca_fd_ea_0a_93_1e_d4_d9
22:21:39.047629 IP 10.164.12.25.snmptrap > 10.164.64.49.46532: F=ap U=pete [!scoped PDU]9b_57_35_ab_37_b0_e6_66_86_33_eb_fb_56_23_8a_fe_5a_26_55_00_14_e7_81_5e_96_3f_5e_f0_8d_e2_23_ac_fe_b8_83_6d_30_a3_da_bd_6c_60_e1_9f_c0_6d_e3_7d_2b_53_33_ac_08_01_29_cb_d1_2a_ee_90_b8_05_22_fc_ad_d6_b8_1b_c4_80_d2_ae_15_b3_18_77_3b_64_e8_43_88_1d_04_28_67_97_e6_30_83_3d_6a_3c_8a_e3_ea_f2_e3_9b_6a_b5_94_47_2f_90_fa_0d_68_9b_68_08_97_ab_cb_0c_41_f1_f9_ba_84_74_ff_fa_38_f6_40_4e_70_ed_ae_ea_7f_17_c8_90_2a_5c_0a_c2_0e_8c_52_42_b8_97_bc_dc_ee_28_81_ff_13_d7_11_99_d9_6a_9a_7c_6b_e6_f5_54_0e_b6_26_5b_8a_4d_29_12_cf_5a_bc_55_f3_9a_57_bd_30_3b_49_4a_7b_d3
22:21:46.413862 IP 10.164.64.49.53062 > 10.164.12.25.palace-4: Flags [P.], seq 417:566, ack 1, win 46, length 149
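
Since snmptrapd receives these v3 traps fine, one way to cross-check the credentials in the stanza is to compare them against the snmptrapd.conf user entry that works. A rough sketch of such an entry is below; the engine ID is a placeholder, and for SNMPv3 traps (as opposed to informs) the receiver has to know the sender's engine ID:

# snmptrapd.conf sketch -- placeholder engine ID, keys copied from the stanza above
createUser -e 0x8000000001020304 pete SHA "87654321" AES "12345678"
authUser log,execute,net pete priv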

0 Karma

hlarimer
Communicator

I had this same problem and fixed it by changing trap_host to match what was set up in the device that was sending the SNMP traps. In my case we used a DNS entry to point the traps to the Splunk indexer/search head, and when I entered that DNS entry for trap_host instead of "localhost", the SNMP traps immediately started showing up (I'm almost positive this requires a restart).
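
Applied to the stanza from the original question, the change amounts to something like this (a sketch; splunk-host.example.com is a placeholder for whatever name or IP the sending device is actually configured to deliver traps to):

[snmp://iDRAC]
snmp_mode = traps
snmp_version = 1
communitystring = public
# listen on the address the device actually sends to, not localhost
trap_host = splunk-host.example.com
trap_port = 162
index = myindex
sourcetype = snmp_traps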

I actually found this post while looking up another question about this app that maybe you can answer: can this app be used on a forwarder instead of directly on the indexer? I'm assuming it can since it's a TA, but I wasn't sure.

deodion
Path Finder

Yes, this is correct. Just change the default localhost to the source IP sending the traps. Thanks!

0 Karma

hlarimer
Communicator

I'm glad that worked for you. I actually just found this information for anyone that wants to use the app on a Universal Forwarder:
http://www.georgestarcher.com/

I am still testing to see if it works, but I do see that he recommends using the IP address instead of localhost in trap_host, which I think would work the same as the computer hostname or the DNS hostname.

Thanks for the feedback!

0 Karma

Bauerna
Engager

You are genuinely a life saver. We have been debugging this issue for hours with so much confusion.
Thank you so much!

0 Karma

robert_a
Engager

You are my hero, haha. Changing from the localhost default to the IP/hostname configured in the trap source was indeed the answer.

I had sent logs\captures off to the dev like he asked but never heard anything back. I am very happy to be able to use this.

Sadly I cannot answer your question on forwarder vs indexer.

Thanks again. Such a simple fix.

0 Karma

Damien_Dallimor
Ultra Champion

Can you email me a Wireshark capture of the traps that are not showing up in Splunk? ddallimore@splunk.com

0 Karma

rbacon
Path Finder

I'm having the same issue. No errors either. I double checked that I'm getting data on UDP port 162 by listening on that port with Netcat. Were you able to solve this?

0 Karma

robert_a
Engager

Well, I have no doubt that this app works for most people. But it sure doesn't work for me, and with sparse documentation and "community support" there isn't much hope of getting it to actually do anything.

I guess I'll try to rig up net-snmp. It seems super clunky compared to a built-in solution... but it works.
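
For anyone going the net-snmp route, the usual shape of that workaround is to have snmptrapd log traps to a flat file and point a Splunk monitor input at it. A rough sketch (paths, index, and sourcetype are placeholders matching the stanza earlier in the thread):

# snmptrapd.conf: accept traps with the "public" community and log them
authCommunity log public

# start snmptrapd, writing formatted traps to a file
snmptrapd -Lf /var/log/snmptrapd.log

# inputs.conf: have Splunk tail that file
[monitor:///var/log/snmptrapd.log]
index = myindex
sourcetype = snmp_traps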

0 Karma

robert_a
Engager

The entry:
[05/Jun/2014:08:52:24.986 -0400] "GET /en-US/api/shelper?snippet=true&snippetEmbedJS=false&namespace=search&search=search+index%3D_internal+ExecProcessor+error+snmp.py&useTypeahead=true&useAssistant=true&showCommandHelp=true&showCommandHistory=true&showFieldInfo=false&_=1401968578523 HTTP/1.1" 200 579 "http://:8000/en-US/app/search/search?q=search%20index%3D_internal%20sourcetype%3Dsnmp_traps&sid=1401968706.126&earliest=&latest=" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" - 53906808fc437fe10 168ms

0 Karma

robert_a
Engager

Ok thanks. I'll take that back out of the stanza.

All searches listed will be over "all time". The splunk installation isn't very old, so there isn't much to search. Running the search you named above yielded one result... but it sort of looks like the search generated it.

I'll post another reply with the contents, as the message puts me over the char limit.

0 Karma

Damien_Dallimor
Ultra Champion

Firstly, you do not need to manually add "listen_traps = 1"; that is just a legacy property for pre-1.2.2 support. The "snmp_mode" property is the new way forward.

What do you see if you search over "all time" ?

Any errors in "index=_internal ExecProcessor error snmp.py" ?

0 Karma

robert_a
Engager

index=_internal sourcetype=snmp_traps yields nothing in the logs. And just looking at the recent _internal events doesn't show any errors, apart from one unrelated to this data input.

0 Karma

MarioM
Motivator

Do you have anything in splunkd.log? (index=_internal sourcetype=splunkd snmp)
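
If that search turns up nothing, the same log can also be checked on disk; something along these lines on a default Windows install (adjust the path for your environment):

REM search splunkd.log directly for snmp-related messages
findstr /i snmp "C:\Program Files\Splunk\var\log\splunk\splunkd.log"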

0 Karma