Activity Feed
- Got Karma for Re: How can I show the time of the last event in a stats by count table?. 12-11-2024 08:35 PM
- Got Karma for Re: Search to Identify when a host stops sending logs to Splunk. 10-09-2024 12:04 AM
- Got Karma for Re: Logging multiple sources from Docker with HEC: Are multiple sourcetypes possible?. 04-09-2024 06:41 AM
- Got Karma for Logging multiple sources from Docker with HEC: Are multiple sourcetypes possible?. 04-09-2024 06:41 AM
- Got Karma for Re: Logging multiple sources from Docker with HEC: Are multiple sourcetypes possible?. 04-09-2024 06:41 AM
- Got Karma for Re: How do I remove a dash (-) from the Account_Name field?. 02-22-2024 08:53 AM
- Got Karma for Re: How to configure props.conf for a Unix timestamp in a JSON log file?. 01-20-2024 09:06 AM
- Got Karma for Re: Why is my search head cluster captain logging KV Store replication errors?. 06-08-2023 12:21 PM
- Got Karma for Is there a way to convert a scheduled report to an alert? (6.6.3). 05-04-2023 03:51 PM
- Got Karma for Re: WinEventLog inputs: Why does current_only=1 skip server reboot events?. 03-15-2023 12:41 PM
- Got Karma for Re: How to change '_time'? _time is by default picking first alphabetical date column (Assigned) and doing +05:30. 09-30-2022 04:56 AM
- Got Karma for Re: Why aren't defined field transformations showing in the GUI?. 04-18-2022 10:33 PM
- Posted Kafka add-on: Unable to initialize modular input on All Apps and Add-ons. 04-07-2021 07:52 AM
- Got Karma for Re: Is there any way to monitor CPU on Mac OS?. 03-20-2021 07:11 AM
- Got Karma for Re: With 8.1.2 or later should we still use an HEC tier on HWFs?. 03-08-2021 07:40 AM
- Posted Re: With 8.1.2 or later should we still use an HEC tier on HWFs? on Getting Data In. 03-05-2021 05:20 PM
- Posted With 8.1.2 or later should we still use an HEC tier on HWFs? on Getting Data In. 03-05-2021 05:19 PM
- Got Karma for Is there a way to convert a scheduled report to an alert? (6.6.3). 02-16-2021 11:23 PM
- Gave Karma for Re: Can you create/modify a lookup file via REST API? to 4n0m4l1. 12-15-2020 07:26 AM
- Got Karma for Re: Blacklist using props.conf and transforms.conf. 12-04-2020 08:58 AM
Topics I've Started
04-07-2021
07:52 AM
The Kafka TA hasn't been updated since before Splunk 8 was released. Is it still supported? Running with 8.0.x and 8.1.x I get this error in the search GUI:

Unable to initialize modular input "kafka_mod" defined in the app "Splunk_TA_kafka": Introspecting scheme=kafka_mod: script running failed (exited with code 1).

splunkd.log shows:

04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: File "/app/splunk/etc/apps/Splunk_TA_kafka/bin/kafka_mod.py", line 67
04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: """.format(c.ta_short_name, desc, kcdl.use_single_instance())
04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: ^
04-06-2021 09:14:08.054 -0400 ERROR ModularInputs - <stderr> Introspecting scheme=kafka_mod: SyntaxError: invalid syntax
04-06-2021 09:14:08.138 -0400 ERROR ModularInputs - Introspecting scheme=kafka_mod: script running failed (exited with code 1).
04-06-2021 09:14:08.138 -0400 ERROR ModularInputs - Unable to initialize modular input "kafka_mod" defined in the app "Splunk_TA_kafka": Introspecting scheme=kafka_mod: script running failed (exited with code 1).

EDIT: This is a simple case of not wrapping the print function in parens, as Python 3 requires. But that tells me this TA is not Python 3 aware. Seems it has been abandoned.
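The EDIT above can be demonstrated in a few lines: a Python 2 style print statement is a hard SyntaxError under Python 3, which is exactly what the introspection step trips over. A minimal sketch (the snippet strings are hypothetical, not the TA's actual code):

```python
# Hypothetical snippets reproducing the failure mode described above:
# a Python 2 style print statement fails to even compile under Python 3.
py2_style = 'print "starting kafka_mod input"'   # Python 2 syntax
py3_style = 'print("starting kafka_mod input")'  # Python 3 syntax

def is_py3_valid(src):
    """Return True if the snippet compiles under the running Python 3."""
    try:
        compile(src, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

print(is_py3_valid(py2_style))  # False -- same class of error as the TA hits
print(is_py3_valid(py3_style))  # True -- parenthesized print is fine
```

This is why the modular input dies during introspection rather than at runtime: the script never gets past parsing.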
Labels: administration
03-05-2021
05:20 PM
1 Karma
We've always had a physical load balancer (F5) handling our HEC load-balancing duties, but behind it we've tried two scenarios.

Originally, back in the 6.x and 7.x days, we went direct to the indexers. When a user would "overlog," this would cause problems on the indexers. The solution was to disable the token in play, but doing so through config files required a restart of the cluster. Likewise, any CRUD on HEC tokens also required a cluster restart.

To address this major issue, we installed a set of VMs in front of the indexing tier acting as heavyweight forwarders (HWFs). All HEC deliveries land there, and the HWFs forward to the indexing tier. This seemed like a win-win: I could perform token management easily without interrupting the much more vital indexing services, and the indexers were isolated from aggressive loggers.

However, there were drawbacks lurking under the surface. Performance under heavy load was still not great. Because of the way HWFs "bake" the data, it's actually MORE data being delivered to the indexers, and it bypasses the normal queuing process, jumping ahead. This can cause performance issues not seen in the optimized, normal stream processing of data. In higher-volume scenarios we'd see the indexer queues backing up, eventually to the point of impacting the HEC queues as well.

You also have the "lensing" effect to account for. The HWFs act as concentrators, delivering to one or two indexers at a time (depending on your pipeline count). If many HWFs happen to land on one indexer, they can Real Genius-style laser-beam it out of existence. Finally, managing another set of .conf files was a not-insignificant layer of additional complexity I really didn't need.

8.1.2 brought good news: CRUD operations on HEC inputs no longer trigger a rolling restart on the indexers. Since that was one of our primary reasons for the HEC tier, it seemed obvious to question whether we still needed it.

After some testing, we removed the HEC tier from one of our bigger environments and had the LB deliver directly to the indexers. I've seen fewer queue spikes, literally no negative impact on indexer load, and even hints of exactly the opposite. The environment footprint was reduced by twenty-some instances, and complexity is down. Now *that* is a win-win. 🙂

(FWIW, for both the HEC farm and the indexing tier, we have a persistent queue enabled on every token.)

EDIT: typos
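For context, the per-token persistent queue mentioned above lives in inputs.conf. A minimal sketch (token value, names, and sizes are hypothetical, not our actual config):

```
# inputs.conf -- HEC with a persistent queue per token (illustrative only)
[http]
disabled = 0
port = 8088

[http://my_app_token]
token = 00000000-0000-0000-0000-000000000000
index = my_index
persistentQueueSize = 10MB
```

The persistent queue spills to disk when the in-memory queue fills, which is what smooths over brief indexer back-pressure whether the token lives on an HWF tier or directly on the indexers.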
03-05-2021
05:19 PM
I read that in 8.1.2 it's less painful to update HEC configs, no longer requiring a restart for CRUD operations. Should I keep my HEC tier on HWFs, or move it directly to the indexers?
12-04-2020
07:25 AM
1 Karma
The regex is simple. You'd have to answer whether it's appropriate or not: any log event with "notice" anywhere in it will match. Pattern matching in the `[source::]` qualifier works like it does with inputs: `*` matches anything except path separators, and `...` matches anything, including them. Something like this might work: `[source::/folder/folder/logs/firewall-*/*/*/*/local*.log]`
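Assuming the goal is to drop matching events (stanza and transform names below are hypothetical), the props/transforms pair might look like this sketch:

```
# props.conf -- apply the transform to the matching sources
[source::/folder/folder/logs/firewall-*/*/*/*/local*.log]
TRANSFORMS-dropnotice = drop_notice

# transforms.conf -- route any event containing "notice" to the null queue
[drop_notice]
REGEX = notice
DEST_KEY = queue
FORMAT = nullQueue
```

Remember this runs at parse time, so it belongs on the indexers or heavy forwarders, not on universal forwarders.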
07-17-2020
08:47 AM
That catches both API and splunkweb users. I'm not clear how to isolate them.
07-17-2020
06:57 AM
We are planning to move to SAML SSO soon. One of the drawbacks of SAML is that you cannot authenticate on the API any longer. Up to this point, any user defined to use splunkweb has had access to the API. How can I find out who will be impacted by yanking API access?
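One hedged way to start scoping this, assuming REST calls are logged to `_internal` with sourcetype `splunkd_access` (field availability varies by version, and splunkweb itself proxies through splunkd, so the results need manual review):

```
index=_internal sourcetype=splunkd_access user=* user!="-"
| stats count AS rest_calls BY user, clientip
```

Cross-referencing that list against who actually logs into the UI (e.g., `sourcetype=splunk_web_access`) should help surface accounts that only ever touch the API and would be impacted by the SAML cutover.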
Labels: access control, authentication, SAML
03-27-2020
01:06 PM
1 Karma
No. Don't do it. Here's my story.
Our COVID-19 work-from-home barrage started up and our execs wanted VPN stats pronto. The security guys said they'd point their syslog wherever we wanted. I quickly built a VM and threw a UF on it. Next, I set up a UDP input on 514 and configured the props, indexes, etc. Finally I lit it up and, boom! Data coming in.
For a few days we worked on reports. Then I started noticing missing events here and there. As I dug in, I found duckfez's post on tracking UDP errors. And oh man, were there a lot of errors. Like thousands per second. We were definitely dropping events.
I set up rsyslog to receive instead. It writes to disk, and Splunk reads from there. I also set up logrotate to clean up, cuz these logs are gonna be big. With some tweaking of my props and transforms, I got everything to match the slightly different appearance of the logs.
End results, with the exact same stream of data being thrown at the server:

While using Splunk to receive directly:
- 2,500 events/sec
- 10,000 UDP rcv buf errors/sec

While using rsyslog to receive, with Splunk reading from disk:
- 25,000 events/sec
- 0 UDP rcv buf errors/sec
No other changes were made to the host or the log stream being shoved at it.
Don't use Splunk to receive syslog.
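A minimal sketch of the replacement setup described above, assuming rsyslog 8.x (RainerScript) syntax; the paths and names are hypothetical:

```
# /etc/rsyslog.d/splunk.conf -- receive UDP 514 and write to disk per sender
module(load="imudp")
input(type="imudp" port="514")

template(name="perHost" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="perHost")

# inputs.conf on the forwarder -- read what rsyslog writes
# [monitor:///var/log/remote/*/syslog.log]
# sourcetype = my_syslog
# host_segment = 4
```

With `host_segment = 4`, Splunk takes the fourth path segment (the per-sender directory) as the host field, and logrotate handles cleanup of the files on disk.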
03-27-2020
01:05 PM
We need to ingest syslog data. Rather than sending to a syslog server and then reading the data from disk with a forwarder, it seems like sending directly to a forwarder listening on port 514 would be more efficient. Are there any problems with doing this?
01-27-2020
02:50 PM
best solution on this page. thanks!
01-08-2020
01:25 PM
Fun fact: we were already using symlinks. My Windows admin was just using the term 'shortcut' interchangeably. In other words, symlinks in Windows do not behave reliably.
01-08-2020
12:53 PM
Good point. Testing, will get back soon.
01-08-2020
12:44 PM
We do the same layout in Linux. No issues there. And I understand the links are different. If it had failed entirely, I would have moved on. But it works ... sometimes. I guess that's the same as failure. Sigh. I'll come up with another strategy for Windows. No, the internal logs for file monitoring and TailReader don't give any clues. It sees the files on startup, but new files are invisible until another restart.
01-08-2020
09:17 AM
1 Karma
In an effort to get our inventory of inputs under control, I'm trying to get all servers to have one place for logs, e.g., C:\LOGS. When teams want to add new files to monitor, they add a directory there, named after the sourcetype: C:\LOGS\newsourcetype. In that directory, they link (a shortcut, in the case of Windows) to the actual directory containing their logs. So C:\LOGS\newsourcetype\link1 -> D:\some\path\to\app\log_dir\
I'm having mixed results with this. Sometimes it reads all the logs in the original dir fine and continues to update. Other times it only reads what's there at startup. Once they roll, it stops updating until I restart Splunk again.
EDIT: Clarification, this is for Windows. Symlinks in Linux are working fine using this same layout strategy.
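For reference, the inputs.conf side of this layout would look roughly like the sketch below. `followSymlink` is a real monitor-stanza setting (and defaults to true), but per this thread it doesn't make Windows links behave reliably:

```
# inputs.conf -- monitor the central log root, following links (sketch)
[monitor://C:\LOGS]
recursive = true
followSymlink = true
```

Note that per-directory sourcetypes would still need their own stanzas or props rules; Splunk doesn't infer a sourcetype from the directory name on its own.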
12-22-2019
04:12 PM
1 Karma
I would add: use %F and %T, so %FT%T%:z.
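In props.conf terms, for a timestamp like 2019-12-22T16:12:00-05:00, that would be something like this sketch (the stanza name is hypothetical; Splunk's enhanced strptime accepts %F, %T, and %:z):

```
# props.conf (illustrative)
[my_sourcetype]
TIME_FORMAT = %FT%T%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
```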
12-08-2019
05:50 PM
The config there is basically what I use (once the migration completed and coldDB was empty).
12-08-2019
05:42 PM
I've migrated 8 production index clusters (all I have). We started with the beta of S2 in 2 environments (7.1.x), then moved all the others with 7.2.3/4. In every migration from non-S2 to S2, the cold storage was emptied once migration was complete.
12-06-2019
04:54 PM
When my migration was complete, cold was empty and is no longer used for any bucket storage. None. Zero. I don't know if I'm a unicorn, or the posts above are not accurate. Once it's empty, point the location to whatever you want (that exists and has correct perms). It won't be used.
11-11-2019
10:44 AM
I see the same. It sure seems like the API input will require a full Splunk install, since it requires Python.
10-03-2019
07:09 AM
2 Karma
According to user-prefs.conf.spec:

"It also means that values in another app will never be used unless they are exported globally (to system scope) or to the user-prefs app"

So you need to create the metadata entry to export this setting to system. In appname/metadata/default.meta, include this:

[user-prefs/role_myrole]
export = system

Your entry in user-prefs.conf:

[role_myrole]
default_namespace = appname

will now behave as expected.
10-03-2019
07:08 AM
I am trying to default particular roles to particular apps by including default_namespace in a user-prefs.conf file inside the target app. This doesn't work.
How do you customize user-prefs and have it take effect?
10-02-2019
03:24 PM
Any chance you can share a properly redacted version?
09-13-2019
08:18 AM
Well, that sucks. Thanks for the confirmation. Without direct control over the thousands of forwarders sending to my indexers, I guess I'm just boned.
09-13-2019
07:50 AM
Since 7.3, the missing-index message below goes to all my users, causing many panicked questions about Splunk being down. How can I block this message? I don't see any stanza in default/messages.conf that matches this verbiage.
Search peer indx01 has the following message: Received event for unconfigured/disabled/deleted index=indexname with source="source::vmstat" host="hostname" sourcetype="sourcetype::vmstat". So far received events from 1 missing index
09-13-2019
05:00 AM
Follow-up: RAID 5 was okay at first, but the relatively poor I/O perf caught up with us. Eventually I had to re-create the volumes as RAID 10. SmartStore made this fairly easy. We just updated these servers and went with fewer drives in RAID 0, relying on remote storage (S2) and clustering for all redundancy.
08-12-2019
07:40 AM
@star_gh: maybe, but we would need more info on what the problem is. Fair warning, I'm no systemd expert.