Hi,
@martin_mueller and I found a solution for our problem, alekksi.
It seems to be a bug in Splunk_TA_nix. We have a small workaround for the problem.
Go into the default/props.conf of Splunk_TA_nix and remove "CHECK_METHOD = modtime" from the following two stanzas:
[source::(....(config|conf|cfg|inii|cfg|emacs|ini|license|lng|plist|presets|properties|props|vim|wsdl))]
sourcetype = config_file
CHECK_METHOD = modtime
[config_file]
LINE_BREAKER = ^((?!))$
TRUNCATE = 1000000
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE
CHECK_METHOD = modtime
KV_MODE = none
pulldown_type = true
SEGMENTATION-all = whitespace-only
SEGMENTATION-inner = whitespace-only
SEGMENTATION-outer = whitespace-only
SEGMENTATION-standard = whitespace-only
LEARN_MODEL = false
LEARN_SOURCETYPE = false
Restart your Splunk servers, and the runaway indexing should stop. We tested it briefly and it seems to work well.
Kind regards
... View more
Yes, I agree. I tested it on some machines this morning, and the behavior is the same everywhere: the files are continuously re-indexed.
I will try to find a solution for this, because I need this feature in the near future.
... View more
I have read it again... OK, it's not a problem with indexing on your search heads. The Splunk TA for *nix monitors the conf files in /etc if you have enabled that input.
Check whether the input is enabled in inputs.conf of the Unix TA (Splunk_TA_nix); just search the file for "/etc".
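For reference, the stanza to look for usually resembles the following sketch (the exact settings in your TA version may differ):

```ini
# Splunk_TA_nix .../inputs.conf
[monitor:///etc]
recursive = true
disabled = false
```

If `disabled` is set to `false` (or missing), the /etc monitoring input is active.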
... View more
Hi,
what is your outputs.conf configuration for the search heads? Did you configure the following in outputs.conf? Your search heads are trying to index data, which is something they should not do.
To disable indexing on your search heads, add the following to outputs.conf:
[indexAndForward]
index = false
Then configure the search heads to send their data to your indexers via outputs.conf.
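The forwarding part could look like this minimal sketch (the group name, hostnames, and port are placeholders for your environment):

```ini
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```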
kind regards
... View more
Hi,
you cannot install the Splunk App for VMware by uploading the whole package through the UI.
The installation of the VMware app is well documented; just follow this link:
http://docs.splunk.com/Documentation/VMW/3.3.2/Installation/InstalltheSplunkAppforVMwareinadistributeddeployment
For a clustered environment there are some differences, of course.
Kind regards
... View more
Hi,
this post can probably help you:
https://answers.splunk.com/answers/62908/universal-forwarder-not-load-balancing-to-indexers.html
In addition, you can check the configuration parameter "forceTimebasedAutoLB"; this often gets rid of such problems.
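If you want to try it, the setting goes into the tcpout group stanza of the forwarder's outputs.conf, for example (group name, servers, and the frequency value are just placeholders):

```ini
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
autoLBFrequency = 30
forceTimebasedAutoLB = true
```

With this, the forwarder switches indexers on the time interval even in the middle of a long-running stream.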
kind regards
... View more
Hi,
just let me know if I have misunderstood something.
Your main problem is that you need to change the "SPLUNK_DB" variable, am I right?
To change the "SPLUNK_DB" variable, just do this:
Stop Splunk.
Unset the SPLUNK_DB variable with "unset SPLUNK_DB".
Then go to "$SPLUNK_HOME/etc/splunk-launch.conf" and change the "SPLUNK_DB" variable to the path of your choice.
Start Splunk.
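The relevant line in splunk-launch.conf would then look something like this (the path is just an example):

```ini
# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/data/splunk/var/lib/splunk
```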
I think there should be no problem migrating your indexes as described in the docs; the documented procedure should work for this scenario.
For the .dat files I'm not 100% sure, but I think they hold the next bucket ID for the index.
... View more
Hi,
First: It is not possible to delete specific events from your index and get the disk space back. You need to clean the whole index to do that.
Second: There is a role in Splunk that gives you the option to delete events based on a specific search. This command will not delete the events from your disk; it will only hide them from search.
Search for "can_delete" in the Splunk docs. The role is not assigned to anyone by default; even the admin doesn't have it.
!!!BUT: Be careful when executing the "delete" command. It cannot be undone!!!
... View more
Hi bugnet,
you can send unwanted events to the nullQueue. The link below shows you an example.
In your environment you can do this configuration on the heavy forwarders or on the indexers, as you like; I would prefer to do it on the heavy forwarders.
For the regex part, you can for example take the EventID as the unique identifier.
http://docs.splunk.com/Documentation/Splunk/6.5.1/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest
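As a sketch, filtering on a hypothetical EventID via props.conf and transforms.conf could look like this (the sourcetype, transform name, and EventCode are just examples for your environment):

```ini
# props.conf
[WinEventLog:Security]
TRANSFORMS-null = discard_eventid

# transforms.conf
[discard_eventid]
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue
```

Everything matching the regex is routed to the nullQueue and never indexed; everything else passes through unchanged.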
Kind regards
... View more
Hi,
you can always contact Splunk to get information about licensing. Maybe this will help you too:
https://www.splunk.com/en_us/solutions/industries/higher-education/academic-licenses.html
kind regards.
... View more
Hi,
this is more a cron question than a Splunk one, but it's simple. You can't do it with a single cron definition like * * * * *.
On the Linux side, for example, you would need an extra script for this, or you would need three cron definitions.
I think the fastest way is to set up the three alerts. It is not pretty, but it works.
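For example, if the three alert times were 06:00, 14:30, and 23:15 (made-up times), the three alerts would each get their own cron schedule:

```ini
# Three Splunk alerts, each with its own cron schedule (example times)
0 6 * * *     # alert 1: daily at 06:00
30 14 * * *   # alert 2: daily at 14:30
15 23 * * *   # alert 3: daily at 23:15
```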
kind regards
... View more
In addition, with forwarders you can compress your data before sending it to the indexers, you can encrypt the connection between the Splunk instances, you can assign sourcetypes on the forwarder side to take some load off the indexers, and so on. There are many benefits to using forwarders.
Put simply: if you can use them, use them.
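For example, compression and SSL are both switched on in the forwarder's outputs.conf; a sketch with placeholder paths and hostnames:

```ini
[tcpout:my_indexers]
server = indexer1.example.com:9997
compressed = true
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = <your_cert_password>
```

Note that `compressed = true` must also be set on the receiving side in inputs.conf, or the indexer will reject the stream.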
... View more
Just look at the following link. It explains which changes trigger a restart or a reload of the indexer cluster peers:
http://docs.splunk.com/Documentation/Splunk/6.5.0/Indexer/Updatepeerconfigurations
Kind regards
... View more
The maximum size of a warm bucket equals the maximum size of a hot bucket. The difference between hot and warm buckets is that a hot bucket is open for write operations; once a hot bucket is full, it is rolled to a read-only warm bucket. So, as you can see, you define the warm bucket size through your hot bucket settings.
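In indexes.conf this shared size limit is controlled by `maxDataSize`; a sketch with an example index name and value:

```ini
# indexes.conf
[my_index]
# Maximum size a hot bucket can reach before it rolls to warm;
# warm buckets therefore never exceed this size either.
maxDataSize = auto_high_volume
```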
Is this what you wanted to know?
... View more
Correct. Timestamps with embedded timezone information are automatically normalized to UTC on the indexer. This ensures consistent time data across the environment: if you have users in different timezones, each user sees the timestamps rendered in their own configured timezone.
... View more
Can you please provide the sourcetype definition from props.conf? It's easier to fix problems when we have the full information.
Some info first: Splunk indexers normalize every timestamp to UTC. Just take a look at this answers post:
https://answers.splunk.com/answers/135193/splunk-indexing-and-time-zone-normalization.html
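If the timestamps are being interpreted in the wrong timezone, you can pin the zone per sourcetype in props.conf; a sketch (the sourcetype and zone are just examples):

```ini
# props.conf on the indexer or heavy forwarder
[my_sourcetype]
TZ = Europe/Berlin
```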
Let me know if this is helpful.
kind regards
... View more
Hi,
Splunk always processes props.conf extractions in sequence, which means that if you define the same field in two or more extractions, it will always end up with the value of the last EXTRACT applied in props.conf.
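A sketch of what I mean (the sourcetype, extraction class names, and field are made up):

```ini
# props.conf
[my_sourcetype]
EXTRACT-a_first  = status=(?<status>\w+)
EXTRACT-b_second = result=(?<status>\w+)
# If both regexes match an event, the search-time value of "status"
# comes from the extraction applied last.
```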
Just give me feedback if this is what you wanted to know; I'm not quite sure.
... View more
Hi,
you have multiple configuration parameters for rolling buckets.
Hot to warm:
maxHotBuckets = <integer>
maxHotSpanSecs = <integer>
Hot/warm to cold:
homePath.maxDataSizeMB = <integer> (this is the maximum data size for the hot and warm DB)
Warm to cold:
maxWarmDBCount = <integer>
Cold to frozen:
coldPath.maxDataSizeMB = <integer>
And the last two are for the retention of the index:
maxTotalDataSizeMB = <integer>
frozenTimePeriodInSecs = <integer>
Check these parameters in the indexes.conf documentation.
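Put together in indexes.conf, a sketch might look like this (index name and all values are examples only; size them for your own retention needs):

```ini
# indexes.conf
[my_index]
maxHotBuckets = 3
maxHotSpanSecs = 86400
homePath.maxDataSizeMB = 100000
maxWarmDBCount = 300
coldPath.maxDataSizeMB = 500000
maxTotalDataSizeMB = 700000
frozenTimePeriodInSecs = 15552000
```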
Hope this helps
kind regards
... View more
Hi,
you can delete events by piping them to the delete command. This will not free any disk space; it only flags the events as deleted in Splunk so they no longer show up in future searches. But be careful with this: the command cannot be undone. Once deleted, the events are gone from search.
You need to get the "can_delete" role; no one has this role by default. Log in to your admin account, go to Settings --> Access Control --> Users, and add the "can_delete" role to your user.
Before piping anything to delete, you should verify that your search returns exactly the events you want to remove.
Example:
<your_search> | delete
... View more
Hi,
this should answer your question:
https://answers.splunk.com/answers/127729/splunk-app-for-netapp-data-collection-node.html
kind regards
... View more
Hi,
I think you should take a look at this answers post. It's a smart solution for your problem, using a little lookup file:
https://answers.splunk.com/answers/422889/how-to-search-for-newly-added-servers-by-comparing.html
kind regards
... View more
Hi,
yes, there are predefined sourcetypes.
This will help you:
http://docs.splunk.com/Documentation/AddOns/released/NGINX/Configureinputsv2monitor
Regards
... View more