Knowledge Management

How to change the owner of the ACCELERATE Data Model saved searches so they do not run out of quotas?

dsbruce
Explorer

We installed splunk_app_aws with default settings. The next day ALL the saved searches were on the Skipped Search report because they were running as "nobody" and needed updated search quotas.

We changed the install so that all the objects are owned by "admin": we modified the metadata files, and all the objects now show in the web GUI as owned by "admin". All the reports run fine now except for the ACCELERATE searches below.
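For reference, the ownership change in the app's metadata/local.meta was along these lines (the stanzas below cover all saved searches and data models in the app; your own file may set owners per object instead):

[savedsearches]
owner = admin
access = read : [ * ], write : [ admin ]
export = system

[datamodels]
owner = admin
access = read : [ * ], write : [ admin ]
export = system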

I have been unable to locate these objects to change the owner, and I am unsure how to even do this.
I can see the data models but not the ACCELERATE searches.
How and what do I change to get these owned by "admin" so they will not run out of quotas?

scheduler.log error
INFO SavedSplunker - savedsearch_id="nobody;splunk_app_aws;ACCELERATE_DM_splunk_app_aws_CloudFront_Access_Log_ACCELERATE", search_type="datamodel_acceleration", user="nobody", app="splunk_app_aws", savedsearch_name="ACCELERATE_DM_splunk_app_aws_CloudFront_Access_Log_ACCELERATE", priority=default, status=skipped, reason="The maximum number of concurrent historical scheduled searches on this cluster has been reached", concurrency_category="historical_scheduled", concurrency_context="cluster-wide", concurrency_limit=270, scheduled_time=1517805900, window_time=0

skipped search - savedsearch_name
ACCELERATE_DM_splunk_app_aws_CloudFront_Access_Log_ACCELERATE
ACCELERATE_DM_splunk_app_aws_Detailed_Billing_ACCELERATE
ACCELERATE_DM_splunk_app_aws_Instance_Hour_ACCELERATE
ACCELERATE_DM_splunk_app_aws_S3_Access_Log_ACCELERATE
ACCELERATE_705E6442-8741-4922-A554-A7C0D8D9FD7D_splunk_app_aws_admin_308f04f30c2782b1_ACCELERATE
ACCELERATE_705E6442-8741-4922-A554-A7C0D8D9FD7D_splunk_app_aws_admin_945d9afb3516cfdf_ACCELERATE
ACCELERATE_705E6442-8741-4922-A554-A7C0D8D9FD7D_splunk_app_aws_admin_a96344b626325889_ACCELERATE
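For completeness, the list above can be reproduced from the scheduler logs with a search along these lines (default _internal index and field names assumed; shown for illustration):

index=_internal sourcetype=scheduler status=skipped app=splunk_app_aws
| stats count by savedsearch_name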

Thank-you

harsmarvania57
SplunkTrust

Hi @dsbruce,

You are running out of resources in your SH cluster, so the SH cluster is throwing the message "The maximum number of concurrent historical scheduled searches on this cluster has been reached". In this case you need to add more search heads to your SH cluster, OR you can schedule some of your searches to run at different (odd) times, for example moving "every 15 minutes" searches to minutes 01, 16, 31, 46, so that the scheduled-search load on your SH cluster is distributed and the SH cluster does not run out of resources at certain intervals.
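For example, a report scheduled every 15 minutes can be offset in savedsearches.conf with a cron schedule along these lines (the search name here is illustrative):

[Example Every 15 Minute Search]
# run at minutes 1, 16, 31 and 46 instead of 0, 15, 30, 45
cron_schedule = 1,16,31,46 * * * *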


dsbruce
Explorer

Thank you for the input, but we know this is not our issue. The issue is how we change these jobs from running as "nobody" when the app's objects are owned by "admin" and the ACCELERATE searches do not show up under Searches or Data Models in Settings.
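A REST search along these lines will at least list the ACCELERATE searches and who owns them (endpoint and eai:acl fields as in core Splunk; shown for illustration):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title=ACCELERATE_*
| table title eai:acl.app eai:acl.owner eai:acl.sharing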


sutanunandigram
Explorer

What was the solution?
