Dashboards & Visualizations

Panels randomly display "could not create search" messages instead of results. How do I troubleshoot this?

Cuyose
Builder

I have checked the obvious (base_max_searches, etc.), but I still have not been able to solve this issue of panels randomly returning a "could not create search" message instead of results.

One thing I have failed to find is a query against the internal logs on the search head dispatching the search that shows why this message is being displayed.

Has anyone been able to map a 1:1 error event in the _internal logs with occurrences of this happening?
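
For reference, this is the sort of 1:1 mapping I have been attempting. It is only a sketch, and it assumes (I have not confirmed this) that a failed dispatch shows up as a non-2xx POST to the search jobs endpoint in splunkd_ui_access:

index=_internal sourcetype=splunkd_ui_access method=POST uri="*/search/jobs*" status>=400
| table _time, user, status, uri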


valiquet
Contributor

You could try:

1- index=_internal sourcetype=splunkd user=YOURUSERNAME

2- index=_internal sourcetype=scheduler status=*

3- Check the JavaScript console.

4- index=_internal log_level!=INFO (see the narrowed sketch after this list)

5- Double-check limits.conf; perhaps you are reaching a global value.
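
For item 4, a narrower sketch that may help (assuming, which is not guaranteed, that the rejected dispatch is logged by the DispatchManager component on the search head):

index=_internal sourcetype=splunkd log_level!=INFO component=DispatchManager
| table _time, host, log_level, _raw

If a concurrency quota were the cause, you would typically see messages about the maximum number of concurrent searches being reached.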

Cuyose
Builder

Nothing abnormal is showing with any of those queries unfortunately.


rjthibod
Champion

I definitely have not.

You probably need to share more details about your dashboard, e.g., the XML, if you want more specific help that gets into the details of your dashboard / search design.


Cuyose
Builder

There is nothing unique about the dashboard, other than that some panels periodically fail with "could not create search." The panel loads just fine about 80% of the time.

Here is the query that runs it; nothing fancy:
index=web host=orpinginx* ( sourcetype=nginx:access OR sourcetype=access_combined_wcookie ) useragent="Mozilla*" referer_domain="https://cctoken.abc.com" uri_query="channelType=call" eventtype="nix-all-logs" uri_path="/"
| timechart count by status


cmerriman
Super Champion

Have you checked out this answer?
https://answers.splunk.com/answers/484453/display-error-could-not-create-search.html

What browser are you running this in?
In limits.conf, check base_max_searches, max_searches_per_cpu, and max_rt_search_multiplier.
How many queries are running at the same time? (See the concurrency sketch below.)
Are you using post-processing?
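
For the concurrency question, a sketch (assuming metrics.log on your version reports the search_concurrency group; I have not verified this against your environment):

index=_internal sourcetype=splunkd component=Metrics group=search_concurrency "system total"
| timechart max(active_hist_searches) AS active_historical_searches

Overlay that against the times the panels fail; if the peaks approach your computed limit, concurrency is the likely culprit.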


Cuyose
Builder

Yes, I checked that. We have ample concurrent searches allowed, but I was seeing some warnings related to instance-wide concurrency limits dictated by role. Our CPUs on the search heads are barely utilized, as is memory on the indexer peers and search heads.
We have 16 physical / 32 virtual cores on the search head.

base_max_searches = 100
max_searches_per_cpu = 4
max_rt_search_multiplier = 2
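
If I have the formula right (historical concurrency is roughly max_searches_per_cpu * number_of_CPUs + base_max_searches, and assuming Splunk counts all 32 logical cores, which I have not verified), that works out to 228 concurrent historical searches, which we are nowhere near:

| makeresults
| eval cpus=32, max_searches_per_cpu=4, base_max_searches=100
| eval max_hist_searches = max_searches_per_cpu * cpus + base_max_searches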

Here is the whole [search] stanza from btool limits on the search head:
[search]
addpeer_skew_limit = 600
allow_batch_mode = 1
allow_inexact_metasearch = 0
auto_cancel_after_pause = 0
base_max_searches = 100
batch_retry_max_interval = 300
batch_retry_min_interval = 5
batch_retry_scaling = 1.5
batch_search_max_index_values = 10000000
batch_search_max_pipeline = 2
batch_search_max_results_aggregator_queue_size = 100000000
batch_search_max_serialized_results_queue_size = 100000000
batch_wait_after_end = 900
cache_ttl = 300
chunk_multiplier = 5
default_allow_queue = 1
default_save_ttl = 604800
disabled = 0
dispatch_dir_warning_size = 5000
dispatch_quota_retry = 4
dispatch_quota_sleep_ms = 100
enable_cumulative_quota = 0
enable_datamodel_meval = 1
enable_history = 1
enable_memory_tracker = 0
failed_job_ttl = 86400
fetch_remote_search_log = disabled
fieldstats_update_freq = 0
fieldstats_update_maxperiod = 60
force_saved_search_dispatch_as_user = 0
idle_process_cache_search_count = 8
idle_process_cache_timeout = 0.5
idle_process_reaper_period = 30.0
idle_process_regex_cache_hiwater = 2500
launcher_max_idle_checks = 5
launcher_threads = -1
load_remote_bundles = 0
long_search_threshold = 2
max_chunk_queue_size = 10000000
max_combiner_memevents = 50000
max_count = 500000
max_history_length = 1000
max_id_length = 150
max_macro_depth = 100
max_mem_usage_mb = 1000
max_old_bundle_idle_time = 5.0
max_rawsize_perchunk = 100000000
max_results_perchunk = 2500
max_rt_search_multiplier = 2
max_searches_per_cpu = 4
max_searches_per_process = 500
max_subsearch_depth = 8
max_time_per_process = 300.0
max_tolerable_skew = 60
max_workers_searchparser = 5
min_freq = 0.01
min_prefix_len = 1
min_results_perchunk = 100
preview_duty_cycle = 0.25
process_max_age = 7200.0
process_min_age_before_user_change = -1
queued_job_check_freq = 1
realtime_buffer = 10000
reduce_duty_cycle = 0.25
reduce_freq = 10
remote_event_download_finalize_pool = 5
remote_event_download_initialize_pool = 5
remote_event_download_local_pool = 5
remote_timeline = 1
remote_timeline_connection_timeout = 5
remote_timeline_fetchall = 1
remote_timeline_min_peers = 1
remote_timeline_receive_timeout = 10
remote_timeline_send_timeout = 10
remote_timeline_touchperiod = 300
remote_ttl = 600
replication_file_ttl = 600
replication_period_sec = 60
result_queue_max_size = 100000000
results_queue_min_size = 10
rr_max_sleep_ms = 1000
rr_min_sleep_ms = 10
rr_sleep_factor = 2
search_process_memory_usage_percentage_threshold = 25
search_process_memory_usage_threshold = 4000
search_process_mode = auto
stack_size = 4194304
status_buckets = 0
status_cache_size = 10000
summary_mode = all
sync_bundle_replication = auto
target_time_perchunk = 2000
timeline_events_preview = 0
track_indextime_range = 1
truncate_report = 0
ttl = 600
unified_search = 0
use_bloomfilter = 1
write_multifile_results_out = 1
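
If your version exposes it, this should also report what the server actually computed from those settings (hedging here, since I am not certain the endpoint and field names are the same on every release):

| rest /services/server/status/limits/search-concurrency splunk_server=local
| table max_hist_searches, max_hist_scheduled_searches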
