Knowledge Management

Best practices for installing Splunk with a NAS

olivier120987
New Member

Hey there, I want to install Splunk (standalone) on one machine that has a NAS drive mounted. I know best practices say I should install Splunk, or at least keep my indexes, on /opt/ for performance reasons.

I have 5.6G free on /opt/, which could lead to some disk space issues, and 5T on a mounted NAS partition.

I've installed it on /mounted_NAS/ for now, but I wanted to double-check with you guys whether it's recommended to do this differently. For instance, I could install everything on /opt/ (5G) and keep my indexes on /mounted_NAS/ (5T).

Or I could just leave everything installed on /mounted_NAS.

Worst case, I could also repartition my disk and grant more space to /opt/ (10-15G), but I'd like to avoid redoing this.

Thoughts?
Thank you

1 Solution

renems
Communicator

As martin_mueller already mentioned: your performance on the indexing side greatly depends on how fast your storage is. In my experience, nine times out of ten a NAS is not the fastest storage available.

However:
However: most searches run in Splunk cover the last 24 hours or so, and your newly written data is also quite recent. So it's particularly the hot/warm buckets that have the largest impact on performance (by far).

My recommendation would be to store your hot/warm buckets on local storage (/opt?), assuming that this is the fastest storage you have available, and store the rest on your NAS. An easy way to achieve this is by using volumes in your indexes.conf. It is explained in a little more detail in the documentation.
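A minimal sketch of that volume setup, using the default main index as the example. The paths /opt/splunk_hot and /mounted_NAS/splunk_cold and the size limits are illustrative assumptions to adapt to your actual mount points and the 5G / 5T capacities from the question:

```ini
# indexes.conf -- paths and sizes below are illustrative assumptions

[volume:fast]
path = /opt/splunk_hot
# stay well under the ~5.6G free on /opt
maxVolumeDataSizeMB = 4000

[volume:slow]
path = /mounted_NAS/splunk_cold
maxVolumeDataSizeMB = 4000000

[main]
# hot/warm buckets on the fast local volume
homePath = volume:fast/defaultdb/db
# cold buckets roll to the NAS
coldPath = volume:slow/defaultdb/colddb
# thawedPath cannot reference a volume; it must be a literal path
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
```

After editing, restart Splunk for the changes to take effect.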


martin_mueller
SplunkTrust
SplunkTrust

If you're looking for anything closely resembling performance, you'll want fast, large disks for your indexes.
If you're just playing around, any disk will usually do.
Where in the filesystem you put your indexes - /opt, /foo, etc. - doesn't matter. What matters is the IO below that.
Whether 10-15G would be enough depends on what you're doing with that instance.

TL;DR: It depends.
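Since the IO below the mount point is what matters, a quick way to compare the two candidate locations is a sequential write test with dd. A rough sketch, assuming /mounted_NAS is the NAS mount point from the question; a proper benchmark tool such as fio gives more representative numbers, especially for random IO:

```shell
#!/bin/sh
# Compare sequential write throughput on each candidate mount point.
# Paths are assumptions -- substitute your actual mount points.
for target in /opt /mounted_NAS; do
  testfile="$target/splunk_io_test.tmp"
  echo "Testing $target ..."
  # conv=fdatasync forces the data to disk before dd reports,
  # so the page cache doesn't inflate the result
  dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
  rm -f "$testfile"
done
```

The last line dd prints includes the effective MB/s, which you can compare directly between the two targets.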
