Getting Data In

How can data cloning be done through a heavy forwarder?

pavanae
Builder

I have a test environment (search head) in which there aren't any events. Now I want to do some data cloning and get some dummy events into my testing search head. For that, I'm thinking of getting those events from a heavy forwarder, but I'm not sure how it can be done. I'm not much of a technical person, so a clear explanation would help me understand.

Are there any .conf files to be configured to get that done? If yes, where should I configure them: on the heavy forwarder side or on the search head side?

1 Solution

gcusello
SplunkTrust

From your description I understand that you are using a distributed environment with at least one Indexer and one Search Head.

Do you need to modify the parsing phase (for example, change props.conf and transforms.conf to index logs in a different way)?
If not, and you only need an environment in which to develop and test your apps, you could use a search head (not the production one) that uses the production Indexers as search peers, without any data cloning.
In this way you could use the real data for your development without modifying your production apps. If instead you want to index other new or different logs, you could put them in a test index on the Indexers.
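
For example, you could register the production Indexers as search peers from the test search head's CLI (a sketch; host names and credentials are hypothetical, and you would repeat the command once per indexer):

    # Run on the test search head
    $SPLUNK_HOME/bin/splunk add search-server https://prod-idx1.example.com:8089 \
        -auth admin:changeme -remoteUsername admin -remotePassword prodpass

This populates the [distributedSearch] stanza of distsearch.conf on the search head and distributes the trust key to the peer for you.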

The only problem is the additional load that your development searches put on the indexer machines: you should estimate how high this overload will be. Is it heavy, continuous development activity, or only an occasional modification?
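
That said, if you really do need data cloning through a heavy forwarder, it is configured on the forwarder side in outputs.conf: listing more than one target group in defaultGroup makes the forwarder clone every event to each group. A minimal sketch, assuming hypothetical host names and the default receiving port:

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = prod_indexers, test_indexers   # two groups = every event is cloned to both

    [tcpout:prod_indexers]
    server = prod-idx1.example.com:9997

    [tcpout:test_indexers]
    server = test-idx1.example.com:9997

No search-head-side configuration is needed for the cloning itself; the test side simply receives a second copy of the data.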

I hope this is clear.

Bye.
Giuseppe




pavanae
Builder

Sorry for the confusion, but we are using a standalone environment for testing. In that case, what should be done?


Marc785
Explorer

Hey Pavanae,

I understand wanting to clone your data for testing purposes. But honestly, if you're only using a standalone environment, rather than a distributed one, for testing, training, playing, etc., why not just use the Eventgen app on Splunkbase? It has tons of sample logs from common sources, and you can also plug in your own logs to more closely simulate the data within your environment. The best part? You can have data flowing in under 20 minutes with minimal but attentive configuration. Most of the apps I've seen have some type of eventgen component in them. There's a minimal configuration sketch after the links below.

https://splunkbase.splunk.com/app/1924/

or

https://github.com/splunk/eventgen
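
As a rough idea of what the configuration looks like, here is a minimal eventgen.conf sketch. It assumes a hypothetical sample file, sample.log, dropped into the app's samples/ directory; the setting names below come from the Eventgen docs, but check them against the version you install:

    [sample.log]
    mode = sample
    interval = 60        # generate a batch of events every 60 seconds
    count = 20           # 20 events per batch
    outputMode = spool   # write events to Splunk's spool directory for indexing
    spoolDir = $SPLUNK_HOME/var/spool/splunk

    # Rewrite the timestamps in the sample so the events always look current
    token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
    token.0.replacementType = timestamp
    token.0.replacement = %Y-%m-%d %H:%M:%S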

Happy Splunking!

  • Marc

somesoni2
Revered Legend

If you just have a Search Head (no indexer) and want to be able to see data (say, from PROD or any other environment), then you can simply add the Indexers that hold the data you need as search peers of your test Search Head.
Caution: 1) Be careful to keep the data-delete capability (can_delete) out of all the roles on this test search head, as you probably want read-only access to the data. 2) This will put some extra load (from the activity happening on the test Search Head) on the Indexers.
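
One way to handle point 1 (a sketch, with a hypothetical role name) is to give test users a custom role that inherits from the built-in user role, which does not carry can_delete:

    # authorize.conf on the test search head
    [role_test_readonly]
    importRoles = user            # the built-in "user" role has no can_delete capability
    srchIndexesAllowed = *        # allow searching all indexes
    srchIndexesDefault = main

You can double-check with a search such as | rest /services/authorization/roles | search capabilities=can_delete, which should return only the built-in can_delete role.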
