Getting Data In

Data validation across multiple deployments

mbasharat
Builder

Hi,

We have an old Splunk architecture that we will be retiring, and the new architecture is already in place. The data feeds are currently configured to flow into both architectures in parallel until all validations are complete. We now need to validate that the data is the same in both deployments, i.e. Deployment A (old) and Deployment B (new), for all data sources.

I need guidance on the right steps and validations to confirm this. I am thinking of comparing event counts for each data source, data size, events, event structure, fields, etc. Can someone help me lay this out and point out anything I am missing, please?
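For example, for the event counts I am planning to run something like this on both deployments and diff the results (assuming all indexes are searchable from a search head in each deployment; the 24-hour window is just an example):

    | tstats count where index=* earliest=-24h by index sourcetype
    | sort index sourcetype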

Deployment A is on version 6.6.5 and the new Deployment B is on version 7.x.

Thanks in advance.

1 Solution

amitm05
Builder

Out of curiosity, I'd like to know why you set up a new deployment rather than upgrading your existing one. There must be a good reason, since it multiplies the amount of work involved.

Now, for your question, you can perform some of the following validations (example searches follow the list):
1. Compare licensing volumes, overall and per sourcetype.
2. Compare knowledge objects, e.g. fields, tags, event types, data models (DMs), lookups, macros, etc.
3. DM accelerations. These are important if any of your searches depend on them.
4. Users and roles.
5. Any external authentication you might be using.
6. Apps and add-ons. It is important to note whether any customizations were made to the apps and add-ons you already use; keeping a copy of the /local directory of each app and add-on may help.

You can choose how thorough you want to be when checking each of these points.
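For point 1, here's a minimal sketch of comparing indexed volume per sourcetype from the license usage logs, assuming you can search the _internal index on both deployments (the 30-day window is just an example):

    index=_internal source=*license_usage.log type="Usage" earliest=-30d
    | stats sum(b) as bytes by st
    | eval GB=round(bytes/1024/1024/1024,2)
    | sort - GB

Run it on each deployment and diff the output; st is the sourcetype as recorded in the license usage log.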
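For point 2, REST can enumerate most knowledge object types so you can compare the lists between deployments. A sketch using saved searches as an example (the same pattern works for the other object endpoints):

    | rest /servicesNS/-/-/saved/searches
    | table title eai:acl.app eai:acl.owner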
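For point 3, a quick spot-check that an accelerated data model is populated on the new deployment; <datamodel_name> here is a placeholder for your own model:

    | tstats summariesonly=true count from datamodel=<datamodel_name>

A count of zero, or one far below the old deployment's, suggests the acceleration has not finished building or is misconfigured.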
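For point 4, a sketch that lists users and their roles via REST so the two deployments can be compared side by side:

    | rest /services/authentication/users splunk_server=local
    | table title roles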
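For point 6, a sketch listing installed apps and their versions via REST; diffing this output between deployments highlights missing apps or version drift (customizations in /local still need a file-level comparison):

    | rest /services/apps/local splunk_server=local
    | table title version disabled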



amitm05
Builder

Please accept this as the solution if it answers your query. Thanks.
