Load csv from GCP into a KVStore lookup using the Python SDK

cdhippen
Path Finder

We currently have a 45 MB CSV file (pulled down from GCP) that we load into a Splunk KV store every day. I want to accomplish this via the Python SDK, but I'm running into trouble loading the records.

The only way I can find to update a KV store is the collection's data.insert() method, which as far as I can tell accepts only one document at a time. With 250k rows in this file, I can't afford to wait for every row to upload individually each day.
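
Digging through the SDK source, I did spot a batch_save() method on KVStoreCollectionData that looks like it accepts multiple documents per call, though I haven't confirmed the server-side limit (I've read it's capped by max_documents_per_batch_save in limits.conf, reportedly 1000 by default). Something like this chunked upload is what I had in mind:

# Sketch of chunked uploads via KVStoreCollectionData.batch_save();
# BATCH_SIZE is a guess pending confirmation of the server's per-request cap.
BATCH_SIZE = 1000

collection = service.kvstore['learning_center']
for start in range(0, len(result), BATCH_SIZE):
    chunk = result[start:start + BATCH_SIZE]
    # each document is passed as a separate positional argument
    collection.data.batch_save(*chunk)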

This is what I have so far:

import json

import numpy as np
import pandas as pd
from splunklib import binding, client

data_file = '/path/to/file.csv'
collection_name = 'learning_center'

username = 'user'
password = 'splunk_pass'
connectionHandler = binding.handler(timeout=12400)
connect_kwargs = {
    'host': 'splunk-host.com',
    'port': 8089,
    'username': username,
    'password': password,
    'scheme': 'https',
    'autologin': True,
    'handler': connectionHandler
}
# Retry until the connection succeeds (the host intermittently returns 504s).
while True:
    try:
        service = client.connect(**connect_kwargs)
        service.namespace['owner'] = 'nobody'  # Splunk's shared owner is lowercase
        break
    except binding.HTTPError:
        print('Splunk 504 Error')

kv = service.kvstore

# Drop the old collection before reloading; skip the delete on the first
# run, when the collection does not exist yet.
if collection_name in kv:
    kv[collection_name].delete()

df = pd.read_csv(data_file)
# replace() returns a new DataFrame rather than modifying in place, so the
# result must be assigned back (pd.np was removed in pandas 1.0; use numpy).
df = df.replace(np.nan, '', regex=True)
df['_key'] = df['key_field']
result = df.to_dict(orient='records')

# The collections API expects KV store field types ('string', 'number',
# 'bool'), not Python type names, so map the types in the first record.
type_map = {'str': 'string', 'int': 'number', 'float': 'number', 'bool': 'bool'}
fields = {name: type_map.get(type(value).__name__, 'string')
          for name, value in result[0].items()}
kv.create(name=collection_name, fields=fields)

# One REST round trip per document -- this is the slow part.
for row in result:
    kv[collection_name].data.insert(json.dumps(row))
transforms = service.confs['transforms']
transforms.create(name='learning_center_lookup',
                  **{'external_type': 'kvstore',
                     'collection': collection_name,
                     'fields_list': '_key, userGuid'})
# transforms['learning_center_lookup'].delete()
collection = service.kvstore[collection_name]
print(collection.data.query())

In addition to the load taking forever for a quarter million records, inserts kept failing on rows with NaN values: json.dumps() emits bare NaN, which isn't valid JSON, and patching the serialized strings afterwards never worked. The only fix I've found is scrubbing the NaNs in the DataFrame itself before converting to dicts (as above, and in the minimal sketch below), but I'd like to know if there's a better way.
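
A minimal sketch of the scrubbing step on its own, assuming empty strings are an acceptable stand-in for missing values:

import numpy as np
import pandas as pd

df = pd.read_csv(data_file)
# replace() and fillna() return a new DataFrame unless inplace=True,
# so the result must be assigned back before calling to_dict()
df = df.fillna('')
result = df.to_dict(orient='records')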

cdhippen
Path Finder

Is there no way to do this with the Splunk Python SDK?
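
If batch_save() turns out not to exist in the splunklib version we're pinned to, I assume I could fall back to POSTing to the batch_save REST endpoint directly through the SDK's authenticated session; a hypothetical sketch (endpoint path and header handling unverified):

import json

# Hypothetical fallback: hit storage/collections/data/<collection>/batch_save
# directly; splunklib's post() forwards 'body' as the raw request payload.
payload = json.dumps(result[:1000])
service.post('storage/collections/data/learning_center/batch_save',
             headers=[('Content-Type', 'application/json')],
             body=payload)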
