Here is my use case:
Every hour, I need to download a .csv file from my server via a REST API, and then index those .csv files with Splunk.
My Approach:
I wrote a Splunk modular input app using the Splunk SDK that downloads the CSV files to a user-specified folder on the Splunk file system, and
Splunk then monitors that entire folder/directory.
Could you validate this approach? I'm also looking for ways to optimize it.
That's a good way to do it. Another way would be to install a universal forwarder on the CSV server and have it send the files to Splunk as they are created.
But is downloading onto the Splunk server a good practice? And is it possible to write apps on a universal forwarder?
Best practice is to use a forwarder.
Universal forwarders don't run apps, but you shouldn't need your modular input if a forwarder runs on the server where the CSVs reside. If you really need the app, consider using a heavy forwarder instead.
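With a universal forwarder on the CSV server, all you need is a monitor stanza in inputs.conf pointing at the folder where the files land. A typical stanza looks like this; the path, sourcetype, and index names here are placeholders for your own values:

```
[monitor:///opt/data/csv]
disabled = false
sourcetype = csv
index = main
whitelist = \.csv$
```

The forwarder then tails that directory and ships any new .csv file to the indexers as it appears, which removes the need to pull files onto the Splunk server itself.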