I am running a scheduled export job using Splunk Hadoop Connect. When the job ran for the first time, the output was written to a new .csv file. An hour later, when the job reran, it created another new file instead of appending the output to the existing .csv file.
I am confused.
Can anyone tell me how to write the output to the same .csv file instead of creating a new one?
What you see is the default behavior of Hadoop Connect. As the job runs, the Splunk platform processes chunks of data received from the search and creates compressed files locally on the search head. These files are moved to HDFS (or the mounted file system) when any of the following occurs: a file reaches 64MB, the files cumulatively consume more than 1GB, or the search finishes successfully. Because the search finishing is one of the flush triggers, every run closes out its own file(s), which is exactly the behavior you observed. You can change these limits in the Export Defaults configuration section.
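To make the roll-over logic concrete, here is a small illustrative sketch in Java of the three flush conditions described above. This is my own pseudocode, not Hadoop Connect's actual implementation; the names, and the exact byte thresholds as defaults, are assumptions for illustration:

    // Illustrative only: when does a locally spooled chunk get rolled to HDFS?
    public class FlushPolicySketch {
        static final long MAX_FILE_BYTES  = 64L << 20; // per-file limit (64MB default)
        static final long MAX_SPOOL_BYTES = 1L  << 30; // cumulative local limit (1GB)

        // Flush when any one of the three conditions is met.
        static boolean shouldFlush(long fileBytes, long spoolBytes, boolean searchDone) {
            return fileBytes >= MAX_FILE_BYTES
                || spoolBytes >= MAX_SPOOL_BYTES
                || searchDone;
        }

        public static void main(String[] args) {
            System.out.println(shouldFlush(70L << 20, 100L << 20, false)); // true: file hit 64MB
            System.out.println(shouldFlush(10L << 20, 2L  << 30, false));  // true: spool over 1GB
            System.out.println(shouldFlush(1L  << 20, 1L  << 20, true));   // true: search finished
            System.out.println(shouldFlush(1L  << 20, 1L  << 20, false));  // false: keep buffering
        }
    }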
The reason most people choose to stay with these default settings (64MB or 128MB) is that Hadoop itself processes files in units of its block size, so keeping export files at or below the HDFS block size lets each file map cleanly onto a block.
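If you ultimately need everything in a single file, the usual approach is to merge the per-run files after the fact rather than append: as far as I know Hadoop Connect has no append option, and HDFS itself is designed around write-once files. You can merge with hadoop fs -getmerge, or programmatically with the FileSystem API. Here is a minimal Java sketch using Hadoop 2.x's FileUtil.copyMerge (this helper was removed in Hadoop 3.0); the paths are hypothetical placeholders for wherever your export job writes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class MergeExports {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical paths: point these at your actual export location.
            Path srcDir  = new Path("/exports/myjob");         // directory of per-run files
            Path dstFile = new Path("/exports/myjob_all.csv"); // single merged output

            // copyMerge concatenates every file under srcDir into dstFile.
            FileUtil.copyMerge(fs, srcDir, fs, dstFile,
                    false /* keep the source files */, conf, null /* no separator */);

            fs.close();
        }
    }

One side note: if the exported files are gzip-compressed, straight byte concatenation like this still produces a valid gzip file, because concatenated gzip members decompress as one stream.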