
unable_to_write_batch in DB Connect add-on

GenRockeR
Explorer

When installing and configuring the add-on, the following problem occurred:

2018-08-21 18:10:29.047 +0300 [QuartzScheduler_Worker-6] INFO org.easybatch.core.job.BatchJob - Job 'FULL_DB' started
2018-08-21 18:10:29.301 +0300 [QuartzScheduler_Worker-6] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=1000
2018-08-21 18:10:29.301 +0300 [QuartzScheduler_Worker-6] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector
2018-08-21 18:10:29.322 +0300 [QuartzScheduler_Worker-6] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=1000
2018-08-21 18:10:29.559 +0300 [QuartzScheduler_Worker-6] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
java.io.IOException: HTTP Error 400: Bad Request

I use Splunk DB Connect 3.1.3 with jre1.8.0_181 and the PostgreSQL JDBC driver on Windows.
When I test my SQL query with the DB Connect SQL Explorer, I get the correct data from my PostgreSQL database.
When I use an input in rising or batch mode, I get this HTTP error.

1 Solution

thomasroulet
Path Finder

The HTTP Event Collector expects to receive event times in the format:

timestamp.microseconds

Splunk DB Connect produces these values via Java. If the JVM's default locale uses a comma as the decimal separator, the problems start...
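
For illustration only (this is not DB Connect's actual code), here is a minimal Java sketch of the failure mode: formatting an epoch timestamp under a comma-decimal locale such as French yields a value HEC cannot parse.

import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        double epochSeconds = 1534864229.047; // epoch seconds with a fractional part

        // Under a comma-decimal locale, the separator breaks HEC's time parsing:
        System.out.println(String.format(Locale.FRENCH, "%.3f", epochSeconds));  // 1534864229,047

        // Under an English locale, the value is what HEC expects:
        System.out.println(String.format(Locale.ENGLISH, "%.3f", epochSeconds)); // 1534864229.047
    }
}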

To solve this problem :

In Splunk DB Connect > Configuration > Settings > General, add the following option under JVM Options:

-Duser.language=en

Save; the Java server restarts.
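
To confirm the flag takes effect, a quick check can help (the class name here is my own, hypothetical): print the default locale and a locale-sensitive formatted timestamp.

import java.util.Locale;

public class LocaleCheck {
    public static void main(String[] args) {
        // The default locale is derived from -Duser.language / -Duser.country (or the OS).
        System.out.println("default locale: " + Locale.getDefault());
        // String.format without an explicit Locale uses the default format locale:
        System.out.println(String.format("%.3f", 1534864229.047));
    }
}

Run it with java -Duser.language=en LocaleCheck; the second line should print 1534864229.047 with a dot rather than a comma.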


effem
Communicator

Awesome, that resolved the issue for me. (I have a server with a German locale.)


nachtgold
New Member

I had the same error with DB Connect 3.1.4, jre1.8.0_201, and the PostgreSQL JDBC driver 42.2.5.

Querying the data with dbxquery is no problem, and neither was defining the input. But on each execution, the error appears.

Update: I tried a Teradata Express with terajdbc4 16.20.00.10. Same error.

Sample:

2019-01-18 17:35:00.004 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Job 'localteradata' starting
2019-01-18 17:35:00.005 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Batch size: 1.000
2019-01-18 17:35:00.005 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Error threshold: N/A
2019-01-18 17:35:00.005 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Jmx monitoring: false
2019-01-18 17:35:00.005 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=opening_db_reader task=localteradata
2019-01-18 17:35:00.005 +0100  [QuartzScheduler_Worker-1] DEBUG com.splunk.dbx.connector.ConnectorFactory - action=database_connection_established connection=localteradata jdbc_url=jdbc:teradata://192.168.224.128 user=dbc is_pooled=true number_of_connection_pools=1
2019-01-18 17:35:00.007 +0100  [QuartzScheduler_Worker-1] DEBUG c.splunk.dbx.connector.utils.JdbcConnectorFactory - action=create_connector_by_name class=com.splunk.dbx.connector.connector.impl.TeraDataConnectorImpl
2019-01-18 17:35:00.007 +0100  [QuartzScheduler_Worker-1] INFO  c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=db_input_record_reader_is_opened task=localteradata query=select * from dbc.dbcinfo;
2019-01-18 17:35:00.007 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=exec_batch_input_query limit=0
2019-01-18 17:35:00.007 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.dbx.connector.connector.impl.JdbcConnectorImpl - action=connector_executes_query sql='select * from dbc.dbcinfo;' args=null limit=0 fetch_size=300
2019-01-18 17:35:00.016 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Job 'localteradata' started
2019-01-18 17:35:00.017 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.d.t.p.ExtractIndexingTimeProcessor - action=setting_event_time_to_current_time input=localteradata time=1547829300017
2019-01-18 17:35:00.017 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.d.task.processors.EventPayloadProcessor - action=analyzing_result_set_metadata
2019-01-18 17:35:00.017 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.d.task.processors.EventPayloadProcessor - action=extracting_field_names_finished fields=[InfoKey, InfoData]
2019-01-18 17:35:00.018 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=start_format_hec_events_from_payload record=Record: {header=[number=1, source="localteradata", creationDate="2019-01-18 17:35:00.017"], payload=[EventPayload{fieldNames=[InfoKey, InfoData], row=[LANGUAGE SUPPORT MODE, Standard]}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=finish_format_hec_events record=Record: {header=[number=1, source="localteradata", creationDate="2019-01-18 17:35:00.017"], payload=[{"time":"1547829300,017","event":"2019-01-18 17:35:00.017, InfoKey=\"LANGUAGE SUPPORT MODE\", InfoData=\"Standard\"","source":"localteradata","sourcetype":"teradata","index":"main","host":"192.168.224.128"}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.d.t.p.ExtractIndexingTimeProcessor - action=setting_event_time_to_current_time input=localteradata time=1547829300021
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=start_format_hec_events_from_payload record=Record: {header=[number=2, source="localteradata", creationDate="2019-01-18 17:35:00.021"], payload=[EventPayload{fieldNames=[InfoKey, InfoData], row=[RELEASE, 16.20.23.01]}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=finish_format_hec_events record=Record: {header=[number=2, source="localteradata", creationDate="2019-01-18 17:35:00.021"], payload=[{"time":"1547829300,021","event":"2019-01-18 17:35:00.021, InfoKey=\"RELEASE\", InfoData=\"16.20.23.01\"","source":"localteradata","sourcetype":"teradata","index":"main","host":"192.168.224.128"}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.d.t.p.ExtractIndexingTimeProcessor - action=setting_event_time_to_current_time input=localteradata time=1547829300021
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=start_format_hec_events_from_payload record=Record: {header=[number=3, source="localteradata", creationDate="2019-01-18 17:35:00.021"], payload=[EventPayload{fieldNames=[InfoKey, InfoData], row=[VERSION, 16.20.23.01]}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=finish_format_hec_events record=Record: {header=[number=3, source="localteradata", creationDate="2019-01-18 17:35:00.021"], payload=[{"time":"1547829300,021","event":"2019-01-18 17:35:00.021, InfoKey=\"VERSION\", InfoData=\"16.20.23.01\"","source":"localteradata","sourcetype":"teradata","index":"main","host":"192.168.224.128"}]}
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] INFO  c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=3
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] INFO  c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector
2019-01-18 17:35:00.021 +0100  [QuartzScheduler_Worker-1] INFO  c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=3
2019-01-18 17:35:00.056 +0100  [QuartzScheduler_Worker-1] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
java.io.IOException: HTTP Error 400, HEC response body: {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}, trace: HttpResponseProxy{HTTP/1.1 400 Bad Request [Date: Fri, 18 Jan 2019 16:35:00 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 78, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 78,Chunked: false]}}
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:132)
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:96)
    at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
    at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
    at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2019-01-18 17:35:00.056 +0100  [QuartzScheduler_Worker-1] ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: HTTP Error 400, HEC response body: {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}, trace: HttpResponseProxy{HTTP/1.1 400 Bad Request [Date: Fri, 18 Jan 2019 16:35:00 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 78, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 78,Chunked: false]}}
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:132)
    at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:96)
    at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
    at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
    at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2019-01-18 17:35:00.056 +0100  [QuartzScheduler_Worker-1] INFO  org.easybatch.core.job.BatchJob - Job 'localteradata' finished with status: FAILED
2019-01-18 17:35:00.058 +0100  [QuartzScheduler_Worker-1] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=closing_db_reader task=localteradata
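
Note the time field in the finish_format_hec_events lines above: "time":"1547829300,017" uses a comma as the decimal separator, which is exactly the locale problem described in the accepted answer. A small sketch of my own (not part of DB Connect) for checking whether a HEC time value is well-formed:

import java.util.regex.Pattern;

public class HecTimeCheck {
    // HEC expects epoch seconds with an optional dot-separated fractional part;
    // a comma indicates the locale problem described above.
    private static final Pattern VALID_TIME = Pattern.compile("\\d+(\\.\\d+)?");

    public static void main(String[] args) {
        String fromLog = "1547829300,017"; // value taken from the DEBUG log above
        System.out.println(fromLog + " valid? " + VALID_TIME.matcher(fromLog).matches());           // false
        System.out.println("1547829300.017 valid? " + VALID_TIME.matcher("1547829300.017").matches()); // true
    }
}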