Migrate DB Connect V2.4.0 to DB Connect V3.0.1 in an SH Cluster

saranya_fmr
Communicator

I'm trying to migrate DB Connect from v2.4.0 to the latest v3.0.1, but I'm getting stuck down the line.

Assumptions: I do not have scheduled inputs or outputs, only DB connections and lookups.

I followed the steps on the deployer as per the doc: https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/MigratefromDBConnectv1

  1. Copied the JDBC driver from one of the SH nodes to $SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect/bin/lib on the deployer
  2. Installed the new DB Connect 3.0.1 under $SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect
  3. splunk apply shcluster-bundle -target <URI> -auth <username>:<password>
  4. Executed the migration step: ./splunk cmd python $SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py -auth admin:password
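For reference, the deployer-side sequence might look like the shell sketch below. Hostnames, package names, and credentials are example values, not from the original post, and the steps are ordered so the staged app directory exists before the driver copy:

```shell
# Run on the deployer. Hostnames, paths, and credentials are example values.

# 1. Stage DB Connect 3.0.1 under the deployer's shcluster apps directory
tar -xzf splunk-db-connect_301.tgz -C "$SPLUNK_HOME/etc/shcluster/apps/"

# 2. Copy the JDBC driver(s) from a search head into the staged app
scp shc-member01:/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/ojdbc7.jar \
    "$SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect/bin/lib/"

# 3. Push the bundle to the search head cluster
"$SPLUNK_HOME/bin/splunk" apply shcluster-bundle \
    -target https://shc-member01.example.com:8089 -auth admin:changeme

# 4. Run the migration script; on a deployer it then prompts interactively
"$SPLUNK_HOME/bin/splunk" cmd python \
    "$SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py" \
    -auth admin:changeme
```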

and I get the below output during execution:


The migration script is to help you to upgrade to DB Connect 3.0.0. If you are running V2.3.0, V2.3.1 or V2.4.0 of
DB Connect, you can use this script to upgrade. If your DB Connect version is not listed here, please refer to the
documentation on how to migrate.
If there is a JDBC driver who depends on other libraries(i.e. ojdbc7 depends on xmlparser.jar), the migration script
can not help you in this case. Please refer the documentation about instructions on to migrate them manually.
Looks like you are running migrate on a deployer, extra steps need to be taken. Please note:
1. This script won't backup anything, please login to one of the SHC node and backup the whole /etc/apps folder
2. Scheduled inputs/outputs won't be supported on SHC any longer, please set up a heavy forwarder to run your scheduled inputs/outputs.
3. Make sure all JDBC drivers used on search heads are installed on this machine in the splunk_app_db_connect folder under bin/lib
4. Migration script need one of the management API endpoint of cluster node, in order to check the configuration
Please input the management API endpoint, For example: https://my-captain:8089


a. What does the 4th point in the above output mean, where it asks to input the management API endpoint? The migration just halts with the above output...

I don't see this 4th point outlined anywhere in the migration doc.

Migration script need one of the management API endpoint of cluster node, in order to check the configuration
Please input the management API endpoint, For example: https://my-captain:8089

b. Am I wrong anywhere in the steps I followed?

1 Solution

wcui_splunk
Splunk Employee

I think you are doing the right things so far.

For your questions:
a) The 4 notes were printed because the migration script detected that it was executed on a deployer, so additional steps need to be taken. The script actually didn't halt there; it was waiting for your input. Notice the message: "Please input the management API endpoint, For example: https://my-captain:8089". You need to provide the management URL of one of the cluster's nodes here; the URL is not the deployer's, but a search head cluster member's. I'll explain why it's needed below.

b) No, you are all right so far.

The difference between migration on an SHC and on a single node is that we have to handle the JDBC driver migration manually, and scheduled inputs cannot be executed because of an implementation limitation.
The reason the user has to provide another management API URL during migration is that the migration script runs on the deployer, and in order to migrate the config files the script needs to talk to an SHC member. Migration on a single instance does not need this, because the script knows the Splunk instance is installed on localhost (though if the user has changed the default management port, for example from 8089 to 8000, they still need to provide it).

saranya_fmr
Communicator

Hi @wcui ,

Thank you so much for your response.
But how do I provide the management URI as input to the script? As per the doc, the script takes only the inputs below:

app_migration.py [-h] -auth AUTH [-scheme SCHEME] [-port PORT]


wcui_splunk
Splunk Employee

Hi, @saranya_fmr,

Sorry for the confusion.

What you posted here are the command-line options, provided when launching the script.

The SHC management API endpoint needs to be provided in interactive mode. In your case you were asked: "Please input the management API endpoint, For example: https://my-captain:8089"; just type the URL into the console and press Enter to continue. You will be asked to provide the credentials in the next step.

saranya_fmr
Communicator

Hi @wcui ,

Thank you for your continuous guidance, but I'm stuck again. Sorry to prolong this.

I get below error:

Traceback (most recent call last):
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py", line 912, in <module>
for app in service.apps:
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 1247, in iter
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 1410, in iter
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 1640, in get
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 738, in get
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 286, in wrapper
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 68, in new_f
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 660, in get
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 1150, in get
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 1202, in request
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 1336, in request
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 1001, in request
self.send_request(method, url, body, headers)
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 1035, in _send_request
self.endheaders(body)
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 850, in _send_output
self.send(msg)
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 812, in send
self.connect()
File "/paas/apps/splunk/lib/python2.7/httplib.py", line 1212, in connect
server_hostname=server_hostname)
File "/paas/apps/splunk/lib/python2.7/ssl.py", line 350, in wrap_socket
_context=self)
File "/paas/apps/splunk/lib/python2.7/ssl.py", line 566, in __init__
self.do_handshake()
File "/paas/apps/splunk/lib/python2.7/ssl.py", line 788, in do_handshake
self._sslobj.do_handshake()
socket.error: [Errno 104] Connection reset by peer


sloshburch
Splunk Employee

Hmmm... without knowing anything else about this, the bottom of the stack implies a connection issue. I wonder if the endpoint details are right and if Splunk was running.


wcui_splunk
Splunk Employee

Looks like a connection issue.

The endpoint should be OK, because before the exception was thrown several API requests had already been sent to the SHC member and nothing went wrong.

Next time you see this error, I would suggest testing some API requests manually to see if they work. For example, type "https://<host>:<port>/servicesNS/-/-/apps" in your browser to see if there is any error.
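The same check can be run from the command line with curl. The hostname, port, and credentials below are examples; -k skips certificate verification, which is typically needed with Splunk's default self-signed certificates:

```shell
# Query the apps endpoint on the SHC member directly; a 200 response with an
# Atom feed of apps suggests the management endpoint is reachable and healthy.
curl -k -u admin:changeme \
    "https://shc-member01.example.com:8089/servicesNS/-/-/apps"
```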


saranya_fmr
Communicator

Hi @wcui ,

  1. The second time I executed the script I got the below output; however, all the DB connections were migrated to DBX V3, so I'm confused about what exactly this output indicates:

Traceback (most recent call last):
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py", line 917, in <module>
check_db_connections_conf(service_with_ns, service)
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py", line 260, in check_db_connections_conf
connection_type_name = connection['connection_type']
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 920, in __getitem__
File "/paas/apps/splunk/etc/shcluster/apps/splunk_app_db_connect/bin/splunk_sdk-1.5.0-py2.7.egg/splunklib/client.py", line 915, in __getattr__
AttributeError: connection_type

  2. Also, I was trying to automate this process/upgrade. Is there a way to pass the management URI and authentication details to the script in a single command, without having to provide these values in interactive mode? I mean something like Splunk's first-time start, where we pass "yes" and "accept license" in one command itself:

"$SPLUNK_HOME/bin/splunk start --answer-yes --no-prompt --accept-license"

Also, I don't find any errors for this:
For example, type "https://<host>:<port>/servicesNS/-/-/apps" in your browser to see if there is any error.


wcui_splunk
Splunk Employee

Hi,

The "AttributeError" indicates that the migration script is checking db_connections.conf. If the .conf is consistent, there should be a property named "connection_type" in each stanza. But for some reason there is a connection without a connection type, so the script raises an exception. To address this, you need to check the .conf manually and fix the broken item.
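One way to find the offending stanza is to scan for connection stanzas that have no connection_type key. The snippet below builds a small sample db_connections.conf for illustration (the stanza and key names in the sample are made up); point `conf` at your real file instead, e.g. under $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/:

```shell
# Hypothetical check: print each stanza header that lacks a connection_type key.
conf=db_connections.conf

# Sample file for illustration only; drop this when running against a real conf.
cat > "$conf" <<'EOF'
[my_mysql]
connection_type = mysql
host = db1.example.com

[broken_conn]
host = db2.example.com
EOF

# Track the current stanza; if a new stanza starts (or the file ends) without a
# connection_type line having been seen, print the previous stanza header.
awk '
  /^\[/              { if (stanza != "" && !seen) print stanza; stanza = $0; seen = 0 }
  /^connection_type/ { seen = 1 }
  END                { if (stanza != "" && !seen) print stanza }
' "$conf"
# Prints: [broken_conn]
```

Note the pattern assumes keys start at the beginning of the line, as Splunk .conf files normally write them.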

For your question #2, running the migration script in non-interactive mode: sorry, it's not supported. But you can hack the script and hardcode the management URL; take a look at lines 890-891. I need to put a big warning here, though: this is not officially supported, so you need to understand what you are doing.
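If you do go the unsupported route, a lighter-weight alternative to editing the script is to feed the prompts from stdin. This sketch assumes the script reads the endpoint and then the credentials from standard input in that order; verify the prompt order against your copy of app_migration.py before scripting it (hostnames and credentials are examples):

```shell
# Unsupported workaround sketch; prompt order is an assumption, confirm it first.
printf 'https://shc-member01.example.com:8089\nadmin\nchangeme\n' | \
    "$SPLUNK_HOME/bin/splunk" cmd python \
    "$SPLUNK_HOME/etc/shcluster/apps/splunk_app_db_connect/bin/app_migration.py" \
    -auth admin:changeme
```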
