I am building an LSTM autoencoder to detect anomalies in time-series data.
In the JupyterLab environment I am able to run my code and I get the expected results.
When I try to run it through the search bar in Splunk, the following happens:
1. The fit command triggers training and the model is generated; I can see the model in the JupyterLab environment.
2. When I apply the model, either the container becomes unresponsive for some time and then crashes, or I get a shape error for the same data that worked in JupyterLab.
This is the command I am running:
index="test_50" | head 3000 | apply app:lstm_autoencoder
Could you please help me resolve this issue? Any documentation on how the code interacts with the Splunk container would also help.
Hi, I had the same problem with CNN shapes. The fit command worked fine for my model, but when I used the apply command everything crashed because of the shape. I finally solved the problem; let me explain:
- When the apply command runs, the data is sent in chunks, so shape problems arise because the data does not all arrive at the same moment.
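To make that concrete: a minimal sketch, assuming a windowed LSTM input and NumPy arrays (the `SEQUENCE_LEN` value and the `to_windows` helper are illustrative, not part of DLTK). The point is to derive the reshape dimensions from each incoming chunk instead of hard-coding the row count seen during fit:

```python
import numpy as np

SEQUENCE_LEN = 10  # illustrative window length chosen at fit time

def to_windows(chunk):
    """Reshape a 2-D chunk (rows, features) into overlapping windows.

    Deriving n_rows from the chunk itself (instead of hard-coding the
    row count seen during training) keeps the code working when Splunk
    sends the search results in several smaller chunks.
    """
    n_rows, n_features = chunk.shape
    if n_rows < SEQUENCE_LEN:
        # chunk smaller than one window: pad by repeating the last row
        pad = np.repeat(chunk[-1:], SEQUENCE_LEN - n_rows, axis=0)
        chunk = np.vstack([chunk, pad])
        n_rows = SEQUENCE_LEN
    n_windows = n_rows - SEQUENCE_LEN + 1
    return np.stack([chunk[i:i + SEQUENCE_LEN] for i in range(n_windows)])

# a 3000-row search may arrive as e.g. three chunks of 1000 rows each
chunk = np.random.rand(1000, 4)
windows = to_windows(chunk)
print(windows.shape)  # (991, 10, 4)
```

A reshape that assumes all 3000 rows arrive at once will raise exactly the kind of shape error described above as soon as the first partial chunk comes in.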
I hope your doubt has been resolved!
Hi @hari7696, thanks for your question!
In order to apply a model trained as you described, please ensure that you have implemented the load() and save() methods (Stages 5 and 6 in your Jupyter lstm_autoencoder notebook) and saved the notebook. The reason: when you call | apply lstm_autoencoder, the model with the given name is implicitly loaded by calling your load() method so that it can be applied to your test_50 data. Let me know if this resolves the issue.
If you don't implement your save() and load() methods, your model is initialised with a potentially unknown state, input shape definition, etc., so it is likely to fail in the way you described above.
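As an illustration only (the DLTK notebook template defines the exact signatures and storage path; the `MODEL_DIRECTORY` value and pickle-based persistence here are assumptions, and a real Keras LSTM autoencoder would persist with `model.save()` / `keras.models.load_model()` instead), Stages 5 and 6 might look like:

```python
import os
import pickle

MODEL_DIRECTORY = "/tmp/dltk_models"  # assumption; DLTK uses its own path

def save(model, name):
    """Stage 5: persist the trained model so apply can find it later."""
    os.makedirs(MODEL_DIRECTORY, exist_ok=True)
    with open(os.path.join(MODEL_DIRECTORY, name + ".pkl"), "wb") as f:
        pickle.dump(model, f)
    return model

def load(name):
    """Stage 6: restore the model by name when | apply ... runs."""
    with open(os.path.join(MODEL_DIRECTORY, name + ".pkl"), "rb") as f:
        return pickle.load(f)

# without these two methods, apply starts from a freshly initialised
# model whose input shape may not match the incoming data
model = {"threshold": 0.42, "sequence_len": 10}  # stand-in for a real model
save(model, "lstm_autoencoder")
restored = load("lstm_autoencoder")
print(restored == model)  # True
```

The key point is that fit and apply run as separate invocations, so anything not written to disk in save() is lost before load() is called.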
Did you find the Model Development Guide in the DLTK app useful? I agree, there can always be more documentation 🙂 Thanks for your feedback!