Description
The root cause of this exception is that the checkpoint.head file was somehow corrupted. The quick remediation is to overwrite checkpoint.head with checkpoint.head.tmp (if available); a sketch of that step follows the environment details below.

Logstash information: 8.14.1

Please include the following information:
- Logstash version (e.g. bin/logstash --version)
- Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker)
- How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes)

Plugins installed (bin/logstash-plugin list --verbose): default

JVM (e.g. java -version): bundled
If the affected version of Logstash is 7.9 (or earlier), or if it is NOT using the bundled JDK or using the 'no-jdk' version in 7.10 (or higher), please provide the following information:
- JVM version (java -version)
- JVM installation source (e.g. from the Operating System's package manager, from source, etc.)
- Value of the LS_JAVA_HOME environment variable if set.

OS version (uname -a if on a Unix-like system):
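
A minimal sketch of the quick remediation mentioned in the description above; this is an illustration under assumptions, not an official procedure. Stop Logstash first, and note that the queue directory below is taken from the logs in this report and will differ per deployment (path.data and pipeline id are placeholders).

```sh
# Hypothetical paths: adjust path.data and the pipeline id to your deployment.
QUEUE_DIR=/usr/share/logstash/data/queue/logging

# Only restore if a checkpoint.head.tmp is actually present.
if [ -f "$QUEUE_DIR/checkpoint.head.tmp" ]; then
  # Keep a copy of the corrupt checkpoint, then overwrite it with the .tmp file.
  cp -p "$QUEUE_DIR/checkpoint.head" "$QUEUE_DIR/checkpoint.head.corrupt"
  cp -p "$QUEUE_DIR/checkpoint.head.tmp" "$QUEUE_DIR/checkpoint.head"
fi
```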
Description of the problem including expected versus actual behavior:
Steps to reproduce:
I was unable to reproduce the issue. From the error we can see that the disk is full, but when the disk is full we normally get a "No space left on device" error when Logstash is starting: #16389 (comment)
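
The "won't fit in file system ... when full" warnings in the log below appear to compare the configured maximum queue sizes against the free space on the filesystem holding path.queue. A rough manual version of that comparison, assuming GNU coreutils and the default queue path from the logs; the 20 GB figure is a hypothetical stand-in for the sum of the pipelines' queue.max_bytes:

```sh
# Hypothetical check: does the filesystem under path.queue have room for the queues when full?
QUEUE_PATH=/usr/share/logstash/data/queue      # path.queue
MAX_BYTES=$((20 * 1024 * 1024 * 1024))         # stand-in for the summed queue.max_bytes

AVAIL=$(df -B1 --output=avail "$QUEUE_PATH" | tail -n 1)
echo "available: ${AVAIL} bytes, needed when full: ${MAX_BYTES} bytes"
if [ "$AVAIL" -lt "$MAX_BYTES" ]; then
  echo "queues will not fit when full; free or allocate $((MAX_BYTES - AVAIL)) more bytes"
fi
```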
Provide logs (if relevant):
- Behavioral log
```
The persistent queue on path "/usr/share/logstash/data/queue/logging" won't fit in file system "/dev/mapper/lxc-data" when full. Please free or allocate 19814579200 more bytes.
The persistent queue on path "/usr/share/logstash/data/queue/logging_metering" won't fit in file system "/dev/mapper/lxc-data" when full. Please free or allocate 19814579200 more bytes.
The persistent queue on path "/usr/share/logstash/data/queue/logging_metering_snapshot_estimator" won't fit in file system "/dev/mapper/lxc-data" when full. Please free or allocate 19814579200 more bytes.
The persistent queue on path "/usr/share/logstash/data/queue/logging_service" won't fit in file system "/dev/mapper/lxc-data" when full. Please free or allocate 19814579200 more bytes.
The persistent queue on path "/usr/share/logstash/data/queue/auditrecord_logstash" won't fit in file system "/dev/mapper/lxc-data" when full. Please free or allocate 19814579200 more bytes.

abcd.centralus.azure.elastic-cloud.com:9243 failed to respond
Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://abcd.centralus.azure.elastic-cloud.com:9243/_bulk?filter_path=errors,items.*.error,items.*.status][Manticore::ClientProtocolException] abcd.centralus.azure.elastic-cloud.com:9243 failed to respond
Elasticsearch Unreachable: [https://abcd.centralus.azure.elastic-cloud.com:9243/_bulk?filter_path=errors,items.*.error,items.*.status][Manticore::ClientProtocolException] abcd.centralus.azure.elastic-cloud.com:9243 failed to respond
Could not index event to Elasticsearch.
Could not index event to Elasticsearch.
Could not index event to Elasticsearch.
Could not index event to Elasticsearch.

Starting Logstash
...
Logstash failed to create queue.
-----> java.io.IOException: Checkpoint checksum mismatch, expected: 278197, actual: 0
```
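
The "actual: 0" above suggests the checksum read back from checkpoint.head was zero, which points at a truncated or zeroed-out file rather than a bad computation; this is an interpretation, not confirmed from the Logstash source here. A quick way to inspect the on-disk checkpoint for the affected pipeline (paths as in the log above; GNU coreutils and xxd assumed):

```sh
# Hypothetical inspection of the checkpoint files for the affected pipeline.
QUEUE_DIR=/usr/share/logstash/data/queue/logging

ls -l "$QUEUE_DIR"/checkpoint*                 # is a checkpoint.head.tmp available to restore from?
stat -c '%s %y' "$QUEUE_DIR/checkpoint.head"   # size and last modification time of the head checkpoint
xxd "$QUEUE_DIR/checkpoint.head"               # all-zero bytes would be consistent with a stored checksum of 0
```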
- Stack trace
{"level":"INFO","loggerName":"logstash.runner","timeMillis":1721353466915,"thread":"main","logEvent":{"message":"Starting Logstash","logstash.version":"8.14.1","jruby.version":"jruby 9.4.7.0 (3.1.4) 2024-04-29 597ff08ac1 OpenJDK 64-Bit Server VM 17.0.11+9 on 17.0.11+9 +indy +jit [x86_64-linux]"}}
{
"level": "ERROR",
"loggerName": "logstash.agent",
"timeMillis": 1722887281351,
"thread": "Converge PipelineAction::Create<logging>",
"logEvent": {
"message": "java.io.IOException: Checkpoint checksum mismatch, expected: 278197, actual: 0",
"action": "LogStash::PipelineAction::Create/pipeline_id:logging",
"exception": "Java::JavaLang::IllegalStateException",
"backtrace": [
"org.logstash.execution.AbstractPipelineExt.openQueue(AbstractPipelineExt.java:260)",
"org.logstash.execution.AbstractPipelineExt$INVOKER$i$0$0$openQueue.call(AbstractPipelineExt$INVOKER$i$0$0$openQueue.gen)",
"org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:456)",
"org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:195)",
"org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:346)",
"org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)",
"org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:94)",
"org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:275)",
"org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:262)",
"org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:236)",
"usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)",
"usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386)",
"org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:141)",
"org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:64)",
"org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)",
"org.jruby.runtime.Block.call(Block.java:144)",
"org.jruby.RubyProc.call(RubyProc.java:354)",
"org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)",
"java.base/java.lang.Thread.run(Thread.java:840)"
]
}
}
```