The standalone CLIs (especially the interpretation one) have frequently been getting stuck lately with this error:
ERROR [07-28 00:19:21,666+0000] [pipelines_occurrence_interpretation_standalone-1] com.rabbitmq.client.impl.ForgivingExceptionHandler: Consumer org.gbif.common.messaging.MessageConsumer@1ed8402e (amq.ctag-iTtJr62njhAfXGJGE0sURQ) method handleDelivery for channel AMQChannel(amqp://[email protected]:5672//prod,3) threw an exception for channel AMQChannel(amqp://[email protected]:5672//prod,3)
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
    at java.base/java.lang.Thread.start0(Native Method)
    at java.base/java.lang.Thread.start(Thread.java:809)
    at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:781)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:316)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1230)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1209)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1147)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1105)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:994)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:982)
    at org.gbif.pipelines.ingest.java.transforms.InterpretedAvroWriter.createAvroWriter(InterpretedAvroWriter.java:41)
    at org.gbif.pipelines.ingest.java.pipelines.VerbatimToOccurrencePipeline.run(VerbatimToOccurrencePipeline.java:273)
    at org.gbif.pipelines.ingest.java.pipelines.VerbatimToOccurrencePipeline.run(VerbatimToOccurrencePipeline.java:127)
    at org.gbif.pipelines.tasks.occurrences.interpretation.InterpretationCallback.runLocal(InterpretationCallback.java:208)
    at org.gbif.pipelines.tasks.occurrences.interpretation.InterpretationCallback.lambda$createRunnable$1(InterpretationCallback.java:157)
    at org.gbif.pipelines.tasks.PipelinesCallback.handleMessage(PipelinesCallback.java:169)
    at org.gbif.pipelines.tasks.occurrences.interpretation.InterpretationCallback.handleMessage(InterpretationCallback.java:78)
    at org.gbif.pipelines.tasks.occurrences.interpretation.InterpretationCallback.handleMessage(InterpretationCallback.java:50)
    at org.gbif.common.messaging.MessageConsumer.handleCallback(MessageConsumer.java:129)
    at org.gbif.common.messaging.MessageConsumer.handleDelivery(MessageConsumer.java:82)
    at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149)
    at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:104)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
    Suppressed: java.io.IOException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at org.apache.hadoop.hdfs.ExceptionLastSeen.set(ExceptionLastSeen.java:45)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:817)
    Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
        at java.base/java.lang.Thread.start0(Native Method)
        at java.base/java.lang.Thread.start(Thread.java:809)
        at org.apache.hadoop.hdfs.DataStreamer.initDataStreaming(DataStreamer.java:634)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:708)
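Note that this flavor of OutOfMemoryError is usually not a heap problem: the JVM asked the OS for a new native thread (here HDFS starting a DataStreamer for the output stream opened in InterpretedAvroWriter.createAvroWriter) and the OS refused, typically because the process hit its thread/PID limit (ulimit -u, or a container pids limit) or ran out of native memory for thread stacks. One way to see whether threads accumulate across interpretation runs is to watch the JVM's own counters. Below is a minimal, hypothetical sketch using only the JDK; it is not part of the pipelines codebase, and the class name and 10-second interval are arbitrary:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

/**
 * Hypothetical probe: periodically logs JVM thread counts so a leak shows
 * up as a live count that climbs across runs instead of returning to a
 * baseline. Run inside the CLI JVM, or read the same values over JMX.
 */
public class ThreadCountProbe {
  public static void main(String[] args) throws InterruptedException {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    while (true) {
      // live = currently alive threads, peak = high-water mark since JVM start
      System.out.printf("live=%d peak=%d daemon=%d totalStarted=%d%n",
          threads.getThreadCount(),
          threads.getPeakThreadCount(),
          threads.getDaemonThreadCount(),
          threads.getTotalStartedThreadCount());
      Thread.sleep(10_000); // sampling interval, arbitrary
    }
  }
}
```

If the live count keeps climbing, a plausible suspect is HDFS output streams that are opened but never closed when a run fails midway, since each open DFSOutputStream keeps a DataStreamer thread alive until it is closed. If the count stays flat, the OS limit itself may simply be too low for the number of concurrent runs.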