• JVM error: GC overhead limit exceeded


    This error was reported in production. After adjusting the JVM settings in jetty.sh, the exception has not occurred again; the details are shown below.

    The error log from the referenced article is as follows:

    May 31, 2012 11:54:25 AM org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
    SEVERE: Error reading request, ignored
    java.lang.OutOfMemoryError: GC overhead limit exceeded
    Exception in thread "http-80-43" java.lang.NullPointerException
        at java.util.concurrent.ConcurrentLinkedQueue.offer(ConcurrentLinkedQueue.java:273)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler$1.offer(Http11Protocol.java:537)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler$1.offer(Http11Protocol.java:554)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:618)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:619)
    May 31, 2012 11:55:46 AM org.apache.coyote.http11.Http11Processor process
    SEVERE: Error processing request
    java.lang.OutOfMemoryError: GC overhead limit exceeded
    May 31, 2012 11:56:58 AM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren

    Cause of the problem:

    According to Sun's documentation: "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown."

    In other words, the JVM throws this error when it spends more than 98% of its time in garbage collection while recovering less than 2% of the heap.
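    A minimal, hypothetical sketch (not from the original post) of how this condition arises: every allocation below stays reachable, so once the heap is nearly full each collection recovers almost nothing. Run with a small heap, e.g. java -Xmx64m GcOverheadDemo, and the JVM will typically fail with "GC overhead limit exceeded" (or a plain "Java heap space" error, depending on timing).

    import java.util.HashMap;
    import java.util.Map;

    public class GcOverheadDemo {
        public static void main(String[] args) {
            Map<Integer, String> retained = new HashMap<Integer, String>();
            int i = 0;
            while (true) {
                // Every entry stays reachable, so the collector can never free more
                // than a sliver of the heap; GC time dominates and the limit trips.
                retained.put(i, "value-" + i);
                i++;
            }
        }
    }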

    How to handle it:

    1. Add "-XX:-UseGCOverheadLimit" to the JVM startup options. The check is enabled by default in JDK 6 ("-XX:+UseGCOverheadLimit"); disabling it only suppresses the early error and does not remove the underlying memory pressure.

    The adjusted JVM options used in production are:

    JAVA_OPTS='-Xms512m -Xmx4096m -XX:MaxPermSize=128m -XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC'
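    To confirm that the adjusted options actually reached the JVM, the startup arguments can be read back through the standard java.lang.management API. A small hypothetical sketch (class name and usage are illustrative, not from the original post):

    import java.lang.management.ManagementFactory;

    public class PrintJvmArgs {
        public static void main(String[] args) {
            // Prints the -X/-XX options the JVM was started with, e.g. -Xmx4096m and
            // -XX:-UseGCOverheadLimit if the JAVA_OPTS change took effect.
            for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
                System.out.println(arg);
            }
        }
    }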

    2. Check the code for logic that consumes large amounts of memory or runs in an infinite loop.

    3. Use the jstat command to monitor whether GC behavior is normal.

    The jstat command format for monitoring GC is:

    jstat -gcutil [-t] [-h<lines>] <vmid> [<interval> [<count>]]

    vmid is the JVM process id; it can be found with ps -ef or jps -lv.

    The following command prints the GC status once per second, five times in total:

    [root@localhost bin]# jstat -gcutil 7675 1000 5
    S0     S1     E      O      P      YGC   YGCT    FGC    FGCT     GCT
    0.00   0.00  41.98   9.53  99.26   230   0.466   186   24.691   25.156
    0.00   0.00  41.98   9.53  99.26   230   0.466   186   24.691   25.156
    0.00   0.00  41.98   9.53  99.26   230   0.466   186   24.691   25.156
    0.00   0.00  41.98   9.53  99.26   230   0.466   186   24.691   25.156
    0.00   0.00  41.98   9.53  99.26   230   0.466   186   24.691   25.156
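    As a complement to jstat, the same counters can be sampled from inside the application through java.lang.management. The sketch below is hypothetical (not part of the original troubleshooting): it prints the process id (usable as the vmid for jstat) and then, once per second for five samples, the heap usage plus each collector's cumulative count and time, roughly what jstat reports as YGC/YGCT and FGC/FGCT.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class GcWatcher {
        public static void main(String[] args) throws InterruptedException {
            // On HotSpot, RuntimeMXBean.getName() conventionally returns "pid@hostname".
            System.out.println("JVM: " + ManagementFactory.getRuntimeMXBean().getName());

            for (int i = 0; i < 5; i++) {            // 5 samples, 1 second apart
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("heap used/max: %d/%d MB%n",
                        heap.getUsed() >> 20, heap.getMax() >> 20);
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    // Cumulative collections and accumulated collection time (ms) per collector.
                    System.out.printf("  %-24s count=%d time=%dms%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(1000L);
            }
        }
    }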

    After monitoring for a while, the exception has not reappeared.

    References:

    jstat - Java Virtual Machine Statistics Monitoring Tool

  • Source: https://blog.csdn.net/m0_67390788/article/details/126411961