
discussions's Introduction

JavaBasic

Part 1: Java Fundamentals

Part 2: Core Technologies

Part 3: Advanced Applications

  • Unit Testing and JUnit

    • Unit testing
    • JUnit
  • Advanced Text Processing

    • Java character encoding
    • Java internationalization
    • Advanced string handling in Java
  • Advanced File Processing

    • Unsorted notes
    • Introduction to XML
    • XML parsing (DOM)
    • XML parsing (SAX)
    • XML parsing (StAX)
    • Introduction to JSON and JSON parsing
    • Introduction to graphics and images and their parsing
      • Introduction to barcodes and QR codes and their parsing
    • Introduction to DOCX and DOCX parsing
    • Introduction to spreadsheet files and their parsing
    • Introduction to PDF and PDF parsing
  • Mixed-Language Programming in Java

    • Calling Java programs from Java (RMI)
    • Calling C programs from Java (JNI)
    • Calling JavaScript programs from Java (Nashorn)
    • Calling Python programs from Java (Jython)
    • Calling Web Services from Java
    • Calling the command line from Java
  • JVM instruction set reference

  • JVM memory model

  • JVM class-loading mechanism

  • JVM garbage collection

  • Using volatile correctly

  • JMM thread memory model

  • CAS and AQS

  • Fair synchronization: fair locks and related topics

  • Java Multithreading and Concurrency

    • Introduction to processes and threads
    • Implementing multithreading in Java
    • Sharing data between Java threads
    • Java thread management (1)
    • Java thread management (2)
    • The Executor concurrency framework
    • The Fork/Join concurrency framework
    • Concurrent data structures in Java
    • Concurrency coordination in Java (1)
    • Concurrency coordination in Java (2)
    • Scheduled task execution in Java
  • A look at the Swing framework design

Part 4: Hands-On Projects

discussions's People

Contributors

hbnking

Forkers

itcarry

discussions's Issues

How to find the source IP of a given connection in the logs

Example:

2020-07-23T18:50:01.073+0800 I COMMAND  [conn1035] command cccdb.xxx  command: findAndModify { findandmodify: "xxx", query: { date: "2020-07-23 18:00"}, .......} planSummary: IXSCAN { date: 1 } keysExamined:201 docsExamined:201 nMatched:1 nModified:1 writeConflicts:1 numYields:2 reslen:342 locks:{ Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { w: 4 } }, Collection: { acquireCount: { w: 3 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_query 152ms

Given the sample log line above, how can I find the actual client IP for connection conn1035?
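One way to answer this (assuming mongod's default log format): when a client connects, mongod normally writes a NETWORK line of the form `connection accepted from <ip>:<port> #1035 (...)`, and that `#1035` is the same number that later shows up as the `[conn1035]` tag. Scanning earlier log lines for that marker therefore recovers the client address. The sketch below performs that scan in plain Java; the sample line and exact log wording are illustrative and can vary between MongoDB versions and deployments.

```java
import java.util.List;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConnOrigin {
    // mongod logs "connection accepted from <ip>:<port> #<n>" when conn<n> opens.
    static final Pattern ACCEPTED =
        Pattern.compile("connection accepted from (\\S+?):(\\d+) #(\\d+)");

    // Returns the client "ip:port" for the given connection id, if the
    // connection-accepted line for it is present in the supplied log lines.
    static Optional<String> clientAddressOf(List<String> logLines, int connId) {
        for (String line : logLines) {
            Matcher m = ACCEPTED.matcher(line);
            if (m.find() && Integer.parseInt(m.group(3)) == connId) {
                return Optional.of(m.group(1) + ":" + m.group(2));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "2020-07-23T18:45:01.073+0800 I NETWORK  [listener] "
                + "connection accepted from 10.1.2.3:54321 #1035 (12 connections now open)");
        System.out.println(clientAddressOf(sample, 1035).orElse("not found"));
    }
}
```

In practice a `grep 'connection accepted' mongod.log | grep '#1035'` over the full log file achieves the same thing, provided the log has not rotated past the moment the connection was opened.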

MongoDB sharded cluster: counts differ across shards

[screenshot]

Background
The count on mongos differs from the counts on the individual shards.
Additional notes
The collection is not sharded:
mongos> db.user_hero_star.getShardDistribution()
Collection idlethree.user_hero_star is not sharded.
mongos>

[screenshot]

Additional notes
The database had previously been dropped,
and was recreated after the drop.

com.mongodb.MongoInterruptedException: Interrupted acquiring a permit to retrieve an item from the pool

com.mongodb.MongoInterruptedException: Interrupted acquiring a permit to retrieve an item from the pool
at com.mongodb.internal.connection.ConcurrentPool.acquirePermit(ConcurrentPool.java:203) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.ConcurrentPool.get(ConcurrentPool.java:140) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.ConcurrentPool.get(ConcurrentPool.java:123) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.PowerOfTwoBufferPool.getBuffer(PowerOfTwoBufferPool.java:78) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:105) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:588) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:99) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:450) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:72) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:218) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:269) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:131) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation.executeCommand(MixedBulkWriteOperation.java:435) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation.executeBulkWriteBatch(MixedBulkWriteOperation.java:261) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation.access$700(MixedBulkWriteOperation.java:72) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:205) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:196) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.OperationHelper.withReleasableConnection(OperationHelper.java:501) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:196) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:71) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:206) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:1048) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoCollectionImpl.executeInsertOne(MongoCollectionImpl.java:498) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:482) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:476) [mongo-java-driver-3.11.2.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongodbSink.invoke(MongodbSink.java:59) [flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongodbSink.invoke(MongodbSink.java:20) [flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) ~[flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532) ~[flink-runtime_2.11-1.10.0.jar:1.10.0]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_252]
Caused by: java.lang.InterruptedException: null
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1302) ~[na:1.8.0_252]
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) ~[na:1.8.0_252]
at com.mongodb.internal.connection.ConcurrentPool.acquirePermit(ConcurrentPool.java:199) ~[mongo-java-driver-3.11.2.jar:na]
... 41 common frames omitted
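The decisive part of this trace is at the bottom: `Caused by: java.lang.InterruptedException` inside `Semaphore.acquire`, reached from `ConcurrentPool.acquirePermit`. In other words, the sink thread was interrupted (which Flink does, for example, when cancelling or failing over a task) while it was blocked waiting for a permit from the driver's internal pool, and the driver surfaces that interrupt as `MongoInterruptedException`. A minimal, driver-free sketch of the mechanism, using only the JDK:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicReference;

public class InterruptDemo {

    // Simulates a thread blocked on a pool permit (as in ConcurrentPool.acquirePermit)
    // being interrupted, which is what the bottom of the stack trace shows.
    static String acquireWhileInterrupted() throws InterruptedException {
        Semaphore pool = new Semaphore(0);                 // no permits, so acquire() must block
        AtomicReference<String> outcome = new AtomicReference<>("pending");
        Thread waiter = new Thread(() -> {
            try {
                pool.acquire();                            // blocks, like the driver's pool wait
                outcome.set("acquired");
            } catch (InterruptedException e) {
                outcome.set("interrupted");                // the driver wraps this case as MongoInterruptedException
            }
        });
        waiter.start();
        Thread.sleep(200);                                 // give the waiter time to block
        waiter.interrupt();                                // what a task cancellation does to the thread
        waiter.join();
        return outcome.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acquireWhileInterrupted());     // prints "interrupted"
    }
}
```

So this exception is usually a symptom of the task being torn down rather than a MongoDB server problem; the thing to investigate is whatever caused Flink to cancel or restart the task in the first place.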

com.mongodb.MongoInternalException: The responseTo (304592644) in the response does not match the requestId (304592647) in the request

com.mongodb.MongoInternalException: The responseTo (304592644) in the response does not match the requestId (304592647) in the request
at com.mongodb.internal.connection.ReplyMessage.<init>(ReplyMessage.java:65)
at com.mongodb.internal.connection.ReplyMessage.<init>(ReplyMessage.java:43)
at com.mongodb.internal.connection.InternalStreamConnection.getCommandResult(InternalStreamConnection.java:409)
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:305)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:255)
at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:38)
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:180)
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:124)
at java.lang.Thread.run(Thread.java:745)

Canonical address 172.31.18.1:27017 does not match server address. Removing 54.222.159.186:27017 from client view of cluster

Sep 03, 2020 12:53:03 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Adding discovered server 172.31.31.140:27017 to client view of cluster
Sep 03, 2020 12:53:03 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Server 54.222.159.186:27017 is no longer a member of the replica set.  Removing from client view of cluster.
Sep 03, 2020 12:53:03 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Server 54.222.159.186:27017 is no longer a member of the replica set.  Removing from client view of cluster.
Sep 03, 2020 12:53:03 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Canonical address 172.31.18.1:27017 does not match server address.  Removing 54.222.159.186:27017 from client view of cluster
Sep 03, 2020 12:53:03 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Canonical address 172.31.18.1:27017 does not match server address.  Removing 54.222.159.186:27017 from client view of cluster
Sep 03, 2020 12:53:13 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Exception in monitor thread while connecting to server 172.31.23.166:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
	at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:563)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
	at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
	at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)
	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)
	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
	at java.net.SocketInputStream.read(SocketInputStream.java:171)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)
	... 9 more

Sep 03, 2020 12:53:13 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Exception in monitor thread while connecting to server 172.31.31.140:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
	at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:563)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
	at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
	at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)
	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)
	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
	at java.net.SocketInputStream.read(SocketInputStream.java:171)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)
	... 9 more

Sep 03, 2020 12:53:13 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Exception in monitor thread while connecting to server 172.31.18.1:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
	at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:563)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
	at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
	at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)
	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)
	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
	at java.net.SocketInputStream.read(SocketInputStream.java:171)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)
	... 9 more

Sep 03, 2020 12:53:13 PM com.mongodb.diagnostics.logging.JULLogger log
INFO: Exception in monitor thread while connecting to server 172.31.23.166:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
	at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:563)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
	at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
	at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)
	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)
	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
	at java.net.SocketInputStream.read(SocketInputStream.java:171)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)
	... 9 more

A MongoDB incident where all reads and writes were blocked: DDL during a background index build

Version: 3.2
Cluster: three-node replica set
Load: ~15K QPS
Context:
An index was being created in the background at the time,
and a rename operation was executed shortly afterwards.

Detailed description of the incident
The index was built in the background, but the data volume was large and the build needed more than an hour.
The build propagated from the primary: the primary finished first, then the secondaries started building but had not yet finished.
About five minutes after the primary finished, the rename operation was executed directly.
When a secondary began replaying that operation, reads against it started failing with socket timeouts;
replication between primary and secondaries broke, lag grew, the secondaries' connection counts kept climbing, and application queries got no response: reads and writes were completely blocked.

Connections kept increasing until every available connection was exhausted, leaving the entire replica set unavailable.

how to use Issues

About the discussion board
Hi, and welcome to the MongoDB discussion board!

The vision of this project is to:

Build an active mutual-help platform for Chinese-speaking MongoDB enthusiasts;
Promote MongoDB as a first-choice database solution for enterprise applications;
Bring together MongoDB development, database, and operations experts to build an authoritative technical community.
On that basis,

This project provides technical Q&A support for MongoDB and collects MongoDB problem scenarios;
We encourage users to share their experience in the community Q&A board once a problem is solved, giving others who hit the same issue a reference;
Every six months we run a tally and send community-branded souvenirs to members who have actively helped answer other users' questions!
We will also occasionally curate the best Q&As and share them with everyone.
Once your problem is solved, please remember to post the solution back in the Q&A board~

Please describe your problem using the following template:

Business scenario (required)
Environment (required)
Problem description (required)
Steps to reproduce (required if applicable)
Impact (optional)
Troubleshooting notes so far (optional)
Logs (required if available)
Conclusion (optional)
If you follow this template, your problem can generally be understood clearly, and group members and community experts will be able to help you much more effectively.

Some of you may still say: my question was perfectly clear, so why has nobody replied?!

There are two possibilities. One: sorry, we genuinely don't know how to answer your question;

Two: you can actually solve it yourself. Don't believe it? Let's look at some requirements from community Q&A expert 张耀星 (Zhang Yaoxing). If you have met all of them, then it really is the first case.

How to ask:
1 Before asking
1.1 Do your research
Think the problem through yourself and do the necessary research first; only post a new question when you cannot find an answer.

Do:

Search for the problem's keywords with a search engine;
Read the online documentation;
Test your own ideas.
Don't:

Ask others the moment you hit a problem.

1.2 Organize your thoughts
Be clear about what you want to ask and how to phrase it so that others can easily understand your problem and where you are stuck.

Do:

Break a complex problem down and ask about the specific point you don't understand;
Use widely accepted terminology where appropriate.
Don't:

Ask questions not directly related to this community;
Ask extremely broad questions, e.g. "how would site so-and-so implement this with MongoDB?"
2 When asking
2.1 Describe the problem completely
Describe the problem you hit as clearly as possible; only when others understand your problem can they possibly give a correct answer.

Do:

Describe the problem clearly so that everyone can follow it;
Provide screenshots where they help people understand;
Provide sample code and sample data so that others can test.
Don't:

Post a screenshot and ask: what's going on here?

2.2 Engage with answerers
When an answerer has follow-up questions, respond promptly to help them better understand the problem and answer it.

Do:

Interact with answerers in the comments or in community groups;
Add any necessary information back into the question for later readers.
3 After asking
Report back promptly on whether the problem has been solved, to give later readers a proper reference.

Do:

Accept the correct answer or add your own final solution;
Digest and reflect on what you have learned.

Don't:
Walk away after asking, neither accepting an answer nor replying at all.

Finally, please give this project a star~

Caused by: com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor

Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
... 4 more
Caused by: com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 30704136349 not found on server javashake:27117' on server javashake:27117
at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27)
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:267)
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:138)
at com.mongodb.client.internal.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:54)
at org.apache.flink.streaming.connectors.mongodb.source.OplogReader.read(OplogReader.java:61)
at org.apache.flink.streaming.connectors.mongodb.source.Worker.read(Worker.java:301)
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:47)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)

21:55:10.318 [flink-akka.actor.default-dispatcher-1510] DEBUG o.a.f.runtime.jobmaster.JobMaster - Close ResourceManager connection 72c549223acce927ac9fd61648e70e2f.
org.apache.flink.util.FlinkException: JobManager is shutting down.
at org.apache.flink.runtime.jobmaster.JobMaster.onStop(JobMaster.java:350) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStop(RpcEndpoint.java:218) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.terminate(AkkaRpcActor.java:509) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:175) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:na]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:na]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[scala-library-2.11.12.jar:na]
at akka.actor.Actor$class.aroundReceive(Actor.scala:517) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[akka-actor_2.11-2.5.21.jar:2.5.21]

java.lang.IllegalStateException: open at org.bson.util.Assertions.isTrue

java.lang.IllegalStateException: open
at org.bson.util.Assertions.isTrue(Assertions.java:36)
at com.mongodb.DBTCPConnector.getAddress(DBTCPConnector.java:332)
at com.mongodb.Mongo.getAddress(Mongo.java:455)
at com.mongodb.DBCollectionImpl.updateImpl(DBCollectionImpl.java)
at com.mongodb.DBCollection.update(DBCollection.java:250)
at com.mongodb.DBCollection.update(DBCollection.java:232)
at com.mongodb.DBCollection.update(DBCollection.java:307)
at org.springframework.data.mongodb.core.MongoTemplate$12.doInCollection(MongoTemplate.java:1153)
at org.springframework.data.mongodb.core.MongoTemplate$12.doInCollection(MongoTemplate.java:1133)
at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:459)
at org.springframework.data.mongodb.core.MongoTemplate.doUpdate(MongoTemplate.java:1133)
at org.springframework.data.mongodb.core.MongoTemplate.upsert(MongoTemplate.java:1095)
at com.ceair.pss.shopping.b2c.savecache.repository.spring.CacheDetailPriceResponseRepositoryImpl.upsertCacheDetailPriceResponseEntity(CacheDetailPriceResponseRepositoryImpl.java:51)
at com.ceair.pss.shopping.b2c.savecache.service.spring.CacheDetailPriceResponseServiceImpl.upsertCacheDetailPriceResponse(CacheDetailPriceResponseServiceImpl.java:65)
at com.ceair.pss.shopping.b2c.service.B2cShoppingIntegrateServiceImpl.lambda$upsertOrDelCacheDetailPriceResponse$2(B2cShoppingIntegrateServiceImpl.java:247)
at java.lang.Thread.run(Thread.java:748)

com.mongodb.MongoCommandException: Command failed with error 43 (CursorNotFound): 'cursor id 30704136349 not found' on server javashake:27117.

21:55:10.261 [Legacy Source Thread - Source: Custom Source (1/1)] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 13941968 failed to complete successfully in 3.65 ms on connection [connectionId{localValue:47, serverValue:351822}] to server javashake:27117
com.mongodb.MongoCommandException: Command failed with error 43 (CursorNotFound): 'cursor id 30704136349 not found' on server javashake:27117. The full response is {"ok": 0.0, "errmsg": "cursor id 30704136349 not found", "code": 43, "codeName": "CursorNotFound", "operationTime": {"$timestamp": {"t": 1594907706, "i": 1}}, "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1594907706, "i": 1}}, "signature": {"hash": {"$binary": "o9TQEBtfCjakTLcRcC2TOerZ6D4=", "$type": "00"}, "keyId": {"$numberLong": "6837029649613062145"}}}}
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:303) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:99) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:450) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:72) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:218) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:269) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:131) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:123) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:260) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:138) [mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:54) [mongo-java-driver-3.11.2.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.OplogReader.read(OplogReader.java:61) [flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.Worker.read(Worker.java:301) [flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:47) [flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196) [flink-streaming-java_2.11-1.10.0.jar:1.10.0]
21:55:10.262 [Legacy Source Thread - Source: Custom Source (1/1)] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:47, serverValue:351822}] to javashake:27117 because there was a socket exception raised on another connection from this pool.
21:55:10.262 [Legacy Source Thread - Source: Custom Source (1/1)] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:47, serverValue:351822}
21:55:10.268 [Source: Custom Source (1/1)] INFO o.a.flink.runtime.taskmanager.Task - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb) switched from RUNNING to FAILED.
com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 30704136349 not found on server javashake:27117' on server javashake:27117
at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:267) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:138) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:54) ~[mongo-java-driver-3.11.2.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.OplogReader.read(OplogReader.java:61) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.Worker.read(Worker.java:301) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:47) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
21:55:10.268 [Source: Custom Source (1/1)] INFO o.a.flink.runtime.taskmanager.Task - Freeing task resources for Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb).
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.flink.runtime.taskmanager.Task - Release task Source: Custom Source (1/1) network resources (state: FAILED).
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.TaskEventDispatcher - unregistering 01ef47105b9eb154f23107d2329e9926@0667d85119bbe2015ef395a2f6a1a0eb
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.ResultPartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Releasing ReleaseOnConsumptionResultPartition 01ef47105b9eb154f23107d2329e9926@0667d85119bbe2015ef395a2f6a1a0eb [PIPELINED_BOUNDED, 10 subpartitions, 10 pending consumptions].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#0 [number of buffers: 9517 (311833450 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#1 [number of buffers: 9521 (311966732 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#2 [number of buffers: 9517 (311833280 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#3 [number of buffers: 9521 (311966307 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#4 [number of buffers: 9517 (311833267 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#5 [number of buffers: 9521 (311966366 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#6 [number of buffers: 9517 (311833450 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#7 [number of buffers: 9521 (311966214 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#8 [number of buffers: 9517 (311833465 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.PipelinedSubpartition - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb): Released PipelinedSubpartition#9 [number of buffers: 9521 (311966516 bytes), number of buffers in backlog: 0, finished? false, read view? false].
21:55:10.269 [Source: Custom Source (1/1)] DEBUG o.a.f.r.i.n.p.ResultPartitionManager - Released partition 01ef47105b9eb154f23107d2329e9926 produced by 0667d85119bbe2015ef395a2f6a1a0eb.
21:55:10.269 [Source: Custom Source (1/1)] INFO o.a.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb) [FAILED]
21:55:10.277 [flink-akka.actor.default-dispatcher-1517] INFO o.a.f.r.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Source: Custom Source (1/1) 0667d85119bbe2015ef395a2f6a1a0eb.
21:55:10.279 [flink-akka.actor.default-dispatcher-1510] INFO o.a.f.r.e.ExecutionGraph - Source: Custom Source (1/1) (0667d85119bbe2015ef395a2f6a1a0eb) switched from RUNNING to FAILED.
com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 30704136349 not found on server javashake:27117' on server javashake:27117
at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:267) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:138) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:54) ~[mongo-java-driver-3.11.2.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.OplogReader.read(OplogReader.java:61) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.Worker.read(Worker.java:301) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:47) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
21:55:10.281 [flink-akka.actor.default-dispatcher-1510] INFO o.a.f.r.e.f.f.RestartPipelinedRegionStrategy - Calculating tasks to restart to recover the failed task bc764cd8ddf7a0cff126f51c16239658_0.
21:55:10.282 [flink-akka.actor.default-dispatcher-1510] INFO o.a.f.r.e.f.f.RestartPipelinedRegionStrategy - 11 tasks should be restarted to recover the failed task bc764cd8ddf7a0cff126f51c16239658_0.
21:55:10.284 [flink-akka.actor.default-dispatcher-1510] INFO o.a.f.r.e.ExecutionGraph - Job testing (73ad176d8f0b66468d91fc440f5269d9) switched from state RUNNING to FAILING.
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110) ~[flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76) ~[flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_201]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_201]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_201]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_201]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) [flink-runtime_2.11-1.10.0.jar:1.10.0]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:na]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:na]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[scala-library-2.11.12.jar:na]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[scala-library-2.11.12.jar:na]
at akka.actor.Actor$class.aroundReceive(Actor.scala:517) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 30704136349 not found on server javashake:27117' on server javashake:27117
at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:267) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:138) ~[mongo-java-driver-3.11.2.jar:na]
at com.mongodb.client.internal.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:54) ~[mongo-java-driver-3.11.2.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.OplogReader.read(OplogReader.java:61) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.source.Worker.read(Worker.java:301) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:47) ~[flink-connector-mongodb-0.0.1.jar:na]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196) ~[flink-streaming-java_2.11-1.10.0.jar:1.10.0]
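The `MongoCursorNotFoundException` in the logs above typically means the server already reaped the cursor: either it sat idle between `getMore` calls longer than the server-side cursor timeout (about 10 minutes by default), or the node restarted/failed over. One client-side mitigation is to track cursor idle time and proactively re-open the cursor before the server drops it. The sketch below is pure Java bookkeeping only; the 10-minute default and the two-minute safety margin are assumptions about a typical deployment, and the actual re-open is left to the connector.

```java
import java.time.Duration;

/** Decides when a server-side MongoDB cursor should be re-opened proactively. */
public class CursorKeepAlivePolicy {
    // MongoDB reaps idle cursors after ~10 minutes by default (cursorTimeoutMillis).
    private final Duration serverTimeout;
    // Safety margin so we re-open well before the server-side deadline.
    private final Duration margin;

    public CursorKeepAlivePolicy(Duration serverTimeout, Duration margin) {
        this.serverTimeout = serverTimeout;
        this.margin = margin;
    }

    /** True if the cursor has been idle long enough that the next getMore may fail. */
    public boolean shouldReopen(Duration idleTime) {
        return idleTime.compareTo(serverTimeout.minus(margin)) >= 0;
    }

    public static void main(String[] args) {
        CursorKeepAlivePolicy policy =
                new CursorKeepAlivePolicy(Duration.ofMinutes(10), Duration.ofMinutes(2));
        System.out.println(policy.shouldReopen(Duration.ofMinutes(5))); // false
        System.out.println(policy.shouldReopen(Duration.ofMinutes(9))); // true
    }
}
```

Note that `noCursorTimeout` can also be set on the query, but an unbounded cursor leaks server resources if the client dies, so a checkpoint-and-reopen strategy is usually safer for long-running readers.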
21:55:10.287 [flink-akka.actor.default-dispatcher-1510] INFO o.a.f.r.e.ExecutionGraph - Sink: Unnamed (1/10) (afcdc1127308009d1abfa35b8e6153de) switched from RUNNING to CANCELING.
21:55:10.288 [flink-akka.actor.default-dispatcher-1517] INFO o.a.flink.runtime.taskmanager.Task

Error "not supported" thrown when calling MongoDB startSession()

The error message (seen here in a Spring application): "Sessions are not supported by the MongoDB cluster to which this client is connected"

Causes:

1. MongoDB version too low: versions below 4.0 do not support transactions, and this error is raised in that case.

2. Authentication / security configuration problems.

Example of the error:

Exception in thread "main" com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
	at com.mongodb.client.internal.MongoClientImpl.startSession(MongoClientImpl.java:128)
	at com.mongodb.client.internal.MongoClientImpl.startSession(MongoClientImpl.java:114)
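A pre-flight check can make cause (1) explicit before ever calling `startSession()`. The sketch below is a pure-Java diagnostic aid, assuming the version string comes from the server's `buildInfo` command and the topology type is known; the driver itself decides support from the hello/isMaster response (`logicalSessionTimeoutMinutes`), so treat this as a debugging helper, not the authoritative check.

```java
/**
 * Pre-flight check mirroring MongoDB's session support rules:
 * sessions need server >= 3.6; multi-document transactions need >= 4.0
 * on a replica set (>= 4.2 for sharded clusters). The version string is
 * assumed to come from the buildInfo command.
 */
public class SessionSupportCheck {

    static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[] {Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
    }

    static boolean atLeast(String version, int major, int minor) {
        int[] v = parse(version);
        return v[0] > major || (v[0] == major && v[1] >= minor);
    }

    /** Sessions (startSession) require server 3.6+. */
    public static boolean supportsSessions(String serverVersion) {
        return atLeast(serverVersion, 3, 6);
    }

    /** Multi-document transactions: 4.0+ replica set, 4.2+ sharded cluster. */
    public static boolean supportsTransactions(String serverVersion, boolean sharded) {
        return sharded ? atLeast(serverVersion, 4, 2) : atLeast(serverVersion, 4, 0);
    }

    public static void main(String[] args) {
        System.out.println(supportsSessions("3.4.10"));           // false
        System.out.println(supportsTransactions("4.0.3", false)); // true
        System.out.println(supportsTransactions("4.0.3", true));  // false
    }
}
```

If the version checks pass but the error persists, look at cause (2): with authentication misconfigured, the driver may never complete the handshake that advertises session support.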

OPS cannot update mongos when a data node is abnormal

After adding a config server node to a MongoDB sharded cluster through OPS, OPS rewrites the mongos configuration file and restarts mongos. If a shard node is abnormal at that moment, the operation blocks until the data node returns to a healthy state.

No server chosen by ReadPreferenceServerSelector

2020-07-22 23:17:20.717 o.m.d.cluster Thread-21-PROCESS_BOLT-executor[92 92] [INFO] No server chosen by ReadPreferenceServerSelector{readPreference=ReadPreference{name=secondaryPreferred}} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=110.180.131.xx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2020-07-22 23:17:20.717 c.g.s.u.ResourceManager Thread-33-PROCESS_BOLT-executor[98 98] [INFO] finished init stormConfig
2020-07-22 23:17:20.717 o.m.d.cluster Thread-55-PROCESS_BOLT-executor[74 74] [INFO] No server chosen by ReadPreferenceServerSelector{readPreference=ReadPreference{name=secondaryPreferred}} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=110.180.131.xxx:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2020-07-22 23:17:20.715 o.m.d.cluster Thread-43-PARSE_BOLT-executor[50 50] [INFO] No server chosen by ReadPreferenceServerSelector{readPreference=ReadPreference{name=secondaryPreferred}} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[
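The "Waiting for 30000 ms before timing out" in the logs above matches the driver's `serverSelectionTimeoutMS` default of 30 seconds: every member is stuck in `state=CONNECTING`, so no server can satisfy the read preference. Beyond fixing the underlying network or authentication problem, callers often wrap the operation in a retry with capped exponential backoff so a briefly unreachable cluster does not fail the job outright. The helper below is a generic pure-Java sketch; the names, base delay, and cap are illustrative, and in a real deployment the caught exception would be the driver's `MongoTimeoutException`.

```java
import java.util.concurrent.Callable;

/** Retries an operation with capped exponential backoff. */
public class BackoffRetry {

    /** Delay before attempt n (0-based): base * 2^n, capped at maxMs. */
    static long delayMs(int attempt, long baseMs, long maxMs) {
        long d = baseMs << Math.min(attempt, 20); // clamp shift to avoid overflow
        return Math.min(d, maxMs);
    }

    public static <T> T callWithRetry(Callable<T> op, int maxAttempts,
                                      long baseMs, long maxMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) { // e.g. MongoTimeoutException from server selection
                last = e;
                Thread.sleep(delayMs(attempt, baseMs, maxMs));
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice, then succeeds — simulates a cluster coming back online.
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("No server chosen");
            return "ok";
        }, 5, 10, 1000);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Retrying only buys time; if all members stay in `CONNECTING`, check DNS resolution, firewall rules, and whether the credentials in the connection string are valid for the target cluster.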

com.mongodb.MongoQueryException: CollectionScan died due to position in capped collection being deleted

Caused by: com.mongodb.MongoQueryException: Query failed with error code 136 and error message 'Executor error during find command :: caused by :: errmsg: "CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(6852600765781915978)"' on server :27119
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:735)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:725)
at com.mongodb.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:463)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:725)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:89)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:189)
at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:143)
at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)
at org.apache.flink.streaming.connectors.mongodb.worker.BackendOplogReader.read(BackendOplogReader.java:59)
at org.apache.flink.streaming.connectors.mongodb.worker.Worker.read(Worker.java:178)
at org.apache.flink.streaming.connectors.mongodb.MongoSource.run(MongoSource.java:33)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
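Error code 136 (`CappedPositionLost`) means the tailing reader fell behind the oplog window: the oplog is a capped collection, and the document the cursor was positioned on was overwritten before the next read. Recovery is to reconnect and resume from the last checkpointed `ts`; if even that timestamp has rolled out of the oplog, a full resync is unavoidable. The checkpoint bookkeeping below is a pure-Java sketch under that assumption — the class and method names are illustrative, and the resume itself would use a `{ts: {$gte: checkpoint}}` filter on the reopened cursor.

```java
/**
 * Tracks the last applied oplog timestamp (a BSON Timestamp is a
 * seconds + increment pair) so a tailing reader can resume after a
 * CappedPositionLost / cursor-not-found error.
 */
public class OplogCheckpoint {
    private long seconds;
    private long inc;

    public OplogCheckpoint(long seconds, long inc) {
        this.seconds = seconds;
        this.inc = inc;
    }

    /** Advance only forward; oplog entries arrive in (seconds, inc) order. */
    public void advance(long s, long i) {
        if (s > seconds || (s == seconds && i > inc)) {
            seconds = s;
            inc = i;
        }
    }

    /**
     * True if the oldest entry still in the oplog is newer than our checkpoint,
     * i.e. the window was overwritten and a full resync is required.
     */
    public boolean fellBehind(long oldestSeconds, long oldestInc) {
        return oldestSeconds > seconds || (oldestSeconds == seconds && oldestInc > inc);
    }

    public long seconds() { return seconds; }
    public long inc() { return inc; }
}
```

Checkpointing frequently and sizing the oplog generously (so its window comfortably exceeds the reader's worst-case lag plus restart time) are the two knobs that keep this error from recurring.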
