zhangst

Member Since 9 years ago

HangZhou, China

6 followers · 6 following · 4 stars · 4 repos
2 contributions in the last year

Pinned
⚡ Multi-Read from one bind port
⚡ Git Source Code Mirror - This is a publish-only repository and all pull requests are ignored. Please follow Documentation/SubmittingPatches procedure for any of your improvements.
⚡ Documentation for gotoos.
⚡ Kamailio - The Open Source SIP Server
Activity
Nov 30 (5 days ago)
issue

zhangst issue comment alibaba/MongoShake

zhangst

Sharded cluster change_stream incremental sync failed, how to recover (version 2.6.4)

An online incremental-mode change_stream sync was interrupted by a network problem; the final error log said a fresh full sync was required. I deleted the ckpt collection and restarted, but it neither syncs data nor regenerates the ckpt collection.

I manually created a checkpoint document { "_id" : ObjectId("61a59e272e5e7f812b0355c7"), "name" : "mongos", "ckpt" : Timestamp(1638243875, 1572), "fetch_method" : "", "oplog_disk_queue" : "", "oplog_disk_queue_apply_finish_ts" : Timestamp(0, 1), "version" : 2 }, but after restarting it still did not resume incremental sync:

[2021/11/30 11:50:06 CST] [INFO] Close session with xxxxxxxxxxxxxxxxxxxx
[2021/11/30 11:50:06 CST] [INFO] all node timestamp map: map[mongodSrv27031:{7035210311212728710 7036205287631487987} mongodSrv27032:{7035895629079380179 7036205287631487998}]
[2021/11/30 11:50:06 CST] [INFO] try to fetch mongos checkpoint
[2021/11/30 11:50:06 CST] [INFO] New session to mongodb:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx successfully
[2021/11/30 11:50:06 CST] [INFO] mongos Load exist checkpoint. content {"name":"mongos","ckpt":7036203865997313572,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2021/11/30 11:50:06 CST] [INFO] mongodSrv27031 checkpoint using mongos: {"name":"mongos","ckpt":7036203865997313572,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2021/11/30 11:50:06 CST] [INFO] mongodSrv27032 checkpoint using mongos: {"name":"mongos","ckpt":7036203865997313572,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2021/11/30 11:50:06 CST] [INFO] sync mode run incr
[2021/11/30 11:50:06 CST] [INFO] start running with mode[incr], fullBeginTs[0[0, 0]]
[2021/11/30 11:50:06 CST] [INFO] start incr replication
[2021/11/30 11:50:07 CST] [INFO] Oplog sync[mongos] create checkpoint manager with url[mongodb:/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx] table[mongoshake.xxx] start-position[7036203865997313572[1638243875, 1572]]
[2021/11/30 11:50:07 CST] [INFO] load checkpoint value: {"name":"mongos","ckpt":7036203865997313572,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2021/11/30 11:50:07 CST] [INFO] persister replset[mongos] update fetch status to: store memory and apply
[2021/11/30 11:50:07 CST] [INFO] mongos Load exist checkpoint. content {"name":"mongos","ckpt":7036203865997313572,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2021/11/30 11:50:07 CST] [INFO] set query timestamp: 7036203865997313572[1638243875, 1572]
[2021/11/30 11:50:07 CST] [INFO] start fetcher with src
[2021/11/30 11:50:07 CST] [INFO] change stream options: BatchSize[8192] MaxAwaitTime[24h0m0s] StartAtOperationTime[{1638243875 1572}]
[2021/11/30 11:50:11 CST] [INFO] [name=mongos, stage=incr, get=0, filter=0, write_success=0, tps=0, ckpt_times=0, lsn_ckpt={0[0, 0], 1970-01-01 08:00:00}, lsn_ack={0[0, 0], 1970-01-01 08:00:00}]]
[2021/11/30 11:50:16 CST] [INFO] [name=mongos, stage=incr, get=0, filter=0, write_success=0, tps=0, ckpt_times=0, lsn_ckpt={0[0, 0], 1970-01-01 08:00:00}, lsn_ack={0[0, 0], 1970-01-01 08:00:00}]]

How should this be repaired? Should I start another all-mode sync? I found this Q&A elsewhere online. Q: On a sharded cluster I first used the oplog method and ran full plus incremental sync for a while, then enabled the balancer and mongoshake crashed; after switching to change stream mode, why can it not pull data? A: In oplog mode there is one checkpoint per mongod, while in change_stream mode there is a single global checkpoint. The checkpoint is what resume relies on: on startup the tool fetches this position, and if the position does not exist there is a problem. The oplog-mode checkpoint is incompatible with the change_stream-mode checkpoint, so they cannot be mixed. The correct usage is to start in all mode with change_stream; neither full nor incremental sync requires disabling the balancer.
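The 64-bit ckpt values in these logs are MongoDB timestamps packed into a single integer: the high 32 bits hold the seconds and the low 32 bits the increment, which is why the log prints 7036203865997313572[1638243875, 1572]. A small sketch of that encoding (the function names here are illustrative, not MongoShake's actual API):

```go
package main

import "fmt"

// packCheckpoint packs a MongoDB timestamp (seconds, increment)
// into the single 64-bit value seen in the MongoShake logs.
func packCheckpoint(seconds, increment uint32) uint64 {
	return uint64(seconds)<<32 | uint64(increment)
}

// unpackCheckpoint reverses the packing.
func unpackCheckpoint(ckpt uint64) (seconds, increment uint32) {
	return uint32(ckpt >> 32), uint32(ckpt & 0xFFFFFFFF)
}

func main() {
	// The value from the log above: 7036203865997313572[1638243875, 1572]
	sec, inc := unpackCheckpoint(7036203865997313572)
	fmt.Println(sec, inc) // 1638243875 1572
	fmt.Println(packCheckpoint(1638243875, 1572)) // 7036203865997313572
}
```

This also explains why a hand-written Timestamp(1638243875, 1572) in the ckpt collection corresponds to the integer the logs print.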

zhangst

This is the standard usage; check the wiki to troubleshoot.
issue

zhangst issue comment alibaba/MongoShake

zhangst

Sharded cluster change_stream incremental sync failed, how to recover (version 2.6.4)

zhangst

Yes, it will pull from the earliest oplog.

Nov 29 (6 days ago)
created branch

zhangst in alibaba/NimoShake create branch improve-1.0.13

created 6 days ago
issue

zhangst issue comment alibaba/MongoShake

zhangst

You can also specify the DB name directly instead of listing every collection, which shortens the value. Does that solve your problem? Your usage pattern is quite rare.
issue

zhangst issue alibaba/MongoShake

zhangst

MongoDB set up locally in Docker, mongoshake keeps erroring on startup?

Conf.Options check failed: connect source mongodb[mongodb://admin:[email protected]:27017/admin] failed[server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.] [17:12:54 CST 2021/11/26] [CRIT] (mongoshake/common.NewMongoConn:24) Connect to mongodb://admin:***@127.0.0.1:27017/admin failed. server returned error on SASL authentication step: BSON field 'saslContinue.mechanism' is an unknown field.

issue

zhangst issue comment alibaba/MongoShake

zhangst

MongoDB set up locally in Docker, mongoshake keeps erroring on startup?

zhangst

MongoShake does not yet support MongoDB 5.0. The next release will, probably next month.

Nov 26 (1 week ago)
push

zhangst push alibaba/MongoShake

zhangst

switch to mongo-go-driver from mgo, in progress

commit sha: dc014251ed5389efcb2ed24ec1632cdc07076747

pushed 1 week ago
created branch

zhangst in alibaba/MongoShake create branch improve-2.6.7

created 1 week ago
issue

zhangst issue comment alibaba/MongoShake

zhangst

syncer_inner_queue is full but the worker queue is empty; how to improve performance

zhangst

Please upload the logs and the complete monitoring information:
issue

zhangst issue alibaba/MongoShake

zhangst

The latest release has a problem

Version: mongo-shake-v2.6.4_2.tar.gz. After extracting, input: ./collector.linux -version; output: $

issue

zhangst issue comment alibaba/MongoShake

zhangst

The latest release has a problem

zhangst

Log output requires specifying -verbose int.

push

zhangst push alibaba/MongoShake

zhangst

kafka: support SSL configuration (#666)

  • Support Kafka SSL (Co-authored-by: zzm)

zhangst

hidden url password in log

commit sha: 49a7afb4b27b594ae0ae8962e66ef74b95b5ddf5

push time in 1 week ago
issue

zhangst issue alibaba/MongoShake

zhangst

Mongo SCRAM-SHA-256 support

Does mongoshake support SCRAM-SHA-256?

issue

zhangst issue comment alibaba/MongoShake

zhangst

Mongo SCRAM-SHA-256 support

zhangst

Yes. MongoShake relies on the mgo package, which supports SCRAM-SHA-256.
issue

zhangst issue alibaba/MongoShake

zhangst

mongoshake 2.6.5 incremental sync

mongoshake 2.6.5, source and target are both mongos, sync mode is all. After entering incremental mode, memory on the mongoshake host grows rapidly, reaching as much as 200 GB, and the logs show chunk merge operations in progress:

[2021/10/14 07:58:13 CST] [INFO] Collector-worker-25 transfer retransmit:false send [32] logs. reply_acked [7018397541473452038[1634098017, 6]], list_unack [0]
[2021/10/14 07:58:13 CST] [INFO] Replayer-39 Executor-39 doSync oplogRecords received[1] merged[1]. merge to 100.00% chunks
[2021/10/14 07:58:13 CST] [INFO] Collector-worker-39 transfer retransmit:false send [1] logs. reply_acked [7018494715108528178[1634120642, 4146]], list_unack [0]
[2021/10/14 07:58:13 CST] [INFO] Replayer-37 Executor-37 doSync oplogRecords received[134] merged[5]. merge to 3.73% chunks
[2021/10/14 07:58:13 CST] [INFO] Collector-worker-37 transfer retransmit:false send [134] logs. reply_acked [7018388526337097804[1634095918, 76]], list_unack [0]
[2021/10/14 07:58:13 CST] [INFO] worker offset [7018411671915855940] use lowest 7018411671915855940[1634101307, 68]
[2021/10/14 07:58:13 CST] [INFO] worker offset [7018418462259150886] use lowest 7018418462259150886[1634102888, 38]
[2021/10/14 07:58:13 CST] [INFO] Replayer-28 Executor-28 doSync oplogRecords received[67] merged[3]. merge to 4.48% chunks
[2021/10/14 07:58:13 CST] [INFO] Collector-worker-28 transfer retransmit:false send [67] logs. reply_acked [7018409034805936130[1634100693, 2]], list_unack [0]
[2021/10/14 07:58:13 CST] [INFO] worker offset [7018388187034681390] use lowest 7018388187034681390[1634095839, 46]

This has happened twice. The first time the balancer on the target was disabled, the second time it was enabled; both times the result was the same: memory on the mongoshake host rose sharply once incremental sync began.

Incremental configuration:

incr_sync.mongo_fetch_method = oplog
incr_sync.change_stream.watch_full_document = false
incr_sync.oplog.gids =
incr_sync.shard_key = collection
incr_sync.shard_by_object_id_whitelist =
incr_sync.worker = 16
incr_sync.tunnel.write_thread = 16
incr_sync.target_delay = 0
incr_sync.worker.batch_queue_size = 64
incr_sync.adaptive.batching_max_size = 1024
incr_sync.fetcher.buffer_capacity = 256
incr_sync.executor.upsert = false
incr_sync.executor.insert_on_dup_update = false
incr_sync.conflict_write_to = none
incr_sync.executor.majority_enable = false
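As a rough back-of-envelope on the queue settings above (assuming, purely as an illustration, that every slot of every worker's batch queue can hold a full batching_max_size batch of oplog entries; this is an assumption about what these knobs mean, not confirmed MongoShake behavior), the configured buffering alone allows on the order of a million in-flight oplog entries:

```go
package main

import "fmt"

func main() {
	// Values taken from the incr_sync configuration in the report.
	workers := 16           // incr_sync.worker
	batchQueueSize := 64    // incr_sync.worker.batch_queue_size
	batchingMaxSize := 1024 // incr_sync.adaptive.batching_max_size

	// Hypothetical worst case: every queue slot of every worker holds a full batch.
	maxBuffered := workers * batchQueueSize * batchingMaxSize
	fmt.Println(maxBuffered) // 1048576
}
```

If individual documents are large, a million buffered oplog entries could by itself account for many gigabytes, which may be one angle for investigating the 200 GB growth.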

issue

zhangst issue comment alibaba/MongoShake

zhangst

mongoshake 2.6.5 incremental sync

zhangst

If none of the nodes can be reached, try connecting with the mongo shell; the network issue must be resolved first.
issue

zhangst issue alibaba/MongoShake

zhangst

Does mongoshake support distributed deployment?

Does mongoshake support distributed deployment? Is distributed deployment with horizontal scaling planned?

issue

zhangst issue comment alibaba/MongoShake

zhangst

Does mongoshake support distributed deployment?

zhangst
issue

zhangst issue alibaba/MongoShake

zhangst

Full-mode sync does not exit after finishing; the task is pulled up again and repeats

After a full sync finishes, the process does not exit automatically; the task is pulled up again and the full sync runs repeatedly. Has anyone run into this? sync_mode = full

issue

zhangst issue comment alibaba/MongoShake

zhangst

Full-mode sync does not exit after finishing; the task is pulled up again and repeats

zhangst

The hypervisor is an auto-restart supervisor, meant for incremental sync that has to run long term. A full sync runs only once, so it does not need the hypervisor.
issue

zhangst issue comment alibaba/MongoShake

zhangst

Getting this error: [CRIT] Replayer-0, executor-0, oplog for namespace[zx.$cmd] op[c] failed. error type[*mgo.QueryError] error[the 'temp' field is an invalid option]

The source is a replica set, the target is a sharded cluster. [2021/11/10 15:39:56 CST] [CRIT] Replayer-0, executor-0, oplog for namespace[zx.$cmd] op[c] failed. error type[*mgo.QueryError] error[the 'temp' field is an invalid option], logs number[1], firstLog: {"ts":7028842807448043521,"v":2,"op":"c","ns":"zx.$cmd","o":[{"Name":"create","Value":"tmp.mr.date_0"},{"Name":"temp","Value":true},{"Name":"idIndex","Value":[{"Name":"v","Value":2},{"Name":"key","Value":[{"Name":"_id","Value":1}]},{"Name":"name","Value":"id"},{"Name":"ns","Value":"zx.tmp.mr.date_0"}]}],"o2":null}

zhangst

Try renaming the "Name":"temp" field.
issue

zhangst issue alibaba/MongoShake

zhangst

Getting this error: [CRIT] Replayer-0, executor-0, oplog for namespace[zx.$cmd] op[c] failed. error type[*mgo.QueryError] error[the 'temp' field is an invalid option]

Nov 23 (1 week ago)
issue

zhangst issue alibaba/MongoShake

zhangst

Full sync: index sync cannot guarantee key order

Looking at the code, index handling uses bson.M{} for the key document, whose element order matters:

// M is an unordered representation of a BSON document. This type should be used when the order of the elements does not
// matter. This type is handled as a regular map[string]interface{} when encoding and decoding. Elements will be
// serialized in an undefined, random order. If the order of the elements matters, a D should be used instead.
//
// Example usage:
//
//     bson.M{"foo": "bar", "hello": "world", "pi": 3.14159}
type M = primitive.M
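The effect can be shown in plain Go without the driver: a map (the shape of bson.M) gives no iteration-order guarantee, while an ordered list of key/value pairs (the role bson.D plays) preserves the order the index was declared with. This sketch is illustrative only and does not use the real bson types:

```go
package main

import (
	"fmt"
	"strings"
)

// elem mimics one element of an ordered document (what bson.D provides).
type elem struct {
	Key   string
	Value int32
}

// indexName builds the conventional MongoDB index name from ordered keys.
func indexName(keys []elem) string {
	parts := make([]string, 0, len(keys))
	for _, e := range keys {
		parts = append(parts, fmt.Sprintf("%s_%d", e.Key, e.Value))
	}
	return strings.Join(parts, "_")
}

func main() {
	// Ordered representation: declaration order is preserved.
	ordered := []elem{{"id_type", 1}, {"did", 1}, {"uid", 1}, {"msg_id", 1}}
	fmt.Println(indexName(ordered)) // id_type_1_did_1_uid_1_msg_id_1

	// Map representation (the shape of bson.M): iteration order is undefined,
	// so a compound key built from it may come out as e.g. {uid, msg_id, id_type, did}.
	unordered := map[string]int32{"id_type": 1, "did": 1, "uid": 1, "msg_id": 1}
	for k, v := range unordered {
		fmt.Printf("%s_%d ", k, v) // some arbitrary order
	}
	fmt.Println()
}
```

This is exactly the failure mode reported below: the index name records the declared order, but the created compound key uses whatever order the map happened to serialize in.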


I modified the cases in doc_syncer_test.go; the test passes, but the final result is not guaranteed to be correct.

The TestStartIndexSync case, with the newly added index:

// newly added case
{
	"unique": true,
	"key": bson2.M{
		"id_type": int32(1),
		"did":     int32(1),
		"uid":     int32(1),
		"msg_id":  int32(1),
	},
	"name":       "id_type_1_did_1_uid_1_msg_id_1",
	"ns":         "test_db.test_coll",
	"background": true,
},

func TestStartIndexSync(t *testing.T) { // test StartIndexSync

var nr int

utils.InitialLogger("", "", "info", true, 1)

// test drop
{
	fmt.Printf("TestStartIndexSync case %d.\n", nr)
	nr++

	conf.Options.FullSyncReaderCollectionParallel = 4

	conn, err := utils.NewMongoCommunityConn(testMongoAddress, utils.VarMongoConnectModeSecondaryPreferred, true,
		utils.ReadWriteConcernLocal, utils.ReadWriteConcernDefault)
	assert.Equal(t, nil, err, "should be equal")

	// drop old db
	err = conn.Client.Database("test_db").Drop(nil)
	assert.Equal(t, nil, err, "should be equal")

	indexInput := []bson2.M{
		{
			"key": bson2.M{
				"_id": int32(1),
			},
			"name": "_id_",
			"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"hello": "hashed",
			},
			"name": "hello_hashed",
			"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"x": int32(1),
				"y": int32(1),
			},
			"name": "x_1_y_1",
			"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"z": int32(1),
			},
			"name": "z_1",
			"ns":   "test_db.test_coll",
		},
		{
			"unique": true,
			"key": bson2.M{
				"id_type": int32(1),
				"did":     int32(1),
				"uid":     int32(1),
				"msg_id":  int32(1),
			},
			"name":       "id_type_1_did_1_uid_1_msg_id_1",
			"ns":         "test_db.test_coll",
			"background": true,
		},
	}
	indexMap := map[utils.NS][]bson2.M{
		utils.NS{"test_db", "test_coll"}: indexInput,
	}
	err = StartIndexSync(indexMap, testMongoAddress, nil, true)
	assert.Equal(t, nil, err, "should be equal")

	cursor, err := conn.Client.Database("test_db").Collection("test_coll").Indexes().List(nil)
	assert.Equal(t, nil, err, "should be equal")

	indexes := make([]bson2.M, 0)

	cursor.All(nil, &indexes)
	assert.Equal(t, nil, err, "should be equal")
	assert.Equal(t, len(indexes), len(indexInput), "should be equal")
	assert.Equal(t, isEqual(indexInput, indexes), true, "should be equal")
}

// serverless
{
	fmt.Printf("TestStartIndexSync case %d.\n", nr)
	nr++

	conf.Options.FullSyncReaderCollectionParallel = 4

	conn, err := utils.NewMongoCommunityConn(testMongoAddressServerless, utils.VarMongoConnectModePrimary, true,
		utils.ReadWriteConcernLocal, utils.ReadWriteConcernDefault)
	assert.Equal(t, nil, err, "should be equal")

	// drop old db
	err = conn.Client.Database("test_db").Drop(nil)
	assert.Equal(t, nil, err, "should be equal")

	indexInput := []bson2.M{
		{
			"key": bson2.M{
				"_id": int32(1),
			},
			"name": "_id_",
			//"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"hello": "hashed",
			},
			"name": "hello_hashed",
			//"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"x": int32(1),
				"y": int32(1),
			},
			"name": "x_1_y_1",
			//"ns":   "test_db.test_coll",
		},
		{
			"key": bson2.M{
				"z": int32(1),
			},
			"name": "z_1",
			//"ns":   "test_db.test_coll",
		},
		{
			"unique": true,
			"key": bson2.M{
				"id_type": int32(1),
				"did":     int32(1),
				"uid":     int32(1),
				"msg_id":  int32(1),
			},
			"name": "id_type_1_did_1_uid_1_msg_id_1",
			//"ns":         "test_db.test_coll",
			//"background": true,
		},
	}
	indexMap := map[utils.NS][]bson2.M{
		utils.NS{"test_db", "test_coll"}: indexInput,
	}
	err = StartIndexSync(indexMap, testMongoAddressServerless, nil, true)
	assert.Equal(t, nil, err, "should be equal")

	cursor, err := conn.Client.Database("test_db").Collection("test_coll").Indexes().List(nil)
	assert.Equal(t, nil, err, "should be equal")

	indexes := make([]bson2.M, 0)

	cursor.All(nil, &indexes)
	assert.Equal(t, nil, err, "should be equal")
	assert.Equal(t, len(indexes), len(indexInput), "should be equal")
	assert.Equal(t, isEqual(indexInput, indexes), true, "should be equal")
}

}


Test result:

=== RUN TestStartIndexSync
TestStartIndexSync case 0.
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] start writing index with background[true], indexMap length[1]
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] Create indexes for ns {test_db test_coll} of dest mongodb finish
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] finish writing index [map[key:map[_id:1] name:id] map[background:true key:map[hello:hashed] name:hello_hashed] map[background:true key:map[did:1 id_type:1 msg_id:1 uid:1] name:id_type_1_did_1_uid_1_msg_id_1 unique:true] map[background:true key:map[x:1 y:1] name:x_1_y_1] map[background:true key:map[z:1] name:z_1]] [map[key:map[_id:1] name:id] map[background:true key:map[hello:hashed] name:hello_hashed] map[background:true key:map[did:1 id_type:1 msg_id:1 uid:1] name:id_type_1_did_1_uid_1_msg_id_1 unique:true] map[background:true key:map[x:1 y:1] name:x_1_y_1] map[background:true key:map[z:1] name:z_1]]
TestStartIndexSync case 1.
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] start writing index with background[true], indexMap length[1]
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:54 CST] [INFO] New session to mongodb://xx.xx.xx.xx:27017 successfully
[2021/10/20 11:00:54 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:55 CST] [INFO] Create indexes for ns {test_db test_coll} of dest mongodb finish
[2021/10/20 11:00:55 CST] [INFO] Close client with mongodb://xx.xx.xx.xx:27017
[2021/10/20 11:00:55 CST] [INFO] finish writing index [map[key:map[_id:1] name:id] map[background:true key:map[hello:hashed] name:hello_hashed] map[background:true key:map[did:1 id_type:1 msg_id:1 uid:1] name:id_type_1_did_1_uid_1_msg_id_1 unique:true] map[background:true key:map[x:1 y:1] name:x_1_y_1] map[background:true key:map[z:1] name:z_1]] [map[key:map[_id:1] name:id] map[background:true key:map[hello:hashed] name:hello_hashed] map[background:true key:map[did:1 id_type:1 msg_id:1 uid:1] name:id_type_1_did_1_uid_1_msg_id_1 unique:true] map[background:true key:map[x:1 y:1] name:x_1_y_1] map[background:true key:map[z:1] name:z_1]]
--- PASS: TestStartIndexSync (0.38s)
PASS


Actual indexes in the db (note the compound key order differs from the declared id_type, did, uid, msg_id order):

[
  { "v" : 1, "key" : { "_id" : 1 }, "name" : "id", "ns" : "test_db.test_coll" },
  { "v" : 1, "key" : { "hello" : "hashed" }, "name" : "hello_hashed", "ns" : "test_db.test_coll", "background" : true },
  { "v" : 1, "key" : { "x" : 1, "y" : 1 }, "name" : "x_1_y_1", "ns" : "test_db.test_coll", "background" : true },
  { "v" : 1, "key" : { "z" : 1 }, "name" : "z_1", "ns" : "test_db.test_coll", "background" : true },
  { "v" : 1, "unique" : true, "key" : { "uid" : 1, "msg_id" : 1, "id_type" : 1, "did" : 1 }, "name" : "id_type_1_did_1_uid_1_msg_id_1", "ns" : "test_db.test_coll", "background" : true }
]

issue

zhangst issue comment alibaba/MongoShake

zhangst

Full sync: index sync cannot guarantee key order

push

zhangst push alibaba/MongoShake

zhangst

kafka: support SSL configuration (#666)

  • Support Kafka SSL (Co-authored-by: zzm)

commit sha: b13d4a8174274042243f435af2061a8ba8fd0e93

pushed 1 week ago