Version
3.5.1
Describe the problem you're encountering
Originally we had a 5-node cluster (membership was 5 nodes). We created a new node (the 6th node) and added it to the GCP load balancer. Before this 6th node was made part of the cluster membership, we found that some database shards had been created on it. We would like to know why shards were created on the 6th node even though it was not part of the membership. For these databases we can now see only the documents on the 6th node; we can no longer access any of the earlier documents located on the other nodes.
For these databases (example database: xxxx_xxxxx-166), the shard map now points only at the 6th node, and we are missing all the documents located on the other nodes.
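For reference, the cluster's view of its own membership can be checked with the standard _membership endpoint and compared against the _shards output below; a node that appears in the shard map but not in cluster_nodes indicates the two have diverged. Host and credentials are placeholders matching the redacted examples:
# all_nodes = nodes this node is connected to; cluster_nodes = configured members
curl -s -u 'admin':XXXX 'https://xxxx-xxxx-master.xx-xx-xxxxxx.str.xxxxxxxx.com:443/_membership' | jq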
curl -u 'admin':XXXX -X GET 'https://xxxx-xxxx-master.xx-xx-xxxxxx.str.xxxxxxxx.com:443/xxxx_xxxxx-166/_shards' | jq
{
  "shards": {
    "00000000-1fffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "20000000-3fffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "40000000-5fffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "60000000-7fffffff": [
      "couchdb@xxxx-xxxx-master-6.c.l"
    ],
    "80000000-9fffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "a0000000-bfffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "c0000000-dfffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ],
    "e0000000-ffffffff": [
      "couchdb@xxxx-xxxx-master-6.c."
    ]
  }
}
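For deeper diagnosis, the full shard map document, including the q value, the shard suffix, and a changelog of how the map reached this state, can be fetched from the node-local _dbs database (the shard-management endpoint documented for CouchDB 3.x; host and credentials are placeholders):
# by_node / by_range show the current node-6-only layout; the changelog
# records when each shard entry was added
curl -s -u 'admin':XXXX 'https://xxxx-xxxx-master.xx-xx-xxxxxx.str.xxxxxxxx.com:443/_node/_local/_dbs/xxxx_xxxxx-166' | jq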
root@xxxx-xxxxx-master-6:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 27816195 May 13 16:01 00000000-1fffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 55939331 May 13 16:26 20000000-3fffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 28541187 May 13 16:26 40000000-5fffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 70893827 May 13 16:26 60000000-7fffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 27926787 May 13 16:26 80000000-9fffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 55787779 May 13 16:26 a0000000-bfffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 70152451 May 13 16:26 c0000000-dfffffff/xxxx_xxxxx-166.1764818594.couch
-rw-r--r-- 1 couchdb couchdb 28586240 May 13 16:26 e0000000-ffffffff/xxxx_xxxxx-166.1764818594.couch
The shards located on the other nodes (nodes 1 to 5) are:
root@xxxx-xxxxxx-master-1:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 605348099 Dec 4 03:29 00000000-3fffffff/xxxx_xxxxx-166.1642704430.couch
-rw-r--r-- 1 couchdb couchdb 495493379 Dec 4 17:46 c0000000-ffffffff/xxxx_xxxxx-166.1642704430.couch
root@xxxx-xxxxxx-master-2:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 601940227 Dec 4 03:29 00000000-3fffffff/xxxx_xxxxx-166.1642704430.couch
-rw-r--r-- 1 couchdb couchdb 575910147 Dec 4 03:27 80000000-bfffffff/xxxx_xxxxx-166.1642704430.couch
root@xxxx-xxxxxx-master-3:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 573518083 Dec 4 03:27 80000000-bfffffff/xxxx_xxxxx-166.1642704430.couch
root@xxxx-xxxxxx-master-4:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 575324419 Dec 4 03:29 40000000-7fffffff/xxxx_xxxxx-166.1642704430.couch
root@xxxx-xxxxxx-master-5:/opt/couchdb/data/shards# ls -l */* | grep xxxx_xxxxx-166
-rw-r--r-- 1 couchdb couchdb 576049411 Dec 4 03:29 40000000-7fffffff/xxxx_xxxxx-166.1642704430.couch
-rw-r--r-- 1 couchdb couchdb 514052355 Dec 4 03:28 c0000000-ffffffff/xxxx_xxxxx-166.1642704430.couch
Note that the node-6 shard files carry a different suffix (1764818594) and an 8-range (q=8) layout, while the original files on nodes 1-5 have suffix 1642704430 and a 4-range (q=4) layout, so node 6 appears to hold a freshly created database rather than copies of the original shards. How can we recover all the documents, both the old ones (from nodes 1-5) and the new ones (from node 6)? Can someone provide a solution?
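One possible direction, offered as a sketch rather than a verified procedure (rehearse on backups first): preserve the node-6 documents by replicating them to a side database, restore the original q=4 shard map via the node-local _dbs document so the old shards on nodes 1-5 become visible again, then merge the preserved documents back. Hosts, credentials, and the side-database name below are placeholders:
# 1. Preserve the documents currently visible (the node-6 copy)
curl -s -u 'admin':XXXX -X POST 'https://<cluster-host>/_replicate' \
  -H 'Content-Type: application/json' \
  -d '{"source": "https://admin:XXXX@<cluster-host>/xxxx_xxxxx-166",
       "target": "https://admin:XXXX@<cluster-host>/xxxx_xxxxx-166_node6_docs",
       "create_target": true}'

# 2. Restore the original shard map: fetch it, set by_node / by_range back to
#    the q=4 layout on nodes 1-5 (suffix .1642704430), append a changelog
#    entry, and PUT it back, following the shard-management procedure in the
#    CouchDB documentation.
curl -s -u 'admin':XXXX 'https://<cluster-host>/_node/_local/_dbs/xxxx_xxxxx-166' | jq

# 3. Once the old documents are visible again, merge the node-6 documents back
curl -s -u 'admin':XXXX -X POST 'https://<cluster-host>/_replicate' \
  -H 'Content-Type: application/json' \
  -d '{"source": "https://admin:XXXX@<cluster-host>/xxxx_xxxxx-166_node6_docs",
       "target": "https://admin:XXXX@<cluster-host>/xxxx_xxxxx-166"}'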
Expected Behaviour
How can we recover all the documents, both the old ones (on nodes 1-5) and the new ones (on node 6)? Can someone provide a solution?
Steps to Reproduce
Not sure how to reproduce; possibly:
Create a new VM and add it to the load balancer without joining it to the cluster membership, so that requests through the load balancer sometimes hit an old node and sometimes hit the new node. Load a large amount of data and run compaction at the same time; this may create shards on the new node.
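A plausible mechanism, offered as an assumption consistent with the evidence above (new shard suffix, eight ranges): any database-creation request the load balancer routes to the not-yet-joined node creates a brand-new local database there, with a shard map listing only that node; when the node later joins the cluster, the newer map can shadow the original one. A minimal sketch, with placeholder hosts:
# Sent through the load balancer, this may land on the standalone node; if the
# database does not exist there yet, that node creates it locally (with
# whatever q its configuration specifies; eight ranges in the listing above)
curl -s -u 'admin':XXXX -X PUT 'https://<lb-host>/xxxx_xxxxx-166'

# On the standalone node the resulting shard map lists only itself
curl -s -u 'admin':XXXX 'https://<node6-host>/xxxx_xxxxx-166/_shards' | jq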
Your Environment
{
  "couchdb": "Welcome",
  "version": "3.5.1",
  "git_sha": "44xxxx",
  "uuid": "0dd90435a009a3fb01ffad6410xxxxx",
  "features": [
    "access-ready",
    "partitioned",
    "pluggable-storage-engines",
    "reshard",
    "scheduler"
  ],
  "vendor": {
    "name": "The Apache Software Foundation"
  }
}
We are on GCP, running a 6-VM cluster.
Additional Context
No response