
[BUG] cannot connect to replicaset mode mongodb outside k8s cluster #8373

Closed
Pieerot opened this issue Oct 31, 2024 · 5 comments
@Pieerot

Pieerot commented Oct 31, 2024

Describe the bug
When I try to connect to a replicaset-mode MongoDB deployment with the Go mongo driver, the driver monitors the cluster topology and replaces the seed list with the headless service domains. Outside the k8s cluster those headless domains cannot be resolved, so the connection fails with a "no such host" error.

I have exposed each pod with a NodePort service, and if I connect directly to a single pod it works properly.
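
For reference, the direct connection that does work looks roughly like this (a minimal sketch: SetDirect skips replica-set discovery, so the headless names are never resolved; the endpoint is one of the NodePort addresses and the password is a placeholder):

package main

import (
	"context"
	"fmt"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Direct connection to a single member: SetDirect(true) disables
	// replica-set discovery, so the driver only talks to this endpoint
	// and never tries to resolve the in-cluster headless hostnames.
	opts := options.Client().
		ApplyURI("mongodb://10.52.140.27:32413/?authSource=admin").
		SetDirect(true).
		SetAuth(options.Credential{Username: "root", Password: "placeholder"})

	client, err := mongo.Connect(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.TODO())

	if err := client.Ping(context.TODO(), nil); err != nil {
		panic(err)
	}
	fmt.Println("connected directly to a single member")
}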

I noticed that Lorry uses the headless domains to configure the component's replicaset topology. Is there any configuration to use custom external domains in the component topology, or is there any other way for me to connect to MongoDB from outside the k8s cluster?

$ kubectl get svc -n mongo-jy-test09
NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                 AGE
mongo-jy-test09-mongodb                      ClusterIP   10.97.194.156    <none>        27017/TCP                               9d
mongo-jy-test09-mongodb-headless             ClusterIP   None             <none>        27017/TCP,3601/TCP,3501/TCP,50001/TCP   9d
mongo-jy-test09-mongodb-mongodb              ClusterIP   10.100.55.69     <none>        27017/TCP                               9d
mongo-jy-test09-mongodb-mongodb-nodeport-0   NodePort    10.101.236.126   <none>        27017:32413/TCP                         2d
mongo-jy-test09-mongodb-mongodb-nodeport-1   NodePort    10.108.123.16    <none>        27017:31196/TCP                         2d
mongo-jy-test09-mongodb-mongodb-nodeport-2   NodePort    10.108.173.85    <none>        27017:31721/TCP                         2d
mongo-jy-test09-mongodb-mongodb-ro           ClusterIP   10.101.35.109    <none>        27017/TCP                               9d

$ kubectl get pods -n mongo-jy-test09 -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
mongo-jy-test09-mongodb-0   2/2     Running   0          6d1h   10.244.0.40   node1   <none>           <none>
mongo-jy-test09-mongodb-1   2/2     Running   0          6d1h   10.244.0.41   node1   <none>           <none>
mongo-jy-test09-mongodb-2   2/2     Running   0          6d1h   10.244.0.42   node1   <none>           <none>

$ kubectl get endpoints -n mongo-jy-test09
NAME                                         ENDPOINTS                                                           AGE
mongo-jy-test09-mongodb                      10.244.0.40:27017,10.244.0.41:27017,10.244.0.42:27017               9d
mongo-jy-test09-mongodb-headless             10.244.0.40:50001,10.244.0.41:50001,10.244.0.42:50001 + 9 more...   9d
mongo-jy-test09-mongodb-mongodb              10.244.0.42:27017                                                   9d
mongo-jy-test09-mongodb-mongodb-nodeport-0   10.244.0.40:27017                                                   2d
mongo-jy-test09-mongodb-mongodb-nodeport-1   10.244.0.41:27017                                                   2d
mongo-jy-test09-mongodb-mongodb-nodeport-2   10.244.0.42:27017                                                   2d
mongo-jy-test09-mongodb-mongodb-ro           10.244.0.40:27017,10.244.0.41:27017                                 9d
panic: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-jy-test09-mongodb-0.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp: lookup mongo-jy-test09-mongodb-0.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local: no such host }, { Addr: mongo-jy-test09-mongodb-1.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp: lookup mongo-jy-test09-mongodb-1.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local: no such host }, { Addr: mongo-jy-test09-mongodb-2.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp: lookup mongo-jy-test09-mongodb-2.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local: no such host }, ] }

To Reproduce
Steps to reproduce the behavior:

  1. Deploy a replicaset-mode MongoDB and expose each pod with a NodePort service.
  2. Use the Go MongoDB driver to connect to the cluster with multiple hosts:
const uri = "mongodb://[email protected]:32413,10.52.140.27:31196,10.52.140.27:31721/?authSource=admin"

func main() {

	serverAPI := options.ServerAPI(options.ServerAPIVersion1)
	opts := options.Client().ApplyURI(uri).SetServerAPIOptions(serverAPI).SetAuth(options.Credential{
		Username: "root",
		Password: "0RV6j6P098BZ0F8w",
	})

	client, err := mongo.Connect(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err = client.Disconnect(context.TODO()); err != nil {
			panic(err)
		}
	}()

	var result bson.M
	if err := client.Database("admin").RunCommand(context.TODO(), bson.D{{"ping", 1}}).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println("Pinged your deployment. You successfully connected to MongoDB!")
}

Expected behavior
Able to connect to MongoDB from outside the k8s cluster.

Screenshots

Desktop (please complete the following information):

  • go.mongodb.org/mongo-driver: v1.16.1
  • mongodb: 5.0.14
  • kubeblocks: v0.9.0

Additional context

@Pieerot Pieerot added the kind/bug Something isn't working label Oct 31, 2024
@xuriwuyun
Contributor

xuriwuyun commented Oct 31, 2024

In KubeBlocks 0.9.0, MongoDB supports initialization with host networking, enabling access through the host IP from outside Kubernetes. Before using this feature, you should upgrade the MongoDB addon to version 0.9.1 and create a new cluster with the following commands:

helm repo add kubeblocks-addons https://apecloud.github.io/helm-charts

helm upgrade -i kb-addon-mongodb kubeblocks-addons/mongodb -n kb-system --version 0.9.1

helm install your-cluster-name kubeblocks-addons/mongodb-cluster --version 0.9.1 --set hostnetwork=enable

@Pieerot
Author

Pieerot commented Oct 31, 2024


Thank you for your reply, but it seems that host networking cannot meet my needs.
In fact, I would like to configure a separate domain for each pod in a replicaset MongoDB, such as [example.mongo-01.com, example.mongo-02.com, ...], and connect to MongoDB through those domains.

The main issue is that the hosts in the replicaset's config are the headless service domains. The MongoDB driver monitors the cluster topology and attempts to connect to these headless domains; I have attached the specific rs.conf() below. The initialization of the replica set and the members' joining and leaving of the cluster are managed by Lorry, which seemingly only uses headless domain names as hosts (see the Lorry code below). This makes it impossible to connect to MongoDB from outside the k8s cluster.

func (c *Cluster) GetMemberAddrWithPort(member Member) string {
	addr := c.GetMemberAddr(member)
	return fmt.Sprintf("%s:%s", addr, member.DBPort)
}

// GetMemberAddr returns the pod IP when UseIP is set; otherwise it builds the
// pod's FQDN on the component's headless service:
// <pod>.<component>-headless.<namespace>.svc.<cluster-domain>
func (c *Cluster) GetMemberAddr(member Member) string {
	if member.UseIP {
		return member.PodIP
	}
	clusterDomain := viper.GetString(constant.KubernetesClusterDomainEnv)
	clusterCompName := ""
	index := strings.LastIndex(member.Name, "-")
	if index > 0 {
		clusterCompName = member.Name[:index]
	}
	return fmt.Sprintf("%s.%s-headless.%s.svc.%s", member.Name, clusterCompName, c.Namespace, clusterDomain)
}
mongo-jy-test09-mongodb [direct: primary] admin> rs.conf()
{
  _id: 'mongo-jy-test09-mongodb',
  version: 14,
  term: 12,
  members: [
    {
      _id: 0,
      host: 'mongo-jy-test09-mongodb-0.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'mongo-jy-test09-mongodb-1.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'mongo-jy-test09-mongodb-2.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 2,
      tags: {},
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("67161fede62d649f8c72cf60")
  }
}

@xuriwuyun
Contributor

Yes, you're right. The MongoDB cluster topology is managed by Lorry, which supports headless services and host networking to configure the replicaset. However, it cannot recognize or use domains outside of Kubernetes. Is it possible to use the host network to initialize the MongoDB replicaset and use an external domain that points to the host IP?

@Pieerot
Author

Pieerot commented Nov 4, 2024


Thank you for your suggestion, I will try this solution.
I have upgraded kbcli and KubeBlocks to v0.9.1, but I encountered some problems when creating a mongo cluster with the following command:
helm install hosttest kubeblocks-addons/mongodb-cluster --version 0.9.1 --set mode=replicaset --set replicas=3 --set hostnetwork=enabled
The hostnetwork setting doesn't seem to be working.

hosttest-mongodb [direct: primary] admin> rs.config()
{
  _id: 'hosttest-mongodb',
  version: 5,
  term: 1,
  members: [
    {
      _id: 0,
      host: 'hosttest-mongodb-0.hosttest-mongodb-headless.mongotest091.svc:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 2,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 1,
      host: 'hosttest-mongodb-1.hosttest-mongodb-headless.mongotest091.svc:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 2,
      host: 'hosttest-mongodb-2.hosttest-mongodb-headless.mongotest091.svc:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('6728bb5712f158c7b75453d9')
  }
}

I noticed that the hostnetwork only works when useLegacyCompDef=true, so I tried to set useLegacyCompDef=true, but the precheck of the created cluster failed with the following message.

Status:
  Conditions:
    Last Transition Time:  2024-11-04T12:26:51Z
    Message:               ClusterVersion.apps.kubeblocks.io "6.0.16" not found
    Reason:                PreCheckFailed
    Status:                False
    Type:                  ProvisioningStarted

What should I do next? Looking forward to your reply.

By the way, I believe host networking only meets the needs of testing and development; there may be port conflicts or other risks in production. If there is a more elegant solution, it would be very helpful to me.

@xuriwuyun
Contributor

I apologize for the issue and appreciate your response regarding the problem. The MongoDB addon was recently upgraded to a new API, and the host network is not functioning normally. We have fixed this in the new chart. Please upgrade using the same commands as before. The host ports are managed by KubeBlocks, which ensures that there are no conflicts with resources it controls.
helm repo add kubeblocks-addons https://apecloud.github.io/helm-charts

helm upgrade -i kb-addon-mongodb kubeblocks-addons/mongodb -n kb-system --version 0.9.1

helm install your-cluster-name kubeblocks-addons/mongodb-cluster --version 0.9.1 --set hostnetwork=enabled,mode=replicaset
I think a better solution would be to have Kubeblocks support clusters created with domains specified for each replica. However, there is a challenge with domain management outside of Kubernetes in situations such as scale-in and scale-out. Kubeblocks cannot manage resources outside of Kubernetes, and not everyone has a management system for domains.

Another possible solution is to use the headless domain and configure IP mapping for that domain outside Kubernetes. This way the domain names remain consistent inside and outside the cluster, though the IP addresses differ. (This might work, but I am not sure.)
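
A related client-side workaround (just an untested sketch, not something KubeBlocks provides) is to keep the headless hostnames in the topology and remap them inside the application using the Go driver's SetDialer hook. The mapping below reuses the node IP and NodePorts from your earlier comment and would need to match whatever is actually reachable from outside:

package main

import (
	"context"
	"fmt"
	"net"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// remapDialer rewrites the in-cluster headless addresses from the replica set
// config to endpoints that are reachable from outside Kubernetes.
type remapDialer struct {
	mapping map[string]string
	dialer  net.Dialer
}

func (d *remapDialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
	if external, ok := d.mapping[address]; ok {
		address = external
	}
	return d.dialer.DialContext(ctx, network, address)
}

func main() {
	// Hypothetical mapping: headless FQDN:27017 -> NodePort endpoint
	// (node IP and ports taken from the earlier comments in this issue).
	mapping := map[string]string{
		"mongo-jy-test09-mongodb-0.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017": "10.52.140.27:32413",
		"mongo-jy-test09-mongodb-1.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017": "10.52.140.27:31196",
		"mongo-jy-test09-mongodb-2.mongo-jy-test09-mongodb-headless.mongo-jy-test09.svc.cluster.local:27017": "10.52.140.27:31721",
	}

	// Seed with the NodePort endpoints; the driver will still switch to the
	// rs.conf() hostnames, but every dial goes through remapDialer.
	// Auth options are omitted for brevity.
	uri := "mongodb://10.52.140.27:32413,10.52.140.27:31196,10.52.140.27:31721/?authSource=admin&replicaSet=mongo-jy-test09-mongodb"
	opts := options.Client().ApplyURI(uri).SetDialer(&remapDialer{mapping: mapping})

	client, err := mongo.Connect(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.TODO())

	if err := client.Ping(context.TODO(), nil); err != nil {
		panic(err)
	}
	fmt.Println("connected via remapped headless addresses")
}

The drawback is that the mapping has to be maintained in every client, so it is more of a stopgap than a real fix.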

@github-actions github-actions bot added this to the Release 0.9.2 milestone Nov 8, 2024