
kubernetes - How to expose kube-dns service for queries outside cluster?

I'm trying to expose the "kube-dns" service so it can be queried from outside of the Kubernetes cluster. To do this, I edited the "Service" definition to change "type" from "ClusterIP" to "NodePort", which seemed to work fine.
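
For reference, one way to make that change without editing the full YAML is a small kubectl patch along these lines (a sketch, assuming the service sits in the kube-system namespace as shown below):

$ kubectl patch service kube-dns --namespace kube-system -p '{"spec": {"type": "NodePort"}}'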

However, when I attempt to query on the node port, I'm able to get a TCP session (testing with Telnet) but can't seem to get any response from the DNS server (testing with dig).
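
For concreteness, a query against the UDP node port from the service definition below would look roughly like this (the node address is a placeholder):

$ dig -p 31257 @<node-ip> kubernetes.default.svc.cluster.local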

I've looked through the logs of each container in the "kube-dns" Pod but can't see anything untoward. Additionally, querying the DNS from within the cluster (from a running container) appears to work without any issues.
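
The in-cluster check is along these lines, pointing at the service's clusterIP from the definition below (the pod name is a placeholder):

$ kubectl exec -it <some-pod> -- nslookup kubernetes.default.svc.cluster.local 10.0.0.10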

Has anyone tried to expose the kube-dns service before? If so, are there any additional setup steps or do you have any debugging advice for me?

The service definition is as follows:

$ kubectl get service kube-dns -o yaml --namespace kube-system
apiVersion: v1
kind: Service
metadata:
...
spec:
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    nodePort: 31257
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    nodePort: 31605
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

1 Reply


Are you querying on the TCP port or the UDP port?

I changed my kube-dns to be a NodePort service:

$ kubectl describe services kube-dns --namespace kube-system
Name:           kube-dns
Namespace:      kube-system
Labels:         k8s-app=kube-dns
                kubernetes.io/cluster-service=true
                kubernetes.io/name=KubeDNS
Selector:       k8s-app=kube-dns
Type:           NodePort
IP:         10.171.240.10
Port:           dns 53/UDP
NodePort:       dns 30100/UDP
Endpoints:      10.168.0.6:53
Port:           dns-tcp 53/TCP
NodePort:       dns-tcp 30490/TCP
Endpoints:      10.168.0.6:53
Session Affinity:   None

and then queried on the UDP port from outside of the cluster, and everything appeared to work:

$ dig -p 30100 @10.240.0.4 kubernetes.default.svc.cluster.local

; <<>> DiG 9.9.5-9+deb8u6-Debian <<>> -p 30100 @10.240.0.4 kubernetes.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45472
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 30 IN A   10.171.240.1

;; Query time: 3 msec
;; SERVER: 10.240.0.4#30100(10.240.0.4)
;; WHEN: Thu May 26 18:27:32 UTC 2016
;; MSG SIZE  rcvd: 70

Right now, Kubernetes does not allow NodePort services to share the same port for TCP & UDP (see Issue #20092). That makes this a little funky for something like DNS.
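
To exercise the TCP side as well, you have to hit the separate TCP node port explicitly, with something like this (using the TCP NodePort and node address from the output above):

$ dig +tcp -p 30490 @10.240.0.4 kubernetes.default.svc.cluster.local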

EDIT: The bug was fixed in Kubernetes 1.3.

