
kubernetes - Does a ClusterIP service distribute requests between replica pods?

Do you guys know if a ClusterIP service distributes the workload across the target deployment's replicas?

I have 5 replicas of a backend with a ClusterIP service selecting them. I also have another 5 replicas of nginx pods pointing to this backend deployment. But when I run a heavy request, the backend stops responding to other requests until it finishes the heavy one.

Update

Here is my configuration:

Note: I've replaced some information related to the company.

Content provider deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name:  frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name:  python-gunicorn
        image:  <my-user>/webapp:1.1.2
        command: ["/env/bin/gunicorn", "--bind", "0.0.0.0:8000", "main:app", "--chdir", "/deploy/app", "--error-logfile", "/var/log/gunicorn/error.log", "--timeout", "7200"]
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.25"
          limits:
            # memory: "128Mi"
            cpu: "0.4"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /login
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 1200
      imagePullSecrets:
        # NOTE: the secret has to be created at the same namespace level on which this deployment was created
        - name: dockerhub

Content provider service:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: webapp
    tier: frontend
spec:
  # type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: webapp
    tier: frontend

Nginx deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: ymqytw/nginxhttps:1.5
        command: ["/home/auto-reload-nginx.sh"]
        ports:
        - containerPort: 443
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1200
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.1"
          limits:
            # memory: "128Mi"
            cpu: "0.25"
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume

Nginx service:

apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginxsvc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

Nginx config file:

server {
    server_name local.mydomain.com;
    rewrite ^(.*) https://local.mydomain.com$1 permanent;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html;

    keepalive_timeout 70;
    server_name www.local.mydomain.com local.mydomain.com;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        proxy_pass http://localhost:8000;
        proxy_connect_timeout 7200;
        proxy_send_timeout 7200;
        proxy_read_timeout 7200;
        send_timeout 7200;
    }
}


1 Reply


Yes, a Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests roughly evenly, in a round-robin manner.

The documentation says:

By default, the choice of backend is round robin.
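Concretely, kube-proxy watches the Service's Endpoints object and programs one forwarding rule per ready pod IP, so traffic sent to the ClusterIP is spread across those entries. As a rough sketch, the Endpoints for the frontend Service above would contain something like this (the pod IPs here are hypothetical):

apiVersion: v1
kind: Endpoints
metadata:
  name: frontend              # same name as the Service
subsets:
- addresses:                  # one entry per ready backend pod (IPs are made up for illustration)
  - ip: 10.244.1.12
  - ip: 10.244.2.7
  - ip: 10.244.3.5
  ports:
  - port: 8000                # the Service's targetPort

You can check the live list with kubectl get endpoints frontend; if it shows fewer addresses than you have replicas, some pods are either not Ready or not matched by the Service's selector, and those pods will not receive traffic.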

However, the round-robin distribution of requests may be affected by things like:

  1. Busy backends
  2. Sticky sessions (sessionAffinity: ClientIP on the Service; see the sketch after this list)
  3. Connection affinity (if a backend pod already has an established TCP session or secure tunnel with a client that hits the ClusterIP repeatedly, further requests on that connection stay on the same pod)
  4. Custom host-level / node-level iptables rules outside Kubernetes
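On point 2: session affinity is off by default on a ClusterIP Service, so it only skews the distribution if it was explicitly enabled. A minimal sketch of what that would look like on the frontend Service above (the timeoutSeconds shown is the Kubernetes default):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: webapp
    tier: frontend
  sessionAffinity: ClientIP        # pin all requests from one client IP to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # affinity window (3 hours, the default)
  ports:
  - port: 8000
    targetPort: 8000

With the default sessionAffinity: None, every new connection can land on a different replica, which is what you normally want for a stateless backend.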
