deployment - How do I set up linkage between Docker containers so that restarting won't break it?

I have a few Docker containers running like:

  • Nginx
  • Web app 1
  • Web app 2
  • PostgreSQL

Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:

  • Nginx --- link ---> Web app 1
  • Nginx --- link ---> Web app 2
  • Web app 1 --- link ---> PostgreSQL
  • Web app 2 --- link ---> PostgreSQL

This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.
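
Concretely, the setup and the replacement step look roughly like this (mycompany/webapp1 and mycompany/webapp2 are placeholder image names, not real ones):

    # initial setup -- each --link is resolved once, when docker run executes
    docker run -d --name postgres postgres
    docker run -d --name webapp1 --link postgres:db mycompany/webapp1
    docker run -d --name webapp2 --link postgres:db mycompany/webapp2
    docker run -d --name nginx -p 80:80 --link webapp1:webapp1 --link webapp2:webapp2 nginx

    # deploying a new version of web app 1
    docker stop webapp1
    docker rm webapp1
    docker run -d --name webapp1 --link postgres:db mycompany/webapp1
    # the Nginx container still carries the link data from the *old* webapp1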

For the web app containers, their IP addresses at first would be something like:

  • 172.17.0.2
  • 172.17.0.3

And after I replace them, they will have new IP addresses:

  • 172.17.0.5
  • 172.17.0.6

Here comes the problem: the link environment variables exposed in the Nginx container still point to the old IP addresses. How do I replace a container without breaking the linkage between containers? The same issue also hits PostgreSQL: if I want to upgrade the PostgreSQL image version, I certainly need to remove the old container and run a new one, but then I have to rebuild the whole container graph, so this is not ideal for real-life server operation.
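
For example, the link to web app 1 shows up inside the Nginx container as environment variables like the following (assuming web app 1 exposes port 8080); they are set once, when the Nginx container starts, and are never refreshed:

    docker exec nginx env | grep WEBAPP1
    # WEBAPP1_PORT=tcp://172.17.0.2:8080
    # WEBAPP1_PORT_8080_TCP_ADDR=172.17.0.2
    # WEBAPP1_PORT_8080_TCP_PORT=8080
    # ...still the old 172.17.0.2 even after webapp1 was recreated as 172.17.0.5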


1 Reply


The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).

We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).

1) Use dynamic DNS

The general idea is that you specify a single name for your database (or any other service) and update a DNS server that serves short-lived (low-TTL) records with the actual IP as you start and stop containers.

We started with SkyDock. It works with two Docker containers: the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
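
Roughly, the SkyDock-style setup looks like this; the image names and the trailing options are indicative only, so check the SkyDock and docker-consul READMEs for the exact invocations:

    # 1. DNS server bound to the docker0 bridge address
    docker run -d --name skydns -p 172.17.42.1:53:53/udp crosbymichael/skydns ...

    # 2. monitor that watches the Docker events API and (de)registers containers in the DNS
    docker run -d --name skydock -v /var/run/docker.sock:/docker.sock crosbymichael/skydock ...

    # 3. application containers use that DNS and reach peers by name, not by link variables
    docker run -d --name webapp1 --dns 172.17.42.1 mycompany/webapp1
    # inside webapp1 the database is reached at a stable name such as postgres.dev.docker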

An evolution of this (which we haven't tried) would be to set up etcd or something similar and use its API to learn the IPs and ports. Your software would then need to support dynamic reconfiguration as well.
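
As a sketch of that idea (the key path and port below are made up for illustration): the deploy script records the new container's address in etcd, and consumers read or watch that key and reconfigure themselves:

    # after (re)starting the database container, publish its current address
    NEW_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' postgres)
    etcdctl set /services/postgres "$NEW_IP:5432"   # etcd v2 CLI; v3 uses: etcdctl put

    # consumers read (or watch) the key instead of relying on link variables
    etcdctl get /services/postgres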

2) Use the Docker bridge IP

When exposing the container ports, you can just bind them to the docker0 bridge, which has (or can have) a well-known address.

When replacing a container with a new version, just make the new container publish the same port on the same IP.
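
A sketch of this approach; the bridge address is an example (check yours with ip addr show docker0), and the image name and ports are placeholders:

    # publish the web app on a fixed, well-known bridge address and port
    docker run -d --name webapp1 -p 172.17.42.1:8081:8080 mycompany/webapp1
    # Nginx proxies to 172.17.42.1:8081 and never needs the container's own IP

    # replacing the container: the new one publishes the same address and port
    docker stop webapp1 && docker rm webapp1
    docker run -d --name webapp1 -p 172.17.42.1:8081:8080 mycompany/webapp1:2.0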

This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), etc., so our current favorite is option 1.

