Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
620 views
in Technique by (71.8m points)

c - Linux Socket: How to detect disconnected network in a client program?

I am debugging a C-based Linux socket program. Like all the examples available online, it uses the following structure:

sockfd= socket(AF_INET, SOCK_STREAM, 0);

connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));

send_bytes = send(sockfd, sock_buff, (size_t)buff_bytes, MSG_DONTWAIT);

I can detect the disconnection when the remote server closes its server program. But if I unplug the Ethernet cable, the send function still returns positive values rather than -1.

How can I check the network connection in a client program assuming that I can not change server side?



1 Reply

0 votes
by (71.8m points)

But if I unplug the Ethernet cable, the send function still returns positive values rather than -1.

First of all, you should know that send doesn't actually send anything; it's just a memory-copying function/system call. It copies data from your process into the kernel; sometime later the kernel will fetch that data and send it to the other side after packaging it into segments and packets. Therefore send can only return an error if:

  • The socket is invalid (for example, a bogus file descriptor)
  • The connection is clearly invalid, for example it hasn't been established or has already been terminated in some way (FIN, RST, timeout - see below)
  • There's no more room to copy the data (with MSG_DONTWAIT this shows up as EAGAIN/EWOULDBLOCK rather than blocking)

The main point is that send doesn't send anything and therefore its return code doesn't tell you anything about data actually reaching the other side.

Back to your question: when TCP sends data, it expects a valid acknowledgement in a reasonable amount of time. If it doesn't get one, it resends. How often does it resend? Each TCP stack does things differently, but the norm is to use exponential backoff. That is, first wait 1 second, then 2, then 4, and so on. On some stacks this process can take minutes.

The main point is that in the case of an interruption, TCP will declare a connection dead only after a seriously long period of silence (on Linux it does something like 15 retries - more than 5 minutes).

One way to solve this is to implement an acknowledgement mechanism in your application. You could, for example, send a request to the server ("reply within 5 seconds or I'll declare this connection dead") and then recv with a timeout.

