I'm trying to figure out how to work around a problem on Windows. I'm using C# (net5.0), but an answer in C or C++ is also fine, since I can call functions in DLLs without an issue.
While testing multicast UDP handling on Windows 10, I found a problem: when my buffer is too short for the incoming payload, System.Net.Sockets.Socket.ReceiveMessageFrom fills the buffer to capacity, does not set the SocketFlags.Truncated or SocketFlags.Multicast flags (and setting either one before the call throws a "not supported" exception), and leaves the IPPacketInformation.Address field null. ReceiveMessageFrom's return value is the size of my buffer, not the size of the datagram. No matter what I do, I cannot discover the allocation size I actually need (not even with SocketFlags.Peek).
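In C terms, this is the classic silent truncation of datagram sockets. A quick sketch (Linux/BSD sockets on loopback; function name and the 10-byte buffer are mine) that reproduces the same symptom with a plain recv():

```c
/* Without MSG_TRUNC, the kernel copies what fits into the buffer,
 * discards the rest of the datagram, and returns the buffer size.
 * Nothing signals that data was lost -- the same behavior I am
 * seeing from ReceiveMessageFrom on Windows. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Send a 26-byte datagram over loopback, receive into a 10-byte
 * buffer; returns what recv() reports: 10, with no error at all. */
ssize_t truncated_receive_demo(void)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                     /* kernel picks a free port */
    bind(rx, (struct sockaddr *)&addr, sizeof addr);
    socklen_t alen = sizeof addr;
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    sendto(tx, "ABCDEFGHIJKLMNOPQRSTUVWXYZ", 26, 0,
           (struct sockaddr *)&addr, sizeof addr);

    char small[10];                        /* deliberately too short */
    ssize_t n = recv(rx, small, sizeof small, 0);
    close(tx);
    close(rx);
    return n;                              /* 10: the other 16 bytes are gone */
}
```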
When my buffer is long enough for the incoming payload, the same ReceiveMessageFrom call fills the buffer, sets SocketFlags.Multicast, and sets the IPPacketInformation.Address field to the IP of the multicast group it was received from. The return value in that case is the number of bytes actually received, even when my buffer is larger than that.
On Linux, I can set SocketFlags.Truncated and it works correctly: the too-small buffer is filled, but the return value of ReceiveMessageFrom is the actual size of the incoming datagram. Combined with SocketFlags.Peek, that lets me allocate a new buffer large enough to hold it and retrieve the full data. (This is very much akin to calling, e.g., Windows Registry functions with a zero-length buffer and being told how large a buffer you actually need.)
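For reference, the Linux flag underneath this is MSG_TRUNC from recv(2). A sketch (loopback pair, names mine) of the peek-then-allocate pattern I described:

```c
/* MSG_PEEK | MSG_TRUNC: recv() reports the datagram's true length
 * even though only one byte fits, and leaves it queued so a
 * right-sized buffer can be allocated before the real read. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

ssize_t next_datagram_size(int fd)
{
    char probe;
    return recv(fd, &probe, sizeof probe, MSG_PEEK | MSG_TRUNC);
}

/* Send the 26-byte alphabet over loopback, size the buffer with the
 * peek above, then read the whole payload. Returns 26 on success. */
ssize_t peek_then_read_demo(void)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(rx, (struct sockaddr *)&addr, sizeof addr);
    socklen_t alen = sizeof addr;
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    sendto(tx, "ABCDEFGHIJKLMNOPQRSTUVWXYZ", 26, 0,
           (struct sockaddr *)&addr, sizeof addr);

    ssize_t needed = next_datagram_size(rx);        /* 26, still queued */
    char *buf = malloc((size_t)needed);
    ssize_t got = recv(rx, buf, (size_t)needed, 0); /* full payload */
    free(buf);
    close(tx);
    close(rx);
    return (needed == got) ? got : -1;
}
```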
The alternative to Linux's way would be to allocate a buffer as large as the interface will allow, but I can't find a clean way to get that number: System.Net.NetworkInformation.IPInterfaceProperties doesn't have a .Mtu member, while IPv4InterfaceProperties and IPv6InterfaceProperties (reached via GetIPv4Properties()/GetIPv6Properties()) do. Even then, the MTU isn't really the ceiling: some drivers support a feature called "jumbo frames" of roughly 9 KB, and a single UDP datagram can be fragmented across frames and carry up to 65,507 bytes of payload, so the true worst case is a 64 KiB buffer.
To wit: my multicastsender program sends packets that are 26 bytes long, containing the entire uppercase US alphabet from A to Z.
My multicastreceiver program is where I have been making the changes. When I set my receive buffer in that program to less than 26 bytes long, I get the problematic behavior. When I set it to 26 bytes or more long, I get the correct behavior.
This question is not about OS-level or Winsock-level buffers; I will tune those separately if they need it. I am specifically trying to ensure that the data I retrieve with ReceiveMessageFrom is not truncated. (When the OS doesn't have enough buffer space to queue a datagram, it drops the entire packet; it never queues a partial one. My application, however, is receiving partial data from the call to ReceiveMessageFrom with no indication that anything was truncated, and I need a way around that.)
I am not okay with losing packet space by encoding the size of the data in the area reserved for the data itself, as that will take at least 2 bytes, and I already need to squeeze a lot in here. The UDP header already has a Length field, and that field contains what I need, but I have no access to it.
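Just to make the point concrete, here is the 8-byte header the kernel strips before handing me the payload (struct definition mine, layout per RFC 768). The Length field at offset 4 is exactly the number I'm after (payload size + 8), but it is not surfaced on an ordinary UDP socket:

```c
/* RFC 768 UDP header: four 16-bit fields, 8 bytes total. */
#include <stddef.h>
#include <stdint.h>

struct rfc768_udp_header {
    uint16_t source_port;
    uint16_t dest_port;
    uint16_t length;    /* header + payload, in octets -- what I need */
    uint16_t checksum;
};
```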
Thanks for your help!