Irregular times for UdpClient.ReceiveAsync with dotnet core running on Linux

I’m writing a UDP client that should read packets as fast as possible.

The client is compiled for .NET Core 3.1 and runs on Linux. I only have one connection, which receives many packets per second. I'm measuring how long each call to ReceiveAsync takes, using a Stopwatch. The variability of the running time is very large (up to 1000x), so how can I

a. reduce the time a single call takes? (I've tried the synchronous Receive method, BeginReceive, and ReceiveAsync after reading recommendations here.)

b. make the running time more consistent? It doesn't seem to correlate with the number of bytes I receive.

Example code:

var myClient = new UdpClient();
// Enable address reuse so other sockets can bind the same multicast port.
myClient.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
myClient.Client.Blocking = false;
myClient.JoinMulticastGroup(address, IPAddress.Parse(config.LocalAdapterAddress));
_EndPoint = new IPEndPoint(address, port);
myClient.Client.ReceiveBufferSize = 64 * 1024;

Stopwatch stopwatch = new Stopwatch();
int i = 0;
while (true)
{
    stopwatch.Restart();
    UdpReceiveResult result = await myClient.ReceiveAsync();
    stopwatch.Stop();

    // TimeSpan ticks are 100 ns each, so this stores elapsed nanoseconds.
    latencies[i++] = stopwatch.Elapsed.Ticks * 100;
    if (i >= latenciesCount)
    {
        Console.WriteLine($"Latencies are {latencies.Average():n} max {latencies.Max():n} min {latencies.Min():n}");
        i = 0;
    }
}
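One caveat about the measurement itself: raw `Stopwatch.ElapsedTicks` values are expressed in units of `1 / Stopwatch.Frequency` seconds, and that frequency is platform-dependent — on .NET Core on Linux it is typically 1 GHz, not the 10 MHz that would make `ElapsedTicks * 100` equal nanoseconds. A frequency-independent conversion, as a sketch:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class TickConversion
{
    // TimeSpan ticks are defined as 100 ns each, independent of the
    // platform-specific Stopwatch.Frequency, so Elapsed.Ticks * 100
    // always yields nanoseconds.
    static long ElapsedNanoseconds(Stopwatch sw) => sw.Elapsed.Ticks * 100;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Thread.Sleep(10);
        sw.Stop();
        Console.WriteLine($"{ElapsedNanoseconds(sw)} ns");
    }
}
```

Reporting the unit explicitly also avoids the ambiguity the reply below points out.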

This forum is not very active, so I think you might struggle to get an answer here. I'd suggest instead opening an issue about this in the dotnet/runtime repo.

When doing that, I think it would also help if you:

  • specify what the actual times are (from your post, it’s not clear to me if we’re talking nanoseconds, seconds, or something in between)
  • provide a full repro that can be used by someone to observe the behavior themselves
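For the second point, a minimal self-contained repro might look like the sketch below. It sends UDP packets to itself over loopback (the port, payload size, and packet rate here are arbitrary choices, not taken from the original setup) and records how long each ReceiveAsync call takes:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class Repro
{
    static async Task Main()
    {
        const int port = 5000;                   // arbitrary free port
        var receiver = new UdpClient(port);
        var sender = new UdpClient();

        // Fire packets at the receiver in the background.
        _ = Task.Run(async () =>
        {
            var payload = new byte[64];
            var target = new IPEndPoint(IPAddress.Loopback, port);
            while (true)
            {
                await sender.SendAsync(payload, payload.Length, target);
                await Task.Delay(1);             // roughly 1000 packets/s
            }
        });

        var latencies = new long[1000];
        var sw = new Stopwatch();
        for (int i = 0; i < latencies.Length; i++)
        {
            sw.Restart();
            await receiver.ReceiveAsync();
            // TimeSpan ticks are 100 ns each, so this is nanoseconds.
            latencies[i] = sw.Elapsed.Ticks * 100;
        }
        Console.WriteLine(
            $"avg {latencies.Average():n0} ns, max {latencies.Max()} ns, min {latencies.Min()} ns");
    }
}
```

Something like this lets whoever picks up the issue run it unmodified and see the latency distribution on their own machine.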