rps: avoid one atomic in enqueue_to_backlog
author	Eric Dumazet <eric.dumazet@gmail.com>
Thu, 6 May 2010 23:51:21 +0000 (23:51 +0000)
committer	David S. Miller <davem@davemloft.net>
Tue, 18 May 2010 00:18:50 +0000 (17:18 -0700)
If CONFIG_SMP=y, then we own a queue spinlock, so we can avoid the atomic
test_and_set_bit() from napi_schedule_prep().

We now have the same number of atomic ops per netif_rx() call as with a
pre-RPS kernel.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/core/dev.c

index 988e429..cdcb9cb 100644
@@ -2432,8 +2432,10 @@ enqueue:
                        return NET_RX_SUCCESS;
                }
 
-               /* Schedule NAPI for backlog device */
-               if (napi_schedule_prep(&sd->backlog)) {
+               /* Schedule NAPI for backlog device
+                * We can use a non-atomic operation since we own the queue lock
+                */
+               if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
                        if (!rps_ipi_queued(sd))
                                ____napi_schedule(sd, &sd->backlog);
                }