Chapter 4. Performance Tuning

This chapter discusses performance tuning topics for IRIX and SGI ProPack for Linux.

For good TCP performance, the socket buffers used by applications must be at least as large as the bandwidth-round trip delay product between the two endpoints. Normally, larger socket buffers are called for with 10-Gbit Ethernet than when lower-bandwidth network interface cards are used.

IRIX Performance Tuning

This section discusses the following:

  • "Jumbo Frames and IRIX"

  • "Read/Write Size and IRIX"

  • "Socket Buffer Size and IRIX"

  • "Multibuffer Mode and IRIX"

There is no one tuning configuration that is best for all environments. This section gives guidelines for the following cases:

  • A point-to-point configuration, in which IRIX is transmitting to or receiving from IRIX

  • A multiclient configuration, in which an IRIX system is connected via 10-Gbit Ethernet to a switch that fans out to multiple clients via 1-Gbit Ethernet

Jumbo Frames and IRIX

In general, an MTU of 9000 bytes (known as jumbo frames) gives the best TCP throughput performance and scaling results. Avoid an MTU of 1500 bytes if possible.

Read/Write Size and IRIX

In a point-to-point configuration, use a buffer length of 513920 bytes.

In a multiclient configuration, use a buffer length of 64240 bytes.

Socket Buffer Size and IRIX

Socket buffer size is set either:

  • By the application

  • Via the tcp_sendspace and tcp_recvspace tunable parameters

In a point-to-point configuration, use the socket buffer size shown in Table 4-1 that corresponds to your MTU.

Table 4-1. Socket Buffer Size in a Point-to-Point Configuration

MTU           Socket Buffer Size

1500 bytes    2048 KB

9000 bytes    4096 KB

In a multiclient configuration, use a socket buffer size of 2048 KB regardless of the MTU size.

tcp_delwake_count is a tunable parameter used to delay the wakeup of the receiving process for TCP input segments until a certain amount of data has been placed on the socket queue. This count is the number of bytes to be accumulated in the socket receive buffer before the receiving process is awakened. Change this value according to the configuration:

  • In a point-to-point configuration, use 5840

  • In a multiclient configuration, use 11680

Multibuffer Mode and IRIX

The IRIX multibuffer mode is dynamically enabled when the MTU is greater than 1500 bytes.

SGI ProPack Performance Tuning

This section discusses the following:

  • "Socket Read and Write Buffer Sizes for SGI ProPack"

  • "Jumbo Frames and SGI ProPack"

  • "Read/Write Size and SGI ProPack"

  • "TCP/IP Socket Buffer Size and SGI ProPack"

Socket Read and Write Buffer Sizes for SGI ProPack

The largest-allowed socket read and write buffer sizes are controlled by the following files:

  • Read: /proc/sys/net/core/rmem_max

  • Write: /proc/sys/net/core/wmem_max

Jumbo Frames and SGI ProPack

Using a large maximum transmission unit (MTU) is necessary for the best 10-Gbit Ethernet performance. Generally, the bigger the MTU, the better. The driver supports MTUs as large as 9600 bytes.

Read/Write Size and SGI ProPack

Applications should read large buffers from and write large buffers to the network for the best throughput and to reduce CPU utilization.

For example, an application that uses recv(2) calls with 32-KB buffers will generally have better throughput than if the application were to use twice as many recv calls with 16-KB buffers.

TCP/IP Socket Buffer Size and SGI ProPack

In SGI ProPack, /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max are both set to at least 524288 bytes, which is usually large enough to provide good performance. Reducing rmem_max and wmem_max will limit the amount of memory available for each socket's buffers, and can result in degraded network throughput. Unless it is required to limit memory usage, SGI recommends that you do not reduce these below the default SGI ProPack values.

If you need to adjust the socket buffer sizes, use the sysctl(8) command.