Maximizing the Performance of a Gigabit Ethernet NIC Interface
This article describes how to get the greatest benefit from your Sun Gigabit Ethernet network interface card (NIC) interface and introduces a few valuable tools to help you achieve that.
Gigabit Ethernet connections place by far the greatest stress on Sun systems. Therefore, to get the maximum benefit from your gigabit Ethernet NIC interface, you need to be aware of the added complications of autonegotiation as well as the new ways to ensure that you get the maximum performance from both the gigabit Ethernet interface and the system.
There are two parts to getting the maximum performance from your gigabit Ethernet NIC and the system: first, you need to understand the system itself; second, you need to know the traffic profile through the gigabit Ethernet NIC.
Two key parameters of Sun systems are important for maximizing gigabit Ethernet performance: the number of CPUs in the system and the access time for memory. Establishing the number of CPUs is relatively simple. The memory access time is often hidden, but a simple rule is that the larger the system, the longer the memory access time. These factors become important for tuning transmit (Tx) DMA thresholds and deciding how much load balancing of incoming receive (Rx) traffic is meaningful.
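As a quick illustration, you can establish the number of CPUs with the psrinfo(1M) command. The transcript below is a sketch; the hostname, dates, and CPU count shown are representative, not from any particular system.

```shell
hostname# psrinfo
0       on-line   since 09/01/2003 10:12:38
1       on-line   since 09/01/2003 10:12:40

hostname# psrinfo | wc -l
       2
```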
The traffic profile also has many dimensions, including any one of the following characteristics or any combination of them: Rx intensive, Tx intensive, or equal; small packets or large packets; latency sensitive.
The combination of system parameters and the traffic profile makes it very difficult to enumerate all the possibilities and provide one set of tuning parameters that will address every combination equally and fairly.
Therefore, we can only take the alternative approach of listing the readily available tunable parameters along with an explanation of how and when to use them to get the best results based on your system and application needs.
Each NIC has kernel statistics that provide a means of measuring the traffic profile. You can use this information to adjust ndd and /etc/system parameters to get the best performance from the NIC.
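For example, you can sample a NIC's kernel statistics with the kstat(1M) command. The transcript below is a sketch: the exact statistic names vary by driver and release, and the packet counts shown are placeholders.

```shell
hostname# kstat -p ge:0 | grep -i packets
ge:0:ge0:ipackets       1234567
ge:0:ge0:opackets       7654321
```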
For more sophisticated features like CPU load balancing, there are some other tools that allow you to look at the system behavior and determine if tuning can better utilize the system as well as the NIC, given the system and the application providing the traffic profile.
"Network Driver Configuration Parameters" describes the details of the three methods you can use for configuring the driver parameters.
"Ethernet Physical Layer Troubleshooting" discusses the physical layer because that layer is the most important with respect to creating the link between two systems.
"Ethernet Performance Troubleshooting" discusses the data link layer, where most problems are performance related.
This article assumes you are an experienced systems administrator, accustomed to working with gigabit Ethernet NIC interfaces.
Network Driver Configuration Parameters
Since this article discusses network driver configuration parameters, it is important to understand the details of the available configuration methods.
There are three methods you can use for configuring the driver parameters: ndd, driver.conf, or /etc/system.
The ndd method is a dynamic form of configuration where you simply invoke the ndd command on the command line
hostname# ndd -set /dev/ge instance 0
hostname# ndd -set /dev/ge adv_autoneg_cap 1
or through an interactive session.
hostname# ndd /dev/ge
name to get/set ? instance
value ? 0
name to get/set ? adv_autoneg_cap
value ? 1
name to get/set ?
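You can also use ndd to read parameters back. Passing \? lists every parameter the driver exposes along with whether it is read-only or read-write; the exact list varies by driver and is abbreviated here.

```shell
hostname# ndd /dev/ge \?
?                             (read only)
instance                      (read and write)
adv_autoneg_cap               (read and write)
...
hostname# ndd -get /dev/ge adv_autoneg_cap
1
```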
The ndd method is excellent for adjusting parameters during normal operation, but the configuration is lost once the system is rebooted. You can avoid this configuration loss by applying the chosen parameter in the driver.conf file of the driver you want to configure.
The driver.conf file for the device being configured must reside next to the driver being configured in the file system. The following example shows the path for the GigaSwift driver on two different platforms:
For Solaris 9 x86:
/kernel/drv/ce
/kernel/drv/ce.conf

For Solaris SPARC:
/platform/sun4u/kernel/drv/ce
/platform/sun4u/kernel/drv/ce.conf
Modifying the parameters in the driver.conf file can be done with two goals in mind: configuring parameters globally, where all interface instances in the machine that use the same driver get the same parameter value, or on a per-instance basis, where a parameter value applies to only one instance.
The global configuration method for the ge.conf file will appear as follows, applying the ndd configuration previously shown:
adv_autoneg_cap = 1;
Note that the previously shown ndd example applied to only one instance, so the global configuration may be overkill. Determine whether the per-instance or the global method is more appropriate for your needs. The per-instance configuration appears as follows:

name="ge" parent="/pci@1f,0/pci@1,1" unit-address = "1" adv_autoneg_cap = 1;
The per-instance method does require you to get the 'parent' and 'unit-address' properties associated with the instance you're configuring. You can find these by looking at the lines associated with that instance in the /etc/path_to_inst file.
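The lookup can be sketched in shell. Assuming a hypothetical path_to_inst entry for a ge instance, the 'parent' is the quoted device path minus its final component, and the 'unit-address' is the text after the @ in that final component:

```shell
#!/bin/sh
# Hypothetical /etc/path_to_inst line; format: "physical-path" instance "driver"
line='"/pci@1f,0/pci@1,1/network@1" 1 "ge"'

# Pull the quoted physical device path out of the line.
devpath=$(echo "$line" | awk -F'"' '{print $2}')

# parent = everything before the last '/'; unit-address = text after the last '@'.
parent=${devpath%/*}
unit=${devpath##*@}

echo "parent=\"$parent\" unit-address=\"$unit\""
```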
The /etc/system configuration method allows you to initialize global variables in the device driver. It has no direct association with ndd and driver.conf settings unless explicitly implemented in the driver. In cases where a driver parameter has been defined for use in either /etc/system or driver.conf, you should choose the preferred driver.conf method instead.
Parameters set in /etc/system always require a system reboot to take effect. The following example shows how an /etc/system variable is set up.
hostname# vi /etc/system
...
set ge:ge_intr_mode = 1
...
The remainder of this article discusses ndd parameters. You should assume that any of the following parameters described as an ndd parameter can also be initialized using the driver.conf file, so the setting persists across reboots. Any of the following parameters described as /etc/system parameters can be initialized only by modifying /etc/system and do require a reboot.
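After the reboot, you can verify that an /etc/system setting took effect by reading the variable from the live kernel with mdb(1). This transcript is a sketch, assuming the ge driver is loaded and the ge_intr_mode variable was set as in the earlier example:

```shell
hostname# echo "ge_intr_mode/D" | mdb -k
ge_intr_mode:
ge_intr_mode:   1
```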