NVIDIA 680i: The Best Core 2 Chipset?
by Gary Key & Wesley Fink on November 8, 2006 4:45 AM EST - Posted in CPUs
DualNet
DualNet's suite of options brings a few enterprise-type network technologies to the general desktop, such as teaming, load balancing, and fail-over, along with hardware-based TCP/IP acceleration. Teaming doubles the network link by combining the two integrated Gigabit Ethernet ports into a single 2-Gigabit Ethernet connection, giving the user improved link speeds while providing fail-over redundancy. TCP/IP acceleration reduces CPU utilization by offloading CPU-intensive packet processing to a dedicated hardware processor, combined with optimized driver support.
While all of this sounds impressive, the actual impact for the general computer user is minimal. On the other hand, a user setting up a game server/client for a LAN party or building a home gateway machine will find these options very valuable. Overall, features like DualNet are better suited to the server and workstation market. We believe these options are being provided (we are not complaining) because the NVIDIA professional workstation/server chipsets are based upon the same core logic.
NVIDIA now integrates dual Gigabit Ethernet MACs on the same physical chip. This allows the two Gigabit Ethernet ports to be used individually or combined, depending on the needs of the user. Previous NF4 boards offered a single Gigabit Ethernet MAC, with motherboard suppliers having the option to add an additional Gigabit port via an external controller chip. This too often resulted in two different driver sets, with various controller chips residing on either the PCI Express or PCI bus, and typically worse performance than a well-implemented dual-PCIe Gigabit Ethernet solution.
Teaming
Teaming allows both of the Gigabit Ethernet ports in NVIDIA DualNet configurations to be used in parallel to set up a 2-Gigabit Ethernet backbone. Multiple computers can be connected simultaneously at full gigabit speeds while the resulting traffic is load balanced. When teaming is enabled, each gigabit link within the team maintains its own dedicated MAC address while the combined team shares a single IP address.
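To make that addressing model concrete, here is a toy sketch in Python (our own illustration with hypothetical port names, not anything NVIDIA ships): two links, each with its own MAC address, grouped under a single team-level IP.

```python
# Toy model of a DualNet-style team (illustrative only; names are ours,
# not NVIDIA's API): two gigabit links, each keeping its dedicated MAC,
# sharing one team IP address.
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    mac: str          # each link keeps its own dedicated MAC address
    up: bool = True   # current carrier state

@dataclass
class Team:
    ip: str                                   # one IP for the whole team
    links: list[Link] = field(default_factory=list)

team = Team(ip="192.168.1.2", links=[
    Link("gige0", "00:11:22:33:44:00"),
    Link("gige1", "00:11:22:33:44:01"),
])
```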
Transmit load balancing uses the destination (client) IP address to assign outbound traffic to a particular gigabit connection within the team. When data needs to be transmitted, the network driver uses this assignment to determine which gigabit connection will carry it, ensuring that traffic is balanced across all the gigabit links in the team. If at any point one of the links is underutilized, the algorithm dynamically adjusts assignments to keep the load optimal. Receive load balancing uses a connection steering method to distribute inbound traffic between the two gigabit links in the team: when the gigabit ports are connected to different servers, the inbound traffic is distributed between the links.
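Continuing the toy model above, here is a minimal sketch of the transmit side as described: the destination IP deterministically maps to one of the currently usable links, so a given client's traffic stays on one port while different clients spread across both. This is our illustration of the behavior, not NVIDIA's actual driver algorithm.

```python
import ipaddress

def assign_link(team: Team, dest_ip: str) -> Link:
    """Destination-based transmit load balancing: hash the client's IP
    onto the set of links that are currently up."""
    candidates = [l for l in team.links if l.up]
    idx = int(ipaddress.ip_address(dest_ip)) % len(candidates)
    return candidates[idx]

print(assign_link(team, "192.168.1.10").name)  # maps to one port...
print(assign_link(team, "192.168.1.11").name)  # ...and this one to the other
```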
The integrated fail-over technology ensures that if one link goes down, traffic is instantly and automatically redirected to the remaining link. If a file is being downloaded, for example, the download continues without packet loss or data corruption. Once the lost link has been restored, the grouping is re-established and traffic begins to flow on the restored link again.
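Fail-over falls out of the same sketch: because each transmission re-evaluates the assignment against current link state, marking a port down immediately steers its traffic to the survivor, and restoring it brings traffic back.

```python
team.links[1].up = False                        # gige1 drops (cable pulled)
print(assign_link(team, "192.168.1.11").name)   # -> gige0: instant fail-over

team.links[1].up = True                         # link restored
print(assign_link(team, "192.168.1.11").name)   # -> gige1: traffic returns
```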
NVIDIA quotes an average 40% improvement in throughput when using teaming, although this number can go higher: in their multi-client demonstration, NVIDIA achieved a 70% improvement in throughput using six client machines. In our own internal test we saw about a 36% improvement in throughput with our video streaming benchmark while playing Serious Sam II across three client machines. For those without a Gigabit network, DualNet can also team two 10/100 Fast Ethernet connections. Once again, this is a feature set that few desktop users will truly be able to exploit at the current time, but we commend NVIDIA for its forward thinking here, as we see this type of technology becoming useful in the near future.
TCP/IP Acceleration
NVIDIA TCP/IP Acceleration is a networking solution that includes both a dedicated processor for accelerating network traffic processing and optimized drivers. The current nForce 590 SLI and nForce 680i SLI MCPs have TCP/IP acceleration and hardware offload capability built into both native Gigabit Ethernet controllers. This typically lowers CPU utilization when processing network data at gigabit speeds.
In software solutions, the CPU is responsible for processing all aspects of the TCP protocol: checksumming, ACK processing, and connection lookup. Depending upon network traffic and the types of data packets being transmitted, this can place a significant load upon the CPU. With hardware offload enabled, all packet data is processed and checksummed inside the MCP instead of being moved to the CPU for software-based processing, which improves overall throughput and CPU utilization.
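For a sense of the per-packet work being moved off the CPU, here is the standard Internet checksum (RFC 1071) that a software stack would otherwise compute for every TCP segment; with offload enabled, the MCP produces this sum in hardware. The function below is a minimal illustrative Python version, not driver code.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum: the kind of per-packet math
    a TCP offload engine removes from the CPU's workload."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF                         # ones' complement of sum

print(hex(internet_checksum(b"example segment")))
```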
NVIDIA dropped the ActiveArmor slogan for the nForce 500 release, and it is no different for the nForce 600i series. Thankfully the ActiveArmor firewall application was jettisoned into deep space, as NVIDIA pointed out that the basic features ActiveArmor provided will be part of Microsoft Vista. We also feel NVIDIA was influenced to drop ActiveArmor by the reported data corruption issues on nForce4 (caused in part by overly aggressive CPU utilization settings), by customer support headaches, by issues with Microsoft, and quite possibly by hardware "flaws" in the original nForce MCP design.
We have found a higher degree of stability with the new TCP/IP acceleration design, but this stability comes at a price: if TCP/IP acceleration is enabled via the control panel, certain network traffic will bypass third-party firewall applications. We measured CPU utilization near 14% with the TCP/IP offload engine enabled and near 26% without it, cutting the networking load nearly in half.
60 Comments
fenacv - Tuesday, January 8, 2008 - link
http://www.pricebat.ca/EVGA-122-CK-NF67-T1-LGA-775... If you don't really care about the performance, I found it's on sale; just buy one, only 138 bucks. It's cheap.
TheBeagle - Friday, December 8, 2006 - link
I'm wondering if these touted new 680i boards are vaporware, especially the Gigabyte GA-N680SLI-DQ6 board. Ever since you first alerted us to the fact that the 680i chipset was replacing the 590 version, I've been waiting to see this whole new array of motherboards. However, aside from a few boards (ASUS and a few others) the major board manufacturers haven't been forthcoming with these products. Maybe this is just going to be some sort of a big Christmas present that Santa delivers on the holiday. If you guys at AnandTech have some info on this, I'd sure like to hear about it. Thanks
mbf - Wednesday, November 29, 2006 - link
I, for one, am going to seriously miss the native hardware firewall of the nForce3 and nForce4 chipsets, so I'm anything but "thankful" to see it "jettisoned into deep space." Actually, this was one of the coolest features of the nForce chipsets and truly innovative. nVidia's stance of removing it because the functionality is built into Windows Vista doesn't ring true. A software solution can never work as efficiently and transparently as a hardware solution. And what of the people who have no intention of switching to Windows Vista? There are many reasons for not wanting to. They're practically left out in the cold.
I second the opinion that nVidia probably botched the hardware in some form or other, although the hardware firewall works quite well on my nForce3 250Gb based system once you get familiar with its quirks. This actually doesn't bode well for nVidia's "inventiveness" and "forward-thinking" (think DualNet), since chances are nVidia will drop support completely rather than work out the bugs that will inevitably be there. Removing the hardware firewall is the best example of this.
Also, and this is a bit off-topic in regard to the rest of this topic, wasn't there supposed to be ECC memory support in the new northbridge for the 680i chipset? I remember reading about the northbridge also being used in the new nForce Pro series chipsets. Is this another feature that has been removed in the meantime?
skrewler2 - Monday, November 13, 2006 - link
How was the Tuniq Tower 120 on the board? I've heard lots of people complaining about backplates not fitting right on this board because the back of the mobo has lots of capacitors... Did you need to do any modding or did it just work?
Gary Key - Monday, November 13, 2006 - link
I used the Scythe Infinity in my testing; Wes used the Tuniq. I did try the Tuniq and it was okay with an extra pad on the backplate that negated any damage to the capacitors.
mlau - Monday, November 13, 2006 - link
Did you test a recent Linux kernel on this board? Which components are supported (I don't care about "raid"), and how buggy are the HPET, (IO-)APIC and ACPI implementations?
Governator - Saturday, November 11, 2006 - link
First off, very well done article guys, but I have a question on the layouts with regard to PCI slots so far with the Asus and EVGA: are we to expect similar layouts from upcoming boards from other manufacturers? I ask because I'm planning a water-cooled SLI setup on a 680i and am planning on an X-Fi card, but I'm not sure if I'll be able to use the middle PCI slot. TIA... Gov
Gary Key - Sunday, November 12, 2006 - link
Most of the 680i boards have the same basic layout. On the Asus Striker board you should be able to use the X-Fi with most watercooled SLI setups, as an example. It will all depend on your setup, but you can kiss the middle PCIe slot good-bye. ;)
Governator - Saturday, November 18, 2006 - link
Hi Gary, sorry I meant to reply sooner, but thanks for this. I'm hoping I'll be in good shape given that I'll be using the new 8800GTX water block from Danger Den, co-developed with BFG Tech, which appears to take up only one slot, allowing the bottom PCI slot to go to the X-Fi card. Thoughts? TIA ;)
deathwalker - Friday, November 10, 2006 - link
I wonder if there are mATX mobos in the future for the 600 series chipsets.