
Managing today's complex hybrid IT infrastructure is a tough job; both private and public cloud infrastructure add complexity to running and operating IT services. Every network experiences faults or breakdowns in these services, and it is a major challenge for network admins to provide flawless communication all the time. To resolve issues quickly, a network admin must know when, where, and why a problem has arisen, and which steps will cure the root cause of the issue.

In most computer networks, we use network tools to manage, analyze, and monitor network operations. These tools constantly monitor network operations; when an issue occurs, a notification is generated and sent to the concerned teams. Let's begin the journey with an examination of network management tools. Here we will review the top 10 network management tools across various functional areas, starting with one of the most widely used and popular network management tools, IBM Netcool Network Management.

1. IBM Netcool Network Management (Ideal for large-scale multi-vendor environments)

IBM Netcool Network Management can effectively configure, integrate, and remediate your network with centralized network visibility and availability. Its reporting feature covers vast, dispersed network environments, which helps maintain visibility across the network.

The IBM Netcool Network Management tool is smart enough to identify and resolve network issues quickly, and it also handles automatic configuration backup and restore in case of rollbacks. The tool offers real-time anomaly detection to support efficient network deployment and change management in multivendor environments. It can also integrate with IBM Tivoli OMNIbus to provide unmatched visibility and automation, further enhancing its management capabilities.

Price: IBM Netcool NM has various license types; pricing can be checked at http://estore.gemini-systems.com/ibm/software-license/tivoli-management-software/tivoli-omnibus-and-network-manager/.

Website: http://www-03.ibm.com/software/products/en/netcool-network-management

Pros:

  • Comprehensive event management solution
  • Centralized network visibility and reporting
  • Efficient event and configuration management

Cons:

  • Complex report customization (as it involves lots of functionality)
  • Update issues with earlier versions, but not with V9.2.2 (or above)

2. SolarWinds Kiwi CatTools (A must-have for small or low-budget companies)

This tool comes from the leading network management and analysis company, SolarWinds. Kiwi CatTools supports automatic backup and update activities for multivendor network devices, such as Nortel routers, Cisco switches, and ASA firewalls, with central GUI-based network management. While resolving a network issue, you can compare past and present configurations to find the changes quickly. Kiwi CatTools can be installed on Microsoft server or desktop Windows platforms, so you can manage your network's configurations from your own computer as well.

The "Devices" tab offers a list of common network hardware where you fill in your network details, such as IP addresses and port connectivity. The free version of CatTools manages five devices, while the paid version has no device limit.

Price: one-seat license: $750.00 (USD) for 12 months and $949.00 (USD) for 24 months

Website: http://www.kiwisyslog.com/products/kiwi-cattools/product-overview.aspx

Pros:

  • Roll-back facility with recent backups
  • Automated e-mail notifications of configuration changes
  • Generates reports (port, MAC, ARP and version details)

Cons:

  • Updating to a higher version sometimes requires an uninstall and reinstall

3. SolarWinds Network Configuration Manager, aka Orion NCM (Good for medium to large-scale multi-vendor networks)

SolarWinds NCM (Network Configuration Manager) offers more scalability and functionality than Kiwi CatTools. Orion NCM supports script-based correction and provides an easy way to push configuration changes out to all network equipment. The tool supports a wide range of network devices from Cisco, Juniper, HP, Dell, Nortel, and many more. Orion NCM enhances network security using Cisco IOS and ASA vulnerability scanning and NIST FISMA, DISA STIG, and DSS PCI compliance assessments. It also supports a steady workflow approach, letting you approve updates and safely delegate work to others with roles and permissions.

Website: http://www.solarwinds.com/network-configuration-manager.aspx

Pricing:

There are seven types of licenses available; details are listed on the website.

For further queries, call +61 2 8412 4910 to speak with a Sales Engineer.

Pros:

  • IOS vulnerability scanning
  • Automated remediation scripts enforcement
  • Effective change management and monitoring
  • Quick backup and restore

Cons:

  • Hard to find any issues; it performs well in almost every functional area

4. CA Spectrum (Recommended where scalability has no limits)

Spectrum was originally developed at Cabletron Systems, Inc., in 1991 and was later acquired by CA. Spectrum delivers large-scale network optimization and management along with continuous availability and root cause analysis to improve network infrastructure and services. The tool helps reduce network administration time and cost by supporting tens of thousands of devices, with increased scalability, automated fault management, and proactive change management on a single platform. Using this tool, network administrators can better understand issues and manage configuration changes, unanticipated events, and outages. The invaluable root cause analysis feature helps administrators identify and correct the causes of these issues and prevent future recurrences.

Price: The price is available on request; you may need to call 1-800-225-5224 (US).

Website: http://www.ca.com/us/opscenter/ca-spectrum.aspx

Pros:

  • Centralized console to manage network and systems
  • Automated layer 2/3 root cause analysis
  • Multivendor-multi-technology fault management
  • Uninterrupted high availability (to withstand failovers)

Cons:

5. Huawei iManager U2000 (The ultimate utility for managing carrier networks)

The Unified Network Management System (U2000) manages transport, access, and IP equipment efficiently, and its unified management and visual policies help reduce the network's operation and maintenance costs. The all-new U2000 inherits all functions of its predecessors, such as the T2000, N2000 BMS, and N2000 DMS, and supports smooth evolution from single-domain to multi-domain management to meet operators' needs for rapid service growth. Moreover, Huawei has partnered with leading operating support system (OSS) vendors to accelerate OSS interconnection. At present, more than 200 operators worldwide use the U2000 series of products.

Price: Various license types are available; you may need to contact a Huawei sales executive.

Website: http://huawei.com/us/products/oss/fbb-om-product-series/imanager-u2000/index.htm

Pros:

  • Quick and accurate fault management
  • Visual IP network management
  • Quick OSS (operating support system) interconnection
  • Reduces operational expenditure

Cons:

  • Takes time to learn its functionality (as it has lots of features)

6. OpenNMS (A freeware tool recommended for small and medium enterprise networks)

OpenNMS is a freeware network monitoring and network management utility developed and supported by a community of users and developers under The OpenNMS Group.

Even as a freeware utility, OpenNMS earns a place in the list of top network management tools; the latest version supports scalable and distributed network management models with an emphasis on fault and performance management. OpenNMS covers a wide range of network management tasks, including notification-based event management (in the form of SNMP traps, syslog messages, etc.), advanced provisioning of new devices, and a service assurance feature (to measure network outages and generate availability reports).

Website: http://www.opennms.com/ or http://www.opennms.org

Download link: http://www.opennms.org/get-opennms/

Pros:

  • User-friendly event management
  • Good integration with external software (such as OTRS, Jira, and RANCID)
  • Service monitoring

Cons:

  • No major issues, but automation is not up to the mark
  • Limited reporting (good for small enterprises, not for large ones)

7. IPplan (A freeware web-based IP management utility; a must-have for easy IP management)

IPplan is a web-based (PHP 4) freeware tool used for IP address and DNS management and tracking. IPplan can be installed on any OS that supports PHP; the tool simplifies the administration of IP address space across multiple network domains and helps prevent overlapping assignments. It is not limited to IP address management: it also offers higher-end features such as audit logs and statistics, importing network definitions from routing tables, multiple administrators with different access profiles (per group, per customer, per network, etc.), tracking of SWIP/RIPE/APNIC registrations, and DNS administration.

Website: http://iptrack.sourceforge.net/

Download link: http://sourceforge.net/projects/iptrack/files/

Pros:

  • Script-based triggers for backend DNS
  • Well-defined statistics and audit logs
  • Multi-access profiles
  • Easy IP tracking

Cons:

  • Limited functionalities
  • Slow response (sometimes)

8. Ceragon NetMaster (Recommended for managing a variety of microwave networks)

Ceragon NetMaster offers large-scale microwave network management and provides standardized, unified, real-time network availability to maintain uninterrupted network services. This single utility is capable of managing all current and legacy radio technologies, including the FibeAir IP-10 and FibeAir IP-20 product series, as well as third-party network elements. NetMaster helps field engineers and network maintenance staff maintain link-specific network services and traffic flows from any location or device.

Price: Various license types are available; you may need to contact a Ceragon sales executive.

Website: https://www.ceragon.com/products-ceragon/management-systems/netmaster

Pros:

  • Unified network visualization
  • Effective and useful for field engineers and network maintenance staff

Cons:

9. HP Network Automation (Complete automated management solution for large-scale multi-vendor network set-ups)

HP Network Automation software provides complete network management, from provisioning to policy-based change management; it regulates and automates configuration changes across distributed, worldwide, multivendor networks. HP Network Automation can also be integrated with HP Network Node Manager i (NNMi) software to diagnose network fault, availability, and performance issues following new configurations. Network Automation is a one-of-a-kind network management solution that provides a completely automated approach across multivendor network environments. HP Network Automation software comes in two editions, Premium and Ultimate.

Network Automation Premium edition includes:

  • Network automation server
  • Network automation satellites
  • Distributed architecture options

The Network Automation Ultimate edition includes all Premium edition features plus a policy compliance feature (proactive policy enforcement to pass audit and compliance requirements).

Price: Not disclosed by the company; you will have to contact the company's sales team.

Website: http://www8.hp.com/us/en/software-solutions/network-automation/try-now.html

Pros:

  • Automates complete operational lifecycle of network devices
  • Proactive policy enforcement to pass audit and compliance requirements
  • Increases network stability by preventing misleading and inconsistent changes
  • Excellent user-friendly API
  • Multi-vendor device support (including 3Com, Alcatel-Lucent, Cisco, Juniper, and many more)

Cons:

  • Nothing to speak of (it’s an excellent product)

10. CiscoWorks LMS (Ideal for Cisco LAN management)

CiscoWorks LAN Management Solution (LMS) is a standardized utility that provides effective configuration management, fault administration, and event handling for Cisco networks. CiscoWorks LMS offers a centralized solution to improve network efficiency, visualization, and event management across all LAN environments. It provides best-in-class VLAN management, port traffic management with logs, and reporting functionality. The following list introduces the major components of CiscoWorks LMS; they cannot be purchased separately:

  • CiscoWorks Device Fault Manager (for fault detection, analysis, and reporting)
  • CiscoWorks Campus Manager (for configuration, management, and topology mapping)
  • CiscoWorks Resource Manager Essentials (for network inventory, configuration changes, etc.)
  • CiscoWorks Internetwork Performance Monitor (to measure network response time and availability)
  • CiscoWorks CiscoView (provides GUI interaction with Cisco devices)
  • CiscoWorks Common Services (for login, user role definitions, access privileges, etc.)

Price: Not disclosed by the company; click the link below to request pricing details.

Website: http://www.cisco.com/c/en/us/products/cloud-systems-management/prime-lan-management-solution/index.html

Pros:

  • Cisco assurance
  • Bundled with lots of LAN functionalities
  • Event customization and real time network inventory

Cons:

  • Not for WAN optimization
  • Limited to managing Cisco devices

Conclusion:

I hope you found this article useful so that I can offer more in this segment. You can write to me in the comment section below with any query or feedback; I will try my best to resolve your queries. And don't forget to share the link to this article on your Facebook, Twitter, and LinkedIn accounts so that as many people as possible can get this information. Keep reading at IntenseSchool.com, and you can join our Facebook group, http://www.facebook.com/intenseschool, to get updates on new posts.

References

Apart from my own experience, my team, corporate clients, and colleagues helped me a lot in designing this article, and the vendors' web pages provided the latest functionality details for these tools.

Introduction

IBM Spectrum Scale™, based on technology from IBM General Parallel File System (hereinafter referred to as IBM Spectrum Scale or GPFS™), is a high-performance, software-defined file management solution that simplifies data management, scales to petabytes of data and billions of files, and delivers high-performance access to data from multiple servers.

The IBM Spectrum Scale clustered file system provides a petabyte-scale global namespace that can be accessed simultaneously from multiple nodes and can be deployed in multiple configurations (e.g., NSD client-server, SAN). The file system can be accessed using multiple protocols (e.g., native NSD protocol, NFS, SMB/CIFS, Object). IBM Spectrum Scale stability and performance are highly dependent on the underlying networking infrastructure. To assess the stability and performance of the underlying network, IBM Spectrum Scale provides tools such as mmnetverify [1] and nsdperf [4].

The IBM Spectrum Scale nsdperf tool is useful for assessing cluster network performance. This blog provides an overview of the nsdperf tool and its usage.

Throughout this document, all references to “GPFS” refer to the “IBM Spectrum Scale” product.

nsdperf overview

The mmnetverify tool can be used to assess network health and check for common network issues in a Spectrum Scale cluster, as detailed in the mmnetverify blog [2]. However, mmnetverify cannot be used to assess the aggregate parallel network bandwidth between multiple client and server nodes. Furthermore, mmnetverify does not yet support network bandwidth assessment using the RDMA protocol.
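
As a quick illustration, a basic mmnetverify health check can be run from any cluster node before digging deeper (a minimal sketch; the "connectivity" operation group and the "all" node list are assumptions to adapt to your cluster):

# run the connectivity group of checks against all cluster nodes
mmnetverify connectivity -N all

Refer to the mmnetverify documentation [1] for the full list of supported operations.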

The nsdperf tool lets you define a set of nodes as clients and servers and run a coordinated network test that simulates GPFS Network Shared Disk (NSD) protocol traffic. All network communication is done using TCP socket connections or RDMA verbs (InfiniBand/iWARP). The tool is standalone and does not use the GPFS daemon, so it is a good way to test network I/O without involving disk I/O.

Existing network performance programs, such as iperf [3], are good at measuring throughput between a pair of nodes. However, using these programs on a large number of cluster nodes requires considerable effort to coordinate startup and gather results from all nodes. Also, a traffic pattern of many point-to-point streams may give very different results from the GPFS NSD pattern of clients sending messages round-robin to the servers.

Therefore, even if iperf is producing good throughput numbers but GPFS file I/O is slow, the problem might still lie with the network rather than with GPFS. The nsdperf program can be used for effective network performance assessment of the IBM Spectrum Scale network topology. When Spectrum Scale software is installed on a node, the nsdperf source is installed in the /usr/lpp/mmfs/samples/net directory.

It is highly recommended to perform a cluster network performance assessment using the nsdperf tool prior to Spectrum Scale cluster deployment, to ensure that the underlying network meets the expected performance requirements. Furthermore, in the event of production performance issues, it is recommended to quiesce file system I/O (when permissible) and verify that the underlying network performance, as measured by the nsdperf tool, is optimal.

To complement the nsdperf tool (and aid with Spectrum Scale cluster network performance assessment), the IBM Spectrum Scale gpfsperf benchmark [5] can be used to measure end-to-end file system performance (from a Spectrum Scale node) for several common file access patterns. The gpfsperf benchmark can be run on a single node as well as across multiple nodes. There are two independent ways to achieve parallelism in the gpfsperf program: more than one instance of the program can be run on multiple nodes using the Message Passing Interface (MPI) to synchronize their execution, or a single instance of the program can execute several threads in parallel on a single node. These two techniques can also be combined. When Spectrum Scale software is installed on a node, the gpfsperf source is installed in the /usr/lpp/mmfs/samples/perf directory. Detailed instructions to build and execute the gpfsperf benchmark are provided in the README file in the /usr/lpp/mmfs/samples/perf directory.
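
As a minimal sketch (the make invocation, file system path, and option values below are assumptions; follow the README in /usr/lpp/mmfs/samples/perf for the authoritative build and run steps), a non-MPI build and a simple sequential write test might look like:

# build gpfsperf using the bundled makefile (non-MPI variant assumed)
cd /usr/lpp/mmfs/samples/perf
make
# create and sequentially write an 8 GiB test file with 4 MiB records and 4 threads
./gpfsperf create seq /gpfs/fs0/perftest/testfile -r 4m -n 8g -th 4

Here -r is the record size (ideally matched to the file system block size), -n is the amount of data, and -th is the number of threads on the node.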

Building nsdperf

Detailed instructions to build nsdperf are provided in the README file in the /usr/lpp/mmfs/samples/net directory. This section provides a high-level build procedure.

On GNU/Linux or on Windows systems running Cygwin/MinGW:
g++ -O2 -o nsdperf -lpthread -lrt nsdperf.C

To build with RDMA support (GNU/Linux only):
g++ -O2 -DRDMA -o nsdperf-ib -lpthread -lrt -libverbs -lrdmacm nsdperf.C

An nsdperf binary built with RDMA support may be saved under a different name (e.g., with an -ib suffix) to denote the RDMA capability. A binary built with RDMA support can also be used to assess TCP/IP network bandwidth in addition to RDMA network bandwidth.
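
As a quick sanity check after building (a minimal sketch, run from the build directory), the help option can be used to confirm that the binary starts correctly:

./nsdperf-ib -h

This prints the usage and option summary shown in the next section.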

NOTE: Since nsdperf (in server mode) needs to be launched across multiple nodes, the nsdperf binary needs to be present on all participating nodes at the same path/location. To achieve this, the following approaches are recommended (see the sketch after this list):
• Build the tool on a single node (of the same CPU architecture, e.g., x86_64 or ppc64) and copy the nsdperf binary to a globally shared namespace (accessible via NFS or GPFS) so that nsdperf is accessible from a common path.
• Alternatively, the nsdperf binary may be built on all nodes in the /usr/lpp/mmfs/samples/net directory using a parallel shell such as mmdsh (e.g., mmdsh -N all "cd /usr/lpp/mmfs/samples/net; g++ -O2 -DRDMA -o nsdperf-ib -lpthread -lrt -libverbs -lrdmacm nsdperf.C").
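
As a minimal sketch of the first recommendation (the shared path /gpfs/fs0/benchmarks is an assumption; substitute any path that is mounted on all participating nodes):

# build once on a single node, then copy the binary to a shared GPFS path
cp /usr/lpp/mmfs/samples/net/nsdperf-ib /gpfs/fs0/benchmarks/
# verify that the binary is visible from all participating nodes
mmdsh -N all "ls -l /gpfs/fs0/benchmarks/nsdperf-ib"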

nsdperf Usage

The nsdperf command-line options are as follows; they are detailed in the README file in the /usr/lpp/mmfs/samples/net directory (on a node with Spectrum Scale installed).

Usage: nsdperf-ib [-d] [-h] [-i FNAME] [-p PORT] [-r RDMAPORTS] [-t NRCV] [-s] [-w NWORKERS] [-6] [CMD...]

Options:
-d Include debug output
-h Print help message
-i FNAME Read commands from file FNAME
-p PORT TCP port to use (default 6668)
-r RDMAPORTS RDMA devices and ports to use (default is all active ports)
-t NRCV Number of receiver threads (default nCPUs, min 2)
-s Act as a server
-w NWORKERS Number of message worker threads (default 32)
-6 Use IPv6 rather than IPv4

Generally, the most often used nsdperf command-line option is "-s", which launches nsdperf in server mode. nsdperf in server mode needs to be run on all cluster nodes that will be involved in the nsdperf testing (i.e., the nodes between which the NSD client and server network bandwidth needs to be assessed). For example:

mmdsh -N <participating_nodes> '<complete_path_to>nsdperf -s </dev/null > /dev/null 2>&1 &'

After the nsdperf servers are running, the network bandwidth assessment between NSD client and servers can be performed by running nsdperf, without “-s”, from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution), and entering nsdperf commands.

<complete_path_to>nsdperf

The "test" command sends a message to all client nodes to begin write and read network performance testing (detailed in the following sections) against the server nodes. The size of the messages can be specified using the nsdperf "buffsize" parameter. It is a good idea to start with "buffsize NBYTES" equal to the GPFS file system block size when assessing network bandwidth capability, because for sequential I/O on the GPFS file system the NSD client(s) transmit I/O in units of the file system block size to the NSD servers.
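
For repeatable runs, the interactive commands can also be placed in a command file and executed with the "-i" option. The sketch below is an assumption-based example (node names, the command file name, and the fs0 file system are placeholders); the file system block size can be checked with mmlsfs (e.g., mmlsfs fs0 -B):

# contents of an assumed command file, nsdperf.cmds
server s1 s2
client c1 c2
ttime 30
buffsize 4194304
test
killall
quit

# run the command file from an administrative node
<complete_path_to>nsdperf -i nsdperf.cmds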

Throughput numbers are reported in MB/sec, where MB is 1,000,000 bytes. The CPU busy time during the test period is also reported (currently only supported on Linux and AIX systems), as detailed in the nsdperf README file in the /usr/lpp/mmfs/samples/net directory (on a node with Spectrum Scale installed). The reported numbers are the average percentage of non-idle time across all client nodes and across all server nodes.

The available nsdperf test types are write, read, nwrite, swrite, sread, and rw.

write
Clients write round-robin to all servers. Each client tester thread is in a loop, writing a data buffer to one server, waiting for a reply, and then moving on to the next server.

read
Clients read round-robin from all servers. Each client thread sends a request to a server, waits for the data buffer, and then moves on to the next server.

nwrite
This is the same as the write test, except that it uses a GPFS NSD style of writing, with a four-way handshake. The client tester thread first sends a 16-byte NSD write request to the server. The server receives the request, and sends back a read request for the data. The client replies to this with a data buffer. When the server has received the data, it replies to the original NSD write request, and the client gets this and moves on to the next server.

swrite
Each client tester thread writes repeatedly to a single server, rather than sending data round-robin to all servers. To get useful results, the “threads” command should be used to make the number of tester threads be an even multiple of the number of server nodes.

sread
Each tester thread reads from only one server.

rw
This is a bi-directional test, where half of the client tester threads run the read test and half of them do the write test.

At a minimum, the network bandwidth assessment should be performed using the write, read, and nwrite tests. The nwrite test is particularly pertinent when the Spectrum Scale cluster is deployed over an Infiniband network and the GPFS verbsRdma configuration parameter is enabled. See the sketch below.
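
As a minimal sketch of this recommended minimum sequence (assuming the client/server designations and test time have already been set, as in the examples that follow):

nsdperf-ib> test write
nsdperf-ib> test read
nsdperf-ib> test nwrite

For the swrite and sread tests, the "threads" command can be used beforehand to make the tester thread count a multiple of the number of server nodes (e.g., threads 8 with two server nodes).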

Special considerations for Spectrum Scale clusters with Infiniband

The nsdperf command-line option "-r" needs to be given the same value as the GPFS verbsPorts parameter (mmlsconfig | grep verbsPorts). The format of the RDMAPORTS argument of the "-r" option is a comma- or space-separated list of device names and port numbers separated by a colon or slash (e.g., "mlx5_0/1,mlx5_1/1"). When multiple ports are specified, RDMA connections will be established for each port and outbound messages will be sent round-robin through the connections. If a port number is not specified, then all active ports on the device will be used.

After the nsdperf servers are running, on an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution) run nsdperf with the same “-r” value used for nsdperf in server mode (-s).

• For example, if GPFS verbsPorts is set to "mlx5_0/1", then nsdperf in server mode (-s) needs to have RDMAPORTS (-r) set to "mlx5_0/1", similar to the following:

mmdsh -N <participating_nodes> '<complete_path_to_>nsdperf-ib -s -r mlx5_0/1 </dev/null > /dev/null 2>&1 &'

The nsdperf administrative command (without the -s option) needs to have RDMAPORTS (-r) set to "mlx5_0/1", similar to the following:

<complete_path_to>nsdperf-ib -r mlx5_0/1

nsdperf Examples

The following section provides nsdperf examples that assess the network bandwidth for multiple client-server scenarios over the TCP/IP as well as the RDMA protocol. In the examples below, the clients and servers are interconnected using FDR Infiniband (FDR-IB), with 1 x FDR-IB link per node. The ib0 suffix on a node name denotes the IP address corresponding to the IP over Infiniband (IPoIB) interface.

Comments are inlined in the nsdperf examples, denoted by "#" at the start of the line. The following sections assume that the nsdperf (nsdperf-ib) binary is installed on all nodes in the /opt/benchmarks/ directory.

Single Client and Single Server (with detailed comments)
In the example below, the network bandwidth between node c71f1c7p1ib0 and c71f1c9p1ib0 over the TCP/IP and RDMA network is assessed.
The “nsdperf in server mode” is started in the client and server node:

mmdsh -N c71f1c7p1ib0,c71f1c9p1ib0 "/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &"

Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):
# /opt/benchmarks/nsdperf-ib

# Designate the nodes as clients using “client” parameter
nsdperf-ib> client c71f1c7p1ib0
Connected to c71f1c7p1ib0

# Designate the nodes as servers using “server” parameter
nsdperf-ib> server c71f1c9p1ib0
Connected to c71f1c9p1ib0

# Set the run time to 30 seconds for the tests using “ttime” parameter
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Perform the desired nsdperf network tests using “test” parameter.

# TCP/IP network mode – Use “status” command to verify client node connectivity to the server # node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: no

clients:
c71f1c7p1ib0 (10.168.117.199) -> c71f1c9p1ib0
servers:
c71f1c9p1ib0 (10.168.117.205)

# Perform performance tests from clients to servers.
# The "test" command sends a message to all client nodes to begin network performance testing
# to the server nodes. By default, write and read tests are performed.

nsdperf-ib> test
1-1 write 3170 MB/sec (756 msg/sec), cli 2% srv 3%, time 30, buff 4194304
1-1 read 3060 MB/sec (728 msg/sec), cli 3% srv 2%, time 30, buff 4194304

# Based on the results, TCP/IP bandwidth is limited by 1 x IPoIB link (1-1) between the client
# and server [refer to APPENDIX A].
# By default, each client node uses four tester threads. Each of these threads will independently
# send and receive messages to the server node. The thread counts may be scaled using “threads”
# parameter

# Enable RDMA for sending data blocks using “rdma” parameter
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using “test”. Default is write and read tests

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c7p1ib0 (10.168.117.199) -> c71f1c9p1ib0
mlx5_0:1 10a2:1f00:032d:1de4
servers:
c71f1c9p1ib0 (10.168.117.205)
mlx5_0:1 40a1:1f00:032d:1de4

# Perform RDMA performance tests using “test”. Default is write and read tests
nsdperf-ib> test
1-1 write 6450 MB/sec (1540 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
1-1 read 6450 MB/sec (1540 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on results, RDMA bandwidth is limited by 1 x FDR-IB link (1-1) between the client and
# server [refer to APPENDIX B].
# By default, each client node uses four tester threads. Each of these threads will independently
# send and receive messages to the server node. The thread counts may be scaled using “threads”
# parameter

# Shut down nsdperf (in server mode) on all client and server nodes using “killall” command
nsdperf-ib> killall

# Exit from program using “quit” command
nsdperf-ib> quit

Multiple Clients and Multiple Servers

In the example below, the network bandwidth between multiple client nodes c71f1c9p1ib0, c71f1c10p1ib0 and multiple server nodes c71f1c7p1ib0, c71f1c8p1ib0 over the TCP/IP and RDMA network is assessed.

The “nsdperf in server mode” is started in the client as well as server nodes:
mmdsh -N c71f1c7p1ib0,c71f1c8p1ib0,c71f1c9p1ib0,c71f1c10p1ib0 “/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &”
Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):
# /opt/benchmarks/nsdperf-ib

# Designate the nodes as servers using the “server” parameter
nsdperf-ib> server c71f1c7p1ib0 c71f1c8p1ib0
Connected to c71f1c7p1ib0
Connected to c71f1c8p1ib0

# Designate the nodes as clients using the “client” parameter
nsdperf-ib> client c71f1c9p1ib0 c71f1c10p1ib0
Connected to c71f1c9p1ib0
Connected to c71f1c10p1ib0

# Set run time to 30 seconds for the tests using the “ttime” parameter
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Perform the desired nsdperf network tests using the “test” parameter.

# TCP/IP network mode – Use “status” command to verify client node connectivity to the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: no

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
servers:
c71f1c7p1ib0 (10.168.117.199)
c71f1c8p1ib0 (10.168.117.202)

# Perform performance tests from clients to servers.
nsdperf-ib> test
2-2 write 8720 MB/sec (2080 msg/sec), cli 3% srv 5%, time 30, buff 4194304
2-2 read 10200 MB/sec (2440 msg/sec), cli 5% srv 3%, time 30, buff 4194304

# Based on the results, TCP/IP bandwidth is limited by 2 x IPoIB link (2-2) between the clients
# and servers

# Enable RDMA for sending data blocks using “rdma” parameter
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using “test”. Default is write and read tests

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 40a1:1f00:032d:1de4
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 e0a5:1f00:032d:1de4
servers:
c71f1c7p1ib0 (10.168.117.199)
mlx5_0:1 10a2:1f00:032d:1de4
c71f1c8p1ib0 (10.168.117.202)
mlx5_0:1 e0a1:1f00:032d:1de4

# Perform RDMA performance tests using “test”. Default is write and read tests
nsdperf-ib> test
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on the results, RDMA bandwidth is limited by 2 x FDR-IB link (2-2) between the clients
# and servers

# Shut down nsdperf (in server mode) on all client and server nodes using “killall” command
nsdperf-ib> killall

# Exit from program using “quit” command
nsdperf-ib> quit

Supplemental nsdperf tests and commands

This section details supplemental nsdperf tests (e.g., nwrite) and commands (e.g., buffsize, hist).

In the example below, the network bandwidth between multiple client nodes c71f1c9p1ib0, c71f1c10p1ib0 and multiple server nodes c71f1c7p1ib0, c71f1c8p1ib0 over the RDMA network is assessed.

The “nsdperf in server mode” is started in the client as well as server nodes:
mmdsh -N c71f1c7p1ib0,c71f1c8p1ib0,c71f1c9p1ib0,c71f1c10p1ib0 “/opt/benchmarks/nsdperf-ib -s </dev/null >/dev/null 2>&1 &”

Then, execute nsdperf from an administrative node (e.g. login node, gateway node, or any cluster node permitting interactive job execution):

# /opt/benchmarks/nsdperf-ib
nsdperf-ib> server c71f1c7p1ib0 c71f1c8p1ib0
Connected to c71f1c7p1ib0
Connected to c71f1c8p1ib0
nsdperf-ib> client c71f1c9p1ib0 c71f1c10p1ib0
Connected to c71f1c9p1ib0
Connected to c71f1c10p1ib0


# Set the run time to 30 seconds for the tests
nsdperf-ib> ttime 30
Test time set to 30 seconds

# Enable RDMA for sending data blocks
nsdperf-ib> rdma on
RDMA is now on

# Perform the desired nsdperf network tests using the “test” parameter.

# RDMA network mode – Use “status” command to verify client node RDMA connectivity to
# the server node
nsdperf-ib> status
test time: 30 sec
data buffer size: 4194304
TCP socket send/receive buffer size: 0
tester threads: 4
parallel connections: 1
RDMA enabled: yes

clients:
c71f1c9p1ib0 (10.168.117.205) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 40a1:1f00:032d:1de4
c71f1c10p1ib0 (10.168.117.208) -> c71f1c7p1ib0 c71f1c8p1ib0
mlx5_0:1 e0a5:1f00:032d:1de4
servers:
c71f1c7p1ib0 (10.168.117.199)
mlx5_0:1 10a2:1f00:032d:1de4
c71f1c8p1ib0 (10.168.117.202)
mlx5_0:1 e0a1:1f00:032d:1de4

# Perform the desired nsdperf network tests using the “test” parameter.
nsdperf-ib> test
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Perform individual network tests (e.g. nwrite)
nsdperf-ib> test write
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
nsdperf-ib> test read
2-2 read 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA
nsdperf-ib> test nwrite
2-2 nwrite 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

# Based on the results, RDMA bandwidth is limited by 2 x FDR-IB link (2-2) between the clients
# and servers

# The hist parameter can be turned “on” to print the network response time histograms
nsdperf-ib> hist on
Histogram printing is now on

nsdperf-ib> test write
2-2 write 12900 MB/sec (3080 msg/sec), cli 1% srv 1%, time 30, buff 4194304, RDMA

c71f1c9p1ib0 block transmit times (average 2.598 msec, median 3 msec)
msec nevents
1 2
2 1211
3 35724
4 2

c71f1c10p1ib0 block transmit times (average 2.598 msec, median 3 msec)
msec nevents
2 263
3 36674
4 1

# Based on the response time histogram, each of the clients has similar average and median
# response times. This can be useful for isolating any slow-performing clients.

# Set the buffsize to 1 byte to assess the network latency for small messages
nsdperf-ib> buffsize 1
Buffer size set to 1 bytes

nsdperf-ib> test write
2-2 write 1.27 MB/sec (74800 msg/sec), cli 4% srv 4%, time 30, buff 1, RDMA

c71f1c9p1ib0 block transmit times (average 0.1124 msec, median 0 msec)
msec nevents
0 850036
1 21
2 4
3 5

c71f1c10p1ib0 block transmit times (average 0.1012 msec, median 0 msec)
msec nevents
0 944850
1 9

# Based on the response time histogram, each of the clients has similar average and median
# response times. This can be useful for isolating any slow-performing clients.

# Shut down nsdperf (in server mode) on all client and server nodes
nsdperf-ib> killall

# Exit from program
nsdperf-ib> quit

Summary

IBM Spectrum Scale is a complete software-defined storage solution that delivers simplicity, scalability, and high-speed access to data, and supports advanced storage management features such as compression, tiering, replication, and encryption. The Spectrum Scale nsdperf tool enables effective assessment of the network bandwidth between NSD client and server nodes in an IBM Spectrum Scale network topology, over TCP/IP as well as over RDMA networks.

References

[1] mmnetverify command:
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1adm_mmnetverify.htm
[2] mmnetverify blog:
https://developer.ibm.com/storage/2017/02/24/diagnosing-network-problems-ibm-spectrum-scale-mmnetverify/
[3] iperf:
https://en.wikipedia.org/wiki/Iperf
[4] nsdperf:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/nsdperf%20README
[5] gpfsperf:
http://www-01.ibm.com/support/docview.wss?uid=isg15readmebbb63bf9mples_perf
[6] Netpipe network performance tool:
http://bitspjoule.org/netpipe/
[7] Infiniband Verbs Performance Tests:
https://github.com/lsgunth/perftest

APPENDIX A – TCP/IP Bandwidth using NetPipe network performance tool [6]

# TCP/IP bandwidth between single client and server over 1 x IPoIB link over FDR-IB

# ./NPtcp -h c71f1c7p1ib0

Now starting the main loop
0: 1 bytes 4750 times –> 0.38 Mbps in 20.31 usec
1: 2 bytes 4924 times –> 0.75 Mbps in 20.30 usec
2: 3 bytes 4926 times –> 1.13 Mbps in 20.22 usec
3: 4 bytes 3296 times –> 1.51 Mbps in 20.21 usec
.
.
38: 512 bytes 2350 times –> 183.77 Mbps in 21.26 usec
39: 515 bytes 2361 times –> 184.35 Mbps in 21.31 usec
40: 765 bytes 2368 times –> 270.09 Mbps in 21.61 usec
41: 768 bytes 3085 times –> 274.40 Mbps in 21.35 usec
42: 771 bytes 3128 times –> 271.06 Mbps in 21.70 usec
43: 1021 bytes 1553 times –> 357.98 Mbps in 21.76 usec
44: 1024 bytes 2295 times –> 357.89 Mbps in 21.83 usec
.
.
62: 8192 bytes 2039 times –> 2544.68 Mbps in 24.56 usec
63: 8195 bytes 2036 times –> 2552.65 Mbps in 24.49 usec
64: 12285 bytes 2042 times –> 3467.64 Mbps in 27.03 usec
65: 12288 bytes 2466 times –> 3160.44 Mbps in 29.66 usec
66: 12291 bytes 2247 times –> 3176.01 Mbps in 29.53 usec
67: 16381 bytes 1129 times –> 4074.60 Mbps in 30.67 usec
68: 16384 bytes 1630 times –> 3950.37 Mbps in 31.64 usec
69: 16387 bytes 1580 times –> 3937.13 Mbps in 31.75 usec
70: 24573 bytes 1575 times –> 5122.64 Mbps in 36.60 usec
71: 24576 bytes 1821 times –> 5855.86 Mbps in 32.02 usec
72: 24579 bytes 2082 times –> 5837.66 Mbps in 32.12 usec
73: 32765 bytes 1038 times –> 6921.58 Mbps in 36.12 usec
74: 32768 bytes 1384 times –> 6922.15 Mbps in 36.12 usec
.
.
92: 262144 bytes 510 times –> 20407.76 Mbps in 98.00 usec
93: 262147 bytes 510 times –> 20393.54 Mbps in 98.07 usec
94: 393213 bytes 509 times –> 22948.76 Mbps in 130.73 usec
95: 393216 bytes 509 times –> 22862.71 Mbps in 131.22 usec
96: 393219 bytes 508 times –> 22942.16 Mbps in 130.76 usec
97: 524285 bytes 254 times –> 25609.46 Mbps in 156.19 usec
98: 524288 bytes 320 times –> 26861.95 Mbps in 148.91 usec
99: 524291 bytes 335 times –> 26679.58 Mbps in 149.93 usec
100: 786429 bytes 333 times –> 29743.08 Mbps in 201.73 usec
101: 786432 bytes 330 times –> 29771.74 Mbps in 201.53 usec
102: 786435 bytes 330 times –> 29770.78 Mbps in 201.54 usec
103: 1048573 bytes 165 times –> 31021.22 Mbps in 257.89 usec
104: 1048576 bytes 193 times –> 31296.62 Mbps in 255.62 usec
105: 1048579 bytes 195 times –> 31623.55 Mbps in 252.98 usec
106: 1572861 bytes 197 times –> 33723.61 Mbps in 355.83 usec
107: 1572864 bytes 187 times –> 33690.15 Mbps in 356.19 usec
108: 1572867 bytes 187 times –> 33794.73 Mbps in 355.09 usec
109: 2097149 bytes 93 times –> 34186.09 Mbps in 468.03 usec
110: 2097152 bytes 106 times –> 34317.35 Mbps in 466.24 usec
111: 2097155 bytes 107 times –> 34429.04 Mbps in 464.72 usec
112: 3145725 bytes 107 times –> 34026.96 Mbps in 705.32 usec
113: 3145728 bytes 94 times –> 32615.07 Mbps in 735.86 usec
114: 3145731 bytes 90 times –> 32587.29 Mbps in 736.48 usec
115: 4194301 bytes 45 times –> 34507.09 Mbps in 927.34 usec
116: 4194304 bytes 53 times –> 34796.86 Mbps in 919.62 usec
117: 4194307 bytes 54 times –> 34831.03 Mbps in 918.72 usec
118: 6291453 bytes 54 times –> 35168.87 Mbps in 1364.84 usec
119: 6291456 bytes 48 times –> 34783.13 Mbps in 1379.98 usec
120: 6291459 bytes 48 times –> 34932.40 Mbps in 1374.08 usec
121: 8388605 bytes 24 times –> 34648.09 Mbps in 1847.14 usec
122: 8388608 bytes 27 times –> 33725.32 Mbps in 1897.68 usec
123: 8388611 bytes 26 times –> 33529.14 Mbps in 1908.79 usec

APPENDIX B – Infiniband Bandwidth using Infiniband Verbs Performance Tests [7]

# RDMA bandwidth between single client and server over 1 x FDR-IB link

# ib_write_bw -a c71f1c7p1ib0
—————————————————————————————
RDMA_Write BW Test
Dual-port : OFF Device : mlx5_0
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
TX depth : 128
CQ Moderation : 100
Mtu : 4096[B] Link type : IB
Max inline data : 0[B] rdma_cm QPs : OFF
Data ex. method : Ethernet
—————————————————————————————
local address: LID 0x2a QPN 0x110d1 PSN 0x990306 RKey 0x0066f3 VAddr 0x003fff94800000
remote address: LID 0x07 QPN 0x019a PSN 0x13641c RKey 0x0058c1 VAddr 0x003fff7c800000
—————————————————————————————
#bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps]
2 5000 8.21 8.17 4.281720
4 5000 16.55 16.51 4.327333
8 5000 33.10 33.00 4.325431
16 5000 66.21 66.00 4.325506
32 5000 132.41 131.92 4.322621
64 5000 264.83 264.01 4.325560
128 5000 529.66 527.80 4.323771
256 5000 1059.31 1052.54 4.311206
512 5000 2118.63 2105.57 4.312206
1024 5000 4237.26 4206.14 4.307092
2048 5000 6097.54 6093.93 3.120093
4096 5000 6211.19 6195.85 1.586137
8192 5000 6220.81 6211.62 0.795088
16384 5000 6220.81 6219.45 0.398045
32768 5000 6223.22 6222.82 0.199130
65536 5000 6225.62 6224.76 0.099596
131072 5000 6225.59 6225.56 0.049804
262144 5000 6226.28 6226.24 0.024905
524288 5000 6226.05 6226.03 0.012452
1048576 5000 6226.43 6226.41 0.006226
2097152 5000 6226.14 6226.14 0.003113
4194304 5000 6226.47 6226.46 0.001557
8388608 5000 6226.23 6226.22 0.000778
—————————————————————————————