Linux System and Performance Monitoring (Network)

Date: 2009.07.21
Author: Darren Hoch
Translated by: Tonnyom[AT]hotmail.com
8.0 Introduction to Network Monitoring
Of all the subsystems to monitor, networking is the hardest, mainly because the concepts involved are abstract. Many factors are in play when monitoring network performance on a system, including latency, collisions, congestion, and packet loss.
This chapter discusses how to check the performance of Ethernet, IP, and TCP.
8.1 Ethernet Configuration Settings
Unless explicitly configured otherwise, almost all NICs auto-negotiate their link speed. A network containing many different devices may end up with those devices running at different speeds and duplex modes.
Most business networks run at 100 or 1000BaseTX. Use ethtool to determine which speed a system is running at.
In the following example, a system with a 100BaseTX card has auto-negotiated down to 10BaseTX:
# ethtool eth0 
Settings for eth0: 
Supported ports: [ TP MII ] 
Supported link modes: 10baseT/Half 10baseT/Full 
100baseT/Half 100baseT/Full 
Supports auto-negotiation: Yes 
Advertised link modes: 10baseT/Half 10baseT/Full 
100baseT/Half 100baseT/Full 
Advertised auto-negotiation: Yes 
Speed: 10Mb/s 
Duplex: Half 
Port: MII 
PHYAD: 32 
Transceiver: internal 
Auto-negotiation: on 
Supports Wake-on: pumbg 
Wake-on: d 
Current message level: 0x00000007 (7) 
Link detected: yes
The following example demonstrates how to force the card to 100BaseTX, full duplex:
# ethtool -s eth0 speed 100 duplex full autoneg off
# ethtool eth0 
Settings for eth0: 
Supported ports: [ TP MII ] 
Supported link modes: 10baseT/Half 10baseT/Full 
100baseT/Half 100baseT/Full 
Supports auto-negotiation: Yes 
Advertised link modes: 10baseT/Half 10baseT/Full 
100baseT/Half 100baseT/Full 
Advertised auto-negotiation: No 
Speed: 100Mb/s 
Duplex: Full 
Port: MII 
PHYAD: 32 
Transceiver: internal 
Auto-negotiation: off 
Supports Wake-on: pumbg 
Wake-on: d 
Current message level: 0x00000007 (7) 
Link detected: yes
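Settings forced with ethtool -s do not survive a reboot. As a hedged sketch (the path and the ETHTOOL_OPTS variable are Red Hat/CentOS-specific assumptions; other distributions persist this differently):

```shell
# Hypothetical persistence example for RHEL/CentOS-style systems.
# File: /etc/sysconfig/network-scripts/ifcfg-eth0
# The network init scripts pass ETHTOOL_OPTS to ethtool when eth0 comes up.
ETHTOOL_OPTS="speed 100 duplex full autoneg off"
```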
8.2 Monitoring Network Throughput
An interface that has synchronized at the right speed can still have bandwidth problems. It is usually impossible to control or tune the switches, cables, and routers that sit between two hosts. The best way to test network throughput is to send traffic between the two systems and measure statistics such as latency and speed.
8.2.0 Using iptraf to Watch Local Throughput
The iptraf utility (http://iptraf.seul.org) provides a dashboard of throughput per network interface:
# iptraf -d eth0
Figure 1: Monitoring for Network Throughput 
The output shows that the system is sending (outgoing rate) at 61 Mbps, which is a bit slow for a 100 Mbps network.
8.2.1 Using netperf to Measure Endpoint Throughput
Unlike iptraf, which passively monitors local traffic, netperf lets an administrator run controlled throughput tests. It is very helpful for determining how much throughput is available from a client workstation to a heavily loaded server such as a file or web server. netperf runs in a client/server mode.
To perform a basic controlled throughput test, the netperf server must first be running on the server system:
server# netserver 
Starting netserver at port 12865 
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
netperf can run several different tests; the most basic is a standard throughput test. The following example runs a 30-second TCP throughput sample from the client in a LAN environment.
The output below shows that the network throughput is roughly 89 Mbps. The server (192.168.1.215) is on the same LAN as the client. This is excellent performance for a 100 Mbps network.
client# netperf -H 192.168.1.215 -l 30 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.230 (192.168.1.230) port 0 AF_INET 
Recv Send Send 
Socket Socket Message Elapsed 
Size Size Size Time Throughput 
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 30.02 89.46
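netperf reports throughput in units of 10^6 bits/sec. To relate that figure to the byte-oriented transfer rates most tools report, divide by 8; a small awk sketch using the sample above:

```shell
# Convert netperf's 10^6 bits/sec throughput figure to megabytes per second.
awk 'BEGIN { mbps = 89.46; printf "%.2f MB/s\n", mbps / 8 }'
# → 11.18 MB/s
```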
Moving off the LAN onto a 54G wireless router (translator's note: Wireless-G, the 54 Mbps wireless networking standard) and testing within 10 feet of it, throughput drops sharply. Out of a possible maximum of 54 Mbits, the laptop achieves a total throughput of only 14 Mbits:
client# netperf -H 192.168.1.215 -l 30 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.215 (192.168.1.215) port 0 AF_INET 
Recv Send Send 
Socket Socket Message Elapsed 
Size Size Size Time Throughput 
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 30.10 14.09
At 50 feet, it drops further, to about 5 Mbits:
# netperf -H 192.168.1.215 -l 30 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.215 (192.168.1.215) port 0 AF_INET 
Recv Send Send 
Socket Socket Message Elapsed 
Size Size Size Time Throughput 
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 30.64 5.05
Moving from the LAN to the Internet, throughput drops below 1 Mbit:
# netperf -H litemail.org -p 1500 -l 30 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
litemail.org (72.249.104.148) port 0 AF_INET 
Recv Send Send 
Socket Socket Message Elapsed 
Size Size Size Time Throughput 
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 31.58 0.93
Finally, a VPN connection, the worst-performing of all the network environments tested:
# netperf -H 10.0.1.129 -l 30 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
10.0.1.129 (10.0.1.129) port 0 AF_INET 
Recv Send Send 
Socket Socket Message Elapsed 
Size Size Size Time Throughput 
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 31.99 0.51
netperf can also help measure how many TCP requests and responses per second a network can sustain. It establishes a single TCP connection and then sends sequential request/response pairs over it (1-byte payloads with psh/ack packets going back and forth). This behavior is similar to an RDBMS executing multiple transactions, or a mail server pipelining several messages over one connection.
The following example simulates TCP request/response traffic over a 30-second duration:
client# netperf -t TCP_RR -H 192.168.1.230 -l 30 
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET 
to 192.168.1.230 (192.168.1.230) port 0 AF_INET 
Local /Remote 
Socket Size Request Resp. Elapsed Trans. 
Send Recv Size Size Time Rate 
bytes Bytes bytes bytes secs. per sec
16384 87380 1 1 30.00 4453.80 
16384 87380
The output shows that the network can sustain about 4453 psh/ack transactions per second with 1-byte packets. This is unrealistic, because most requests, and especially most responses, are larger than 1 byte.
A more realistic test uses 2K requests and 32K responses, set with the -r option:
client# netperf -t TCP_RR -H 192.168.1.230 -l 30 -- -r 2048,32768 
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.230 (192.168.1.230) port 0 AF_INET 
Local /Remote 
Socket Size Request Resp. Elapsed Trans. 
Send Recv Size Size Time Rate 
bytes Bytes bytes bytes secs. per sec
16384 87380 2048 32768 30.00 222.37 
16384 87380
The transaction rate drops to roughly 222 per second.
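The transaction rate and the message sizes together imply a data rate, which is a useful sanity check. A sketch using the numbers from the run above:

```shell
# Implied throughput: 222.37 transactions/sec, each moving a 2048-byte
# request plus a 32768-byte response.
awk 'BEGIN { rate = 222.37; req = 2048; resp = 32768
             printf "%.1f Mbit/s\n", rate * (req + resp) * 8 / 1e6 }'
# → 61.9 Mbit/s
```

That is below the ~89 Mbit/s the pure stream test achieved, which is expected: request/response traffic pays for round trips.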
8.2.2 Using iperf to Measure Network Efficiency
iperf is similar to netperf in that both test a connection between two endpoints. iperf goes further by examining TCP/UDP efficiency as a function of window sizes and QoS settings. The tool is tailored to administrators who need to tune TCP/IP stacks and test their efficiency.
iperf is a single binary that can run in either server or client mode. It uses port 5001 by default.
First start the server side (192.168.1.215):
server# iperf -s -D 
Running Iperf Server as a daemon 
The Iperf daemon process ID : 3655 
------------------------------------------------------------ 
Server listening on TCP port 5001 
TCP window size: 85.3 KByte (default) 
------------------------------------------------------------
In the following example, a client on a wireless network repeatedly runs iperf to measure the network's throughput. The environment is heavily utilized, with several hosts downloading ISO image files.
The client connects to the server (192.168.1.215) and runs a 60-second bandwidth test, sampling every 5 seconds:
client# iperf -c 192.168.1.215 -t 60 -i 5 
------------------------------------------------------------ 
Client connecting to 192.168.1.215, TCP port 5001 
TCP window size: 25.6 KByte (default) 
------------------------------------------------------------ 
[ 3] local 192.168.224.150 port 51978 connected with 
192.168.1.215 port 5001 
[ ID] Interval Transfer Bandwidth 
[ 3] 0.0- 5.0 sec 6.22 MBytes 10.4 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 5.0-10.0 sec 6.05 MBytes 10.1 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 10.0-15.0 sec 5.55 MBytes 9.32 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 15.0-20.0 sec 5.19 MBytes 8.70 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 20.0-25.0 sec 4.95 MBytes 8.30 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 25.0-30.0 sec 5.21 MBytes 8.74 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 30.0-35.0 sec 2.55 MBytes 4.29 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 35.0-40.0 sec 5.87 MBytes 9.84 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 40.0-45.0 sec 5.69 MBytes 9.54 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 45.0-50.0 sec 5.64 MBytes 9.46 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 50.0-55.0 sec 4.55 MBytes 7.64 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 55.0-60.0 sec 4.47 MBytes 7.50 Mbits/sec 
[ ID] Interval Transfer Bandwidth 
[ 3] 0.0-60.0 sec 61.9 MBytes 8.66 Mbits/sec
Other network traffic from this host affects the samples, which is why the bandwidth bounces between roughly 4 and 10 Mbits over the 60 seconds.
In addition to its TCP tests, iperf has UDP tests that measure packet loss and jitter.
The next iperf test runs on the same 54 Mbit G-standard wireless network. As the earlier example showed, the current throughput is only about 9 Mbits:
# iperf -c 192.168.1.215 -b 10M 
WARNING: option -b implies udp testing 
------------------------------------------------------------ 
Client connecting to 192.168.1.215, UDP port 5001 
Sending 1470 byte datagrams 
UDP buffer size: 107 KByte (default) 
------------------------------------------------------------ 
[ 3] local 192.168.224.150 port 33589 connected with 192.168.1.215 port 5001 
[ ID] Interval Transfer Bandwidth 
[ 3] 0.0-10.0 sec 11.8 MBytes 9.90 Mbits/sec 
[ 3] Sent 8420 datagrams 
[ 3] Server Report: 
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams 
[ 3] 0.0-10.0 sec 6.50 MBytes 5.45 Mbits/sec 0.480 ms 3784/ 8419 (45%) 
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
The output shows that of an attempted 10M of data, only 5.45M actually got through, with 45% of the packets lost.
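The 45% figure in the server report can be reproduced from the Lost/Total datagram counts; a one-line awk check using the numbers above:

```shell
# Packet loss rate from iperf's Lost/Total datagram counts.
awk 'BEGIN { lost = 3784; total = 8419; printf "%.0f%% lost\n", lost / total * 100 }'
# → 45% lost
```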
8.3 Individual Connections with tcptrace
The tcptrace utility provides detailed TCP information for individual connections. It uses libpcap capture files to analyze specific TCP sessions, and reports information that is sometimes hard to catch in a live TCP stream, including:
1, TCP retransmissions - the number of packets that had to be resent and the amount of data involved
2, TCP window sizes - a slow connection is often associated with small window sizes
3, Total throughput of the connection
4, Connection duration
8.3.1 Case Study - Using tcptrace
tcptrace may already be packaged by some Linux distributions; this author downloaded a package from http://dag.wieers.com/rpm/packages/tcptrace. tcptrace takes a libpcap capture file as input. Run with no options, it lists every unique connection captured in the file.
The following example runs tcptrace on a libpcap capture file named bigstuff:
# tcptrace bigstuff 
1 arg remaining, starting with 'bigstuff' 
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004
146108 packets seen, 145992 TCP packets traced 
elapsed wallclock time: 0:00:01.634065, 89413 pkts/sec analyzed 
trace file elapsed time: 0:09:20.358860 
TCP connection info: 
1: 192.168.1.60:pcanywherestat - 192.168.1.102:2571 (a2b) 404> 450< 
2: 192.168.1.60:3356 - ftp.strongmail.net:21 (c2d) 35> 21< 
3: 192.168.1.60:3825 - ftp.strongmail.net:65023 (e2f) 5> 4< 
(complete) 
4: 192.168.1.102:1339 - 205.188.8.194:5190 (g2h) 6> 6< 
5: 192.168.1.102:1490 - cs127.msg.mud.yahoo.com:5050 (i2j) 5> 5< 
6: py-in-f111.google.com:993 - 192.168.1.102:3785 (k2l) 13> 14< 
 
Each connection in the output above is listed with its source and destination hosts. With the -l and -o options, tcptrace displays detailed statistics for a specific connection.
The following shows the statistics for connection #1 in the bigstuff file:
# tcptrace -l -o1 bigstuff 
1 arg remaining, starting with 'bigstuff' 
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004
146108 packets seen, 145992 TCP packets traced 
elapsed wallclock time: 0:00:00.529361, 276008 pkts/sec analyzed 
trace file elapsed time: 0:09:20.358860 
TCP connection info: 
32 TCP connections traced: 
TCP connection 1: 
host a: 192.168.1.60:pcanywherestat 
host b: 192.168.1.102:2571 
complete conn: no (SYNs: 0) (FINs: 0) 
first packet: Sun Jul 20 15:58:05.472983 2008 
last packet: Sun Jul 20 16:00:04.564716 2008 
elapsed time: 0:01:59.091733 
total packets: 854 
filename: bigstuff 
a->b: b->a: 
total packets: 404 total packets: 450 
ack pkts sent: 404 ack pkts sent: 450 
pure acks sent: 13 pure acks sent: 320 
sack pkts sent: 0 sack pkts sent: 0 
dsack pkts sent: 0 dsack pkts sent: 0 
max sack blks/ack: 0 max sack blks/ack: 0 
unique bytes sent: 52608 unique bytes sent: 10624 
actual data pkts: 391 actual data pkts: 130 
actual data bytes: 52608 actual data bytes: 10624 
rexmt data pkts: 0 rexmt data pkts: 0 
rexmt data bytes: 0 rexmt data bytes: 0 
zwnd probe pkts: 0 zwnd probe pkts: 0 
zwnd probe bytes: 0 zwnd probe bytes: 0 
outoforder pkts: 0 outoforder pkts: 0 
pushed data pkts: 391 pushed data pkts: 130 
SYN/FIN pkts sent: 0/0 SYN/FIN pkts sent: 0/0 
urgent data pkts: 0 pkts urgent data pkts: 0 pkts 
urgent data bytes: 0 bytes urgent data bytes: 0 bytes 
mss requested: 0 bytes mss requested: 0 bytes 
max segm size: 560 bytes max segm size: 176 bytes 
min segm size: 48 bytes min segm size: 80 bytes 
avg segm size: 134 bytes avg segm size: 81 bytes 
max win adv: 19584 bytes max win adv: 65535 bytes 
min win adv: 19584 bytes min win adv: 64287 bytes 
zero win adv: 0 times zero win adv: 0 times 
avg win adv: 19584 bytes avg win adv: 64949 bytes 
initial window: 160 bytes initial window: 0 bytes 
initial window: 2 pkts initial window: 0 pkts 
ttl stream length: NA ttl stream length: NA 
missed data: NA missed data: NA 
truncated data: 36186 bytes truncated data: 5164 bytes 
truncated packets: 391 pkts truncated packets: 130 pkts 
data xmit time: 119.092 secs data xmit time: 116.954 secs 
idletime max: 441267.1 ms idletime max: 441506.3 ms 
throughput: 442 Bps throughput: 89 Bps
8.3.2 Case Study - Calculating the Retransmission Rate
It is almost impossible to tell at a glance which connections have serious retransmission problems; they have to be dug out. tcptrace's filters and boolean expressions help locate them. A busy network carries many connections, and nearly every one will see some retransmission; finding the worst offenders is the key.
In the following example, tcptrace finds the connections that retransmitted more than 100 segments:
# tcptrace -f'rexmit_segs>100' bigstuff 
Output filter: ((c_rexmit_segs>100)OR(s_rexmit_segs>100)) 
1 arg remaining, starting with 'bigstuff' 
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004
146108 packets seen, 145992 TCP packets traced 
elapsed wallclock time: 0:00:00.687788, 212431 pkts/sec analyzed 
trace file elapsed time: 0:09:20.358860 
TCP connection info: 
16: ftp.strongmail.net:65014 - 192.168.1.60:2158 (ae2af) 18695> 9817<
The output shows that connection #16 had more than 100 retransmissions. The following command displays all the statistics for that connection:
# tcptrace -l -o16 bigstuff 
arg remaining, starting with 'bigstuff' 
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004
146108 packets seen, 145992 TCP packets traced 
elapsed wallclock time: 0:00:01.355964, 107752 pkts/sec analyzed 
trace file elapsed time: 0:09:20.358860 
TCP connection info: 
32 TCP connections traced: 
================================ 
TCP connection 16: 
host ae: ftp.strongmail.net:65014 
host af: 192.168.1.60:2158 
complete conn: no (SYNs: 0) (FINs: 1) 
first packet: Sun Jul 20 16:04:33.257606 2008 
last packet: Sun Jul 20 16:07:22.317987 2008 
elapsed time: 0:02:49.060381 
total packets: 28512 
filename: bigstuff 
ae->af: af->ae:
unique bytes sent: 25534744 unique bytes sent: 0 
actual data pkts: 18695 actual data pkts: 0 
actual data bytes: 25556632 actual data bytes: 0 
rexmt data pkts: 1605 rexmt data pkts: 0 
rexmt data bytes: 2188780 rexmt data bytes: 0
To calculate the retransmission rate:
rexmt/actual * 100 = Retransmission rate
1605/18695 * 100 = 8.5%
This connection is slow because it has an 8.5% retransmission rate.
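The same arithmetic in script form, e.g. for reuse in a monitoring script, using the ae->af counters reported by tcptrace above (8.59% before the text's truncation to 8.5%):

```shell
# Retransmission rate = rexmt data pkts / actual data pkts * 100.
awk 'BEGIN { rexmt = 1605; actual = 18695
             printf "%.1f%%\n", rexmt / actual * 100 }'
# → 8.6%
```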
8.3.3 Case Study - Calculating Retransmissions over Time
tcptrace has a series of modules that present data grouped by various attributes: protocol, port, time, and so on. The slice module lets you observe TCP performance over time intervals. You can pinpoint which slices saw bursts of retransmissions, then look at the rest of the performance data for that period to find the bottleneck.
The following example shows how to use tcptrace's slice mode:
# tcptrace -xslice bigfile
The command creates a file named slice.dat in the current working directory, containing retransmission information for every 15-second interval:
# ls -l slice.dat 
-rw-r--r-- 1 root root 3430 Jul 10 22:50 slice.dat 
# more slice.dat 
date segs bytes rexsegs rexbytes new active 
--------------- -------- -------- -------- -------- -------- -------- 
22:19:41.913288 46 5672 0 0 1 1 
22:19:56.913288 131 25688 0 0 0 1 
22:20:11.913288 0 0 0 0 0 0 
22:20:26.913288 5975 4871128 0 0 0 1 
22:20:41.913288 31049 25307256 0 0 0 1 
22:20:56.913288 23077 19123956 40 59452 0 1 
22:21:11.913288 26357 21624373 5 7500 0 1 
22:21:26.913288 20975 17248491 3 4500 12 13 
22:21:41.913288 24234 19849503 10 15000 3 5 
22:21:56.913288 27090 22269230 36 53999 0 2 
22:22:11.913288 22295 18315923 9 12856 0 2 
22:22:26.913288 8858 7304603 3 4500 0 1
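slice.dat is plain whitespace-separated text (columns: date, segs, bytes, rexsegs, rexbytes, new, active), so awk can flag the bad intervals directly. A sketch that prints any 15-second slice whose retransmitted bytes exceed 0.1% of total bytes (the 0.1% threshold is an arbitrary choice for illustration):

```shell
# Flag slices in slice.dat where rexbytes ($5) is more than 0.1% of bytes ($3).
# NR > 2 skips the two header lines.
awk 'NR > 2 && $3 > 0 { pct = $5 / $3 * 100
     if (pct > 0.1) printf "%s %.2f%% retransmitted\n", $1, pct }' slice.dat
```

On the data above this flags the 22:20:56 (0.31%) and 22:21:56 (0.24%) slices.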
8.4 Conclusion
Monitoring network performance consists of the following:
1, Check and make sure all interfaces are running at their proper link speed.
2, Check the throughput of each interface and verify that it is in line with the network's speed while in service.
3, Monitor the type of network traffic, and make sure the appropriate traffic has priority.