KVM behaviour
06-22-2018, 06:24 PM
Post: #11
RE: KVM behaviour
When I tried to improve the disk I/O performance of my KVM guest, I first needed to measure the I/O performance before making any changes. I used the commands below inside the guest to test:
Code:
sysbench --test=fileio --num-threads=50 --file-total-size=2G --file-test-mode=rndrw prepare       # prepare the test files
sysbench --test=fileio --num-threads=50 --file-total-size=2G --file-test-mode=rndrw run           # run the test
sysbench --test=fileio --num-threads=50 --file-total-size=2G --file-test-mode=rndrw cleanup       # remove the test files
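
Side note: on sysbench 1.0 and later, --test= and --num-threads= are deprecated legacy options. Roughly the same test can be written with the newer option names; a sketch, assuming sysbench 1.0+:
Code:
sysbench fileio --threads=50 --file-total-size=2G --file-test-mode=rndrw prepare
sysbench fileio --threads=50 --file-total-size=2G --file-test-mode=rndrw run
sysbench fileio --threads=50 --file-total-size=2G --file-test-mode=rndrw cleanup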

I got the following performance in the KVM guest:
Code:
File operations:
    reads/s:                      44.10
    writes/s:                     27.82
    fsyncs/s:                     86.26

Throughput:
    read, MiB/s:                  0.69
    written, MiB/s:               0.43

General statistics:
    total time:                          10.3850s
    total number of events:              1643

Latency (ms):
         min:                                  0.01
         avg:                                311.92
         max:                               3056.51
         95th percentile:                   1258.08
         sum:                             512477.12

Threads fairness:
    events (avg/stddev):           32.8600/8.19
    execution time (avg/stddev):   10.2495/0.11

After adding cache='none' io='native' to the virtual machine's disk definition, like below:
Code:
<devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/var/lib/libvirt/images/newcentos72.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
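
For reference, cache='none' makes QEMU bypass the host page cache (the image is opened with O_DIRECT) and io='native' uses Linux native AIO instead of the userspace thread pool. A minimal sketch of how the change is usually applied with libvirt, assuming the domain is named newcentos72 (guessed from the image filename):
Code:
virsh edit newcentos72        # add cache='none' io='native' to the disk's <driver> line
virsh shutdown newcentos72    # the new disk settings take effect on the next cold start
virsh start newcentos72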

Then I ran the three commands again and got better I/O performance, as shown below:
Code:
File operations:
    reads/s:                      57.53
    writes/s:                     36.78
    fsyncs/s:                     119.40

Throughput:
    read, MiB/s:                  0.90
    written, MiB/s:               0.57

General statistics:
    total time:                          10.6013s
    total number of events:              2266

Latency (ms):
         min:                                  0.01
         avg:                                227.84
         max:                               2209.97
         95th percentile:                    926.33
         sum:                             516289.20

Threads fairness:
    events (avg/stddev):           45.3200/8.68
    execution time (avg/stddev):   10.3258/0.17

I learned all of the above from
" https://www.cnblogs.com/wclwcw/p/8535001.html " and " https://blog.csdn.net/dylloveyou/article...https://blog.csdn.net/dylloveyou/article/detail "

RR rayluk
06-22-2018, 06:33 PM
Post: #12
RE: KVM behaviour
(06-22-2018 06:24 PM)changxy Wrote:  When I tried to improve the disk I/O performance of my KVM guest, I first needed to measure the I/O performance before making any changes. [...]

P1) We have our own tools for disk speed. Please look for fs_perf when checking disk I/O.
P2) I think usability is more important than performance. Please finish the request in the head post first; we can look at improvements later.
P3) Again: is virsh usable when connecting from a remote host?
06-22-2018, 06:48 PM
Post: #13
RE: KVM behaviour
(06-22-2018 06:33 PM)rayluk Wrote:  
P1) We have our own tools for disk speed. Please look for fs_perf when checking disk I/O.
P2) I think usability is more important than performance. Please finish the request in the head post first; we can look at improvements later.
P3) Again: is virsh usable when connecting from a remote host?

A1) Could you please tell me which tools we use?
A3) I can use virsh on tetra but not on tbg15. I think it works as long as you run virsh as root.
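
For the remote case, virsh can talk to libvirt on another host over SSH; a minimal sketch, assuming root SSH access to the host (the hostname tetra is just an example):
Code:
# list all domains on a remote libvirt host over SSH
virsh -c qemu+ssh://root@tetra/system list --all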
06-22-2018, 06:50 PM
Post: #14
RE: KVM behaviour
(06-22-2018 06:48 PM)changxy Wrote:  
A1) Could you please tell me which tools we use?

fs_perf is the name. cod://tbcheck/archive/hdnrun/hd_perf/fs_perf.sh to be exact

Quote: A3) I can use virsh on tetra but not on tbg15. I think it works as long as you run virsh as root.
Ok then
06-25-2018, 01:42 PM
Post: #15
RE: KVM behaviour
Network
N1) KVM doesn't have this problem; when I run 'systemctl restart network', my network is still fine.
N2) BasicTCP keeps warning:
Code:
./net-receiver.c: line 1: /bin: Is a directory
./net-receiver.c: line 2: keep-receiving: command not found
./net-receiver.c: line 3: keep-receiving: command not found
./net-receiver.c: line 4: keep-receiving: command not found
./net-receiver.c: line 5: */: No such file or directory
./net-receiver.c: line 9: static: command not found
./net-receiver.c: line 11: {port,: command not found
./net-receiver.c: line 14: syntax error near unexpected token `('
./net-receiver.c: line 14: `int main(int argc, char* argv[])'
Disk
D1) Since the virtual machine doesn't have 5 GB of space for the test, fs_perf.sh cannot run on my machine.
D2) After searching the materials, I think it depends on the disk type: if the disk is an SSD the return value will be 0, and if it is an HDD it will be 1.
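This presumably refers to the kernel's rotational flag, which is 0 for SSDs and 1 for rotating disks; a quick check, assuming the guest disk shows up as sda (the device name is a guess):
Code:
cat /sys/block/sda/queue/rotational    # 0 = non-rotational (SSD), 1 = rotational (HDD)

Note that inside a VM this only reflects what the hypervisor reports for the virtual disk, not necessarily the physical disk underneath.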

rr rayluk
06-25-2018, 03:12 PM
Post: #16
RE: KVM behaviour
(06-25-2018 01:42 PM)changxy Wrote:  Network
N1) KVM doesn't have this problem; when I run 'systemctl restart network', my network is still fine.
Please record this

(06-25-2018 01:42 PM)changxy Wrote:  N2) BasicTCP keeps warning:
Code:
./net-receiver.c: line 1: /bin: Is a directory
./net-receiver.c: line 2: keep-receiving: command not found
./net-receiver.c: line 3: keep-receiving: command not found
./net-receiver.c: line 4: keep-receiving: command not found
./net-receiver.c: line 5: */: No such file or directory
./net-receiver.c: line 9: static: command not found
./net-receiver.c: line 11: {port,: command not found
./net-receiver.c: line 14: syntax error near unexpected token `('
./net-receiver.c: line 14: `int main(int argc, char* argv[])'
File a bug and fix it

(06-25-2018 01:42 PM)changxy Wrote:  Disk
D1) Since the virtual machine doesn't have 5 GB of space for the test, fs_perf.sh cannot run on my machine.

Then add more disk space to the VM for the test.
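A minimal sketch of one way to do that with libvirt, assuming the domain is named newcentos72 and the new disk appears as vdb in the guest (both names are placeholders; depending on the guest's disk bus, a shutdown and start may be needed instead of hotplug):
Code:
# on the host: create a 6 GB raw image and attach it to the guest
qemu-img create -f raw /var/lib/libvirt/images/scratch.img 6G
virsh attach-disk newcentos72 /var/lib/libvirt/images/scratch.img vdb --persistent
# inside the guest: make a filesystem, mount it, and point fs_perf.sh at the mount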

(06-25-2018 01:42 PM)changxy Wrote:  D2) After searching the materials, I think it depends on the disk type: if the disk is an SSD the return value will be 0, and if it is an HDD it will be 1.
Don't trust what you found this time; its behaviour is strange in a VM.
06-28-2018, 06:28 PM
Post: #17
RE: KVM behaviour
(06-25-2018 03:12 PM)rayluk Wrote:  
File a bug and fix it

Sorry, I just noticed you are running the C source file directly. Please compile it before running it.
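
A minimal sketch of what that looks like, assuming a standard toolchain (the project's actual Makefile targets aren't shown in this thread):
Code:
make                        # or, by hand: gcc -o net-receiver net-receiver.c
./net-receiver              # run the compiled binary, not the .c file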
06-29-2018, 09:34 AM
Post: #18
RE: KVM behaviour
(06-28-2018 06:28 PM)rayluk Wrote:  
Sorry, I just noticed you are running the C source file directly. Please compile it before running it.

Yes, I figured it out: run 'make' first. Now the problem is with keep-receiving and keep-sending: when keep-receiving runs, it receives several times and then gets an error, and keep-sending cannot stop itself; it keeps sending and keeps getting errors.
06-29-2018, 10:12 AM
Post: #19
RE: KVM behaviour
(06-29-2018 09:34 AM)changxy Wrote:  
Yes, I figured it out: run 'make' first. Now the problem is with keep-receiving and keep-sending: when keep-receiving runs, it receives several times and then gets an error, and keep-sending cannot stop itself; it keeps sending and keeps getting errors.

net_sender and net_receiver can do a better job of this.