First, the setup procedure is as follows.
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank1  2.07M  58.6G   156K  /tank1
# zfs create -V 30G tank1/zvol
# zfs get all tank1/zvol
NAME        PROPERTY              VALUE                  SOURCE
tank1/zvol  type                  volume                 -
tank1/zvol  creation              Fri Nov 23 12:14 2012  -
tank1/zvol  used                  30.9G                  -
tank1/zvol  available             58.6G                  -
tank1/zvol  referenced            72K                    -
tank1/zvol  compressratio         1.00x                  -
tank1/zvol  reservation           none                   default
tank1/zvol  volsize               30G                    local
tank1/zvol  volblocksize          8K                     -
tank1/zvol  checksum              on                     default
tank1/zvol  compression           off                    default
tank1/zvol  readonly              off                    default
tank1/zvol  copies                1                      default
tank1/zvol  refreservation        30.9G                  local
tank1/zvol  primarycache          all                    default
tank1/zvol  secondarycache        all                    default
tank1/zvol  usedbysnapshots       0                      -
tank1/zvol  usedbydataset         72K                    -
tank1/zvol  usedbychildren        0                      -
tank1/zvol  usedbyrefreservation  30.9G                  -
tank1/zvol  logbias               latency                default
tank1/zvol  dedup                 off                    default
tank1/zvol  mlslabel              none                   default
tank1/zvol  sync                  standard               default
tank1/zvol  refcompressratio      1.00x                  -
tank1/zvol  written               72K                    -
# ls -l /dev/zvol/tank1/zvol
lrwxrwxrwx 1 root root 9 Nov 23 12:15 /dev/zvol/tank1/zvol -> ../../zd0
# ls -l /dev/zd0
brw-rw---- 1 root disk 230, 0 Nov 23 12:15 /dev/zd0

As shown above, /dev/zd0 is created.
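Incidentally, the refreservation of 30.9G above means the full volume size is reserved from the pool up front. As a minimal sketch (not something I ran in this test), a thin-provisioned zvol can be created with the -s option, and the volblocksize can be set at creation time with -b; the dataset name below is hypothetical:

# Sketch only: create a sparse zvol with a 4K block size instead of the
# default 8K. "tank1/zvol_sparse" is a hypothetical name; -s skips the
# full-size refreservation, so the pool is not charged the whole 30G up front.
zfs create -s -b 4K -V 30G tank1/zvol_sparse
zfs get volsize,volblocksize,refreservation tank1/zvol_sparse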
This time, I formatted the resulting block device as ext4 and measured its performance.
# mkfs -t ext4 /dev/zvol/tank1/zvol
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=2 blocks, Stripe width=2 blocks
1966080 inodes, 7864320 blocks
393216 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
# mount /dev/zvol/tank1/zvol /mnt_tank1_zvol/
# df /mnt_tank1_zvol/
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/zd0              30963708    176064  29214780   1% /mnt_tank1_zvol

For good measure, I tried the three I/O schedulers (cfq, noop, and deadline); deadline gave the best results, so only those are shown here.
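Note that a mount done by hand like this does not survive a reboot. A hedged /etc/fstab example, assuming the /mnt_tank1_zvol mount point from above:

# Hypothetical /etc/fstab entry; uses the stable /dev/zvol/ path rather
# than /dev/zd0, since the zdN numbering can change between boots.
/dev/zvol/tank1/zvol  /mnt_tank1_zvol  ext4  defaults  0 0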
# echo deadline > /sys/block/sdd/queue/scheduler
# cat /sys/block/sdd/queue/scheduler
noop anticipatory [deadline] cfq
# bonnie++ -u root -d /mnt_tank1_zvol/
...
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xxxx         15464M   940  96 40404   4 19568   3  2630  60 68826   5  4712  63
Latency             14844us    4526ms    4292ms     400ms     417ms    6134us
Version  1.96       ------Sequential Create------ --------Random Create--------
xxxx                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20098  18 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               108us     927us     473us     169us      25us      54us
1.96,1.96,xxxx,1,1353630877,15464M,,940,96,40404,4,19568,3,2630,60,68826,5,4712,63,16,,,,,20098,18,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,14844us,4526ms,4292ms,400ms,417ms,6134us,108us,927us,473us,169us,25us,54us

Compared with the performance of the underlying ZFS pool (tank1), this is roughly a 25% drop in sequential write throughput.
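As an aside, the echo into /sys is also lost at reboot. A rough sketch for re-applying the scheduler at boot (for example from rc.local); only sdd appears in the test above, and the other device names are hypothetical placeholders:

# Sketch: set deadline on each member disk of the pool at boot.
# Device names other than sdd are hypothetical placeholders.
for dev in sdb sdc sdd; do
    echo deadline > /sys/block/${dev}/queue/scheduler
done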