So I have an existing pool that was created several years ago on an old build of FreeNAS, and I wanted to check and see if the ashift was set correctly for 4K, meaning I want an ashift=12 (2^12=4096). What does a quick Google tell me to do?
[root@server] ~# zpool get all | grep ashift
[root@server] ~#
Huh… nothing. That’s odd.
[root@server] ~# zpool get all
NAME          PROPERTY                       VALUE                                        SOURCE
Array1        size                           2.72T                                        -
Array1        capacity                       13%                                          -
Array1        altroot                        /mnt                                         local
Array1        health                         ONLINE                                       -
Array1        guid                           4640375899101559431                          default
Array1        version                        -                                            default
Array1        bootfs                         -                                            default
Array1        delegation                     on                                           default
Array1        autoreplace                    off                                          default
Array1        cachefile                      /data/zfs/zpool.cache                        local
Array1        failmode                       continue                                     local
Array1        listsnapshots                  off                                          default
Array1        autoexpand                     on                                           local
Array1        dedupditto                     0                                            default
Array1        dedupratio                     1.00x                                        -
Array1        free                           2.36T                                        -
Array1        allocated                      364G                                         -
Array1        readonly                       off                                          -
Array1        comment                        -                                            default
Array1        expandsize                     -                                            -
Array1        freeing                        0                                            default
Array1        fragmentation                  12%                                          -
Array1        leaked                         0                                            default
Array1        feature@async_destroy          enabled                                      local
Array1        feature@empty_bpobj            active                                       local
Array1        feature@lz4_compress           active                                       local
Array1        feature@multi_vdev_crash_dump  enabled                                      local
Array1        feature@spacemap_histogram     active                                       local
Array1        feature@enabled_txg            active                                       local
Array1        feature@hole_birth             active                                       local
Array1        feature@extensible_dataset     enabled                                      local
Array1        feature@embedded_data          disabled                                     local
Array1        feature@bookmarks              enabled                                      local
Array1        feature@filesystem_limits      disabled                                     local
Array1        feature@large_blocks           disabled                                     local
freenas-boot  size                           14.2G                                        -
freenas-boot  capacity                       5%                                           -
freenas-boot  altroot                        -                                            default
freenas-boot  health                         ONLINE                                       -
freenas-boot  guid                           11011409209729808822                         default
freenas-boot  version                        -                                            default
freenas-boot  bootfs                         freenas-boot/ROOT/9.10-STABLE-201604261518   local
freenas-boot  delegation                     on                                           default
freenas-boot  autoreplace                    off                                          default
freenas-boot  cachefile                      -                                            default
freenas-boot  failmode                       wait                                         default
freenas-boot  listsnapshots                  off                                          default
freenas-boot  autoexpand                     off                                          default
freenas-boot  dedupditto                     0                                            default
freenas-boot  dedupratio                     1.00x                                        -
freenas-boot  free                           13.5G                                        -
freenas-boot  allocated                      773M                                         -
freenas-boot  readonly                       off                                          -
freenas-boot  comment                        -                                            default
freenas-boot  expandsize                     -                                            -
freenas-boot  freeing                        0                                            default
freenas-boot  fragmentation                  -                                            -
freenas-boot  leaked                         0                                            default
freenas-boot  feature@async_destroy          enabled                                      local
freenas-boot  feature@empty_bpobj            active                                       local
freenas-boot  feature@lz4_compress           active                                       local
freenas-boot  feature@multi_vdev_crash_dump  disabled                                     local
freenas-boot  feature@spacemap_histogram     disabled                                     local
freenas-boot  feature@enabled_txg            disabled                                     local
freenas-boot  feature@hole_birth             disabled                                     local
freenas-boot  feature@extensible_dataset     disabled                                     local
freenas-boot  feature@embedded_data          disabled                                     local
freenas-boot  feature@bookmarks              disabled                                     local
freenas-boot  feature@filesystem_limits      disabled                                     local
freenas-boot  feature@large_blocks           disabled                                     local
Looking at the “zpool get all” output, it looks like the ashift property is missing entirely. I’m guessing that ashift wasn’t specified at pool creation, leaving it up to the drive to report whether it is a 4K drive or not. Another round of Google reveals the “zdb -C” command. Let’s try that.
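As an aside, you can avoid relying on drive detection by forcing the value up front. A hedged sketch, not from this article: the pool name “tank” and the device paths are placeholders, and the two approaches apply to different platforms.

```shell
# OpenZFS (Linux / modern FreeBSD): request 4K sectors explicitly at creation.
# "tank" and the device paths below are placeholders.
zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc

# FreeBSD/FreeNAS of this era instead honored a sysctl floor for new vdevs:
sysctl vfs.zfs.min_auto_ashift=12
```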
[root@server] ~# zdb -C | grep ashift
            ashift: 9
Huh. ashift=9 means 512-byte sectors… and that’s no good. But wait…
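Since ashift is just the base-2 logarithm of the sector size, converting between the two is plain arithmetic, which a quick shell loop can illustrate:

```shell
# ashift is log2(sector size), so sector size = 2^ashift.
for a in 9 12 13; do
  echo "ashift=$a -> $((1 << a))-byte sectors"
done
# ashift=9 -> 512-byte sectors
# ashift=12 -> 4096-byte sectors
# ashift=13 -> 8192-byte sectors
```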
[root@server] ~# zdb -C
freenas-boot:
    version: 5000
    name: 'freenas-boot'
    state: 0
    txg: 95367
    pool_guid: 11011409209729808822
    hostid: 2882373074
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11011409209729808822
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14099280634272200775
            path: '/dev/da0p2'
            whole_disk: 1
            metaslab_array: 30
            metaslab_shift: 27
            ashift: 9
            asize: 15370551296
            is_log: 0
            create_txg: 4
    features_for_read:
zdb is only listing the USB boot drive. Back to Google and we get…
[root@server] ~# zdb -U /data/zfs/zpool.cache
Array1:
    version: 5000
    name: 'Array1'
    state: 0
    txg: 8520213
    pool_guid: 4640375899101559431
    hostid: 2882373074
    hostname: 'server.workgroup'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4640375899101559431
        children[0]:
            type: 'raidz'
            id: 0
            guid: 7207720561268687283
            nparity: 1
            metaslab_array: 35
            metaslab_shift: 34
            ashift: 12
            asize: 2994157387776
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 2714791811722168437
                path: '/dev/gptid/cca60fe6-5031-11e4-9120-001bb9ed2d38'
                whole_disk: 1
                DTL: 204
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 6091442355483197824
                path: '/dev/gptid/cd472cc3-5031-11e4-9120-001bb9ed2d38'
                whole_disk: 1
                DTL: 163
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 1150763296991461133
                path: '/dev/gptid/21338e8c-25ea-11e6-a898-3cd92b0298a8'
                whole_disk: 1
                DTL: 377
                create_txg: 4
    features_for_read:
        com.delphix:hole_birth
There we go. Now let’s narrow that down.
[root@server] ~# zdb -U /data/zfs/zpool.cache | grep ashift
            ashift: 12
And we have our answer.
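If you want more than the bare number, the grep can be tightened into a tiny awk one-liner. A minimal sketch: the here-doc below is a captured fragment standing in for live output, which on a real system you would get by piping `zdb -U /data/zfs/zpool.cache` in directly.

```shell
# On a live system, replace the here-doc with:
#   zdb -U /data/zfs/zpool.cache | awk '/ashift:/ {print "ashift = " $2}'
awk '/ashift:/ {print "ashift = " $2}' <<'EOF'
    vdev_tree:
        children[0]:
            metaslab_shift: 34
            ashift: 12
EOF
# ashift = 12
```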
Thanks for this. Very helpful
Thanks for sharing the process. Exactly what I needed to see.
[…] Checking ashift on existing pools […]
Why not use zpool list?
root@pve2:/var/lib/vz# zpool list -v -o ashift
ASHIFT
12
mirror 928G 424G 504G – – 11% 45.6% – ONLINE
ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y0VTJNJ3-part3 – – – – – – – – ONLINE
ata-ST1000DM003-1SB102_Z9AFJBLF-part3 – – – – – – – – ONLINE
12
nvme-THNSF5256GPUK_TOSHIBA_47BS145STAMT 238G 61.7G 176G – – 34% 25.9% – ONLINE
12
nvme-INTEL_SSDPEKKF256G8L_PHHP93560B4W256B 238G 133G 105G – – 36% 55.9% – ONLINE
Thank you for this. It’s old, but still good. For me “zpool get ashift” was reporting 0, which tells me the value was auto-detected, but not which value was detected. With “zdb -C” I was able to get the actual value, which to my relief was 12.