From: Patrick O'Callaghan <pocallaghan@gmail.com>
On Mon, 2022-03-07 at 23:35 -0500, R. G. Newbury wrote:
Has a 240G SSD for Fedora, and two 500G NVMe SSDs for storage (both in adapters, as the motherboard has no M.2 slots).
Somewhat OT, but do you notice a difference between the SSD and the NVMe+adapter combos? I don't have M.2 slots either, and wondered whether it would make sense for me (I already have a good SATA3 SSD).

poc
I wondered the same thing, and also whether it made a difference which PCIe x16 slot held which piece of hardware (including the Nvidia GT9700 video card). I first ran a test transferring 3GB from a RAM tmpfs to each drive *using rsync*; that was too small a test to show any substantive difference. Spurred by your request I ran it again, writing 23GB to each storage unit. I used rsync on a folder containing many files of various sizes, as a real-world test.

Transfer of 23G using rsync -a:

SSD    Crucial (2012), Disk model: M4-CT256M4SSD2
       Tue 08 Mar 2022 09:14:41 PM EST -> Tue 08 Mar 2022 09:19:38 PM EST   Diff 4:57
NVME0  WD Black SN750 (2019), Disk model: WDS500G3X0C-00SJG0
       Tue 08 Mar 2022 09:19:38 PM EST -> Tue 08 Mar 2022 09:22:45 PM EST   Diff 3:07
HD     WD Red (2012), Disk model: WDC WD10EFRX-68F
       Tue 08 Mar 2022 09:22:45 PM EST -> Tue 08 Mar 2022 09:28:56 PM EST   Diff 6:11
NVME1  Samsung SSD 970 EVO Plus 500GB
       Tue 08 Mar 2022 09:28:56 PM EST -> Tue 08 Mar 2022 09:31:59 PM EST   Diff 3:03
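For concreteness, a minimal sketch of one way to run this kind of timed transfer (the mount points /mnt/ramdisk and /mnt/nvme0 and the folder name are illustrative assumptions, not the actual paths used above; the trailing sync makes the second timestamp wait until buffered data has reached the drive):

$ date; rsync -a /mnt/ramdisk/testset/ /mnt/nvme0/testset/; sync; date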
So yes, the NVMe drives were substantially faster than the (now old) SSD, transferring in about 60% of the SSD's time. I expected a greater difference between the two NVMe drives, given that the Samsung 970 touts a substantially higher write capability. NVME0 is mounted in a StarTech adapter (cost $13.00 Cdn about a year ago). NVME1 is mounted in an axGear adapter, purchased through Best Buy for a staggering $10.99 Cdn on sale, with free shipping! So the M.2 plus adapter cost me $102 including tax. Very happy. Recommended.
The only caveat is that you need a free long PCIe x16 slot, although each adapter/M.2 only uses x4 (the four lanes each M.2 drive uses), so slot sharing between multiple adapters is not actually a problem. I am going to swap the video card around to see whether that makes a difference, although I doubt it will.
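As an aside, if you do shuffle cards between slots, one way to confirm what link each adapter actually negotiated is to check the PCIe link status (a sketch; the device address is a placeholder taken from the first command's output):

$ lspci | grep -i 'non-volatile'
$ sudo lspci -vv -s <pci-address> | grep -E 'LnkCap|LnkSta'

LnkSta showing something like "Speed 8GT/s, Width x4" confirms the adapter is getting its four lanes.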
Available here until March 10 at $10.99: https://www.bestbuy.ca/en-ca/product/axgear-m-2-nvme-ssd-ngff-to-pci-e-adapt...
If you want more storage, the Asus Ultra Quad at around $90 will take four M.2 drives up to 2280 size and supports a RAID storage setup.
HTH Geoff
On Tue, 2022-03-08 at 22:53 -0500, R. G. Newbury wrote:
So yes, the NVMe drives were substantially faster than the (now old) SSD, transferring in about 60% of the SSD's time.
Thanks for the detailed info. My current SSD is a Samsung 860 EVO (2TB) so for comparison I did this:
$ time dd if=/dev/zero bs=1G count=23 of=Big
23+0 records in
23+0 records out
24696061952 bytes (25 GB, 23 GiB) copied, 14.9873 s, 1.6 GB/s

real	0m15.087s
user	0m0.000s
sys	0m14.640s
However that's clearly not a reflection of actual I/O speed as the writes will have been cached.
poc
On 3/9/22 02:44, Patrick O'Callaghan wrote:
However that's clearly not a reflection of actual I/O speed as the writes will have been cached.
You can use "conv=fdatasync" to make sure all data is written before giving the final result.
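For example (a sketch; the file name and size are illustrative), either of these keeps the flush inside the timed region:

$ time dd if=/dev/zero bs=1G count=23 conv=fdatasync of=Big
$ time sh -c 'dd if=/dev/zero bs=1G count=23 of=Big && sync'

conv=fdatasync has dd call fdatasync() on the output file before it exits; the second form uses an explicit sync instead.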
On Wed, 2022-03-09 at 11:25 -0800, Samuel Sieb wrote:
You can use "conv=fdatasync" to make sure all data is written before giving the final result.
Didn't seem to make a difference:
$ time dd if=/dev/zero bs=1G count=23 conv=fdatasync of=Big
23+0 records in
23+0 records out
24696061952 bytes (25 GB, 23 GiB) copied, 15.3153 s, 1.6 GB/s

real	0m15.375s
user	0m0.001s
sys	0m14.932s
poc
On 3/10/22 02:47, Patrick O'Callaghan wrote:
Didn't seem to make a difference:
24696061952 bytes (25 GB, 23 GiB) copied, 15.3153 s, 1.6 GB/s
Are you using btrfs with compression enabled?
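A quick way to check (a sketch; replace /home with whatever filesystem the test file lives on) is to look at the mount options actually in effect, or at the fstab entry:

$ findmnt -no OPTIONS /home
$ grep btrfs /etc/fstab

A compress= or compress-force= entry in the options means compression is on.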
On Thu, 2022-03-10 at 11:13 -0800, Samuel Sieb wrote:
Are you using btrfs with compression enabled?
Btrfs in the default F35 configuration, so no compression.
poc
On Thu, 2022-03-10 at 22:40 +0000, Patrick O'Callaghan wrote:
Btrfs in the default F35 configuration, so no compression.
Oops. It actually has "compress=zstd:1" in the fstab line.
Apologies. That completely invalidates the numbers.
poc
On Thu, 10 Mar 2022 at 18:43, Patrick O'Callaghan <pocallaghan@gmail.com> wrote:
Oops. It actually has "compress=zstd:1" in the fstab line.
Apologies. That completely invalidates the numbers.
Not completely invalid: they still say something about a real-world use case (I work with optical remote sensing, where many images have big blocks of "missing" data codes, e.g. clouds), but the interpretation changes. We have been using NetCDF-4 files with internal compression, but now I'm motivated to compare without compression on btrfs for "scratch" files that don't move over a network.
On Thu, 2022-03-10 at 19:47 -0400, George N. White III wrote:
Not completely invalid: they still say something about a real-world use case, but the interpretation changes.
I'm calling it invalid because the data is a stream of zeroes, i.e. it's pretty much maximally compressible.
This might be more realistic, using /dev/urandom:
$ time dd if=/dev/urandom bs=1M count=23000 of=Big
23000+0 records in
23000+0 records out
24117248000 bytes (24 GB, 22 GiB) copied, 81.9688 s, 294 MB/s

real	1m22.106s
user	0m0.040s
sys	1m21.753s
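Worth noting: almost all of that run is system time, which suggests much of it is the cost of generating the random bytes rather than the SSD write itself. A sketch of separating the two (assuming somewhere with ~23GB of room to stage the file, ideally a different device or a tmpfs; the path is a placeholder):

$ dd if=/dev/urandom bs=1M count=23000 of=/staging/random.bin   # one-off, not timed
$ time dd if=/staging/random.bin bs=1M of=Big conv=fdatasync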
poc
Note that with NVMe drives, or indeed any NAND flash, you have to know what technology is in use when writing data to the drive. In particular, we have noticed that some of the latest large storage devices use TLC (three bits per cell) technology. Writing to TLC cells is relatively slow, so most NVMe "drives" actually write PCIe -> RAM -> SLC first, then in the background move SLC -> TLC. An area of the NAND flash is configured/used as SLC (one bit per cell), which can be written at high speed. Later (or when this SLC area is full) the "drive" starts moving the data to TLC (probably the same cells, now used in TLC mode).
The result of this is that you can see a fast burst for a few hundred MBytes, and then the drive slows dramatically, depending on the "drive" type, its size, how full it is, and how the manufacturer's firmware handles this. This is fine for typical use, but for streaming large amounts of data, or for benchmarks like these, it can rear its head. Modern MLC-based drives typically don't see this drop-off in write speed.
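One way to watch for that effect (a sketch; the size is illustrative and it assumes a filesystem without forced compression) is a long sustained write with progress reporting, keeping an eye on whether the rate drops partway through:

$ dd if=/dev/zero of=Big bs=1M count=50000 oflag=direct status=progress

oflag=direct bypasses the page cache, so the rate reported by status=progress tracks the device rather than RAM.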
Terry