Hi,
Have we got to the stage where the default install options for Fedora 26 work fine with an SSD, or should I be tweaking something?
At the moment I've let it install with whatever parameters it does by itself. All I did was change from using LVM to EXT4 (I see absolutely no advantage to using LVM on a single-disc computer, and a lot of serious annoyances should I pull out a drive to read it on another computer). And went with the boot, system root, and swap partitions that it offers.
On 5/8/17 at 9:36, Tim wrote:
> Hi,
> Have we got to the stage where the default install options for Fedora 26 work fine with a SSD, or should I be tweaking something?
> At the moment I've let it install with whatever parameters it does by itself. All I did was change from using LVM to EXT4 (I see absolutely no advantage to using LVM on a single-disc computer, and a lot of serious annoyances should I pull out a drive to read it on another computer). And went with the boot, system root, and swap partitions that it offers.
Enable the fstrim timer: "sudo systemctl enable fstrim.timer" to get TRIM done periodically.
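A minimal sketch of setting that up and checking it, assuming a stock systemd-based Fedora install (the fstrim timer unit ships with util-linux):

```shell
# Enable the timer and start it immediately (it runs fstrim periodically,
# weekly by default)
sudo systemctl enable --now fstrim.timer

# Confirm it is scheduled
systemctl list-timers fstrim.timer

# Optional: do one manual pass now, verbosely, on all supported mounts
sudo fstrim -av
```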
On Sat, 2017-08-05 at 15:47 +0200, José María Terry Jiménez wrote:
> On 5/8/17 at 9:36, Tim wrote:
> Enable the fstrim timer: "sudo systemctl enable fstrim.timer" to get TRIM done periodically.
Careful when using encrypted SSDs - it looks like this was written by one of the authors of the cryptsetup man page:
http://asalor.blogspot.de/2011/08/trim-dm-crypt-problems.html
Wolfgang
On Sat, 2017-08-05 at 17:31 +0200, Wolfgang Pfeiffer wrote:
> On Sat, 2017-08-05 at 15:47 +0200, José María Terry Jiménez wrote:
> > On 5/8/17 at 9:36, Tim wrote:
> > Enable the fstrim timer: "sudo systemctl enable fstrim.timer" to get TRIM done periodically.
> Careful when using encrypted SSD's - looks like this written by one of the authors of the cryptsetup man page:
> http://asalor.blogspot.de/2011/08/trim-dm-crypt-problems.html
Addendum: it *looks like* the Fedora 26 trim doesn't touch encrypted partitions by default when starting a "/usr/sbin/fstrim -av" via /usr/lib/systemd/system/fstrim.service
Here's the output I got when trying it manually on my encrypted /home partition:
# fstrim -v /home
fstrim: /home: the discard operation is not supported
So I hope fstrim didn't touch /home .... :)
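For what it's worth, you can check in advance which devices pass discard through the stack; on `lsblk`, zero values in the DISC-GRAN and DISC-MAX columns mean that layer (often the dm-crypt mapping) doesn't support discard, which is exactly when fstrim reports the error above:

```shell
# Show discard (TRIM) capability for every block device and
# device-mapper layer (e.g. dm-crypt mappings)
lsblk --discard
```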
If this is true then it's good Milan Broz is working for Redhat ... :)
Anyways: I'd recommend being careful with trim on encrypted SSDs
Wolfgang
On 08/05/2017 06:12 PM, Wolfgang Pfeiffer wrote:
> If this is true then good Milan Broz is working for Redhat ... :)
> Anyways: I'd recommend to be careful with trim on encrypted SSD's
He is also the main contact for the dm-crypt Fedora package.
https://admin.fedoraproject.org/pkgdb/package/rpms/cryptsetup/
Jonny
On Sat, 2017-08-05 at 17:06 +0930, Tim wrote:
> Hi,
> Have we got to the stage where the default install options for Fedora 26 work fine with a SSD, or should I be tweaking something?
Not that I know of problems with such an install. I had my first install of Fedora (24) on an msata SSD quite a few months ago, and I'm not aware of any problems that I thought could be related to SSD architecture. On a second SSD (a "real" SSD, not msata) in the same computer I have Debian Linux installed - I never even speculated about whether the fact that the disks are SSDs could lead to problems ... :) (Except that I've heard they break without much announcing of it ... )
> At the moment I've let it install with whatever parameters it does by itself. All I did was change from using LVM to EXT4 (I see absolutely no advantage to using LVM on a single-disc computer, and a lot of serious annoyances should I pull out a drive to read it on another computer). And went with the boot, system root, and swap partitions that it offers.
I think I let the Fedora installer wipe the SSD before partitioning it - also because MS Windows was on it before - just to make sure of a clean install. The only problem with this installation was that the installer, at the end of the install, said something like I wouldn't be able to boot the system. It was wrong: I could boot Fedora. Tho' I have two UEFI boot entries for Fedora, with different wordings. My guess: it's a UEFI bug, or a feature ... :)
And yes: I also didn't take the LVM partitioning options for the disk. Not for F24, and not, IINM, for Debian. On Fedora just SWAP, /home, / and /boot/efi.
HTH, Wolfgang
--
Trying out Thunderbird for mail. 5... 4... 3... 2... ONE! Email has gone

Boilerplate: All mail to this mailbox is automatically deleted, there is no point trying to privately email me, I only get to see the messages posted to the mailing list.
_______________________________________________
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-leave@lists.fedoraproject.org
On 8/5/17 5:36 PM, Tim wrote:
> Hi,
> Have we got to the stage where the default install options for Fedora 26 work fine with a SSD, or should I be tweaking something?
> At the moment I've let it install with whatever parameters it does by itself. All I did was change from using LVM to EXT4 (I see absolutely no advantage to using LVM on a single-disc computer, and a lot of serious annoyances should I pull out a drive to read it on another computer). And went with the boot, system root, and swap partitions that it offers.
I have been using Windows Drive C, Fedora /boot and Ubuntu /boot, all on the same SSD since Fedora 24 and haven't had any issues with functionality of that setup.
regards,
Steve
On 08/07/2017 02:30 PM, Stephen Morris wrote:
> On 8/5/17 5:36 PM, Tim wrote:
> > Hi,
> > Have we got to the stage where the default install options for Fedora 26 work fine with a SSD, or should I be tweaking something?
> > At the moment I've let it install with whatever parameters it does by itself. All I did was change from using LVM to EXT4 (I see absolutely no advantage to using LVM on a single-disc computer, and a lot of serious annoyances should I pull out a drive to read it on another computer). And went with the boot, system root, and swap partitions that it offers.
For many users, LVM is somewhat irrelevant. I use LVM because I often end up expanding filesystems by adding PVs to the VGs the LVs are built on (lots of acronyms there!), but my use cases are outside those of normal desktop users.
The "annoyances" when moving the drive to another system can be reduced by naming the LVs logically--typically I use the host name as part of the LV and VG names. Therefore if I have to move an LV from host "bigdog" to host "hamster" there's little chance of a name collision.
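As a sketch of that naming scheme (the device path, sizes, and the "bigdog" names are made up for illustration, not anything Fedora's installer produces):

```shell
# Create a PV, then a VG and LV whose names embed the host name "bigdog",
# so the disk can later be attached to another machine without colliding
# with that machine's own volume group names
sudo pvcreate /dev/sdb1
sudo vgcreate vg_bigdog /dev/sdb1
sudo lvcreate -n lv_bigdog_home -L 100G vg_bigdog
sudo mkfs.ext4 /dev/vg_bigdog/lv_bigdog_home
```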
> I have been using Windows Drive C, Fedora /boot and Ubuntu /boot, all on the same SSD since Fedora 24 and haven't had any issues with functionality of that setup.
My recommendations for SSD:

- Use ext4 filesystems (they don't poke the journal as much as JFS or Btrfs).
- Reduce swappiness ("vm.swappiness=1" in /etc/sysctl.conf).
- Put things that change a lot (swap, /var/log, /tmp) on rotating media or a RAMdisk to reduce writes to the SSD.
- Use fstrim periodically.

And MOST important:

- Back the sucker up REGULARLY AND OFTEN! SSDs tend to die suddenly and typically without much warning, and it's quite difficult (if not impossible) to recover any data on them.
I like SSDs. I like their speed. I don't trust them much with critical data. I back up my SSDs every night to rotating media. Yes, I'm paranoid, but it only takes one unrecoverable SSD to make one as crazy as I am.
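For the "put /tmp on a RAMdisk" part of the recommendations, a hypothetical /etc/fstab fragment might look like this (the size value is illustrative and assumes you have RAM to spare):

```
# /tmp in RAM: frequent small writes never touch the SSD
tmpfs  /tmp  tmpfs  defaults,size=2G,mode=1777  0 0
```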
I have to leave now. The nice men in the white coats are here...
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital   ricks@alldigital.com  -
- AIM/Skype: therps2  ICQ: 226437340  Yahoo: origrps2                -
-                                                                    -
-        Consciousness: that annoying time between naps.             -
----------------------------------------------------------------------
Tim:
> Have we got to the stage where the default install options for Fedora work fine with a SSD, or should I be tweaking something?
Rick Stevens:
> For many users, LVM is somewhat irrelevant. I use LVM because I often end up expanding filesystems by adding PVs to the VGs the LVs are built on (lots of acronyms there!), but my use cases are outside the normal desktop users.
Yes, I think most users will only have a single drive. And if they do add another drive, chances are that they don't want to pretend that they're one bigger drive. Not to mention the fun and games of dealing with a system when half of a large virtual drive dies and takes out all your data.
> The "annoyances" when moving the drive to another system can be reduced by naming the LVs logically--typically I use the host name as part of the LV and VG names. Therefore if I have to move an LV from host "bigdog" to host "hamster" there's little chance of a name collision.
I've done that kind of thing in the past. But if you forget, name clashes aren't insurmountable (pun intended), but are a pain to have to deal with. But even with unique naming, just mounting LVM is more hassle than other schemes. You can't just double click on a drive icon and have the system work it out for you, as if you'd plugged in a USB stick. Or, at least on *my* system, that's never worked. I had to use command line tools to discover the drive and the names of its parts, then manually activate the volume group and mount the volume as two more steps.
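For reference, the manual sequence described here is roughly the following (a sketch; the VG/LV names are placeholders you'd learn from `lvs`, not fixed values):

```shell
# Find volume groups on the newly attached drive
sudo vgscan

# Activate them so the /dev/mapper nodes appear
sudo vgchange -ay

# List logical volumes to learn the VG/LV names
sudo lvs

# Mount one of them (names here are placeholders)
sudo mount /dev/mapper/somevg-somelv /mnt
```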
> My recommendations for SSD:
> Use ext4 filesystems (they don't poke the journal as much as JFS or Btrfs).
I went for EXT4 as being closer to what I'm familiar with. I think only BTRFS and LVM were the other two options presented by the installer.
> Reduce swappiness ("vm.swappiness=1" in /etc/sysctl.conf)
I'll have to look into that; I've read about it long ago, but forgotten all about it. The quick summary on Wikipedia gives a small range of example values, and 1 does sound like it's at the extreme end, though I don't know how much actual difference 1 versus 10 versus 60 makes.
On my system I have
cat /proc/meminfo
MemTotal:        4045280 kB
MemFree:         1792672 kB
MemAvailable:    2767436 kB
Buffers:          106176 kB
Cached:          1062820 kB
SwapCached:            0 kB
Active:          1410028 kB
Inactive:         572900 kB
Active(anon):     695352 kB
Inactive(anon):   163560 kB
Active(file):     714676 kB
Inactive(file):   409340 kB
Unevictable:          48 kB
Mlocked:              48 kB
SwapTotal:       8388604 kB
SwapFree:        8388604 kB
Dirty:               152 kB
Writeback:             0 kB
AnonPages:        813968 kB
Mapped:           377180 kB
Shmem:             44992 kB
Slab:             158444 kB
SReclaimable:     111848 kB
SUnreclaim:        46596 kB
KernelStack:        7952 kB
PageTables:        39580 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    10411244 kB
Committed_AS:    4347292 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      173696 kB
DirectMap2M:     4020224 kB
So I don't know what amount of using swap I need to be able to do.
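A way to experiment, assuming root access (10 is just an example value between Rick's suggested 1 and the usual default of 60):

```shell
# Current value (the usual Linux default is 60)
cat /proc/sys/vm/swappiness

# Try a lower value at runtime, without a reboot
sudo sysctl vm.swappiness=10

# Make it persistent across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```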
> Put things that change a lot (swap, /var/log, /tmp) on rotating media or RAMdisk to reduce writes to SSD
I only have one disk in the system, so swap is on the SSD, I see the system automatically set up a tmpfs for /tmp, and /var/log is just an ordinary directory.
> Use fstrim periodically
That I've briefly looked at, and it's unclear to me whether it's actually worth using.
> And MOST important:
> Back the sucker up REGULARLY AND OFTEN! SSDs tend to die suddenly and typically without much warning and it's quite difficult (if not impossible) to recover any data on them.
I tend to not store important stuff on client computers. It makes updating easier, and I can carry on working on stuff on any PC, rather than be tied to the one I stored things on.
> I like SSDs. I like their speed. I don't trust them much with critical data.
This is the first time I've used one. I was very surprised by the 13 second cold boot up of the new installation. I haven't had a computer come up that fast since my old Amiga 1200.
I wonder what the longevity is for SSDs that aren't being used (*), compared to hard drives.
* Such as backing up something and putting the SSD on a shelf.