Everyone:
If you have followed my threads about:
SMB failing with F27
and
system hanging and requiring repeated restarts,
then you've seen people suggest replacing my 1 TB HDD with an SSD. I acquired a 1 TB SSD and then tried to clone the HDD to the SSD. The clone /failed/. Reason: the disk is already showing some bad sectors. The outputs of smartctl and fsck make that undeniably clear.
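For anyone checking a drive the same way: the standard tool for reading SMART health attributes is smartctl from the smartmontools package. A minimal sketch of pulling the two relevant counters out of a saved report; the sample report and all its values are invented for illustration:

```shell
# Invented excerpt of a 'smartctl -A /dev/sda' report, saved to a file;
# the attribute names are standard SMART fields, the values are made up.
cat > /tmp/smart_sample.txt <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   097   097   036    Pre-fail  128
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   24
EOF

# Non-zero raw values on these two attributes mean the drive is remapping
# (or waiting to remap) bad sectors -- the failure mode described above.
awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}' /tmp/smart_sample.txt
```

On a live system the equivalent would be running smartctl -A as root against the failing disk and checking the same two attributes.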
On the advice of a professional installer, I have since acquired an additional SSD (capacity 120 GB) and am now acquiring a mounting bracket and some power and SATA data cables. I also downloaded the F27 KDE Plasma 5 Spin as an ".iso" image.
My plan is to install F27 "clean" on the two SSD's, mounting the 120 GB SSD at root ("/") and the 1 TB SSD at /home. I must then migrate my data: browser cookies (Google Chrome, Firefox); e-mail accounts, saved messages, and other settings (Thunderbird); and documents, pictures, music, videos, and various downloads from the HDD to the SSD.
This machine has 8 GB of memory on board.
I now ask the community for some suggestions.
First, for partitioning:
1. Should I even try to accept /automatic/ partitioning when the installer gets to that point?
2. Is 120 GB large enough for the information on the other directories besides /home?
3. Should I create a separate /boot partition on the smaller SSD, and if so, how large should I make it?
4. How large should the swap partition be, and where should I put it? (That is, on the 120 GB or the 1 TB drive)?
5. In general, should I place a partition for anything other than /home on the 1 TB SSD?
Now, as regards data migration: I have three user accounts to migrate, plus another directory on /home called "lost+found."
1. Should I even try to migrate "lost+found," and if so, how?
2. I have at least two choices for migrating data and settings from the various user accounts--three for some of them.
a. Connect the HDD to the SATA bus /after/ installing F27, and then force-copying everything out of each /home directory to its corresponding directory on the new configuration. (What command(s) would you recommend using, and with what options/switches/etc.?)
b. Connect a large external HDD through a USB interface, transfer all the data to it before modifying the hardware, then re-transfer it to the system after installing the SSD's and F27.
c. Migrate the data to its "temporary refuge" over a Samba network (possibly do-able for at least one account, and that's the biggest account) and then re-migrate to the new system?
Which choice would you recommend?
3. Is it worth migrating every single hidden file or folder? Or should I select only those folders that I know contain customization, account, or similar settings, plus my saved documents/pictures/music/videos, and migrate those?
Thanks in advance.
Temlakos
[snip] | I now ask the community for some suggestions.
I have done this type of setup on my systems before, so for what it's worth I will share how I installed and, where applicable, why.
| First, for partitioning:
|
| 1. Should I even try to accept /automatic/ partitioning when the installer gets to that point?

No. In custom partitioning, choose the 120 GB drive; the automatic choices may be fine, but my layout was to mount /boot, swap, /tmp, and /. For reasons related to HDDs and rpms that was the order; for an SSD, not so much. The second drive I mounted on /Crypt [or some other name you want].
| 2. Is 120 GB large enough for the information on the other directories besides /home?

Yes. In my experience this has been more than enough. /home was dealt with differently and the data ended up on /Crypt.
| 3. Should I create a separate /boot partition on the smaller SSD, and if so, how large should I make it?

Yes. See 1 above. The default size should be fine; a small /boot helps keep you honest about cleaning out old kernels regularly.
| 4. How large should the swap partition be, and where should I put it? (That is, on the 120 GB or the 1 TB drive)?

I always went with 2x RAM. See 1 above.
| 5. In general, should I place a partition for anything other than /home on the 1 TB SSD?

This will explain how/why I put /home on the 120 [smaller drive]. Through the use of hard/soft links to folders in /Crypt, I connected the data files I wanted to preserve on /Crypt. This use of links kept data writing to /Crypt and in so doing kept it separate from the OS drive. So /home/user1/Documents --> /Crypt/user1/Documents, /home/user1/Pictures --> /Crypt/user1/Pictures, etc. The link was invisible to the user. The data files from software can likewise be linked, /home/user1/.thunderbird --> /Crypt/user1/.thunderbird, which was great for recovering the mail client and other software.

This set-up was born of having put /home on /Crypt at first; if you migrated to a new distro or recovered from a failure, you tended to inherit artifacts which the new system choked on. This process proved to be a cleaner foundation from which to recover/reinstall. One had only to reinstall a clean OS on the 120, then re-link; the data was never touched during the installation process. It proved so effective that I preferred to do clean installs from one OS iteration to the next as opposed to upgrading. There are some pros/cons to soft/hard links, so research the trade-offs.
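A minimal sketch of the layout Fred describes, using a throwaway directory in place of the real /Crypt and /home; "user1" and the folder names are placeholders:

```shell
# Stand-in root so the sketch can run anywhere without touching real drives.
base=$(mktemp -d)
mkdir -p "$base/Crypt/user1/Documents" "$base/home/user1"

# One symbolic link per data folder: ~/Documents lives on the big drive.
ln -s "$base/Crypt/user1/Documents" "$base/home/user1/Documents"

# The user just sees ~/Documents; the write actually lands under /Crypt.
touch "$base/home/user1/Documents/notes.txt"
ls "$base/Crypt/user1/Documents"
```

After a reinstall of the OS drive, only the links need re-creating; the files under /Crypt are never touched.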
| Now, as regards data migration: I have three user accounts to migrate, plus another directory on /home called "lost and found."
|
| 1. Should I even try to migrate "lost and found," and if so, how?

Can't give an honest answer; I never bothered.
| 2. I have at least two choices for migrating data and settings from the various user accounts--three for some of them.

Personally, I spent the $40 US on an external case for my old drive and moved the items I wanted piecemeal. When done, I did a DoD wipe of the drive and reformatted it as an external B/U drive.
| a. Connect the HDD to the SATA bus /after/ installing F27, and then force-copying everything out of each /home directory to its corresponding directory on the new configuration. (What command(s) would you recommend using, and with what options/switches/etc.?)

This will take with it artifacts which could cause issues IMO.
| b. Connect a large external HDD through a USB interface, transfer all the data to it before modifying the hardware, then re-transfer it to the system after installing the SSD's and F27.

Since you already have the two drives for the system, an external case is a better option IMO. If you have spare hardware, you could mount the drive in a separate system and have the beginnings of a NAS, but that is another project. An external case would be the path of least resistance here IMO.
| c. Migrate the data to its "temporary refuge" over a Samba network (possibly do-able for at least one account, and that's the biggest account) and then re-migrate to the new system?

Unless you are integrating with Windows, I don't see the need for Samba; Linux has several protocols to serve this capacity. My fav is sftp on the internal network, as it uses the users' existing credentials.
| Which choice would you recommend?
| 3. Is it worth migrating every single hidden file or folder? Or should I select only those folders that I know contain customization, account, or similar settings, plus my saved documents/pictures/music/videos, and migrate those?

Hopefully the above answered this question. While it seems a bit much to do, the long-term benefits proved this method was worth the trouble. Hope it helps you a bit.
| Thanks in advance. | -- Fred
On Sat, 16 Dec 2017 10:13:55 -0500 fred roller fredroller66@gmail.com wrote:
I have done this type of set up on my systems before so what its worth I will share how I installed and where applicable, why.
Good read. I'll keep it around for future reference.
| 5. In general, should I place a partition for anything other than /home on the 1 TB SSD? [snip: Fred's explanation of linking /home data folders to /Crypt]
Mostly I replied because I wanted to give a thumbs up to doing things this way. I also do this, and it makes everything so much easier, safer, and more convenient.
On 12/16/2017 10:13 AM, fred roller wrote:
[snip] | I now ask the community for some suggestions.
I have done this type of set up on my systems before so what its worth I will share how I installed and where applicable, why.
| First, for partitioning: | | 1. Should I even try to accept /automatic/ partitioning when the installer gets to that point? No. In custom choose the 120 GB drive and auto choices may be fine but mine was to mount /boot, /swap, /tmp, and /. For reasons related to HDD and rpm's that was the order; for SDD not so much. The second drive I mounted on /Crypt [or some other name you want].
What do you recommend as the sizes of partitions /boot and /tmp? Obviously "/" will take up "all the rest." /swap will take up 16 GB. I used 50 GB for /boot. But I never broke out /tmp as a separate partition.
| 2. Is 120 GB large enough for the information on the other directories besides /home? Yes. In my experience this has been more than enough. /home was dealt with differently and the data ended up on /Crypt.
Good. Then I'll accept the 120 GB SSD as a good "system drive."
| 3. Should I create a separate /boot partition on the smaller SSD, and if so, how large should I make it? Yes. See 1 above. default size should be fine as it helps keep you honest and clean boot regularly.
I'll check out the default size--I think it was 50 GB to begin with. /tmp I don't know about.
| 4. How large should the swap partition be, and where should I put it? (That is, on the 120 GB or the 1 TB drive)? I always went with 2x RAM. See 1 above.
Agreed. I always did the same as well.
| 5. In general, should I place a partition for anything other than /home on the 1 TB SSD? [snip: Fred's explanation of linking /home data folders to /Crypt]
Wouldn't mounting the big SSD as /home accomplish the same thing, i.e., keeping user data on a drive physically separate and apart from the system drive?
| Now, as regards data migration: I have three user accounts to migrate, plus another directory on /home called "lost and found." | | 1. Should I even try to migrate "lost and found," and if so, how? Can't give an honest answer, I never bothered.
I'll look through it, if I can get the permissions. Maybe "sudo" can get me in there.
| 2. I have at least two choices for migrating data and settings from the various user accounts--three for some of them. Personally, I spent the $40 US on a external case for my old drive and moved piecemeal the items I wanted. When done I did a DoD wipe of the drive and reformatted for an external B/U drive.
| a. Connect the HDD to the SATA bus /after/ installing F27, and then force-copying everything out of each /home directory to its corresponding directory on the new configuration. (What command(s) would you recommend using, and with what options/switches/etc.?) This will take with it artifacts which could cause issues IMO.
All right, then. I've rejected that plan.
| b. Connect a large external HDD through a USB interface, transfer all the data to it before modifying the hardware, then re-transfer it to the system after installing the SSD's and F27. Since you already have the two drives for the system an external case is a better option IMO. If you have spare hardware then you could mount the drive in a separate system and you have the beginnings of a NAS but that is another project. An external case would be the path of least resistance here IMO.
I can appreciate that up to a point. But for about $110 US plus tax, I just ordered a 4 TB USB portable HDD. That I plan to use as a backup for this system and possibly this /and/ a Windows system. So now my plan is to back up my user data and key user application configuration files to this external HDD (Western Digital Passport Ultra, for anyone keeping score on vendors), and /then/ modify the hardware.
| c. Migrate the data to its "temporary refuge" over a Samba network (possibly do-able for at least one account, and that's the biggest account) and then re-migrate to the new system? Unless you are integrating with windows, I don't see the need for Samba, Linux has several protocols to serve this capacity. My fav is sftp on the internal network as it uses the users' existing credentials.
Well, I wouldn't say I'm "integrating" with Windows--I'm not sure what you mean by that. But in any case I've already figured out that using a portable HDD for backup is the way forward. Particularly since portable HDD's give so much more bang for the buck than they once did.
| Which choice would you recommend?
| 3. Is it worth migrating every single hidden file or folder? Or should I select only those folders that I know contain customization, account, or similar settings, plus my saved documents/pictures/music/videos, and migrate those? Hopefully the above answered this question. While seems a bit to do, the long term benefits proved this method was worth the trouble. Hope it helps you a bit.
Thank you. Anyway, that's decided. The new hardware and accessories are either in my possession or on order.
Temlakos
On 12/16/2017 12:59 PM, stan wrote:
On Sat, 16 Dec 2017 10:13:55 -0500 fred roller fredroller66@gmail.com wrote:
I have done this type of set up on my systems before so what its worth I will share how I installed and where applicable, why.
Good read. I'll keep it around for future reference.
| 5. In general, should I place a partition for anything other than /home on the 1 TB SSD? [snip: Fred's explanation of linking /home data folders to /Crypt]
Mostly I replied because I wanted to give a thumbs up to doing things this way. I also do this, and it makes everything so much easier, and safer, and convenient. _______________________________________________ users mailing list -- users@lists.fedoraproject.org To unsubscribe send an email to users-leave@lists.fedoraproject.org
Stan:
How exactly do you manage mounting the larger drive under a different name (whether /crypt or some other name) and setting up/maintaining the link structure? Seems to me you have to rebuild it every time you (a) reinstall the OS or (b) add or remove users. It also seems to me that mounting the larger drive as /home accomplishes the same goal. Why doesn't it?
Temlakos
On Sat, 16 Dec 2017 13:09:47 -0500 Temlakos temlakos@gmail.com wrote:
On 12/16/2017 12:59 PM, stan wrote:
On Sat, 16 Dec 2017 10:13:55 -0500 fred roller fredroller66@gmail.com wrote:
| 5. In general, should I place a partition for anything other than /home on the 1 TB SSD? [snip: Fred's explanation of linking /home data folders to /Crypt]
Stan:
How exactly do you manage mounting the larger drive under a different name (whether /crypt or some other name) and setting up/maintaining the link structure? Seems to me you have to rebuild it every time you (a) reinstall the OS or (b) add or remove users. It also seems to me that mounting the larger drive as /home accomplishes the same goal. Why doesn't it?
Temlakos
I think you meant this question for Fred, but I'll respond to at least some of it.
[mounting the larger drive] That's just creating a mount point under /mnt and an entry in /etc/fstab. When the system starts, the partition is mounted. Sure, the link structure has to be created when you add a user, but that's all of 5 minutes' work, at least for me:

1. Create the mount point.
2. Edit /etc/fstab to copy the setup line into the new system.
3. ln -s [mount point] [home mount name] for each directory in the mount you want to appear in home.

As you can see, I use symbolic links. This reminds me that there is a caveat for doing things this way: any cp or rsync has to be restricted to a single file system, or it will follow the links.
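Those steps might look like this in practice; the mount point, UUID, and directory names are invented placeholders, and stand-in directories are used so the sketch is runnable as-is:

```shell
# On a real system the mount point would live under /mnt, backed by an
# /etc/fstab line along the lines of (UUID is a placeholder):
#   UUID=xxxx-xxxx  /mnt/crypt  ext4  defaults  0 2
demo=$(mktemp -d)
mkdir -p "$demo/mnt/crypt/user1" "$demo/home/user1"

# One ln -s per directory you want to surface in home.
for d in Documents Pictures Music; do
    mkdir -p "$demo/mnt/crypt/user1/$d"
    ln -s "$demo/mnt/crypt/user1/$d" "$demo/home/user1/$d"
done

ls -l "$demo/home/user1"
# Caveat from above: rsync -a copies symlinks as symlinks (it does not
# follow them), and adding -x keeps a copy from crossing file systems.
```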
Fred answered your last question in the blurb above. But the TLDR is *cruft* and incompatibility. Data is always compatible with any program that can read it. But configurations for the tools that do read it might be different in different versions of the OS. So using an old home for a new version can lead to subtle problems. And safety. The data is never in danger during any install or upgrade as the partitions where it resides are never touched. The disk can even be unplugged during system maintenance with no issues.
Terry, Stan hit on the points perfectly; ty Stan. It is the artifacts you leave in /home/user that start causing anomalies; most go un-noticed except to a trained eye, but they can snowball over time. As for the re-linking: if you are comfortable, a script can be made to re-link, but I found this more trouble than it is worth. It is too easy to type the first set of commands, then up-arrow to repeat the process as needed. For me, catastrophic recovery took about 40-90 min under this set-up, depending on anything new I wanted to implement.
As for default sizes, I believe /boot was around 500 MB, not much, so 1 GB would suffice. Bearing in mind the OS is designed to take not much more than 15 GB total, if the numbers still hold, the size of /tmp depends on usage. At one point I gave /tmp 40-50 GB because I was working with heavy 30-50 MB RAW image files in GIMP, 8 or 9 at a time, so a large /tmp helped me there. Nowadays I check email and watch movies, so 10-20% of the remainder would easily suffice. If you can, watch /tmp on your current system via the command "watch 'ls -lh /tmp'" during a typical usage period, and see if your current quota is being used or staying mostly empty.
On Sat, Dec 16, 2017 at 2:17 PM, stan stanl-fedorauser@vfemail.net wrote:
On Sat, 16 Dec 2017 13:09:47 -0500 Temlakos temlakos@gmail.com wrote:
On 12/16/2017 12:59 PM, stan wrote:
On Sat, 16 Dec 2017 10:13:55 -0500 fred roller fredroller66@gmail.com wrote:
[snip: Temlakos's question and stan's reply, quoted in full above]
On Sat, Dec 16, 2017 at 01:06:16PM -0500, Temlakos wrote:
On 12/16/2017 10:13 AM, fred roller wrote:
[snip] | I now ask the community for some suggestions.
I have done this type of set up on my systems before so what its worth I will share how I installed and where applicable, why.
| First, for partitioning: | | 1. Should I even try to accept /automatic/ partitioning when the installer gets to that point? No. In custom choose the 120 GB drive and auto choices may be fine but mine was to mount /boot, /swap, /tmp, and /. For reasons related to HDD and rpm's that was the order; for SDD not so much. The second drive I mounted on /Crypt [or some other name you want].
What do you recommend as the sizes of partitions /boot and /tmp? Obviously "/" will take up "all the rest." /swap will take up 16 GB. I used 50 GB for /boot. But I never broke out /tmp as a separate partition.
Overkill. I've seen many recommend 1 GB for /boot. I usually do 2-4 GB. I just looked at 3 systems; the highest /boot usage was < 350 MB.
If there is no separate /tmp, it is mounted as a tmpfs sized, by default, to 50% of RAM.
You can have swap on multiple drives, giving you either greater total swap or more space on the 120 GB drive for / or /tmp.
lost+found should be empty. It is used by the fsck program to reattach orphaned files it finds (files with an allocated inode and data blocks but no directory entry). If any are found, fsck attaches them in lost+found, named "#<inode number>". Root can then examine them and move or remove them. lost+found is created as part of file system formatting.
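A small runnable sketch of what inspecting such orphans looks like; since a real lost+found needs root and an actual fsck recovery, a stand-in directory with an invented "#<inode>" entry is used here:

```shell
# Stand-in for /lost+found; the inode number in the name is invented.
lf=$(mktemp -d)/lost+found
mkdir -p "$lf"
printf 'orphaned data\n' > "$lf/#131074"

# On the real file system this would be: sudo ls -A /lost+found
ls -A "$lf"
# Root would then examine each entry and decide to move or remove it.
cat "$lf/#131074"
```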
On 12/16/2017 11:17 AM, stan wrote:
[mounting the larger drive] That's just creating a mount point under /mnt and an entry in /etc/fstab.
Please note that using /mnt is just a convention, and you can create a mount point anyplace you want. As an example, I have a drive full of interesting stuff from back when I was using Windows, when it was my D: drive; it's now mounted at boot onto /D_Drive, because that's what looks right to me. If you have a big enough collection of music that you want it all on a dedicated partition, create a new partition, move your collection there, and have it mounted at ~/Music.
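For reference, a custom mount point like that is just a line in /etc/fstab; the UUIDs and paths below are invented placeholders, and the file system types are only examples:

```
# /etc/fstab fragment -- mount data partitions wherever they "look right"
UUID=xxxx-xxxx  /D_Drive        ntfs  defaults  0 0
UUID=yyyy-yyyy  /home/me/Music  ext4  defaults  0 2
```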
On 12/16/2017 03:25 PM, Jon LaBadie wrote:
On Sat, Dec 16, 2017 at 01:06:16PM -0500, Temlakos wrote:
On 12/16/2017 10:13 AM, fred roller wrote:
[snip] | I now ask the community for some suggestions.
I have done this type of set up on my systems before so what its worth I will share how I installed and where applicable, why.
| First, for partitioning: | | 1. Should I even try to accept /automatic/ partitioning when the installer gets to that point? No. In custom choose the 120 GB drive and auto choices may be fine but mine was to mount /boot, /swap, /tmp, and /. For reasons related to HDD and rpm's that was the order; for SDD not so much. The second drive I mounted on /Crypt [or some other name you want].
What do you recommend as the sizes of partitions /boot and /tmp? Obviously "/" will take up "all the rest." /swap will take up 16 GB. I used 50 GB for /boot. But I never broke out /tmp as a separate partition.
Overkill. I've seen many recommend 1 GB for /boot. I usually do 2-4 GB. I just looked at 3 systems; the highest /boot usage was < 350 MB.
If there is no separate /tmp, it is mounted as a tmpfs sized, by default, to 50% of RAM.
You can have swap on multiple drives, giving you either greater total swap or more space on the 120 GB drive for / or /tmp.
lost+found should be empty. It is used by the fsck program to reattach orphaned files it finds (files with an allocated inode and data blocks but no directory entry). If any are found, fsck attaches them in lost+found, named "#<inode number>". Root can then examine them and move or remove them. lost+found is created as part of file system formatting.
I just had a chance to review my partitions--after a fashion--using Dolphin (the KDE file manager). That review indicates I was using a 500 MB (or at most 500 MiB) boot partition. The 50 GB was the part I reserved for the root partition (/). That left more than 872 GB for /home after accounting for swap, /, and /boot. My system automatically keeps only three kernel versions and a rescue kernel.
Interestingly, Dolphin shows me two apparent boot partitions. One of them is current--it shows the currently installed kernels. The other is clearly obsolete--goes back to F20. Maybe this is a sign that I need a clean install anyway.
Still trying to figure out how to store user data at a mount point different from classic /home.
Temlakos
On 12/16/2017 03:17 PM, fred roller wrote:
Terry, Stan hit on the points perfectly; ty Stan, it is the artifacts you leave in /home/user that start causing anomalies, most go un-noticed except to a trained eye but can snowball over time. As for the re-linking, if you are comfortable a script can be made to relink but I found this more trouble than it is worth. It is too easy to type the first set of commands then up arrow key to repeat the process as needed. For me, catastrophic recovery took about 40-90 min under this set up; depending on anything new I wanted to implement.
As for default sizes I believe /boot was around 500 Mb, not much, so 1 Gb would suffice. Bearing in mind the OS is designed to take not much more than 15 Gb total, if the numbers still hold, then the size of /tmp depends on usage. At one point I gave /tmp 40-50 Gb because I was using heavy 30-50 Mb RAW image files in GIMP 8 or 9 at a time. So a large /tmp helped me there. Now days I check email and watch movies so 10-20% of remainder would easily suffice. If you can watch /tmp on your current system via the command "watch 'ls -lh /tmp'"during a typical usage period. See if your current quota is being used or staying mostly empty.
On Sat, Dec 16, 2017 at 2:17 PM, stan <stanl-fedorauser@vfemail.net mailto:stanl-fedorauser@vfemail.net> wrote:
[snip: stan's reply of 2:17 PM, quoted in full above]
I would appreciate two things:
1. How can you write the linking commands so that they will execute automatically at startup, rather than your having to "sudo ln -s [source] [destination]" for every directory for every user every time you restart your system? (I'm likely to be shutting down and restarting every day and sometimes twice or three times a day, depending on whether I can solve the "KDE Plasma 5 system hang" problem with this new installation.)
2. Could you give me an example of such a linking system, with names changed to protect your privacy?
Thanks.
Temlakos
On 12/16/2017 03:50 PM, Temlakos wrote:
- How can you write the linking commands so that they will execute
automatically at startup, rather than your having to "sudo ln -s [source] [destination]" for every directory for every user every time you restart your system?
You don't have to redo the links every time you reboot. The links are to the mount points, which are there even when the partitions aren't mounted, so that when you boot, everything's ready to go. I know, as I have several mount points linked like that.
On Sat, 16 Dec 2017 18:50:57 -0500 Temlakos temlakos@gmail.com wrote:
- How can you write the linking commands so that they will execute
automatically at startup, rather than your having to "sudo ln -s [source] [destination]" for every directory for every user every time you restart your system? (I'm likely to be shutting down and restarting every day and sometimes twice or three times a day, depending on whether I can solve the "KDE Plasma 5 system hang" problem with this new installation.)
Once they are set they are there until you remove them. They're like directories, they are always there. Restarting and shutting down doesn't affect them.
- Could you give me an example of such a linking system, with names
changed to protect your privacy?
Suppose I have a directory called /mnt/data_drive/source where I keep source code on my data drive.
Then, in my home directory, I just have the link "source", which I set up as: ln -s /mnt/data_drive/source source. Once that is in place, if I am in my home directory I can type "cd source" and it will take me to /mnt/data_drive/source, and if I am somewhere else I can type "cd ~/source" and it will do the same thing.
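[Editor's sketch of that setup as a transcript. The device, UUID, and mount-point names are illustrative placeholders, not stan's actual ones.]

```shell
# Create the mount point for the data partition (run once, as root):
sudo mkdir -p /mnt/data_drive

# Add a line like this to /etc/fstab so it mounts at every boot
# (the UUID is a placeholder; get the real one from "blkid"):
#   UUID=xxxx-xxxx  /mnt/data_drive  ext4  defaults  0 2
sudo mount /mnt/data_drive

# The link itself, created once in the home directory; it survives reboots:
cd ~
ln -s /mnt/data_drive/source source
```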
On 12/16/2017 08:07 PM, stan wrote:
On Sat, 16 Dec 2017 18:50:57 -0500 Temlakos temlakos@gmail.com wrote:
- How can you write the linking commands so that they will execute
automatically at startup, rather than your having to "sudo ln -s [source] [destination]" for every directory for every user every time you restart your system? (I'm likely to be shutting down and restarting every day and sometimes twice or three times a day, depending on whether I can solve the "KDE Plasma 5 system hang" problem with this new installation.)
Once they are set they are there until you remove them. They're like directories, they are always there. Restarting and shutting down doesn't affect them.
- Could you give me an example of such a linking system, with names
changed to protect your privacy?
Suppose I have a directory called /mnt/data_drive/source where I keep source code on my data drive.
Then, in my home directory, I just have the link "source", which I set up as: ln -s /mnt/data_drive/source source. Once that is in place, if I am in my home directory I can type "cd source" and it will take me to /mnt/data_drive/source, and if I am somewhere else I can type "cd ~/source" and it will do the same thing.
users mailing list -- users@lists.fedoraproject.org To unsubscribe send an email to users-leave@lists.fedoraproject.org
Let me see if I understand the result:
I need to set up links to:
1. All folders that I want to hold on the data drive, including configuration files that I want to preserve from one iteration to the next--like .thunderbird, .firefox, .chrome, .adobe, and so on. These would be the top-level folders, the ones in the home directory, and not the subfolders.
2. Any file that, for whatever reason, is sitting in my home directory and that I haven't made up my mind to place into a folder, like Downloads or Pictures or Documents--whatever. (This might include password files, if I can get the old Password Manager program reinstalled. I have an rpm for that, but I don't know whether that would install or not.)
And I must do that for every user account.
And when I do that, any folder that I create on the "data disk," the system will find by starting from /home/[user-ident].
At least, I don't /think/ you're recommending setting up symlinks to every single file and subfolder in a user's account! Someone (Fred Roller, I think) said the process needs to be invisible to the user.
Question: would you preserve /all/ hidden application configuration files on the separate drive? Or do some things deserve to reside on the system drive and get overwritten with every clean install?
Temlakos
On Sun, 17 Dec 2017 08:31:05 -0500 Temlakos wrote:
And when I do that, any folder that I create on the "data disk," the system will find by starting from /home/[user-ident].
You might want to consider a "bind" mount for /home instead of lots of symlinks for each home directory. I have this in my fstab:
/zooty/home /home none rw,bind 0 0
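[Editor's note: the fstab line makes the bind mount permanent; the same thing can be done ad hoc from the command line, using Tom's /zooty/home path:]

```shell
# Make the existing directory /zooty/home also appear at /home.
# No data is duplicated; both paths refer to the same files.
sudo mount --bind /zooty/home /home
```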
On 12/17/2017 08:41 AM, Tom Horsley wrote:
On Sun, 17 Dec 2017 08:31:05 -0500 Temlakos wrote:
And when I do that, any folder that I create on the "data disk," the system will find by starting from /home/[user-ident].
You might want to consider a "bind" mount for /home instead of lots of symlinks for each home directory. I have this in my fstab:
/zooty/home /home none rw,bind 0 0
I just looked up bind mounts. The way they explained it at:
https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount
bind mounts are copies. I don't want a copy; I just want user data to occupy a second, much larger disk. Fred and Stan seem to be saying some things ought to reside on a separate disk, but some things--like some (but not all) configuration files, plus a few artifacts that the system throws in from time to time--ought to stay on the system drive, so that a clean install will wipe them out, leaving usable user data untouched and unharmed.
Unless I'm missing something, if I set up a bind mount, I effectively limit myself to the unused capacity of the smaller system drive and cannot effectively use all the capacity of the larger "user data drive."
Temlakos
On Sat, Dec 16, 2017 at 5:58 AM, Temlakos temlakos@gmail.com wrote:
Everyone:
If you have followed my threads about:
SMB failing with F27
and
system hanging and requiring repeated restarts,
then you've seen people suggest replacing my 1 TB HDD with an SSD. I acquired a 1 TB SSD and then tried to clone the HDD to the SSD. The clone /failed/. Reason: the disk is already showing some bad sectors. The outputs of smartctl and fsck make that undeniably clear.
smartctl -l scterc /dev/
This will reveal if this drive supports SCT ERC. If it does, and it's not enabled, it can be set to an obscenely high value, and then a matching SCSI block device command timer might permit deep recovery by the drive's firmware. And then you can just rsync the data from one drive to another. I would not depend on SMB for this.
rsync -pogAXtlHrDx is the set of flags used by anaconda when doing live installs; this works locally or remotely
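[Editor's sketch, spelling out that flag set; the source and destination paths are examples:]

```shell
# Copy an old /home (mounted, say, at /mnt/oldhome) into the new one,
# preserving ownership, permissions, and SELinux labels.
rsync -pogAXtlHrDx /mnt/oldhome/ /home/
# -p -o -g : permissions, owner, group
# -A -X    : ACLs and extended attributes (SELinux labels live in xattrs)
# -t       : modification times
# -l -H    : symlinks as symlinks, preserve hard links
# -r -D    : recurse, preserve devices/specials
# -x       : stay on one filesystem (won't descend into other mounts)
```

Adding -n first does a dry run, showing what would be transferred without copying anything.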
On the advice of a professional installer, I have since acquired an additional SSD (capacity 120 GB) and am now acquiring a mounting bracket and some power and SATA data cables. I also downloaded the F27 KDE Plasma 5 Spin as an ".iso" image.
My plan is to install F27 "clean" on the two SSD's, mounting the 120 GB SSD at root ("/") and the 1 TB SSD at /home. I must then migrate my data, browser cookies (Google Chrome, Firefox), e-mail accounts/saved/messages/other settings (Thunderbird), and documents, pictures, music, videos, and various downloads from the HDD to the SSD.
rsync -pogAXtlHrDxn /home/ foo@bar.local:/home/
- Should I even try to accept /automatic/ partitioning when the installer
gets to that point?
You can, although you probably do not want to pick both drives for this installation, or it'll marry them together in one big LVM VG. You'll probably want to do this manually, but it really depends on your familiarity with LVM and what kinds of things you're going to do with this setup: what's your workflow, are you using VMs, and if so do you use Boxes (you use KDE, so probably not) or virt-manager? And so on.
If it were me, I'd probably keep the 120GB drive super simple, with no LVM. As for the 1TB, I'd GPT-partition it with one big partition, optionally LUKS/dmcrypt that partition, then use pvcreate on the opened LUKS device, add it to a VG, and divvy up all that space as needed with LVM. This gives you the flexibility to leave some spare space for VMs, using LVM LVs for virt-manager backing, or for space that won't be used for /home; for example, maybe you put /var there, since /var/lib/libvirt/images can get big if you prefer qcow2 or raw files for VM backing. Or if you're using containers. And so on. The use case dictates the layout.
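[Editor's sketch of that layout. Every device, VG, and LV name here is an example, and luksFormat/mkfs are destructive, so this is illustration only:]

```shell
# Assume the 1 TB SSD is /dev/sdb (verify with lsblk first!).
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart data 1MiB 100%

# Optional encryption layer:
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup open /dev/sdb1 luksdata

# LVM on top of the opened LUKS device:
sudo pvcreate /dev/mapper/luksdata
sudo vgcreate vg_data /dev/mapper/luksdata
sudo lvcreate -L 700G -n home vg_data   # leave spare space in the VG
sudo mkfs.ext4 /dev/vg_data/home
```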
- Is 120 GB large enough for the information on the other directories
besides /home?
Yes. The one gotcha is if you happen to have a lot of OS ISOs, or qcow2 or raw files for VMs, in /var/lib/libvirt/images.
- Should I create a separate /boot partition on the smaller SSD, and if so,
how large should I make it?
Yes. 500MB is still fine for just three kernels, assuming you're not using kdump. If you are using kdump, or you want more kernel installs to revert to, then 1G should be enough. 1G has been the default in Fedora for automatic partitioning since Fedora 26.
- How large should the swap partition be, and where should I put it? (That
is, on the 120 GB or the 1 TB drive)?
It depends on whether you want to hibernate this machine. In that case it needs to be at least 1x RAM, and many guides say 2x, because you may have some swap in use already, and creating the hibernation image could take up to 1x RAM on top of whatever is already in swap. I personally do just 1x RAM for swap, and sometimes less for servers, which never hibernate.
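[Editor's sketch: for this 8 GB machine following the 1x rule, the post-install commands would look like this. The device name is an example, and the installer normally handles all of this:]

```shell
# Assume an 8 GiB swap partition was created as /dev/sda3 (example name).
sudo mkswap /dev/sda3
sudo swapon /dev/sda3
# Matching /etc/fstab entry (UUID is a placeholder):
#   UUID=xxxx-xxxx  none  swap  defaults  0 0
swapon --show   # confirm it's active
```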
- In general, should I place a partition for anything other than /home on
the 1 TB SSD?
Depends on your use case. Depends on your familiarity with LVM. I find it easier to depend on LVM, in particular thin provisioning, should I need throw-away block devices for testing things. But you can certainly just 'fallocate' a file in /home, put it on a loopback device, and format it as a test block device.
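[Editor's sketch of that fallocate/loopback trick; the file name is an example:]

```shell
# Create a 1 GiB throwaway block device backed by a file in /home.
fallocate -l 1G /home/testdisk.img
loopdev=$(sudo losetup --find --show /home/testdisk.img)  # prints e.g. /dev/loop0
sudo mkfs.ext4 "$loopdev"
# ...experiment with it...
sudo losetup -d "$loopdev"
rm /home/testdisk.img
```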
Now, as regards data migration: I have three user accounts to migrate, plus another directory on /home called "lost and found."
- Should I even try to migrate "lost and found," and if so, how?
No; that directory is created by mkfs, so I would not worry about migrating it. It's a volume-specific folder, and in any case it should be empty. If it's not empty, you've had some file system corruption, and when it was partly repaired the orphaned files were put in this directory.
- I have at least two choices for migrating data and settings from the
various user accounts--three for some of them.
a. Connect the HDD to the SATA bus /after/ installing F27, and then force-copying everything out of each /home directory to its corresponding directory on the new configuration. (What command(s) would you recommend using, and with what options/switches/etc.?)
b. Connect a large external HDD through a USB interface, transfer all the data to it before modifying the hardware, then re-transfer it to the system after installing the SSD's and F27.
c. Migrate the data to its "temporary refuge" over a Samba network (possibly do-able for at least one account, and that's the biggest account) and then re-migrate to the new system?
Which choice would you recommend?
I'm very skeptical of doing any of this in USB enclosures. Depending on chipset they can lie about the optimal IO size and alignment will be computed wrong by device-mapper based commands like LVM and cryptsetup. I have a bunch of Purex enclosures that are well behaved. I have another USB enclosure that lies and confuses dmcrypt so all the alignments are wrong by default (yes I've filed a bug so this should get fixed in a future version). Anyway it's probably not worth the hassle of doing this in a USB enclosure.
- Is it worth migrating every single hidden file or folder? Or should I
select only those folders that I know contain customization, account, or similar settings, plus my saved documents/pictures/music/videos, and migrate those?
You could look up how to do a recursive list of files, sort by date, and then separately sort by size. Anything huge is subject to deletion if you're not using it. If it's old, over a year, it's subject to deletion.
Otherwise don't get too deep in the weeds, just copy everything.
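[Editor's sketch of one way to get those recursive lists, using GNU find as shipped on Fedora:]

```shell
# Largest files first, then oldest-modified first, under the current directory.
find . -type f -printf '%s %p\n'  | sort -rn | head -n 20   # by size
find . -type f -printf '%T@ %p\n' | sort -n  | head -n 20   # oldest first
```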
My links were:
All the default folders of the user: Documents Downloads Music Pictures Videos
My Virtualbox build directory .thunderbird a note pad directory an "isolibrary" directory I maintained
Most software like View Your Mind, Calibre and others gave the option to choose where to save a file which normally went in one of the above.
Google I didn't concern myself with, because they hold the config info, and once the user signs in, Chrome usually takes care of itself. Config files I didn't concern myself with unless, like Thunderbird's, they had some specific databases I wanted preserved. Most of the rest was just flair and tweaks which, tbh, I didn't mind changing up at the time.
Main thing to remember is KISS. This is a simple way to have the normal drop points of data redirected to the larger drive. If you want a true separation per user, then you might consider an LDAP or similar solution, which maintains user data and options in a central database so that no matter where users log in, their information and choices are available. Mine was just a share on how I did things in a similar situation with drives, and it is practicable for maybe up to 6 or so users, because of the manual nature of the linking.
On Sun, Dec 17, 2017 at 9:30 AM, Tom Horsley horsley1953@gmail.com wrote:
On Sun, 17 Dec 2017 08:54:06 -0500 Temlakos wrote:
bind mounts are copies.
They are copies of mountpoints, but not copies of the data. The /zooty/home directory is still there on my data disk, and the /home mount point also refers to /zooty/home.
On 17-12-17 14:38:02, Chris Murphy wrote: ...
smartctl -l scterc /dev/
This will reveal if this drive supports SCT ERC. If it does, and it's not enabled, it can be set to an obscenely high value, and then a matching SCSI block device command timer might permit deep recovery by the drive's firmware. And then you can just rsync the data from one drive to another. I would not depend on SMB for this.
...
Was it you who told us of a script to cope with drives that don't support SCT ERC?
https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
OP: generally, if a drive can't read a sector in a few seconds, it won't ever be able to read that sector. Possibly more data can be recovered (with some holes) using GNU ddrescue, or the alternative dd_rescue with dd_rhelp. Note that either would be used to copy a whole partition or disk.
If one has a backup, using rsync and getting the failed files from the backup is probably faster than using either dd*rescue.
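[Editor's sketch of a typical ddrescue invocation; the device names are placeholders, and the map file lets an interrupted rescue resume where it left off:]

```shell
# Whole-disk rescue from failing /dev/sdX to replacement /dev/sdY
# (-d uses direct disc access; -f is required to overwrite a device):
sudo ddrescue -d -f /dev/sdX /dev/sdY rescue.map
# Or image a single partition to a file for later examination:
sudo ddrescue -d /dev/sdX1 partition.img rescue.map
```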
On Sun, 17 Dec 2017 08:31:05 -0500 Temlakos temlakos@gmail.com wrote:
Let me see if I understand the result:
I need to set up links to:
- All folders that I want to hold on the data drive, including
configuration files that I want to preserve from one iteration to the next--like .thunderbird, .firefox, .chrome, .adobe, and so on. These would be the top-level folders, the ones in the home directory, and not the subfolders.
Are you pulling my leg? If you want to preserve the old home, you create a directory called old_home on the new mount point and rsync the current home directory to it. If you don't want to cd to it, if you'll be using it frequently, you can create a link. Links are only for things you use all the time, so you have a shortcut to them.
- Any file that, for whatever reason, is sitting in my home
directory and that I haven't made up my mind to place into a folder, like Downloads or Pictures or Documents--whatever. (This might include password files, if I can get the old Password Manager program reinstalled. I have an rpm for that, but I don't know whether that would install or not.)
And I must do that for every user account.
And when I do that, any folder that I create on the "data disk," the system will find by starting from /home/[user-ident].
At least, I don't /think/ you're recommending setting up symlinks to every single file and subfolder in a user's account! Someone (Fred Roller, I think) said the process needs to be invisible to the user.
Question: would you preserve /all/ hidden application configuration files on the separate drive? Or do some things deserve to reside on the system drive and get overwritten with every clean install?
Personally, I don't do what you are trying to do. I use alternate partitions, and install alternately so I always have a working system (the previous one). For me, the old home files are in the last incarnation for reference if I need them. And using a data drive lets me access my critical data from both working systems.
I'm not trying to convince you of anything. If you don't want to do this, if it doesn't work for your use case, don't do it. It won't hurt my feelings.
On 12/17/2017 03:04 PM, Joe Zeff wrote:
On 12/17/2017 11:55 AM, fred roller wrote:
Main thing to remember is KISS. This is a simple way to have the normal drop points of data redirected to the larger drive.
If you really want to KISS, just migrate /home to the new drive and be done with it.
Hold on a minute, Joe. If I understand Fred correctly, the system does certain things to the /home directory and each user directory that he did, repeat, /not/ want preserved, and no one else should, either. And I can believe it. I've noticed some flakiness when slavishly preserving my main user directory that didn't happen when I simply "created" my other "users" /de novo/ with every clean install, and the flakiness gets worse with every iteration. (You developers who are monitoring this list: are you monitoring this thread? Consider this my formal protest of a certain amount of carelessness, hint, hint, hint!) I attribute that to the kind of hidden file that needs doing away with.
I would add ~/bin to the list, plus a few others I've created, along with a custom bashrc script that sets the PATH to include my own bin directory. But otherwise his principle is a sound one.
I at first thought as you do, Joe: just mount the larger drive as /home and have done with it. I used to do just that when I jerry-built systems with more than one HDD, cobbled together from a few "antique" systems. The problem: that still leaves the system free to throw things into /home that one can best do away with. One can do that most easily by doing clean installations on the system drive with every iteration, or at least every /other/ iteration.
Now I have one more question, and this is for Fred or Stan. Should any physical directories named Documents, Downloads, Music, Pictures, Video, etc., remain on the actual /home mount? Or should they exist physically only on the /crypt mount (meaning the larger user-data drive) and only symlinks remain in ~? (Remember: ~ = /home/username where /username/ is the name of the user account.) Understand: I want a clean separation between useful data on the one hand, and configuration on the other--except for things like Thunderbird where I want to preserve e-mail accounts and extensive e-mail databases. (I understand why you didn't bother with Chrome's configuration data. But what about Firefox?)
I genuinely appreciate this discussion and the direction it has taken, more than some of you might know. I've had a bellyful of the flakiness that gets worse with every "system upgrade" I've done--to the point where even KDE's Apper program crashes on launch every single time.
I wonder: am I the first here to build a system with all SSDs? Or has any other subscriber to this list done that?
Temlakos
On 12/18/17 06:14, Temlakos wrote:
I wonder: am I the first here to build a system with all SSDs? Or has any other subscriber to this list done that?
Hah, Hah, nope.... Been running one like that for several months now. Half done with switching another over to all SSDs. Will finish as soon as I find the time.
On 12/17/2017 05:49 PM, Ed Greshko wrote:
On 12/18/17 06:14, Temlakos wrote:
I wonder: am I the first here to build a system with all SSDs? Or has any other subscriber to this list done that?
Hah, Hah, nope.... Been running one like that for several months now. Half done with switching another over to all SSDs. Will finish as soon as I find the time.
We should compare notes, then. Did you install your system on one big SSD? Or did you install on one smaller system SSD for the "system" and a larger SSD for user data, as I am trying to do? And has your system improved in performance?
Temlakos
On Sun, Dec 17, 2017 at 2:55 PM, Tony Nelson tonynelson@georgeanelson.com wrote:
On 17-12-17 14:38:02, Chris Murphy wrote: ...
smartctl -l scterc /dev/
This will reveal if this drive supports SCT ERC. If it does, and it's not enabled, it can be set to an obscenely high value, and then a matching SCSI block device command timer might permit deep recovery by the drive's firmware. And then you can just rsync the data from one drive to another. I would not depend on SMB for this.
...
Was it you who told us of a script to cope with drives that don't support SCT ERC?
That's for multiple devices not using hardware RAID (mdadm, LVM, or Btrfs raids and concats). Basically you need the drive to time out before the SCSI command timer does; hence you want the drive to have a short recovery time, which as that page suggests is 70 deciseconds. The drive times out, i.e. stops trying to read the bad sector, produces a discrete read error with a sector address, and then md/LVM/Btrfs can get a copy from some other device and do a repair. If the drive does not support SCT ERC, then increasing the block-layer command timer to something extreme ensures the drive can produce this read error before the command expires. The kernel tracks every SCSI command sent to a drive (SCSI, SATA, USB block devices) and puts a timer on it; by default this timer is 30 seconds. If the command hasn't completed correctly, nor produced a discrete error, the kernel will reset the device, which is pretty bad behavior almost always.
Anyway, in the single-device scenario, there are drives floating around that support SCT ERC but have it disabled, so it's unknown what their timeout is. So I'd set the SCT ERC to 180 seconds (smartctl -l scterc takes deciseconds, so 1800), *and* also increase the SCSI command timer for that device to 180. If it hasn't recovered in 3 minutes, it's not recoverable.
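[Editor's sketch of those two settings as commands, assuming the drive is /dev/sda; smartctl takes deciseconds, the sysfs timer takes seconds:]

```shell
# Tell the drive to keep trying for up to 180 s on reads and writes:
sudo smartctl -l scterc,1800,1800 /dev/sda
# Raise the kernel's SCSI command timer for that device to match:
echo 180 | sudo tee /sys/block/sda/device/timeout
```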
OP: generally, if a drive can't read a sector in a few seconds, it won't ever be able to read that sector.
You'd think so, but there are cases where the deep recoveries take well over a minute. It's hard to imagine why, but... I've seen it happen. For RAID setups you obviously do want it to error out fast, because you have another copy on another device, so it's best if the drive gives up fast; that might be what you're used to. There's a lot of data about drives accumulating marginally readable sectors (hard drives, anyway), and the manifestation of this is a sluggish system, really sluggish. You'll see Windows boards full of this, and people are invariably told to just reinstall Windows; it fixes the problem, leading to the myth that it was a "crusty file system" with too much junk. It's bull. It's just that the Windows kernel has very high command timeouts, so it waits for the drive to give it the requested sector for a really long time. On Linux, by default, this turns into piles of link-reset errors, because the kernel gives up well before the drive gives a discrete read error.
And yeah, reinstalling fixes it because sectors are being overwritten. So they have clean and clear signal, and any sector that fails to write gets removed from use, with the LBA remapped to a reserve sector.
Possibly more data can be
recovered (with some holes) using GNU ddrescue, or the alternative dd_rescue with dd_rhelp. Note that either would be used to copy a whole partition or disk.
For several reasons I don't recommend imaging file systems other than to have a backup to work on and recover data from. But if you need to get data off a volume and put it onto another device for production use, use rsync or a file-system-specific cloning tool like xfs_copy or btrfs seed (or subvolume send/receive).
On Sun, Dec 17, 2017 at 3:53 PM, Temlakos temlakos@gmail.com wrote:
On 12/17/2017 05:49 PM, Ed Greshko wrote:
On 12/18/17 06:14, Temlakos wrote:
I wonder: am I the first here to build a system with all SSDs? Or has any other subscriber to this list done that?
Hah, Hah, nope.... Been running one like that for several months now. Half done with switching another over to all SSDs. Will finish as soon as I find the time.
We should compare notes, then. Did you install your system on one big SSD? Or did you install on one smaller system SSD for the "system" and a larger SSD for user data, as I am trying to do? And has your system improved in performance?
For a server? *shrug* I think an SSD is a bit overkill but it depends very much on the workflow and use case. Even an HDD is going to be faster than a single network connection. If you have multiple computers connected to the server, writing many small files at the same time, an SSD will clobber an HDD. You'll definitely see the performance difference.
As for a smaller SSD just for booting, there's a tiny advantage in that it's physically separate, so if you're at all prone to making CLI mistakes there can be an isolation advantage. But as far as performance, no, not really. GPT supports 128+ partitions, and LVM goes somewhere into the stratosphere beyond that. So with either conventional partitions, or LVM (or Btrfs subvolumes), there are many ways to segment your SSD however you want.
On Sun, Dec 17, 2017 at 4:10 PM, Chris Murphy lists@colorremedies.com wrote:
As for a smaller SSD just for booting, there's a tiny advantage in that it's physically separate, so if you're at all prone to making CLI mistakes there can be an isolation advantage. But as far as performance, no, not really. GPT supports 128+ partitions, and LVM goes somewhere into the stratosphere beyond that. So with either conventional partitions, or LVM (or Btrfs subvolumes), there are many ways to segment your SSD however you want.
Anecdote:
Just today I switched things around on my Intel NUC. It has a 1TB HDD used for data subvolumes mounted in /srv and shared with Samba. It's also carved up with LVM for VM stuff and throw-away block devices for testing, and with a few conventional partitions for /boot/efi, /boot, and /. The HDD's biggest drawback is latency (both head-seek latency and rotational latency). So I just moved the boot-related stuff over to a Samsung SDXC card. The sequential performance of the SD card is a bit less than the HDD's, but boot times and system update times are much shorter (almost halved) due to the difference in latency. The other advantage: I could now spin down the 1TB laptop HDD in this NUC, since this little server isn't used every day. Now that no system files or logs go to the HDD, it will go to sleep.
Chris Murphy
On 12/18/17 06:53, Temlakos wrote:
On 12/17/2017 05:49 PM, Ed Greshko wrote:
On 12/18/17 06:14, Temlakos wrote:
I wonder: am I the first here to build a system with all SSDs? Or has any other subscriber to this list done that?
Hah, Hah, nope.... Been running one like that for several months now. Half done with switching another over to all SSDs. Will finish as soon as I find the time.
We should compare notes, then. Did you install your system on one big SSD? Or did you install on one smaller system SSD for the "system" and a larger SSD for user data, as I am trying to do? And has your system improved in performance?
There was an online sale and the price per bit was attractive, so I'm only using 525GB drives. Yes, to make things easier for me, /home is on its own drive. For added space, and for things where I'm not concerned about r/w performance (pictures, mp3, etc.), I NFS-mount directories in the user's space from a 3TB NAS.
Yes, performance has improved. I really didn't do a comprehensive analysis. But for some I/O intensive things that I do I found things completing about 40%~50% faster.
[snip] Now I have one more question, and this is for Fred or Stan. Should any physical directories named Documents, Downloads, Music, Pictures, Video, etc., remain on the actual /home mount?
My process was to mkdir on the new drive, delete the old directory in /home/user, and create the link; the directory was then visible in /home/user but writing to somewhere else.
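[Editor's sketch of that per-directory process as commands. The user and mount names are examples, and the copy step is an obvious safety addition before deleting anything:]

```shell
# Move one user's Documents onto the data drive and leave a link behind.
mkdir -p /mnt/data/user1/Documents
rsync -a /home/user1/Documents/ /mnt/data/user1/Documents/   # copy first!
rm -rf /home/user1/Documents
ln -s /mnt/data/user1/Documents /home/user1/Documents
```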
[snip]
On 12/17/2017 04:42 PM, fred roller wrote:
Now I have one more question, and this is for Fred or Stan. Should any physical directories named Documents, Downloads, Music, Pictures, Video, etc., remain on the actual /home mount?
My process was to mkdir on the new drive, delete the old directory in /home/user, and create the link; the directory was then visible in /home/user but writing to somewhere else.
What you do, or at least what I did, was leave the actual directories where they are, but edit fstab so that your new partitions are mounted there.
On 12/18/17 08:42, fred roller wrote:
Now I have one more question, and this is for Fred or Stan. Should any physical directories named Documents, Downloads, Music, Pictures, Video, etc., remain on the actual /home mount?
My process was to mkdir on the new drive, delete the old directory in /home/user, and create the link; the directory was then visible in /home/user but writing to somewhere else.
I am none of the folks you cite. However....
For this, pictures are worth more than a few words.
[egreshko@meimei ~]$ mount | grep Videos
ds6:/volume1/video on /home/egreshko/Videos type nfs4

[egreshko@meimei ~]$ ll /home/egreshko/Videos
total 20
drwxrwxrwt. 192 root root 16384 Dec 13 04:43 Movies
drwxrwxrwt.  87 root root  4096 Dec 13 21:41 TV
[egreshko@meimei ~]$ sudo umount /home/egreshko/Videos
[egreshko@meimei ~]$ ll /home/egreshko/Videos
total 0
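An NFS mount like the one shown above would typically come from an /etc/fstab line along these lines (the server, export, and mount point are taken from the output above; the options are an assumption):

```
ds6:/volume1/video  /home/egreshko/Videos  nfs4  defaults  0 0
```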
I have no idea what some people are talking about when they say they are "using links".
What you do, or at least what I did, was leave the actual directories
where they are, but edit fstab so that your new partitions are mounted there.
Does seem cleaner; but I assume you are creating a partition for each directory? And fstab could be saved and, at worst, used for reference in a new build.
At the end of the day, the strength of Linux is the options it affords for your specific needs. There are pros and cons to all the great ideas listed here, and the beauty is that, if set up right, you can try them all: the data, as is the point, is sitting snugly off to the side.
On 12/17/2017 07:12 PM, fred roller wrote:
Does seem cleaner; but, I assume you are creating a partition for each directory? And fstab could be saved and at worse used for reference in new build.
In some cases, yes. In others, I just used links, depending on how much space I needed.
Allegedly, on or about 17 December 2017, Temlakos sent:
I need to set up links to:
- All folders that I want to hold on the data drive, including
configuration files that I want to preserve from one iteration to the next--like .thunderbird, .firefox, .chrome, .adobe, and so on. These would be the top-level folders, the ones in the home directory, and not the subfolders.
Those kinds of things are the ones that can cause you grief, and web browsers seem to be the worst. Settings and plugins change from one version of a program to the next. Blindly applying old ones often creates odd behaviour. You may be better off using the application to import older settings. Then it's (probably) more likely to *convert* old settings into newer ones.
Though I tend to just re-configure the software, rather than import old data. I find it much less painful.
Long ago I changed to using a local IMAP server for mail, so mail programs only need to be reconfigured for logins, the mail is on my server. Else mail programs would be the most hideous thing to try and keep going over different installs.
- Any file that, for whatever reason, is sitting in my home
directory and that I haven't made up my mind to place into a folder, like Downloads or Pictures or Documents--whatever. (This might include password files, if I can get the old Password Manager program reinstalled. I have an rpm for that, but I don't know whether that would install or not.) And I must do that for every user account.
More or less. Then try to get out of the habit of just dumping stuff into your homespace root folder.
I had a couple of ways of dealing with all of this. I changed the defaults from ~/documents, ~/videos, et cetera, to suit my own system. If you modify what's in /etc/skel you can have the system automatically set up new users with a customised set of directories. As a quick example, I have directories like this:
~/local/documents
~/local/downloads
~/local/sort-out-later
~/local/videos
For things I'll quickly dump onto the local hard drive and probably not keep. That was done by creating a "local" folder inside /etc/skel, and the other folders inside the local one.
And directories that are on the network, like this:
~/nas/documents
~/nas/downloads
~/nas/sort-out-later
~/nas/videos
Again, if you keep whatever keyword consistent where I wrote "nas," you can put that inside /etc/skel, too.
That's where I'll store anything worth keeping.
Where "nas" is a symlink to the mount point of my own space on the network drive (ln -s /nas/home/tim /home/tim/nas), and those other directories are simply my directories within it (no symlink commands needed).
This scheme works whether I'm symlinking to a NAS, or another drive in the box, just change the path in the link to suit.
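The /etc/skel part of this scheme can be sketched as follows (a temporary directory stands in for /etc/skel so the sketch is harmless to run; on a real system you'd operate on /etc/skel itself, as root, and useradd -m then copies the template into each new user's home):

```shell
# Stand-in for /etc/skel; replace with the real path on an actual system.
SKEL="${SKEL:-$(mktemp -d)}"
# Template directories; every account created afterwards gets a copy.
for d in documents downloads sort-out-later videos; do
    mkdir -p "$SKEL/local/$d"
done
ls "$SKEL/local"
```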
Question: would you preserve all hidden application configuration files on the separate drive? Or do some things deserve to reside on the system drive and get overwritten with every clean install?
I'd let configurations go in their default place, and have them new for every install.
Regarding storing configurations in non-default locations, it's worth noting that if your other drive is a NAS, it needs to be up and running before you log in. Otherwise, your programs are going to start as if from new, and write new configuration.
On 12/17/2017 07:42 PM, fred roller wrote:
[snip] Now I have one more question, and this is for Fred or Stan. Should any physical directories named Documents, Downloads, Music, Pictures, Video, etc., remain on the actual /home mount?
My process was to mkdir on the new drive, delete the old directory in /home/user and when I created the link the directory was visible in the /home/user just writing to somewhere else.
[snip]
users mailing list -- users@lists.fedoraproject.org To unsubscribe send an email to users-leave@lists.fedoraproject.org
As I thought. Now may I also assume that you use the chown command to re-create the ownership and group-membership structure of each specific user directory in the new drive? And also use chmod to re-create the permissions structure? I'm familiar enough with chown and chmod. I've used them often enough in my days as a volunteer developer on other sites that use UNIX.
In any event, let me guess: whatever you create and set in the new drive, no re-installation will ever alter. Thereafter you remove any directories in /home/user (where /user/ is the name of a user account) and re-establish the links, right?
I should have figured one thing: I do this for everything that I used to copy over from one computer to the next when I would break in a new(er) computer with (of necessity) a fresh (first!) installation of Fedora. That included all the named directories, any other top-level directories I created, and /home/user/.thunderbird in every account that used Thunderbird regularly. (Same with Kmail, for any KDE user who uses the "native" browser and e-mail client.)
Temlakos
On Mon, 18 Dec 2017 06:23:13 -0500 Temlakos temlakos@gmail.com wrote:
As I thought. Now may I also assume that you use the chown command to re-create the ownership and group-membership structure of each specific user directory in the new drive? And also use chmod to re-create the permissions structure? I'm familiar enough with chown and chmod. I've used them often enough in my days as a volunteer developer on other sites that use UNIX.
Yep.
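As a sketch of that step (names and paths are placeholders; a temporary directory stands in for the migrated home directory so the example is safe to run, and changing ownership to a *different* user would need root):

```shell
# Stand-ins: on a real system TARGET would be e.g. /home/alice, and
# OWNER/GROUP the user and group being restored.
TARGET="${TARGET:-$(mktemp -d)}"
OWNER="${OWNER:-$(id -un)}"
GROUP="${GROUP:-$(id -gn)}"
chown -R "$OWNER:$GROUP" "$TARGET"   # recreate ownership, recursively
chmod 700 "$TARGET"                  # typical private home-directory mode
stat -c '%a' "$TARGET"               # report the resulting permission bits
```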
In any event, let me guess: whatever you create and set in the new drive, no re-installation will ever alter. Thereafter you remove any directories in /home/user (where /user/ is the name of a user account) and re-establish the links, right?
Yep.
I should have figured one thing: I do this for everything that I used to copy over from one computer to the next when I would break in a new(er) computer with (of necessity) a fresh (first!) installation of Fedora. That included all the named directories, any other top-level directories I created, and /home/user/.thunderbird in every account that used Thunderbird regularly. (Same with Kmail, for any KDE user who uses the "native" browser and e-mail client.)
As Tim points out, this can cause problems if configuration options have changed. Better to just let the newer version create its own config, try the app, and if it works the way you want, leave it the way it is. If it doesn't work the way you want, do a diff with the old config to see what has changed, and make changes in the new config based on those.
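The compare-the-configs step can be sketched like this (two throwaway files stand in for the old and new configuration; in the real case they'd be a backed-up config file versus the freshly generated one, and the option names here are invented):

```shell
# Stand-ins for an old and a freshly generated config file.
OLD=$(mktemp)
NEW=$(mktemp)
printf 'theme=dark\ncache=512\n'  > "$OLD"
printf 'theme=light\ncache=512\n' > "$NEW"
# A unified diff shows exactly which settings changed between versions.
diff -u "$OLD" "$NEW" || true   # diff exits non-zero when the files differ
```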
On 12/18/2017 12:13 PM, stan wrote:
All good. Thanks for confirming.
As Tim points out, this can cause problems if configuration options have changed. Better to just let the newer version create its own config, try the app, and if it works the way you want, leave it the way it is. If it doesn't work the way you want, do a diff with the old config to see what has changed, and make changes in the new config based on those.
Well, I've identified one application, the configuration of which I /must/ preserve in some fashion, and that is: Thunderbird. At a minimum, I need to preserve a folder that has e-mail accounts and saved mail databases on it. Otherwise, I lose more than some minor, out-of-sight configuration. And when I have as many as twenty e-mail accounts or more, I /cannot/ afford to have to re-list them all.
The simplest method is to move the .thunderbird folder onto the new drive and link to it from the system drive. The not-so-simple method is to copy out the particular folder and move it into .thunderbird. Or maybe to go into .thunderbird on the system drive and make a symlink /inside that folder/ to the e-mail accounts folder on the new drive. Maybe I'll try that. You can be sure I'll back everything up--I'm getting a portable HDD with 4 TB of capacity that I'm going to use as an all-around system backup.
Temlakos
The simplest method is to move the .thunderbird folder onto the new drive
and link to it from the system drive. The not-so-simple method is to copy out the particular folder and move it into .thunderbird. Or maybe to go into .thunderbird on the system drive and make a symlink *inside that folder* to the e-mail accounts folder on the new drive. Maybe I'll try that. You can be sure I'll back everything up--I'm getting a portable HDD with 4 TB of capacity that I'm going to use as an all-around system backup.
On rebuild I did a first-time start of Thunderbird for the user, so it creates the user's [abcdefgh].default profile in /home/user/.thunderbird, at which time I reloaded the add-ons etc., but cancelled signing into any account or setting one up. The purpose was simply to accomplish two things: 1) load any add-ons such as Lightning, and 2) create the [abcdefgh].default profile in /home/user/.thunderbird. Once this was done, /then/ I removed the existing /home/user/.thunderbird directory and linked in the /Crypt/user/.thunderbird directory on the "Crypt" drive. Start Thunderbird again and it will check add-on compatibility, and then everything should look and work the way it did before the rebuild. Thunderbird will see the older [stuvwxyz].default profile and operate as if nothing changed. A life saver, given how I had everything set up, and my filters were... a lot. Go through your accounts (check for mail) and ensure all the log-ins function.
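That sequence, sketched with stand-in variables so nothing real is touched (in the poster's setup USERHOME would be /home/user and DATADRIVE the "Crypt" drive; the first-time Thunderbird start that creates the fresh profile is a manual step before this):

```shell
USERHOME="${USERHOME:-$(mktemp -d)}"     # stand-in for /home/user
DATADRIVE="${DATADRIVE:-$(mktemp -d)}"   # stand-in for the data ("Crypt") drive
mkdir -p "$DATADRIVE/.thunderbird"       # the preserved profile, already on the data drive
mkdir -p "$USERHOME/.thunderbird"        # fresh profile from the first-time start
# Swap the fresh profile for the preserved one.
rm -rf "$USERHOME/.thunderbird"
ln -s "$DATADRIVE/.thunderbird" "$USERHOME/.thunderbird"
```

On the next start Thunderbird follows the link, finds the old profile, and carries on as before.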
On 12/16/17 20:58, Temlakos wrote:
I now ask the community for some suggestions.
Oh, and BTW, you should consider enabling fstrim.target as this has been shown to help maintain performance over time.
Also, just a bit of info: you'd asked if I'd seen an increase in performance after switching to SSD. Well, today I finished upgrading a system to all SSDs. The system is a few years old, with an i5 CPU at 2.67 GHz. While I eliminated some services that I no longer needed, the startup time improved dramatically. Previously it would take about 2 minutes to boot to the login screen. Now it takes 12 seconds.
Some examples of changes are....
Before: 12.997s firewalld.service      After: 802ms firewalld.service
Before:  3.082s named.service          After: 143ms named-chroot-setup.service
Before:  1.417s chronyd.service        After:  84ms chronyd.service
Before:   461ms bluetooth.service      After:  59ms bluetooth.service
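The poster doesn't say how these figures were gathered, but per-unit startup times in this shape are what systemd-analyze reports; a sketch (requires a systemd-booted machine):

```shell
systemd-analyze time     # overall firmware/loader/kernel/userspace split
systemd-analyze blame    # per-unit startup times, slowest first
```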
On 12/21/17 18:02, Patrick O'Callaghan wrote:
On Thu, 2017-12-21 at 16:58 +0800, Ed Greshko wrote:
Oh, and BTW, you should consider enabling fstrim.target as this has been shown to help maintain performance over time.
fstrim.target doesn't appear to be in the standard repos. Is this a personal script?
I don't know if there is a specific package that supplies it. Could be the kernel....
[egreshko@meimei ~]$ ls /lib/systemd/system/fstrim.*
/lib/systemd/system/fstrim.service  /lib/systemd/system/fstrim.timer
[egreshko@meimei ~]$ systemctl status fstrim.timer | more
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: disabled)
   Active: active (waiting) since Thu 2017-12-21 16:25:57 CST; 1h 48min ago
  Trigger: Mon 2017-12-25 00:00:00 CST; 3 days left
     Docs: man:fstrim
Dec 21 16:25:57 meimei.greshko.com systemd[1]: Started Discard unused blocks once a week.
On 2017-12-21 at 11:14, Ed Greshko wrote:
I don't know if there is a specific package that supplies it. Could be the kernel....
[egreshko@meimei ~]$ ls /lib/systemd/system/fstrim.*
/lib/systemd/system/fstrim.service  /lib/systemd/system/fstrim.timer
util-linux
On Thu, 2017-12-21 at 18:14 +0800, Ed Greshko wrote:
[egreshko@meimei ~]$ ls /lib/systemd/system/fstrim.*
/lib/systemd/system/fstrim.service  /lib/systemd/system/fstrim.timer
That explains it. You had mentioned fstrim.target, not fstrim.service.
poc
On 12/21/17 19:06, Patrick O'Callaghan wrote:
That explains it. You had mentioned fstrim.target, not fstrim.service.
Ooops... I should have said "enable fstrim.timer"
Thanks for catching that.
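For completeness, the corrected step as commands (a sketch; on Fedora the timer ships with util-linux, as identified earlier in the thread, and enabling it requires root on a systemd system):

```shell
sudo systemctl enable --now fstrim.timer   # run TRIM weekly from now on
systemctl list-timers fstrim.timer         # confirm the next scheduled run
```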
On Thu, 2017-12-21 at 20:04 +0800, Ed Greshko wrote:
Ooops... I should have said "enable fstrim.timer"
Thanks for catching that.
Thank you for mentioning it. I have an SSD and occasionally have used fstrim but wasn't aware of the fstrim.timer option.
poc
On 12/21/2017 04:21 AM, Patrick O'Callaghan wrote:
Thank you for mentioning it. I have an SSD and occasionally have used fstrim but wasn't aware of the fstrim.timer option.
And for those of you using SSDs, make sure you have a good backup plan in place. SSDs are fast and a lot more reliable than they used to be. However, when they die, they die suddenly and catastrophically--and typically with no warning. I make it a rule to back up to magnetic media frequently, as I've been bitten by that issue often in the past.
-- Rick Stevens, Systems Engineer, AllDigital
Allegedly, on or about 18 December 2017, Temlakos sent:
Well, I've identified one application, the configuration of which I must preserve in some fashion, and that is: Thunderbird. At a minimum, I need to preserve a folder that has e-mail accounts and saved mail databases on it. Otherwise, I lose more than some minor, out-of-sight configuration. And when I have as many as twenty e-mail accounts or more, I cannot afford to have to re-list them all.
Keeping mail going across many system updates is a pain. As you say, in essence, the whole thing is a database. It's a lot of data, specially organised. And it's another of those things that suffers from program changes over time.
I had a lot of email addresses, too, but this is how I handle them on my system.
1. I have a local mail server; *all* my mail goes into it.
2. On it, I have fetchmail drag in all the mail from all my external email accounts, every so-many minutes, and store them in my (singular) local mailbox.
3. I use IMAP with my mail program; it accesses the mail in the local mailbox.
Keeping point 1 up to date over time is the hardest thing. There are various migration tools, though with some mail servers you can simply copy the files over as-is. And if you put it on something like CentOS, the OS has a very long lifespan.
The configuration file for fetchmail is a simple thing to understand, since it allows you to write a configuration in an almost human fashion. And that one configuration file can be easily copied onto a new PC. Again, if your server is a long lifespan OS, that's an issue that rarely crops up.
e.g. It's virtually:
poll ExternalMailServerAddress proto pop3
    user LogonName with password TopSecret is LocalLogonName here
Just substitute actual details for the variables, one such entry per external mail service.
Alternatively, points 1 and 2 can be handled by using an external mail server, one that allows the same kind of thing (it being a central point for all your various addresses).
To set up the mail program, it only has to access one mail server, mine. I set up various identities (different email addresses) within that configuration, so I can reply to mail using the same address it was sent to. That's relatively easy to do on each PC once or twice a year, and it's the only thing I have to keep doing each time Fedora gets a version change.
I don't bother, any more, with setting up address books. Since the server keeps the mail, it's not hard to find a prior email from someone and click on their address. And the program can be set to automatically add to the address book any email I reply to, so it will auto-fill someone's address as I type it.
------------------
Compare that to moving a mail client program (Thunderbird, Evolution, et cetera) over to a new software installation every time Fedora updates. You have to hope that the mail program allows a straightforward import of old data (most make it relatively simple to use an import function, even if they don't allow you to simply copy the files), and that old configuration files don't cause problems (that happens a lot). You have to be able to find all the hidden files that set up the mail program (some programs seem to spread them here, there, and everywhere). If you have plug-ins, you have to hope they'll transfer over (they frequently don't). And if there are SSL certificates to deal with, that they'll be managed without a stuff-up.
---------------------
After being on the internet since the 1990s, I've come to believe that mail has been the biggest pain. You lose contact with people because email addresses change (someone's ISP closes, they get a better deal, they change addresses due to spam flooding, etc.). You collect a series of addresses over time, and you get more spam simply because you have more accounts to receive it at. You have to manage changing ISPs, mail software, and operating system software.
It's easier to:
1. Get your own domain name, so you can keep email addresses permanently, and are free to create them in the manner that you want (no ISP restrictions on naming).
2. Never use an ISP address, because it makes it harder for you to leave that ISP (because you'd lose an address that you might not want to).
3. Only have a small number of addresses (e.g. personal, family, business, public). Avoid accumulating addresses simply because they're offered to you.
Personal, rather obviously, means private mail just to you.
Family and business addresses would be pertinent to everyone in that group (i.e. not private). This means that communication doesn't go into a black hole if you, personally, aren't responding to mail for some reason. Anxious relatives don't have to wonder if you're alive or dead when you're away for a while, when someone else in your family reads and responds. Business doesn't grind to a halt because people you work with can't read client mail addressed to you personally.
A public address being the one that you'd provide anywhere where you think it may be subjected to spam (you give out this one, instead, so that your other addresses are less likely to receive spam). Whether that be to the shop that wants to email you a guarantee (plus marketing mail), people you meet at gatherings, mailing lists. And this address may be the one that you're prepared to throw away if it gets abused.
On Thu, 2017-12-21 at 11:40 -0800, Rick Stevens wrote:
And for those of you using SSDs, make sure you have a good backup plan in place. SSDs are fast and a lot more reliable than they used to be. However, when they die, they die suddenly and catastrophically--and typically with no warning. I make it a rule to back up to magnetic media frequently as I've been bitten by that issue often in the past.
I do have a nightly backup to rotating rust, but I also reserve the SSD for the root filesystem. It's only 120 GB, and I doubt that a larger one would make much of a real-life speed difference if I put /home on it.
poc
On 12/22/17 07:56, Patrick O'Callaghan wrote:
I do have a nightly backup to rotating rust, but also I reserve the SSD for the root filesystem. It's only 120GB and I doubt that a larger one would make much of a real-life speed difference if I put /home on it.
I also have had a backup strategy since before using SSD since HDDs can, and have, also died without warning. I backup to a RAID NAS.
FWIW, my VMs are resident in my $HOME directory and they do benefit from being on the SSD.
On Fri, 2017-12-22 at 08:28 +0800, Ed Greshko wrote:
I also have had a backup strategy since before using SSD since HDDs can, and have, also died without warning. I backup to a RAID NAS.
Ditto. The RAID has saved me several times from failing Seagate disks (now replaced with Western Digital).
FWIW, my VMs are resident in my $HOME directory and they do benefit from being on the SSD.
I recently replaced my 1TB /home drive with a 2TB unit, and decided to move my QCOW VM drive from /home to a raw Windows partition on the old drive. It made a huge difference and the VM now runs games at close to native speed. Clearly an SSD would also have improved things but on balance I think it was the right decision. A lot depends on what your goals are and where you're starting from.
BTW, most people are probably using SATA-type SSDs; however, my son recently built a new machine with a 128 GB Samsung PCIe NVMe drive as the system drive (he's an animator and needs the speed). It's at least twice as fast as a SATA SSD, but of course you need a fairly new motherboard to support it.
poc