Thanks!
Could you also let us know the output of "stat --file-system /home"?
Stratis uses a constant threshold of filesystem usage to determine
whether the filesystem should be expanded.
I think it is possible that a devicemapper event triggers the filesystem
check, but the threshold has not yet been reached, so no expansion action
is taken.
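Roughly, the decision is of this shape (an illustrative sketch only; the
names and the 90% value here are made up for the example, not stratisd's
actual internals):

    // Illustrative only: the constant's name and value are assumed.
    const FS_EXPANSION_THRESHOLD: f64 = 0.9;

    fn should_expand(used_bytes: u64, size_bytes: u64) -> bool {
        used_bytes as f64 / size_bytes as f64 >= FS_EXPANSION_THRESHOLD
    }

    fn main() {
        // Your home fs: ~898 GiB used of 1 TiB is ~87.7% usage, so a
        // 90% threshold means the check runs but takes no action.
        let gib: u64 = 1024 * 1024 * 1024;
        println!("{}", should_expand(898 * gib, 1024 * gib)); // false
    }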
It looks like the snapshots are very similar to their origins; otherwise
the pool used amount would be much larger, so I don't see why they should
have any effect.
- mulhern
On Thu, Sep 16, 2021 at 10:49 PM Ryan Gonzalez <rymg19(a)gmail.com> wrote:
On 9/16/21 9:23 PM, the Mulhern wrote:
Hi!
One thing I'd like to mention is that in Stratis 3.0, you'll be able to
set the size of the filesystem at the command line, although the default
size will remain 1 TiB. There are some caveats: it's not a good idea to
grow an XFS filesystem to more than about 8 times its original size, as
the metadata layout is calculated for the size specified at creation.
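For example, it will look something like this (the exact option syntax
may differ in the release, and "somefs" is just a placeholder name):

    $ stratis filesystem create --size 500GiB fedora somefs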
So, just for some clarity: if I had to make the rootfs smaller to make
room for home to expand, would the only way to do that be to wait for
3.0 and then just re-create the FS with a smaller initial size? I don't
anticipate it growing anywhere near 8x the original size.
But, while this might be useful to others, I don't think it's really
all that relevant to your case. The amount of space allocated by the
thinpool for your data should really correspond to your filesystem
usage, which is only about 1 TiB.
Generally speaking, Stratis responds to devicemapper events, including
the low water event. It then does an analysis of your pool, including
the filesystems, to determine if any adjustment should be made.
Can you show us the output of the "stratis pool list" command? That
should show the amount of space actually used (by your data + Stratis
metadata), the total size as understood by Stratis, and the amount that
Stratis interprets as free.
This is the output:
Name     Total Physical                     Properties   UUID
fedora   1.81 TiB / 1.06 TiB / 769.29 GiB   ~Ca, Cr      d9c81a12-b249-4c4c-ae7b-afe658911993
In case it helps, here's also the output of `fs list`:
Pool Name   Name          Used         Created             Device                            UUID
fedora      root_backup   125.42 GiB   Sep 05 2021 17:34   /dev/stratis/fedora/root_backup   4d2e3dc8-8b30-44f2-b2e9-d7be3f257784
fedora      home_backup   895.84 GiB   Sep 05 2021 17:33   /dev/stratis/fedora/home_backup   a96b7544-5047-461e-baee-f33311be9643
fedora      home          897.71 GiB   Sep 04 2021 22:54   /dev/stratis/fedora/home          80d9ffe8-f686-4c5c-9b2c-b9380d9a8309
fedora      root          122.97 GiB   Sep 04 2021 22:53   /dev/stratis/fedora/root          70398648-899d-426d-981f-313706768233
I...just realized I still have these snapshots lying around that I was
(trying to) use for backup purposes; could that be affecting things as
well?
From what you tell me, I think it's possible that, during I/O, the low
water mark is reset multiple times by stratisd, and also crossed multiple
times, triggering repeated re-evaluations of the pool and filesystem
states. These re-evaluations perhaps result in no action, but cause the
slow I/O that you mentioned.
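To illustrate the feedback loop I have in mind, here is a toy model;
the numbers and the reset policy are invented for the example, not
taken from stratisd:

    fn main() {
        // Toy model: each burst of writes crosses a low water mark
        // that is then reset just ahead of current usage, so a full
        // re-evaluation runs on every burst. All numbers invented.
        let gib: u64 = 1024 * 1024 * 1024;
        let mut used = 890 * gib;           // thinpool data usage
        let mut low_water = used + 4 * gib; // mark set 4 GiB ahead
        let mut checks = 0;

        for _ in 0..10 {                    // ten 4 GiB write bursts
            used += 4 * gib;
            if used >= low_water {
                checks += 1;                // dm event -> full re-check
                low_water = used + 4 * gib; // reset just ahead again
            }
        }
        println!("re-evaluations during heavy I/O: {}", checks); // 10
    }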
You can't prevent Stratis from doing this check without applying a patch
and recompiling. This check is there to prevent running out of space, so
it's not optional. It is our present goal to make these checks more
effective and less intrusive.
In fact, we just today merged a PR[1] which inaugurates our development
efforts on thinpool and filesystem management improvements.
Thanks for getting in touch,
- mulhern
[1] https://github.com/stratis-storage/stratisd/pull/2786
On Thu, Sep 16, 2021 at 8:45 PM Ryan Gonzalez <rymg19(a)gmail.com> wrote:
> Hello there again! I've been using Stratis as my rootfs for a bit now,
> and it seems to be working pretty well...with one particular catch...
>
> For some context: I have a single 2TB pool on a single disk, with two
> filesystems inside. When I created said filesystems, they seem to have
> both been sized at 1TB, which was a bit confusing, since I thought they
> would start smaller and expand as I wrote to them. In particular, one of
> them (the rootfs) is only ~200GB full, while the other is ~800GB. Of
> course, the latter of these is reaching the current 1TB limit, which is
> where things get...weird. In particular, based on the system logs, I
> seem to have hit this:
>
> https://github.com/stratis-storage/stratisd/issues/1466
>
> From what I can understand, this is just Stratis trying to expand the
> filesystem. However...it doesn't seem to ever succeed, since the size
> stays the same. Despite that, it still seems to run whenever I write a
> large amount of new data, resulting in some *very* brutally slow I/O
> speeds (despite being on an NVMe disk), to the extent that I can't
> really even open a terminal (to be fair, I have quite a few oh-my-zsh
> plugins that could be contributing negatively to this...)
>
> This has led to two particular questions:
>
> - How exactly does the resizing of filesystems work? I know you can't
> shrink XFS, but I believe the Stratis design paper references reclaiming
> unused space via trims. Is it even possible for the rootfs to "shrink"
> (but not really) to accommodate an expansion of the other one?
>
> - If the other one can't expand, is it possible to just tell Stratis to
> stop trying and avoid the major lag? Or, could there be something in my
> configuration making this significantly slower than it's supposed to be
> (maybe disk schedulers)?