Re: stratis-devel Digest, Vol 47, Issue 2
by aanno
Dear John,
this is what stratis reports after deleting about 50GB of data from
/mnt/home and about 30GB of data from /mnt/opt:
# stratis pool
Name         Total / Used / Free                  Properties   UUID                                   Alerts
stratis_hdd  1.43 TiB / 819.34 GiB / 645.49 GiB   Ca,~Cr, Op   093c8d42-21b8-46a2-a7e8-5d35f458fa58   WS001

# stratis blockdev
Pool Name    Device Node   Physical Size   Tier    UUID
stratis_hdd  /dev/dm-6     1.43 TiB        DATA    9a5b1c4d-8014-4155-990e-eedfd27803a6
stratis_hdd  /dev/dm-7     292.95 GiB      CACHE   770d0190-c17b-4349-a9f9-0968036e6b2c

# stratis fs
Pool         Filesystem   Total / Used / Free / Limit              Created            Device                          UUID
stratis_hdd  home         2 TiB / 708.75 GiB / 1.31 TiB / None     Apr 28 2019 12:29  /dev/stratis/stratis_hdd/home   17155095-e225-4fb0-b020-ec2ffa6a5e4d
stratis_hdd  opt          1 TiB / 109.11 GiB / 914.89 GiB / None   Apr 28 2019 12:30  /dev/stratis/stratis_hdd/opt    fb19a29e-ab39-4b41-8d37-0dc6d222a2b9
To me, it looks like there is no change in the reported numbers or in the
alert. Perhaps I misunderstood what to do?
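One thing I am not sure about is whether XFS returns freed blocks to the thin
pool at all without an explicit discard. If it does not, something like the
following might be needed before the Used numbers move (I have not verified
that discards are passed down through the Stratis cache layer):

# fstrim -v /mnt/home
# fstrim -v /mnt/opt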
Kind regards,
aanno
On 13.12.23 at 05:05, stratis-devel-request(a)lists.fedorahosted.org wrote:
> Date: Tue, 12 Dec 2023 17:57:53 -0500
> From: John Baublitz <jbaublit(a)redhat.com>
> Subject: [stratis-devel] Re: stratis: fs exhausted?
> To: aanno <aannoaanno(a)gmail.com>
> Cc: stratis-devel@lists.fedorahosted.org
>
> Hi aanno,
> I would also like to just mention that when you say the physical space is
> exhausted, I'm assuming you're referring to alert WS001, correct? We may
> need to document this a little bit better because this alert just means
> that all of the space in the pool is fully allocated. This alert
> specifically pops up when:
>
> 1. The pool extends to take up all available physical space (this will not
> set the warning you are seeing).
> 2. You fill up the remaining space to the point where only 15 GiB is left.
> 3. The pool attempts to extend the data device further and determines there
> is no space left to extend (this is when the warning is set).
>
> Now practically what this means is that when the warning is set, you have
> 15GiB left of space. If you fill up the 15GiB, the pool will not be able to
> be extended further to help you out. What I'm mostly curious about is,
> looking at the space you reported in stratis pool/fs it looks like the
> physical space on the pool is not full enough to warrant the alert. Is this
> output after deleting a significant amount of data from the pool? From what
> I can see, you shouldn't have this alert set unless your usage spiked
> significantly and went back down. The warning doesn't clear out unless you
> add more space so the state you're in is not impossible or even unlikely,
> but it does make me wonder a little bit about your usage pattern. Are you
> still seeing this instability even with the lower usage you pasted here?
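>
> For reference, once WS001 is set it stays until the pool gets more physical
> space, for example by adding another data device to the pool. A minimal
> sketch (the device path below is only a placeholder):
>
>     # stratis pool add-data stratis_hdd /dev/<new-data-device>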
>
> On Thu, Dec 7, 2023 at 1:37 PM aanno <aannoaanno(a)gmail.com> wrote:
>
>> Hello,
>>
>> I've been using stratis for many years now (for my /home and /opt mount
>> points). It looks like this:
>>
>> $ stratis pool
>> Name         Total / Used / Free                  Properties   UUID                                   Alerts
>> stratis_hdd  1.43 TiB / 819.51 GiB / 645.32 GiB   Ca,~Cr, Op   093c8d42-21b8-46a2-a7e8-5d35f458fa58   WS001
>>
>> $ stratis fs
>> Pool         Filesystem   Total / Used / Free / Limit              Created            Device                          UUID
>> stratis_hdd  home         2 TiB / 708.92 GiB / 1.31 TiB / None     Apr 28 2019 12:29  /dev/stratis/stratis_hdd/home   17155095-e225-4fb0-b020-ec2ffa6a5e4d
>> stratis_hdd  opt          1 TiB / 109.11 GiB / 914.89 GiB / None   Apr 28 2019 12:30  /dev/stratis/stratis_hdd/opt    fb19a29e-ab39-4b41-8d37-0dc6d222a2b9
>>
>> $ stratis blockdev
>> Pool Name    Device Node   Physical Size   Tier    UUID
>> stratis_hdd  /dev/dm-6     292.95 GiB      CACHE   770d0190-c17b-4349-a9f9-0968036e6b2c
>> stratis_hdd  /dev/dm-7     1.43 TiB        DATA    9a5b1c4d-8014-4155-990e-eedfd27803a6
>>
>> However, for a few weeks now my /home has been unstable. Newly created (or
>> altered) files are (sometimes) corrupted after a reboot of the system.
>>
>> For both mount points I'm using an SSD cache in front of an HDD. It feels a
>> bit like the caching layer is all right, but the HDD layer is in trouble
>> (exhausted?). To me, it looks like that would explain the observed
>> behaviour.
>>
>> I wonder if
>>
>> * I could somehow 'disable' (or remove) the SSD caching layer?
>> * I could get some debug information?
>>
>> Kind regards,
>>
>> aanno
>>
>
> --
> John
> he/him
> Principal Software Engineer, Stratis team
>
stratis: fs exhausted?
by aanno
Hello,
I've been using stratis for many years now (for my /home and /opt mount
points). It looks like this:
$ stratis pool
Name         Total / Used / Free                  Properties   UUID                                   Alerts
stratis_hdd  1.43 TiB / 819.51 GiB / 645.32 GiB   Ca,~Cr, Op   093c8d42-21b8-46a2-a7e8-5d35f458fa58   WS001

$ stratis fs
Pool         Filesystem   Total / Used / Free / Limit              Created            Device                          UUID
stratis_hdd  home         2 TiB / 708.92 GiB / 1.31 TiB / None     Apr 28 2019 12:29  /dev/stratis/stratis_hdd/home   17155095-e225-4fb0-b020-ec2ffa6a5e4d
stratis_hdd  opt          1 TiB / 109.11 GiB / 914.89 GiB / None   Apr 28 2019 12:30  /dev/stratis/stratis_hdd/opt    fb19a29e-ab39-4b41-8d37-0dc6d222a2b9

$ stratis blockdev
Pool Name    Device Node   Physical Size   Tier    UUID
stratis_hdd  /dev/dm-6     292.95 GiB      CACHE   770d0190-c17b-4349-a9f9-0968036e6b2c
stratis_hdd  /dev/dm-7     1.43 TiB        DATA    9a5b1c4d-8014-4155-990e-eedfd27803a6
However, for a few weeks now my /home has been unstable. Newly created
(or altered) files are (sometimes) corrupted after a reboot of the system.
For both mount points I'm using an SSD cache in front of an HDD. It feels
a bit like the caching layer is all right, but the HDD layer is in trouble
(exhausted?). To me, it looks like that would explain the observed
behaviour.
I wonder if
* I could somehow 'disable' (or remove) the SSD caching layer?
* I could get some debug information?
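
For the second point, I could already provide output from the following
commands (assuming these are the right places to look; the stratisd unit
name is taken from the Fedora package):

$ stratis --version
$ sudo stratis report
$ journalctl -u stratisd -b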
Kind regards,
aanno
Re: stratis-devel Digest, Vol 47, Issue 1
by aanno
Dear Bryan,
first of all I'd like to say that I expect this to be a hardware problem.
Stratis has served me well for many years, and I think a bug is very unlikely.
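
To back that up, I could run a SMART check on the HDD first (assuming
smartmontools is installed; /dev/sda is the 3.6T disk in the lsblk output
below):

# smartctl -a /dev/sda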
To your questions:
* Linux: Fedora 39 on x86_64 pc hardware
* Kernel: Linux redsnapper 6.6.3-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC
Tue Nov 28 19:11:52 UTC 2023 x86_64 GNU/Linux
* stratis-cli-3.6.0-1.fc39.noarch
* stratisd-3.6.3-1.fc39.x86_64
lsblk -l
NAME                                                                                  MAJ:MIN RM   SIZE RO TYPE    MOUNTPOINTS
[... snapd stuff]
sda                                                                                     8:0    0   3,6T  0 disk
sda1                                                                                    8:1    0    16M  0 part
sda2                                                                                    8:2    0   293G  0 part
sda3                                                                                    8:3    0   1,4T  0 part
zram0                                                                                 252:0    0     8G  0 disk    [SWAP]
luks-8cfdfb51-cdfa-401a-9815-d3be9a527942                                             253:0    0   300G  0 crypt
fedora-root                                                                           253:1    0   120G  0 lvm     /var/lib/snapd/snap
                                                                                                                    /
fedora-00                                                                             253:2    0    32G  0 lvm     [SWAP]
fedora-thinpool_tmeta                                                                 253:3    0    24M  0 lvm
fedora-thinpool_tdata                                                                 253:4    0    40G  0 lvm
fedora-thinpool                                                                       253:5    0    40G  0 lvm
luks-stratis-ssd-vg                                                                   253:6    0   293G  0 crypt
luks-stratis-hdd-vg                                                                   253:7    0   1,4T  0 crypt
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-physical-originsub                253:8    0   1,4T  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-physical-metasub                  253:9    0   512M  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-physical-cachesub                 253:10   0 292,4G  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-physical-cache                    253:11   0   1,4T  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-flex-thindata                     253:12   0   1,4T  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-flex-thinmeta                     253:13   0   1,5G  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-thinpool-pool                     253:14   0   1,4T  0 stratis
stratis-1-private-093c8d4221b846a2a7e85d35f458fa58-flex-mdv                          253:15   0    16M  0 stratis
stratis-1-093c8d4221b846a2a7e85d35f458fa58-thin-fs-17155095e2254fb0b020ec2ffa6a5e4d  253:16   0     2T  0 stratis /mnt/home
stratis-1-093c8d4221b846a2a7e85d35f458fa58-thin-fs-fb19a29eab394b418d370dc6d222a2b9  253:17   0     1T  0 stratis /mnt/opt
nvme0n1                                                                               259:0    0 931,5G  0 disk
nvme0n1p1                                                                             259:1    0   499M  0 part
nvme0n1p2                                                                             259:2    0    99M  0 part    /boot/efi
nvme0n1p3                                                                             259:3    0    16M  0 part
nvme0n1p4                                                                             259:4    0 194,1G  0 part
nvme0n1p5                                                                             259:5    0   577M  0 part
nvme0n1p6                                                                             259:6    0     2G  0 part    /boot
nvme0n1p7                                                                             259:7    0   300G  0 part
nvme0n1p8                                                                             259:8    0   293G  0 part
nvme0n1p9                                                                             259:9    0  48,8G  0 part    /mnt/nocrypt-ext4
nvme0n1p10                                                                            259:10   0  92,4G  0 part    /mnt/nocrypt-f2fs
journalctl -kb

XFS:

Dez 09 11:04:54 redsnapper kernel: SGI XFS with ACLs, security attributes, realtime, scrub, quota, no debug enabled
Dez 09 11:04:54 redsnapper kernel: XFS (dm-1): Mounting V5 Filesystem 98082300-1963-46a0-919d-f3713aee7213
Dez 09 11:04:54 redsnapper kernel: XFS (dm-1): Starting recovery (logdev: internal)
Dez 09 11:04:54 redsnapper kernel: XFS (dm-1): Ending recovery (logdev: internal)
Dez 09 10:05:13 redsnapper kernel: XFS (dm-15): Mounting V5 Filesystem 093c8d42-21b8-46a2-a7e8-5d35f458fa58
Dez 09 10:05:13 redsnapper kernel: XFS (dm-15): Starting recovery (logdev: internal)
Dez 09 10:05:13 redsnapper kernel: XFS (dm-15): Ending recovery (logdev: internal)
Dez 09 10:05:13 redsnapper kernel: xfs filesystem being mounted at /run/stratisd/ns_mounts/.mdv-093c8d4221b846a2a7e85d35f45>
Dez 09 10:05:15 redsnapper kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your sc>
Dez 09 10:05:15 redsnapper kernel: XFS (dm-16): Mounting V5 Filesystem 17155095-e225-4fb0-b020-ec2ffa6a5e4d
Dez 09 10:05:15 redsnapper kernel: XFS (dm-16): Starting recovery (logdev: internal)
Dez 09 10:05:15 redsnapper kernel: kvm_intel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and ht>
Dez 09 10:05:16 redsnapper kernel: XFS (dm-16): Ending recovery (logdev: internal)
Dez 09 10:05:16 redsnapper kernel: XFS (dm-17): Mounting V5 Filesystem fb19a29e-ab39-4b41-8d37-0dc6d222a2b9
Dez 09 10:05:16 redsnapper kernel: XFS (dm-17): Ending clean mount

sd:

Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] Write Protect is off
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Dez 09 11:04:44 redsnapper kernel: usb 2-10: new SuperSpeed USB device number 2 using xhci_hcd
Dez 09 11:04:44 redsnapper kernel: sda: sda1 sda2 sda3
Dez 09 11:04:44 redsnapper kernel: sd 0:0:0:0: [sda] Attached SCSI disk

ata:

Dez 09 11:04:44 redsnapper kernel: ata1: SATA max UDMA/133 abar m2048@0x7f222000 port 0x7f222100 irq 124
Dez 09 11:04:44 redsnapper kernel: ata2: SATA max UDMA/133 abar m2048@0x7f222000 port 0x7f222180 irq 124
Dez 09 11:04:44 redsnapper kernel: ata3: SATA max UDMA/133 abar m2048@0x7f222000 port 0x7f222200 irq 124
Dez 09 11:04:44 redsnapper kernel: ata4: SATA max UDMA/133 abar m2048@0x7f222000 port 0x7f222280 irq 124
Dez 09 11:04:44 redsnapper kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dez 09 11:04:44 redsnapper kernel: ata3: SATA link down (SStatus 4 SControl 300)
Dez 09 11:04:44 redsnapper kernel: ata4: SATA link down (SStatus 4 SControl 300)
Dez 09 11:04:44 redsnapper kernel: ata2: SATA link down (SStatus 4 SControl 300)
Dez 09 11:04:44 redsnapper kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Dez 09 11:04:44 redsnapper kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Dez 09 11:04:44 redsnapper kernel: ata1.00: ATA-8: TOSHIBA HDWT140, FP1S, max UDMA/100
Dez 09 11:04:44 redsnapper kernel: ata1.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dez 09 11:04:44 redsnapper kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Dez 09 11:04:44 redsnapper kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Dez 09 11:04:44 redsnapper kernel: ata1.00: configured for UDMA/100
Kind regards,
aanno