Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=8e5305f630be818e67757…
Commit: 8e5305f630be818e677572e4c43d2c728a89f54f
Parent: e7f1329cae118ccbfded213eee4895d99d79120b
Author: Zdenek Kabelac <zkabelac(a)redhat.com>
AuthorDate: Sat Feb 17 11:24:32 2018 +0100
Committer: Zdenek Kabelac <zkabelac(a)redhat.com>
CommitterDate: Mon Feb 19 16:45:10 2018 +0100
tests: correct usage of pipe
This is somewhat tricky - the test suite keeps using
'set -e -o pipefail' - the effect is that we get an error report
from any failing command in the whole pipeline. Thus when something
like 'lvs | head -1' is used and 'head' finishes before the
leading 'lvs' is done, 'lvs' receives SIGPIPE and exits with an
error, which occasionally gets reported as a somewhat misleading
failure depending on the speed of the commands.
For this case we have to avoid using a standard pipe and rather
switch to reading the streamed result via process substitution.
This is all nicely handled with the bash feature '< <()'.
For more info:
https://stackoverflow.com/questions/41516177/bash-zcat-head-causes-pipefail
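The difference can be sketched with a minimal, self-contained example
(the 'seq'/'head' commands stand in for the real 'lvs' pipeline):

```shell
#!/bin/bash
set -e -o pipefail

# Plain pipe: if 'head' exits before the producer has written all of
# its output, the producer receives SIGPIPE and exits with status 141
# (128 + SIGPIPE), which pipefail turns into a test failure:
#   seq 100000 | head -1    # may abort the script here

# Process substitution: the producer runs asynchronously and its exit
# status is not part of any pipeline, so a SIGPIPE in it cannot trip
# pipefail.  'head' simply reads from the substituted file descriptor.
first=$(head -1 < <(seq 100000))
echo "$first"
```

Note that '< <()' is a bash-specific feature; the test scripts already
run under bash, so this is safe there.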
---
test/lib/get.sh | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/test/lib/get.sh b/test/lib/get.sh
index e7139dc..afc10bc 100644
--- a/test/lib/get.sh
+++ b/test/lib/get.sh
@@ -47,7 +47,7 @@ lv_field() {
lv_first_seg_field() {
local r
- r=$(lvs --config 'log{prefix=""}' --noheadings -o "$2" "${@:3}" "$1" | head -1)
+ r=$(head -1 < <(lvs --config 'log{prefix=""}' --unbuffered --noheadings -o "$2" "${@:3}" "$1"))
trim_ "$r"
}
@@ -74,7 +74,7 @@ lv_field_lv_() {
lv_tree_devices_() {
local lv="$1/$2"
local type
- type=$(lv_field "$lv" segtype -a --unbuffered | head -n 1)
+ type=$(lv_first_seg_field "$lv" segtype -a)
#local orig
#orig=$(lv_field_lv_ "$lv" origin)
# FIXME: should we count in also origins ?
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=c3bb2b29d441f27d7e1d8…
Commit: c3bb2b29d441f27d7e1d88f71d934ba8c955b26d
Parent: e87fa7c9cef53013efa46033ad037822c70c1bb9
Author: Zdenek Kabelac <zkabelac(a)redhat.com>
AuthorDate: Mon Feb 19 16:31:52 2018 +0100
Committer: Zdenek Kabelac <zkabelac(a)redhat.com>
CommitterDate: Mon Feb 19 16:45:05 2018 +0100
locking: move cache dropping to primary locking code
While the 'file-locking' code always dropped the cached VG before
the lock was taken, the other locking types actually missed this.
So while cache dropping had been implemented for e.g. clvmd,
a command actually running in a cluster kept using the cache even
when the lock had been dropped and taken again.
This rather 'hard-to-hit' error was noticeable in some
tests running in a cluster where the content of a PV had been
changed (metadata-balance.sh).
Fix the code by moving cache dropping directly into the lock_vol()
function.
TODO: it's kind of strange that we should ever need the
drop_cached_metadata() calls used in several places - this should
all happen automatically, so some further thinking is likely
needed here.
---
WHATS_NEW | 1 +
lib/locking/file_locking.c | 5 +----
lib/locking/locking.c | 7 +++++++
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/WHATS_NEW b/WHATS_NEW
index b11de8c..5791930 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,6 @@
Version 2.02.178 -
=====================================
+ Ensure cluster commands drop their device cache before locking VG.
Do not report LV as remotely active when it's locally exclusive in cluster.
Add deprecate messages for usage of mirrors with mirrorlog.
Separate reporting of monitoring status and error status.
diff --git a/lib/locking/file_locking.c b/lib/locking/file_locking.c
index 245892e..517a64f 100644
--- a/lib/locking/file_locking.c
+++ b/lib/locking/file_locking.c
@@ -60,11 +60,8 @@ static int _file_lock_resource(struct cmd_context *cmd, const char *resource,
return_0;
break;
case LCK_VG:
- if (!strcmp(resource, VG_SYNC_NAMES)) {
+ if (!strcmp(resource, VG_SYNC_NAMES))
fs_unlock();
- } else if (strcmp(resource, VG_GLOBAL))
- /* Skip cache refresh for VG_GLOBAL - the caller handles it */
- lvmcache_drop_metadata(resource, 0);
/* LCK_CACHE does not require a real lock */
if (flags & LCK_CACHE)
diff --git a/lib/locking/locking.c b/lib/locking/locking.c
index 1a3ce9d..d61aa35 100644
--- a/lib/locking/locking.c
+++ b/lib/locking/locking.c
@@ -336,6 +336,13 @@ int lock_vol(struct cmd_context *cmd, const char *vol, uint32_t flags, const str
!lvmcache_verify_lock_order(vol))
return_0;
+ if ((flags == LCK_VG_DROP_CACHE) ||
+ (strcmp(vol, VG_GLOBAL) && strcmp(vol, VG_SYNC_NAMES))) {
+ /* Skip dropping cache for internal VG names #global, #sync_names */
+ log_debug_locking("Dropping cache for %s.", vol);
+ lvmcache_drop_metadata(vol, 0);
+ }
+
/* Lock VG to change on-disk metadata. */
/* If LVM1 driver knows about the VG, it can't be accessed. */
if (!check_lvm1_vg_inactive(cmd, vol))
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=1671b83585a6ee4944837…
Commit: 1671b83585a6ee49448376c20834a9e89936a7f3
Parent: f5401fbd34371dfed74f981eb1d4c5b4b3220cc2
Author: Marian Csontos <mcsontos(a)redhat.com>
AuthorDate: Fri Feb 16 17:09:40 2018 +0100
Committer: Marian Csontos <mcsontos(a)redhat.com>
CommitterDate: Fri Feb 16 17:10:54 2018 +0100
doc: Fixing VDO document
---
doc/vdo.md | 59 +++++++++++++++++++++++++++++++++++++++--------------------
1 files changed, 39 insertions(+), 20 deletions(-)
diff --git a/doc/vdo.md b/doc/vdo.md
index a85518b..5c5a33c 100644
--- a/doc/vdo.md
+++ b/doc/vdo.md
@@ -22,17 +22,25 @@ Usual limitations apply:
- Never layer LUKS over another LUKS - it makes no sense.
- LUKS is better over the raids, than under.
+Devices which are not best suitable as backing device:
+
+- thin volumes - at the moment it is not possible to take snapshot of active VDO volume on top of thin volume.
+
### Using VDO as a PV:
-1. under tpool
+1. under tdata
- The best fit - it will deduplicate additional redundancies among all
snapshots and will reduce the footprint.
- Risks: Resize! dmevent will not be able to handle resizing of tpool ATM.
2. under corig
- - Cache fits better under VDO device - it will reduce amount of data, and
- deduplicate, so there should be more hits.
- This is useful to keep the most frequently used data in cache
- uncompressed (if that happens to be a bottleneck.)
+ uncompressed or without deduplication if that happens to be a bottleneck.
+ - Cache may fit better under VDO device, depending on compressibility and
+ amount of duplicates, as
+ - compression will reduce amount of data, thus effectively increasing
+ size of cache,
+ - and deduplication may emphasize hotspots.
+ - Performance testing of your particular workload is strongly recommended.
3. under (multiple) linear LVs - e.g. used for VMs.
### And where VDO does not fit:
@@ -50,36 +58,47 @@ Usual limitations apply:
- under snapshot CoW device - when there are multiple of those it could deduplicate
+## Development
+
### Things to decide
-- under integrity devices - it should work - mostly for data
- - hash is not compressible and unique - it makes sense to have separate imeta and idata volumes for integrity devices
+- under integrity devices
+ - VDO should work well for data blocks,
+ - but hashes are mostly unique and not compressible - were it possible it
+ would make sense to have separate imeta and idata volumes for integrity
+ devices.
### Future Integration of VDO into LVM:
One issue is using both LUKS and RAID under VDO. We have two options:
- use mdadm x LUKS x VDO+LV
-- use LV RAID x LUKS x VDO+LV - still requiring recursive LVs.
+- use LV RAID x LUKS x VDO+LV
+
+In both cases dmeventd will not be able to resize the volume at the moment.
-Another issue is duality of VDO - it is a top level LV but it can be seen as a "pool" for multiple devices.
+Another issue is duality of VDO - it can be used as a top level LV (with a
+filesystem on top) but it can be used as "pool" for multiple devices too.
-- This is one usecase which could not be handled by LVM at the moment.
-- Size of the VDO is its physical size and virtual size - just like tpool.
- - same problems with virtual vs physical size - it can get full, without exposing it fo a FS
+This will be solved in similar way thin pools allow multiple volumes.
-Another possible RFE is to split data and metadata:
+Also VDO, has two sizes - its physical size and virtual size - and when
+overprovisioning, just like tpool, we face same problems - VDO can get full,
+without exposing it to a FS. dmeventd monitoring will be needed.
-- e.g. keep data on HDD and metadata on SSD
+Another possible RFE is to split data and metadata - keep data on HDD and metadata on SSD.
## Issues / Testing
- fstrim/discard pass down - does it work with VDO?
-- VDO can run in synchronous vs. asynchronous mode
- - synchronous for devices where write is safe after it is confirmed. Some devices are lying.
- - asynchronous for devices requiring flush
-- multiple devices under VDO - need to find common options
-- pvmove - changing characteristics of underlying device
-- autoactivation during boot
- - Q: can we use VDO for RootFS?
+- VDO can run in synchronous vs. asynchronous mode:
+ - synchronous for devices where write is safe after it is confirmed. Some
+ devices are lying.
+ - asynchronous for devices requiring flush.
+- Multiple devices under VDO - need to find and expose common properties, or
+ not allow grouping them together. (This is same for all volumes with more
+ physical devices below.)
+- pvmove changing characteristics of underlying device.
+- autoactivation during boot?
+ - Q: can we use VDO for RootFS? Dracut!