On 06/11/2013 04:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a vg. We want a nice vg_get_pvs(vg) function (see the first sketch after this list). We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously), but from the outside perspective the stuff is completed once the function returns (see the second sketch after this list).
(we were thinking about python functions just scheduling multiple actions and doing stuff massively in parallel, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions; the first sketch after this list shows this fallback too)
- we should probably split these high-level functions into several
modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions of these
high-level functions -> it is not clear if we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or make some cleanup (so creation of an MD RAID looks the same as creation of a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command-line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to the fedpkg or koji command-line utilities.
- 'lmi' metacommand could also have a shell:
  $ lmi shell
  vgcreate mygroup /dev/sda1 /dev/sdb1
  ip addr show
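To make the "hide the object model" and "default LmiNamespace" points above concrete, here is a minimal sketch of what such a module could look like. Only the vg.associators() call quoted above comes from the proposal; the module layout, the function names, the _default_ns global and the exact lmishell calls (LMI_VGStoragePool, .instances()) are my assumptions, not an existing API:

# hypothetical sketch of an lmi.storage.vg module -- names are illustrative

_default_ns = None  # the 'global' namespace, set up once by the caller

def set_default_namespace(namespace):
    """Remember a connection/namespace so later calls may omit it."""
    global _default_ns
    _default_ns = namespace

def _resolve_ns(namespace):
    """Use the explicit namespace if given, otherwise fall back to the global one."""
    ns = namespace if namespace is not None else _default_ns
    if ns is None:
        raise ValueError("no LmiNamespace given and no default set")
    return ns

def vg_list(namespace=None):
    """Return all volume groups visible through the given namespace."""
    ns = _resolve_ns(namespace)
    # LMI_VGStoragePool is the CIM class OpenLMI uses for volume groups
    return ns.LMI_VGStoragePool.instances()

def vg_get_pvs(vg):
    """Return the physical volumes (component extents) of a volume group."""
    # this is the association call an admin should not have to remember:
    return vg.associators(AssocClass="LMI_VGAssociatedComponentExtent")

An application would then call set_default_namespace() once after connecting and simply use vg_list() / vg_get_pvs(vg) afterwards, or pass an explicit namespace when it talks to several machines.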
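And a second sketch for the "synchronous from the outside, parallel inside" point. format_device() is a hypothetical per-device helper standing in for the real provider call, and the thread pool is just one possible way to do the internal parallelism (concurrent.futures needs the 'futures' backport on python 2):

from concurrent.futures import ThreadPoolExecutor

def format_device(device, fs_type, namespace=None):
    """Hypothetical helper formatting one device via the storage provider."""
    raise NotImplementedError

def format_devices(devices, fs_type="xfs", namespace=None):
    """Format several devices at once, but return only when all are done."""
    with ThreadPoolExecutor(max_workers=len(devices) or 1) as pool:
        futures = [pool.submit(format_device, dev, fs_type, namespace)
                   for dev in devices]
    # leaving the 'with' block waits for every worker; .result() re-raises
    # any failure, so the caller gets ordinary synchronous error handling
    return [f.result() for f in futures]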
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see the trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of the office next week and have only sporadic email access this week.
On a call with Russ and Tomáš this morning, we had a few additional thoughts that we probably need to incorporate into the design.
First, I think we should plan for all LMI exceptions to descend from a single LMIException class. This base class should always provide a human-readable and *localizable* error message (in Unicode format). The individual descendants of this class can and should carry more information in a specific manner, but the idea of the base class is this:
When invoking the LMI module from the command line (either directly using the lmishell interpreter or via the 'lmi' meta-commands), we should always have a catch-all for LMIException in the main function that prints the human-readable error on STDERR. This will make it much easier for admins to identify where something went wrong (without seeing a scary python traceback).
Note: if we get back an exception that is NOT an LMIException, we should probably allow it to crash out, since it's most likely a programming bug (an error case we didn't handle properly that cascaded back up the stack). We will want to know about those.
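To make this concrete, a minimal sketch of the base class and the command-line catch-all. Only the LMIException name and the stderr behaviour come from the proposal above; the example subclass and the run()/main() wrappers are purely illustrative:

from __future__ import print_function
import sys

class LMIException(Exception):
    """Base class for all LMI errors.

    Always carries a human-readable, localizable message (unicode);
    descendants may add structured data on top of it.
    """

class LMIVolumeGroupError(LMIException):
    """Example descendant carrying extra, case-specific information."""
    def __init__(self, message, vg_name):
        LMIException.__init__(self, message)
        self.vg_name = vg_name

def run(argv):
    """Hypothetical entry point of the actual script."""
    raise LMIVolumeGroupError(u"volume group 'mygroup' not found", "mygroup")

def main(argv=None):
    try:
        run(argv)
    except LMIException as exc:
        # expected failure: human-readable message on stderr, no traceback
        print(exc, file=sys.stderr)
        return 1
    # anything that is NOT an LMIException is left to propagate -- a
    # traceback there most likely means a programming bug we want to see
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))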