Some observations:
* A main goal is for sysadmins used to bash and shell scripts to easily move to lmishell and scriptons - we want them to think of it as "a familiar environment on steroids".
* Having said this, we want to take advantage of the power of Python as a scripting language. However, going to a full OO interface would be a step too far...
* Radek suggested that we should do some prototyping. This makes a lot of sense, and has certainly served us well so far. I would like to see some prototypes before we firm up best practices for scriptons anyway, so having the prototyping phase include both procedural and OO examples is reasonable.
On a completely different topic, would it make sense to rename "lmishell" to just "lmi" before we go any further? "lmi" is shorter and easier to type, and I don't see what including "shell" adds.
Russ
On Fri, 2013-06-21 at 08:28 -0400, Stephen Gallagher wrote:
On 06/12/2013 06:20 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
Most admins are not really familiar with object-oriented programming. The largest set of admins we're targeting tend towards bash scripting with command-line tools. We want to capture that group and encourage them to use OpenLMI.
By making the calls useful and procedural, we can get them to start using OpenLMI. We're not changing the OO API underneath. Once people are using our interface, they will always have the option of extending their usage to call the low-level OpenLMI object-oriented functions.
The point of the lmishell is to be *very* easy for admins to use. Object-oriented programming is (perceived to be) hard and will scare away a fair number of admins.
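The procedural-facade idea being discussed here can be sketched in plain Python. Note that VolumeGroup, vg_create and vg_extend below are illustrative stand-ins, not the real lmishell or CIM API:

```python
# Hypothetical sketch: a thin procedural layer over an OO API.
# An admin calls vg_create()/vg_extend(); the class stays hidden.

class VolumeGroup:
    """Stand-in for an object-oriented CIM-style class."""
    def __init__(self, name, devices):
        self.name = name
        self.devices = list(devices)

    def extend(self, device):
        self.devices.append(device)

def vg_create(name, devices):
    """Procedural entry point an admin would call."""
    return VolumeGroup(name, devices)

def vg_extend(vg, device):
    """Add a device to an existing volume group."""
    vg.extend(device)

vg = vg_create("mygroup", ["/dev/sda1"])
vg_extend(vg, "/dev/sdb1")
```

The OO machinery still exists underneath; the facade just spares the admin from having to learn it first.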
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties, though.
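The vg_get_pvs() idea might look like the sketch below. The associators() call and the association class name come from the text above; FakeInstance is a purely illustrative stand-in for a real CIM instance proxy:

```python
# Sketch: hiding the association name behind a friendly helper.

class FakeInstance:
    """Stand-in for a CIM instance proxy with an associators() method."""
    def __init__(self, associations):
        self._associations = associations

    def associators(self, AssocClass=None):
        # Mimics the CIM associators() traversal used in lmishell
        return self._associations.get(AssocClass, [])

def vg_get_pvs(vg):
    """Return the physical volumes of a VG without exposing the association name."""
    return vg.associators(AssocClass="LMI_VGAssociatedComponentExtent")

vg = FakeInstance({"LMI_VGAssociatedComponentExtent": ["/dev/sda1", "/dev/sdb1"]})
pvs = vg_get_pvs(vg)
```

The admin types vg_get_pvs(vg); the association class name lives in exactly one place.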
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel internally (e.g. format multiple devices simultaneously), but from the outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that can run asynchronously to do so? Or create both async and sync versions of such functions. Because, again, forcing something that can run asynchronously into a synchronous mode needlessly degrades what we already have. IMO we shouldn't make things simpler than simple.
Again, the point here is to simplify the interface into something that admins are comfortable with. Some of them will understand async processing, but most won't. For us to have an async interface, we'd need to provide a set of job-processing tools to wait for results, and we'd have to train admins to know when to block and wait (or how to write a mainloop and do full async processing). Our view was that this was *far* too complicated for the average user (and as we went down the path of trying to figure out how to make it easier, we hit so many edge cases that it became clear that providing async needs to be a "2.0" feature at the earliest).
Remember again that what we're trying to do here is capture admins whose usual behavior is to just call command-line applications and wait for their return. This is little different from their perspective. Async is a difficult problem to solve, and while there are obvious performance gains to running some activities in parallel, it introduces the possibility of race conditions and other concurrency bugs.
(we were thinking about python functions just scheduling multiple actions and doing stuff massively in parallel, but we quickly ran into a lot of corner cases)
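The "synchronous outside, parallel inside" behavior described above can be illustrated with the stdlib; format_device here is a hypothetical stand-in for a real operation, not an actual scripton:

```python
# Sketch: the caller sees a blocking call; the work runs in parallel inside.
from concurrent.futures import ThreadPoolExecutor

def format_device(dev):
    # Stand-in for a real, potentially slow format operation
    return "%s: formatted" % dev

def format_devices(devices):
    """Format devices in parallel, but return only when all of them are done."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order and blocks until every task finishes
        return list(pool.map(format_device, devices))

results = format_devices(["/dev/sda1", "/dev/sdb1"])
```

From the admin's point of view this behaves exactly like a command-line tool: call it, wait, get the result.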
- each high-level function takes a LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
There's only one parameter, namespace (which encompasses both the connection and namespace on which it operates). There will effectively be a global object that will save the state. The idea is that when we create a connection, we'll set the global variable internally. If you create multiple connections, the last one created will be the default.
Then, if you want to run a routine for a connection *other* than the default, you will need to specify the namespace parameter.
So for the majority of cases, this argument will simply be left out.
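The "optional namespace argument, last connection is the default" behavior might look roughly like this; connect() and the plain dict are stand-ins for whatever creates a real LmiNamespace:

```python
# Sketch of the default-namespace pattern described above.
_default_ns = None

def connect(hostname):
    """Stand-in for creating an LmiNamespace; the newest one becomes the default."""
    global _default_ns
    ns = {"host": hostname}
    _default_ns = ns
    return ns

def vg_list(namespace=None):
    """Hypothetical high-level function; falls back to the global default."""
    ns = namespace if namespace is not None else _default_ns
    # A real implementation would enumerate VGs over this connection.
    return "listing VGs on %s" % ns["host"]

ns1 = connect("server1")
connect("server2")            # server2 is now the default
vg_list()                     # runs against server2
vg_list(namespace=ns1)        # explicit override: runs against server1
```

So the common single-server case needs no extra argument at all, and multi-server scripts pass the namespace explicitly.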
- we should probably split these high-level functions into several
modules by functionality, i.e. have lmi.networking, lmi.storage.vg, lmi.storage.lv, etc.
- it should be easy to build command-line versions of these
high-level functions -> it is not clear if we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creation of an MD RAID looks the same as creation of a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command-line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to the fedpkg or koji command-line utilities.
- the 'lmi' metacommand could also have a shell:
  $ lmi shell
  > vgcreate mygroup /dev/sda1 /dev/sdb1
  > ip addr show
I would go with the metacommand style (a la virsh).
I'm in favor of the metacommand style as well. As Jan and I discussed that day, much of the point of lmishell is going to be to reduce the number of *different* commands an admin needs to learn. Thus, duplicating the existing commands would go against that effort.
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of the office next week and have only sporadic email access this week.
As for the logging, maybe use something similar to the logging decorators we now use in openlmi-storage? They would tell lmishell (which I suppose would be used as an 'interpreter' for the scriptons) whether it should log or not. That would make it easier to create a centralized logging policy/style/output.
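A minimal version of such a decorator, using plain stdlib logging rather than the actual openlmi-storage decorators (vg_create below is a hypothetical scripton function):

```python
# Sketch: one decorator gives every scripton the same entry/exit logging.
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("lmi.scripton")

def logged(func):
    """Log entry and exit of a scripton function in one central place."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling %s", func.__name__)
        result = func(*args, **kwargs)
        log.info("%s finished", func.__name__)
        return result
    return wrapper

@logged
def vg_create(name, devices):
    return (name, list(devices))

vg_create("mygroup", ["/dev/sda1", "/dev/sdb1"])
```

Centralizing the policy in the decorator means lmishell could later redirect, silence, or reformat the output without touching any scripton.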
openlmi-devel mailing list
openlmi-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/openlmi-devel