Hi.
Trying to get my head around what should be a basic/trivial process.
I've got a remote VM. I can fire up a local term and then ssh into the remote VM with no problem. I can then run the remote functions; all is good.
However, I'd really like to have some process on the local side that would let me do all of the above from a shell/program on the local side.
Pseudo process:
- spin up the remote term of user1@1.2.3.4
- track the remote term/session, so I could "log into it" and see what's going on with the initiated processes
- perform some dir functions as user1 on the remote system
- run appA as user1 on 1.2.3.4 (long running)
- run appB as user1 on 1.2.3.4 (long running)
- etc.
- when the apps/processes are finished, shut down the "remote term"
I'd prefer to be able to do all of this without actually having a "physical" local term generated/displayed on the local desktop.
I'm going to be running a bunch of long-running apps in the cloud, so I'm trying to walk through the appropriate process/approach for handling this.
Sites/articles/thoughts are more than welcome.
Thanks guys/gals! (and any AI that's commenting as well!!)
On 11/08/2016 04:02 AM, bruce wrote:
[snip]
1. Set up ssh keys so you don't need to use passwords between the two systems (no interaction).
2. Launch your tasks on the remote VM using screen over ssh by doing something like:
    ssh user1@1.2.3.4 screen -d -m -S firstsessionname "command you wish to run on the VM"
    ssh user1@1.2.3.4 screen -d -m -S secondsessionname "second command you wish to run on the VM"
3. If you want to check on the tasks, log into the VM via ssh interactively and check the various screen sessions. I recommend setting the screen session names via the "-S" option so they're easier to differentiate.
That should do it. The items in step 2 could be done in a shell script if you're lazy like me. :-)
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital    ricks@alldigital.com -
- AIM/Skype: therps2        ICQ: 226437340       Yahoo: origrps2     -
-                                                                    -
- The trouble with troubleshooting is that trouble sometimes         -
- shoots back.                                                       -
----------------------------------------------------------------------
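For the record, those three steps wrapped into the shell script Rick mentions might look like this minimal sketch (the host, session names, and the appA/appB commands are hypothetical placeholders):

    #!/bin/bash
    # launch.sh - hypothetical sketch of steps 1-3 as one script.
    # Assumes passwordless key-based ssh to the VM is set up (step 1).
    HOST="user1@1.2.3.4"

    # Step 2: start each long-running app in its own named, detached
    # screen session on the VM.
    ssh "$HOST" screen -d -m -S appA ./appA
    ssh "$HOST" screen -d -m -S appB ./appB

    # Step 3 sanity check: list the sessions now running on the VM.
    ssh "$HOST" screen -ls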
Hey Rick!!
Thanks for the reply...
That was kind of going to be my thinking..
I came across some apps that appear to be devOps related, one of which was ClusterSSH/Cluster SSH.
As far as I can tell, it appears to allow you to set up the given IP address, the user to run the ssh connection as, and the ssh config files, to allow the user to connect to/access the given term sessions for the remote instances.
The app also appears to allow you to then run different commands on the given systems. (Not sure if you can "package" the commands you want to run, so you can run groups of commands against different groups of systems.)
I'm currently looking at information on this, as well as a few others.
My use case has a bunch of VMs on DigitalOcean, so I need a way of "managing"/starting the processes on the machines - manual ain't going to cut it when I have 40-50 to test, and if things work, it will easily scale to 300-500, where I have to spin up, run the stuff, and then spin them down..
Actually, it would be good to have a GUI/tool that implements the DO/DigitalOcean API to generate/create, run, create the snapshots, and destroy, to save costs.
Whew!!
Thoughts/Comments etc..
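One note on the API side: DigitalOcean's v2 API is plain REST, so the create/destroy half can be scripted with curl alone. A minimal sketch, assuming a valid API token in $DO_TOKEN; the name, region, size, and image slugs are illustrative, so check DO's docs for current values:

    # Create a droplet (slug values below are illustrative).
    curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
        -H "Authorization: Bearer $DO_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"name":"worker-1","region":"nyc3","size":"512mb","image":"fedora-24-x64"}'

    # Destroy it when finished (the droplet ID comes from the create response).
    curl -s -X DELETE "https://api.digitalocean.com/v2/droplets/DROPLET_ID" \
        -H "Authorization: Bearer $DO_TOKEN"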
On 11/08/2016 10:00 AM, bruce wrote:
[snip]
Ok, what I suggested was really aimed at launching processes in the background on a remote machine in a way where you could check on them.
ClusterSSH (a.k.a. "cssh") is a different beastie. It's a GUI tool that allows you to open parallel ssh sessions to a whole bunch of remote machines simultaneously. We use it a lot, as we have about 300 machines in our data center broken into "clusters" that do specific things.
ClusterSSH opens a small terminal window for each machine so you can see what's going on. You can enter commands for THAT machine in that window as well. It also opens a "master" command line window, and whatever you type into that master window gets sent to ALL of the open windows. Fairly handy, but be REALLY careful, as sometimes (due to network load, etc.) some keystrokes may NOT make it to all of the open windows. This can be disastrous if you're, say, editing files and such.
cssh does offer a way to specify a command to be sent to all of the windows by using its "-a" option. I'd imagine it'd be something like:
    cssh -a "screen -d -m -S firstsessionname 'command you wish to run on the VM'" user1@1.2.3.4 user1@5.6.7.8
which should run that screen command on the two machines specified. I can't speak to that too well. We don't typically use it like that--we tend to use it in the interactive mode only.
I can give you examples of cssh usage (such as an /etc/clusters file and such) if you want to go down that road.
On Tue, Nov 8, 2016 at 1:33 PM, Rick Stevens ricks@alldigital.com wrote:
[snip]
Hey Rick!!
Thanks for the reply..
Slowly slogging through this.
My use case (for now) has DigitalOcean (DO) as the cloud provider. The project is working toward being able to spin up/down 500-1000 droplets as needed.
DO provides the API to create/delete the actual instances, so the cost of the actual cloud usage can be managed. This part is trivial.
However, on the devops side, it appears that clusterSSH would/will be useful (needed) in order to manage/track the overall progress of the running apps on the droplets.
I'm envisioning a process that allows:
1) The ability to use a "config" process to spin up the terms for the required droplets
   - droplet would have user1:passwd1 and use an ssh key
   - droplet would have a known IP address
   - as the process/droplets change, the required config file could be script-generated
2) The ability to spin down the terms
3) The ability to "see" the progress of the generated/viewed terms
   - The assumption is that if the screen process is used to run the remote processes, then when the term is reconnected via clusterSSH/ssh, the screen session can automatically be attached to see the current status/process...
4) As the term is spun up, the process needs to be able to invoke/run the remote process

Also/aside: I need to be able to generate a remote command to check if a given process is running on the droplet. This should be trivial once the clusterSSH stuff is nailed down - use the same/similar process as for generating the remote screen sessions on the remote droplets.
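A minimal sketch of that remote check, assuming key-based ssh as user1, pgrep available on the droplets, and a hypothetical process name appA (the IPs are placeholders too):

    # Report, per droplet, whether appA is running.
    for ip in 1.2.3.4 5.6.7.8; do
        if ssh "user1@$ip" pgrep -x appA > /dev/null; then
            echo "$ip: appA running"
        else
            echo "$ip: appA NOT running"
        fi
    done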
If the project is running 100-200 droplets, there's no way to check all the droplets within a GUI on the desktop, so there should be a way to "view" 20-30 at a time...
So... whatever you have regarding the clusterSSH/Screen session part, hit me up.
Or, if someone has thoughts/comments, feel free to post!
Thanks
Have you tried "screen"?
On Fri, Nov 11, 2016 at 10:02 AM, bruce badouglas@gmail.com wrote:
[snip]
On 16-11-11 10:19:43, Saint Michael wrote:
> Have you tried "screen"?
Or tmux, if it seems screen would work but you hate it.
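For anyone who does prefer tmux, rough equivalents of the screen commands used in this thread would be something like the following (the session name and command are placeholders):

    # Start a detached, named session running a command:
    tmux new-session -d -s firstsessionname 'command to run on the VM'
    # List the sessions:
    tmux ls
    # Re-attach (CTRL-B then D detaches again, leaving it running):
    tmux attach -t firstsessionname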
On 11/11/2016 07:02 AM, bruce wrote:
[snip]
> I'm envisioning a process that allows:
> 1) The ability to use a "config" process to spin up the terms for the required droplets
>    - droplet would have user1:passwd1 and use an ssh key
>    - droplet would have a known IP address
>    - as the process/droplets change, the required config file could be script-generated
Uhm, that sounds like you'd use a PHP/Perl/shell script to me.
> 2) The ability to spin down the terms
Not sure what you mean there.
- The ability to "see" the progress of the generated/viewed terms
-The assumption is if the Screen process is used to run the remote processes, then when the term is reconnected via clusterSSH --ssh - then the screen session can automatically be attached, to then see the current status/process...
That's possible if you know the screen session name. If you start a screen session by specifying the session name using screen's "-S" option, be aware that only sets part of the session name. screen prepends the PID of the screen session to the start of that name. In other words, if I were to do
    screen -S rickterm -d -m "command to run"
the screen session would actually be named "PID.rickterm", where "PID" is the process ID of the screen session (as shown by "screen -ls"). I can re-attach to it by doing "screen -r -S rickterm" and detach from it (leaving it running) by doing "CTRL-A, D" inside the screen session. Just wanted you to be aware of the naming conventions used.
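A worked round trip of that naming behavior might look like this (the session name and the top command are only illustrative):

    screen -S rickterm -d -m top   # start a detached session running top
    screen -ls                     # lists it as e.g. "12345.rickterm (Detached)"
    screen -r rickterm             # re-attach; the PID prefix is optional if unique
    # CTRL-A then D detaches again, leaving top running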
> 4) As the term is spun up, the process needs to be able to invoke/run the remote process
When you launch a screen session and do not provide a command to run as an argument to the screen command as I've shown above, screen just starts a shell session. If you DO provide a command to run, note that the screen session will terminate when that command completes unless you run the command in a nohup session inside screen:
screen -S rickterm -d -m "nohup command to run &"
> Also/aside: I need to be able to generate a remote command to check if a given process is running on the droplet. This should be trivial once the clusterSSH stuff is nailed down - use the same/similar process as for generating the remote screen sessions on the remote droplets.
screen just provides a detached terminal, typically a shell session. What you do inside that session is up to you, but understand that when the commands end, so does the screen session.
> If the project is running 100-200 droplets, there's no way to check all the droplets within a GUI on the desktop, so there should be a way to "view" 20-30 at a time...
> So... whatever you have regarding the clusterSSH/screen session part, hit me up.
The /etc/clusters (or ~/.clusterssh/clusters) file allows you to specify "clusters" of machines. For example, we have 22 "ingest" nodes for a specific purpose. Using cssh to open sessions for all 22 machines at once makes each window pretty small on my screen, so I actually do them in two groups of 11. My /etc/clusters file has these entries:
mioing-1 rstevens@ing1-r1 rstevens@ing2-r1 rstevens@ing3-r1 rstevens@ing4-r1 rstevens@ing5-r1 rstevens@ing6-r1 rstevens@ing7-r1 rstevens@ing8-r1 rstevens@ing9-r1 rstevens@ing10-r1 rstevens@ing11-r1
mioing-2 rstevens@ing12-r1 rstevens@ing13-r1 rstevens@ing14-r1 rstevens@ing15-r1 rstevens@ing16-r1 rstevens@ing17-r1 rstevens@ing18-r1 rstevens@ing19-r1 rstevens@ing20-r1 rstevens@ing21-r1 rstevens@ing22-r1
So, I can "cssh mioing-1" to open ssh terminals to the first 11 machines, and "cssh mioing-2" to open terminals to the second 11 machines. I could also do:
cssh -a "top" mioing-1
that would open cssh sessions to the first 11 machines and run a "top" command on each, with the top data showing up in their individual terminals.
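Tying this back to the script-generated config and "view 20-30 at a time" ideas earlier in the thread, a hypothetical sketch that chunks a droplet IP list into clusters-file tags of 20 hosts each (droplet-ips.txt, the tag names, and user1 are all placeholders):

    #!/bin/bash
    # gen-clusters.sh - build ~/.clusterssh/clusters from a list of droplet
    # IPs (one per line in droplet-ips.txt), 20 hosts per cluster tag.
    split -l 20 droplet-ips.txt chunk.
    n=1
    : > ~/.clusterssh/clusters
    for f in chunk.*; do
        {
            printf 'droplets-%d' "$n"
            while read -r ip; do printf ' user1@%s' "$ip"; done < "$f"
            printf '\n'
        } >> ~/.clusterssh/clusters
        n=$((n + 1))
        rm -f "$f"
    done
    # Afterwards: cssh droplets-1, cssh droplets-2, ...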
Hope that helps.
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital    ricks@alldigital.com -
- AIM/Skype: therps2        ICQ: 226437340       Yahoo: origrps2     -
-                                                                    -
- Real Time, adj.: Here and now, as opposed to fake time, which only -
- occurs there and then                                              -
----------------------------------------------------------------------