All;
We have a database failover process in place that leverages a component that also does load balancing. For reasons specific to the client's environment we want N+ (many) of these load-balancing components; however, this means we will have many failover scripts on different servers that will all try to force a failover at the same time if a failure occurs.
Question: will flock() properly lock a file in a way that incoming commands over ssh from multiple other servers will respect the lock?
Thanks in advance
On 3/20/24 12:36, Sbob wrote:
We have a database failover process in place that leverages a component that also does load balancing. For reasons specific to the client's environment we want N+ (many) of these load-balancing components; however, this means we will have many failover scripts on different servers that will all try to force a failover at the same time if a failure occurs.
Question: will flock() properly lock a file in a way that incoming commands over ssh from multiple other servers will respect the lock?
Yes, the command is still running on the same system. It doesn't matter where the connection comes from.
Hi
On Wed, 20 Mar 2024 12:56:33 -0700 Samuel Sieb wrote:
On 3/20/24 12:36, Sbob wrote:
Question: will flock() properly lock a file in a way that incoming commands over ssh from multiple other servers will respect the lock?
Yes, the command is still running on the same system. It doesn't matter where the connection comes from.
Right, but you can also define the failover as a systemd service. systemd will do the locking itself, since "systemctl start X" is a no-op if X is already started.
In addition, you get the control with systemctl and the log in the journal.
For example:
---------- X.service ----------
[Unit]
Description=%n

[Service]
Type=oneshot
SyslogIdentifier=%N
RemainAfterExit=yes
ExecStart=command doing the failover
---------- X.service ----------