 modules/enterprise/agent/src/etc/rhq-agent-wrapper-ec2 | 122 ++++++
 modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagement.java | 4
 modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagementMBean.java | 2
 modules/enterprise/gui/portal-war/src/main/java/org/rhq/enterprise/gui/startup/StartupServlet.java | 8
 modules/enterprise/server/container/pom.xml | 2
 modules/enterprise/server/container/src/main/scripts/rhq-container.build.xml | 3
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerBean.java | 15
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerLocal.java | 9
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/SyncEndpointAddressException.java | 20 +
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractJobWrapper.java | 10
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractTypeServerPluginContainer.java | 1
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ConcurrentJobWrapper.java | 9
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ScheduledJobInvocationContext.java | 70 +++
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulJobWrapper.java | 9
 modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulScheduledJobInvocationContext.java | 65 +++
 modules/enterprise/server/plugins/cloud/pom.xml | 146 +++++++
 modules/enterprise/server/plugins/cloud/src/main/java/org/rhq/enterprise/server/plugins/cloud/CloudServerPluginComponent.java | 193 ++++++++++
 modules/enterprise/server/plugins/cloud/src/main/resources/META-INF/rhq-serverplugin.xml | 57 ++
 modules/plugins/rhq-agent/src/main/java/org/rhq/plugins/agent/AgentServerComponent.java | 3
 modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml | 11
 pom.xml | 12
 21 files changed, 770 insertions(+), 1 deletion(-)
New commits:

commit fcd0a45565086410ecfbc443b7312743a6bb017d
Merge: 0f4f948 0846d35
Author: John Sanda <jsanda@redhat.com>
Date:   Mon Mar 14 15:30:51 2011 -0400
Merge branch 'master' of ssh://git.fedorahosted.org/git/rhq/rhq
commit 0f4f94855bca6fae56f83919d378c8168e648410
Author: John Sanda <jsanda@redhat.com>
Date:   Mon Mar 14 15:14:31 2011 -0400
Updating version of cloud server plugin
diff --git a/modules/enterprise/server/plugins/cloud/pom.xml b/modules/enterprise/server/plugins/cloud/pom.xml index 8d10fcd..b66ad28 100644 --- a/modules/enterprise/server/plugins/cloud/pom.xml +++ b/modules/enterprise/server/plugins/cloud/pom.xml @@ -4,7 +4,7 @@ <parent> <groupId>org.rhq</groupId> <artifactId>rhq-enterprise-server-plugins-parent</artifactId> - <version>3.0.1-SNAPSHOT</version> + <version>4.0.0-SNAPSHOT</version> </parent>
<modelVersion>4.0.0</modelVersion>
commit 332f77eb8450aed98d2c33935f47680b7d29e392
Author: John Sanda <jsanda@redhat.com>
Date:   Mon Mar 14 15:09:53 2011 -0400
Squashing 13 commits from ec2 branch

first commit message: Adding new operation to rhq-agent plugin to update the server endpoint url
Invoking the switchToServer operation will result in the agent switching over to the specified server. In an EC2 deployment, the server can invoke this operation whenever its IP address has changed. The agent will then have a valid server endpoint url from which it can download its failover list.
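The operation described above is exposed through the agent's management MBean. The following is a simplified, self-contained sketch of how such a JMX operation can be registered and invoked; the interface, class, object name, and URLs here are illustrative stand-ins, not the actual AgentManagementMBean:

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SwitchToServerSketch {

    // Illustrative management interface; the real AgentManagementMBean
    // declares many more operations.
    public interface DemoAgentMBean {
        void switchToServer(String server);

        String getServer();
    }

    public static class DemoAgent implements DemoAgentMBean {
        private volatile String server = "http://old-server:7080"; // illustrative

        public void switchToServer(String server) {
            this.server = server;
        }

        public String getServer() {
            return server;
        }
    }

    public static String demo() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=DemoAgent");
        DemoAgent agent = new DemoAgent();
        mbs.registerMBean(agent, name);

        // The server side would invoke this remotely; locally the call
        // looks like this:
        mbs.invoke(name, "switchToServer",
            new Object[] { "http://new-server:7080" },
            new String[] { String.class.getName() });

        mbs.unregisterMBean(name);
        return agent.getServer();
    }
}
```

After the invocation, the agent holds the new endpoint and can fetch its failover list from it.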
second commit message: Initial commit for rhq-agent-wrapper-ec2 script
This is a wrapper script for rhq-agent-wrapper.sh that additionally configures the server endpoint url and will also set up the agent to run as a service.
third commit message: Adding logic to check for and skip initialization if it has already run
The initialization function only needs to run the first time an agent runs so that the server endpoint url can be specified. That url will be supplied in the AMI's user-defined data. Also adding logic to force initialization to run when specifying the <init> cmd line arg.
fourth commit message: Adding ability for plugin jobs to persist data across invocations of jobs
Server plugin scheduled jobs previously had no mechanism provided by the plugin framework for persisting data across invocations of the job. Quartz, however, provides this capability when using a StatefulJob. Server plugin jobs run as StatefulJob objects when they do not run concurrently. The only change needed then was to expose a hook to the JobDataMap. A map backed by the JobDataMap has been added to ScheduledJobInvocationContext. Anything added to this map by the server plugin job will be persisted from one invocation of the job to the next.
Initial commit for cloud plugin which at this point is just testing that persisting job data works as expected.
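The persistence mechanism can be sketched in plain Java. This is a simplified stand-in, not the actual RHQ or Quartz API: the shared map plays the role of the Quartz JobDataMap, and Context plays the role of ScheduledJobInvocationContext:

```java
import java.util.HashMap;
import java.util.Map;

public class JobDataPersistenceSketch {

    // Stands in for the Quartz JobDataMap that a stateful job writes back
    // out after each execution (simplified to an in-memory map).
    public static final Map<String, String> jobDataMap = new HashMap<>();

    // Stands in for the context handed to the plugin job; it is backed by
    // the job data map, so puts survive into the next invocation.
    public static class Context {
        private final Map<String, String> data;

        public Context(Map<String, String> data) {
            this.data = data;
        }

        public String get(String key) {
            return data.get(key);
        }

        public void put(String key, String value) {
            data.put(key, value);
        }
    }

    // A job that counts its own invocations by persisting a counter.
    public static void runJobOnce() {
        Context ctx = new Context(jobDataMap);
        String stored = ctx.get("invocationCount");
        int count = (stored == null) ? 0 : Integer.parseInt(stored);
        ctx.put("invocationCount", String.valueOf(count + 1));
    }

    public static void main(String[] args) {
        runJobOnce();
        runJobOnce();
        // The counter survives because the context is backed by the
        // (persisted) job data map.
        System.out.println(jobDataMap.get("invocationCount")); // prints 2
    }
}
```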
fifth commit message: Adding support for syncing the server endpoint url at start up.
The behavior is turned off by default. It can be enabled by setting
rhq.sync.endpoint-address=true
in rhq-server.properties. When this is enabled, the server at start up will compare its endpoint address to the host name/address found on the host machine. If they differ, the server endpoint address will be updated to the value found on the host machine. This addresses the first part of the problem of address/host name changes that will occur in EC2 when a server machine is restarted.
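A minimal sketch of the comparison logic, assuming the machine's host name comes from InetAddress.getLocalHost(); the method name and sample addresses are illustrative, not the actual ServerManagerBean code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class EndpointSyncSketch {

    // Decides what the server's endpoint address should be: if the stored
    // endpoint differs from the host machine's name, the machine wins.
    public static String syncedAddress(String storedEndpoint, String machineHostName) {
        if (machineHostName.equals(storedEndpoint)) {
            return storedEndpoint; // nothing to do
        }
        return machineHostName; // address changed (e.g. EC2 instance restart)
    }

    public static void main(String[] args) throws UnknownHostException {
        // The real sync compares the local host name against the stored
        // server address at start up.
        String machine = InetAddress.getLocalHost().getHostName();
        String stored = "ec2-203-0-113-10.compute-1.amazonaws.com"; // illustrative
        System.out.println(syncedAddress(stored, machine));
    }
}
```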
sixth commit message: Restricting plugins to storing only strings in the job data
Quartz is already configured to only allow strings to be stored in the job data map when a job is scheduled for concurrent execution. Restricting entries in the map to strings is recommended to avoid the class versioning errors that are common during deserialization. ScheduledJobInvocationContext has been refactored a bit so that the underlying map is no longer directly exposed. To edit values in the map it now exposes get/put methods that ensure only strings are stored in the map. There is still a getProperties() method; however, it returns a read-only view of the data map.
seventh commit message: Updating plugin descriptor to make cloud plugin clustered
Also refactoring plugin to work with new methods in ScheduledJobInvocationContext
eighth commit message: Only allow job data to be persisted when the job is not marked concurrent
Adding logic such that persisting job data in the ScheduledJobInvocationContext is only supported when the job is not marked as concurrent. If the job is marked as concurrent, and the plugin calls a method to access or update properties, an exception will be thrown.
Also adding some javadocs to explain how/when persisting job data is supported
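The sixth and eighth commit messages together describe a context that accepts only strings and refuses property access when the job runs concurrently. A simplified sketch of that behavior (the class name and constructor are illustrative, not the real ScheduledJobInvocationContext):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class GuardedJobContextSketch {

    private final boolean concurrent;
    private final Map<String, String> jobData = new HashMap<>();

    public GuardedJobContextSketch(boolean concurrent) {
        this.concurrent = concurrent;
    }

    // Job data is only meaningful for stateful (non-concurrent) jobs, so
    // every accessor guards against concurrent jobs.
    private void checkStateful() {
        if (concurrent) {
            throw new UnsupportedOperationException(
                "Job data is only supported for non-concurrent (stateful) jobs.");
        }
    }

    // Only String keys/values are accepted, sidestepping the class
    // versioning problems that arbitrary serialized objects can cause.
    public void put(String key, String value) {
        checkStateful();
        jobData.put(key, value);
    }

    public String get(String key) {
        checkStateful();
        return jobData.get(key);
    }

    // A read-only view, so callers cannot bypass the guards above.
    public Map<String, String> getJobData() {
        checkStateful();
        return Collections.unmodifiableMap(jobData);
    }
}
```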
ninth commit message: Fixing methods to throw the correct exception type
tenth commit message: Adding logic to detect changed addresses
eleventh commit message: Schedule the 'switchToServer' agent resource operation
twelfth commit message: Exposing address sync job as an operation that can be manually invoked.
This is a first cut at this plugin operation. Some error handling still needs to be added, along with some logging.
thirteenth commit message: Adding logic to purge stale servers from the sync list
If a server has been taken down and removed from the system, then it also needs to be removed from the list of servers whose addresses we check to see if they need to be synced.
Adding a bunch of debug logging.
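The purge step amounts to filtering the sync list against the set of servers still registered in the system. A small illustrative sketch (method and parameter names are hypothetical, not the actual plugin code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StaleServerPurgeSketch {

    // Returns a copy of the sync list with entries dropped for servers that
    // are no longer known to the system (e.g. taken down and removed).
    public static List<String> purgeStale(List<String> syncList, Set<String> knownServers) {
        List<String> purged = new ArrayList<>(syncList);
        purged.removeIf(server -> !knownServers.contains(server));
        return purged;
    }
}
```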
Conflicts:
modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml
diff --git a/modules/enterprise/agent/src/etc/rhq-agent-wrapper-ec2 b/modules/enterprise/agent/src/etc/rhq-agent-wrapper-ec2 new file mode 100755 index 0000000..3808ba0 --- /dev/null +++ b/modules/enterprise/agent/src/etc/rhq-agent-wrapper-ec2 @@ -0,0 +1,122 @@ +#!/bin/bash + +# ============================================================================= +# RHQ AGENT LINUX/EC2 Boot Script +# +# This script is a wrapper script for rhq-agent-wrapper.sh which is used to +# start/stop the agent on LINUX/UNIX platforms. This script provides some +# additional functionality for automatically configuring the agent in an EC2 +# environment. +# +# When the agent is started with this script, it will check to see if the server +# endpoint URL has been configured. If it finds that the agent has not already +# been configured, the script fetches parameters specified as part of the AMI's +# user-defined data, parses the parameters, looking for the jon.server.url +# parameter. The rhq.agent.server.bind-address setting is then configured with +# the parameter's value. If the jon.server.url parameter is not found, an error +# message is logged and the agent will not be started.
+# +# To force initialization or to re-initialize the agent run, +# +# $ rhq-agent-wrapper-ec2 init +# ============================================================================= + +RHQ_AGENT_HOME=/usr/share/rhq-agent-<VERSION> +SCRIPT_NAME=rhq-wrapper-agent-ec2 + +function load_user_data +{ + user_data_url="http://169.254.169.254/1.0/user-data/" + user_data_file="/tmp/user_data" + + #curl $user_data_url > $user_data_file + + user_data=`cat $user_data_file` +} + +function parse_jon_server_url +{ + ORIG_IFS=$IFS + IFS=$',' + + read -rda params <<< $user_data + + IFS=$ORIG_IFS + + for param in $params + do + idx=`expr index $param =` + key=${param:0:idx - 1} + + if [ $key = jon.server.url ]; then + jon_server_url=${param:idx} + return 0 + fi + done +} + +function init +{ + force_init=$1 + default_prefs=~/.java/.userPrefs/rhq-agent/default/prefs.xml + + if [ -s $default_prefs ] && [ "force" != $force_init ]; then + echo "Default configuration found at $default_prefs. Skipping " \ + "configuration. Run $SCRIPT_NAME <init> to force configuration " \ + "of server endpoint URL." + fi + + load_user_data + parse_jon_server_url + + if [ -z $jon_server_url ]; then + echo "Warning: Failed to find jon.server.url parameter. You need to " \ + "manually set the rhq.agent.server.bind-address property in " \ + "<RHQ_AGENT>/conf/agent-configuration.xml. Cannot start agent."
+ return 1 + fi + + export RHQ_AGENT_CMDLINE_OPTS="--daemon $RHQ_AGENT_CMDLINE_OPTS" + export RHQ_AGENT_CMDLINE_OPTS="-Drhq.agent.server.bind-address=$jon_server_url $RHQ_AGENT_CMDLINE_OPTS" + + return 0 +} + + +cd $RHQ_AGENT_HOME/bin + +case "$1" in +'init') + init "force" + ;; +'start') + if init + then + ./rhq-agent-wrapper.sh start + else + echo "Failed: Cannot start the agent due to previous initialization errors" + fi + ;; +'stop') + ./rhq-agent-wrapper.sh stop + ;; +'kill') + ./rhq-agent-wrapper.sh kill + ;; +'status') + ./rhq-agent-wrapper.sh status + ;; +'restart') + ./rhq-agent-wrapper.sh restart + ;; +'quiet-restart') + ./rhq-agent-wrapper.sh quiet-restart + ;; +'echo') + echo "script: $0" + ;; +*) + echo "Usage: $0 { init | start | stop | kill | restart | status }" + exit 1 + ;; +esac diff --git a/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagement.java b/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagement.java index 42dd631..1a51f43 100644 --- a/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagement.java +++ b/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagement.java @@ -87,6 +87,10 @@ public class AgentManagement implements AgentManagementMBean, MBeanRegistration m_agent = agent; }
+ public void switchToServer(String server) { + m_agent.switchToServer(server); + } + public void restart() { // restarting the agent is a suicidal act - this MBean instance will // be unregistered after we shutdown. Therefore, we must do this in a diff --git a/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagementMBean.java b/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagementMBean.java index 62108a1..1be62dc 100644 --- a/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagementMBean.java +++ b/modules/enterprise/agent/src/main/java/org/rhq/enterprise/agent/AgentManagementMBean.java @@ -82,6 +82,8 @@ public interface AgentManagementMBean { */ String PLUGIN_INFO_MD5 = "md5";
+ void switchToServer(String server); + /** * This will perform an agent <i>hot-restart</i>. The agent will be {@link #shutdown()} and then immediately started * again. This is usually called after a client has diff --git a/modules/enterprise/gui/portal-war/src/main/java/org/rhq/enterprise/gui/startup/StartupServlet.java b/modules/enterprise/gui/portal-war/src/main/java/org/rhq/enterprise/gui/startup/StartupServlet.java index ae89a83..b5c0ac9 100644 --- a/modules/enterprise/gui/portal-war/src/main/java/org/rhq/enterprise/gui/startup/StartupServlet.java +++ b/modules/enterprise/gui/portal-war/src/main/java/org/rhq/enterprise/gui/startup/StartupServlet.java @@ -51,6 +51,7 @@ import org.rhq.enterprise.server.alert.engine.internal.AlertConditionCacheCoordi import org.rhq.enterprise.server.auth.SessionManager; import org.rhq.enterprise.server.auth.prefs.SubjectPreferencesCache; import org.rhq.enterprise.server.cloud.instance.ServerManagerLocal; +import org.rhq.enterprise.server.cloud.instance.SyncEndpointAddressException; import org.rhq.enterprise.server.core.AgentManagerLocal; import org.rhq.enterprise.server.core.CustomJaasDeploymentServiceMBean; import org.rhq.enterprise.server.core.comm.ServerCommunicationsServiceUtil; @@ -165,6 +166,13 @@ public class StartupServlet extends HttpServlet { // Establish the current server mode for the server. This will move the server to NORMAL // mode from DOWN if necessary. This can also affect comm layer behavior. serverManager.establishCurrentServerMode(); + if ("true".equals(System.getProperty("rhq.sync.endpoint-address", "false"))) { + try { + serverManager.syncEndpointAddress(); + } catch (SyncEndpointAddressException e) { + log("Failed to sync server endpoint address.", e); + } + } }
/** diff --git a/modules/enterprise/server/container/pom.xml b/modules/enterprise/server/container/pom.xml index 1ec4552..b4116aa 100644 --- a/modules/enterprise/server/container/pom.xml +++ b/modules/enterprise/server/container/pom.xml @@ -203,6 +203,8 @@
<property name="rhq.server.enable.ws" value="${rhq.server.enable.ws}" /> <property name="jbossws-native-dist.version" value="${jbossws-native-dist.version}" /> + + <property name="rhq.sync.endpoint-address" value="${rhq.sync.endpoint-address}"/> </ant> </tasks> </configuration> diff --git a/modules/enterprise/server/container/src/main/scripts/rhq-container.build.xml b/modules/enterprise/server/container/src/main/scripts/rhq-container.build.xml index d45932d..c98dd12 100644 --- a/modules/enterprise/server/container/src/main/scripts/rhq-container.build.xml +++ b/modules/enterprise/server/container/src/main/scripts/rhq-container.build.xml @@ -20,6 +20,8 @@ <property name="default.rhq.server.quartz.selectWithLockSQL" value="SELECT * FROM {0}LOCKS WHERE LOCK_NAME = ? FOR UPDATE" /> <property name="default.rhq.server.quartz.lockHandlerClass" value="org.quartz.impl.jdbcjobstore.StdRowLockSemaphore" />
+ <property name="rhq.sync.endpoint-address" value="false"/> + <target name="set-predeploy-prop"> <condition property="predeploy" value="true"> <or> @@ -604,6 +606,7 @@ rhq.server.plugin-scan-period-ms=${rhq.server.plugin-scan-period-ms} rhq.autoinstall.enabled=false rhq.autoinstall.database=auto rhq.autoinstall.public-endpoint-address= +rhq.sync.endpoint-address=${rhq.sync.endpoint-address}
</echo>
diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerBean.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerBean.java index cc2c565..be91c65 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerBean.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerBean.java @@ -18,6 +18,8 @@ */ package org.rhq.enterprise.server.cloud.instance;
+import java.net.InetAddress; +import java.net.UnknownHostException; import java.util.Collection; import java.util.List;
@@ -284,6 +286,19 @@ public class ServerManagerBean implements ServerManagerLocal { Server.OperationMode.MAINTENANCE.name()); }
+ public void syncEndpointAddress() throws SyncEndpointAddressException { + Server server = getServer(); + try { + String hostName = InetAddress.getLocalHost().getHostName(); + + if (!hostName.equals(server.getAddress())) { + server.setAddress(hostName); + } + } catch (UnknownHostException e) { + throw new SyncEndpointAddressException("Failed to sync endpoint address for " + server, e); + } + } + @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) public void beat() { Server server = getServer(); diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerLocal.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerLocal.java index 135db97..04e244c 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerLocal.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/ServerManagerLocal.java @@ -102,6 +102,15 @@ public interface ServerManagerLocal { void establishCurrentServerMode();
/** + * Synchronizes the endpoint address of this server with the host name or address found on the host machine. If the + * host name or address of this machine differs from {@link Server#getAddress()} then this server will be updated + * with the value of this machine's host name/address. + * + * @throws SyncEndpointAddressException + */ + void syncEndpointAddress() throws SyncEndpointAddressException; + + /** * Updates server mtime to register active heart beat */ void beat(); diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/SyncEndpointAddressException.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/SyncEndpointAddressException.java new file mode 100644 index 0000000..a3924d7 --- /dev/null +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/cloud/instance/SyncEndpointAddressException.java @@ -0,0 +1,20 @@ +package org.rhq.enterprise.server.cloud.instance; + +public class SyncEndpointAddressException extends Exception { + + public SyncEndpointAddressException() { + super(); + } + + public SyncEndpointAddressException(String message) { + super(message); + } + + public SyncEndpointAddressException(String message, Throwable cause) { + super(message, cause); + } + + public SyncEndpointAddressException(Throwable cause) { + super(cause); + } +} diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractJobWrapper.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractJobWrapper.java index 19c8eb8..5c61f8b 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractJobWrapper.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractJobWrapper.java @@ -19,13 +19,18 @@
package org.rhq.enterprise.server.plugin.pc;
+import java.io.Serializable; import java.lang.reflect.Method; +import java.util.Date; +import java.util.HashMap; +import java.util.Map; import java.util.Properties;
import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.quartz.Job; import org.quartz.JobDataMap; +import org.quartz.JobDetail; import org.quartz.JobExecutionContext; import org.quartz.JobExecutionException;
@@ -108,6 +113,9 @@ abstract class AbstractJobWrapper implements Job { */ public static final String DATAMAP_IS_CLUSTERED = DATAMAP_LEADER + "isClustered";
+ protected abstract ScheduledJobInvocationContext createContext(ScheduledJobDefinition jobDefinition, + ServerPluginContext pluginContext, ServerPluginComponent serverPluginComponent, Map<String, String> jobData); + /** * This is the method that quartz calls when the schedule has triggered. This method will * delegate to the plugin component that is responsible to do work for the plugin. @@ -229,7 +237,7 @@ abstract class AbstractJobWrapper implements Job { ScheduledJobDefinition jobDefinition = new ScheduledJobDefinition(jobId, true, jobClass, jobMethodName, scheduleType, callbackData); ServerPluginContext pluginContext = pluginManager.getServerPluginContext(pluginEnv); - params[0] = new ScheduledJobInvocationContext(jobDefinition, pluginContext, pluginComponent); + params[0] = createContext(jobDefinition, pluginContext, pluginComponent,dataMap); } catch (NoSuchMethodException e) { try { // see if there is a no-arg method of the given name diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractTypeServerPluginContainer.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractTypeServerPluginContainer.java index b8e760c..c7fc556 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractTypeServerPluginContainer.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/AbstractTypeServerPluginContainer.java @@ -19,6 +19,7 @@
package org.rhq.enterprise.server.plugin.pc;
+import java.io.Serializable; import java.util.Collection; import java.util.Collections; import java.util.HashMap; diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ConcurrentJobWrapper.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ConcurrentJobWrapper.java index 13cbfc4..dfbb941 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ConcurrentJobWrapper.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ConcurrentJobWrapper.java @@ -19,6 +19,10 @@
package org.rhq.enterprise.server.plugin.pc;
+import java.util.Map; + +import org.rhq.enterprise.server.xmlschema.ScheduledJobDefinition; + /** * The actual quartz job that the plugin container will submit when it needs to invoke * a concurrent scheduled job on behalf of a plugin. This is a normal non-stateful "job" @@ -31,4 +35,9 @@ package org.rhq.enterprise.server.plugin.pc; * @author John Mazzitelli */ public class ConcurrentJobWrapper extends AbstractJobWrapper { + @Override + protected ScheduledJobInvocationContext createContext(ScheduledJobDefinition jobDefinition, + ServerPluginContext pluginContext, ServerPluginComponent serverPluginComponent, Map<String, String> jobData) { + return new ScheduledJobInvocationContext(jobDefinition, pluginContext, serverPluginComponent); + } } diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ScheduledJobInvocationContext.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ScheduledJobInvocationContext.java index cdb0ebf..b69fc9c 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ScheduledJobInvocationContext.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/ScheduledJobInvocationContext.java @@ -19,12 +19,26 @@
package org.rhq.enterprise.server.plugin.pc;
+import java.io.Serializable; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.Map; + import org.rhq.enterprise.server.xmlschema.ScheduledJobDefinition;
/** + * <p> + * A scheduled job's invocation method can take a single argument of this type. + * If this is the case, the scheduled job's method will be passed an object of this type enabling + * the job being invoked to be given information about the invocation. + * </p> + * <p> + * When a job is <strong>not</strong> marked for concurrent execution, it can store data in the context in the form of + * key/value properties. These properties will be persisted across invocations of the job. If the job is marked as + * clustered, the job data will persist across server restarts as well. Lastly, if the job is marked to run + * concurrently, the methods for accessing/updating properties will throw an exception. + * </p> * * @author John Mazzitelli */ @@ -68,4 +82,60 @@ public class ScheduledJobInvocationContext { public ServerPluginComponent getServerPluginComponent() { return serverPluginComponent; } + + /** + * Adds a property to the context that is persisted across invocations of the job. The property is persisted across + * server restarts <strong>only if</strong> the scheduled job is declared to run as clustered. Note that this method + * will throw an exception if the job is declared for concurrent execution. + * + * @param key The property name + * @param value The property value + * @throws UnsupportedOperationException if the job is marked for concurrent execution + */ + public void put(String key, String value) { + throw new UnsupportedOperationException("This operation is only supported for stateful jobs."); + } + + /** + * Retrieves a property value from the context.
+ * + * @param key The property key + * @return The property value or <code>null</code> if the key is not found + * @throws UnsupportedOperationException if the job is marked for concurrent execution + */ + public String get(String key) { + throw new UnsupportedOperationException("This operation is only supported for stateful jobs."); + } + + /** + * Removes the property value associated with the specified key + * + * @param key The property key + * @return The value previously associated with the key or <code>null</code> if the key is not present in the context + * @throws UnsupportedOperationException if the job is marked for concurrent execution + */ + public String remove(String key) { + throw new UnsupportedOperationException("This operation is only supported for stateful jobs."); + } + + /** + * Checks to see whether or not the property key is stored in the context. + * @param key The property key + * @return <code>true</code> if the key is found, <code>false</code> otherwise. + * @throws UnsupportedOperationException if the job is marked for concurrent execution + */ + public boolean containsKey(String key) { + throw new UnsupportedOperationException("This operation is only supported for stateful jobs."); + } + + /** + * Returns a <strong>read-only</strong> view of the properties stored in the context. + * + * @return A <strong>read-only</strong> view of the properties stored in the context.
+ * @throws UnsupportedOperationException if the job is marked for concurrent execution + */ + public Map<String, String> getJobData() { + throw new UnsupportedOperationException("This operation is only supported for stateful jobs."); + } + } diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulJobWrapper.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulJobWrapper.java index d04558b..61a151a 100644 --- a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulJobWrapper.java +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulJobWrapper.java @@ -19,8 +19,12 @@
package org.rhq.enterprise.server.plugin.pc;
+import java.util.Map; + import org.quartz.StatefulJob;
+import org.rhq.enterprise.server.xmlschema.ScheduledJobDefinition; + /** * The actual quartz job that the plugin container will submit when it needs to invoke * a scheduled job on behalf of a plugin. This is a "stateful job" which tells quartz @@ -33,4 +37,9 @@ import org.quartz.StatefulJob; * @author John Mazzitelli */ public class StatefulJobWrapper extends AbstractJobWrapper implements StatefulJob { + @Override + protected ScheduledJobInvocationContext createContext(ScheduledJobDefinition jobDefinition, + ServerPluginContext pluginContext, ServerPluginComponent serverPluginComponent, Map<String, String> jobData) { + return new StatefulScheduledJobInvocationContext(jobDefinition, pluginContext, serverPluginComponent, jobData); + } } diff --git a/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulScheduledJobInvocationContext.java b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulScheduledJobInvocationContext.java new file mode 100644 index 0000000..c526400 --- /dev/null +++ b/modules/enterprise/server/jar/src/main/java/org/rhq/enterprise/server/plugin/pc/StatefulScheduledJobInvocationContext.java @@ -0,0 +1,65 @@ +package org.rhq.enterprise.server.plugin.pc; + +import java.util.Collections; +import java.util.Map; + +import org.rhq.enterprise.server.xmlschema.ScheduledJobDefinition; + +public class StatefulScheduledJobInvocationContext extends ScheduledJobInvocationContext { + private Map<String, String> jobData; + + public StatefulScheduledJobInvocationContext(ScheduledJobDefinition jobDefinition, + ServerPluginContext pluginContext, ServerPluginComponent serverPluginComponent, Map<String, String> jobData) { + super(jobDefinition, pluginContext, serverPluginComponent); + this.jobData = jobData; + } + + /** + * Adds a property to the context that is persisted across invocations of the job. 
+ * + * @param key The property name + * @param value The property value + */ + public void put(String key, String value) { + jobData.put(key, value); + } + + /** + * Retrieves a property value from the context. + * + * @param key The property key + * @return The property value or <code>null</code> if the key is not found + */ + public String get(String key) { + return jobData.get(key); + } + + /** + * Removes the property value associated with the specified key + * + * @param key The property key + * @return The value previously associated with the key or <code>null</code> if the key is not present in the context + */ + public String remove(String key) { + return jobData.remove(key); + } + + /** + * Checks to see whether or not the property key is stored in the context. + * @param key The property key + * @return <code>true</code> if the key is found, <code>false</code> otherwise. + */ + public boolean containsKey(String key) { + return jobData.containsKey(key); + } + + /** + * Returns a <strong>read-only</strong> view of the properties stored in the context. + * + * @return A <strong>read-only</strong> view of the properties stored in the context.
+ */ + public Map<String, String> getJobData() { + return Collections.unmodifiableMap(jobData); + } + +} diff --git a/modules/enterprise/server/plugins/cloud/pom.xml b/modules/enterprise/server/plugins/cloud/pom.xml new file mode 100644 index 0000000..8d10fcd --- /dev/null +++ b/modules/enterprise/server/plugins/cloud/pom.xml @@ -0,0 +1,146 @@ +<?xml version="1.0" encoding="UTF-8"?> +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + + <parent> + <groupId>org.rhq</groupId> + <artifactId>rhq-enterprise-server-plugins-parent</artifactId> + <version>3.0.1-SNAPSHOT</version> + </parent> + + <modelVersion>4.0.0</modelVersion> + + <groupId>org.rhq</groupId> + <artifactId>rhq-serverplugin-cloud</artifactId> + + <name>RHQ Enterprise Server Cloud Plugin</name> + + <scm> + <connection>scm:git:ssh://git.fedorahosted.org/git/rhq.git/modules/enterprise/server/plugins/cloud/</connection> + <developerConnection>scm:git:ssh://git.fedorahosted.org/git/rhq.git/modules/enterprise/server/plugins/cloud/</developerConnection> + </scm> + + <properties> + <scm.module.path>modules/enterprise/server/plugins/cloud/</scm.module.path> + <cobber4j.version>0.1</cobber4j.version> + </properties> + + <dependencies> + </dependencies> + + <build> + <plugins> + + <!-- + <plugin> + <artifactId>maven-dependency-plugin</artifactId> + <version>2.0</version> + <executions> + <execution> + <id>copy-libs</id> + <phase>process-resources</phase> + <goals> + <goal>copy</goal> + </goals> + <configuration> + <artifactItems> + </artifactItems> + <outputDirectory>${project.build.outputDirectory}/lib</outputDirectory> + </configuration> + </execution> + </executions> + </plugin> + --> + + <plugin> + <artifactId>maven-surefire-plugin</artifactId> + <configuration> + <excludedGroups>${rhq.testng.excludedGroups}</excludedGroups> + <!-- + <argLine>-Xdebug -Xnoagent 
-Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y</argLine> + --> + </configuration> + </plugin> + + </plugins> + </build> + + <profiles> + + <profile> + <id>dev</id> + + <properties> + <rhq.rootDir>../../../../..</rhq.rootDir> + <rhq.containerDir>${rhq.rootDir}/${rhq.defaultDevContainerPath}</rhq.containerDir> + <rhq.deploymentDir>${rhq.containerDir}/jbossas/server/default/deploy/${rhq.earName}/rhq-serverplugins</rhq.deploymentDir> + </properties> + + <build> + <plugins> + + <plugin> + <artifactId>maven-antrun-plugin</artifactId> + <version>1.1</version> + <executions> + + <execution> + <id>deploy</id> + <phase>compile</phase> + <configuration> + <tasks> + <mkdir dir="${rhq.deploymentDir}" /> + <property name="deployment.file" location="${rhq.deploymentDir}/${project.build.finalName}.jar" /> + <echo>*** Updating ${deployment.file}...</echo> + <jar destfile="${deployment.file}" basedir="${project.build.outputDirectory}" /> + </tasks> + </configuration> + <goals> + <goal>run</goal> + </goals> + </execution> + + <execution> + <id>undeploy</id> + <phase>clean</phase> + <configuration> + <tasks> + <property name="deployment.file" location="${rhq.deploymentDir}/${project.build.finalName}.jar" /> + <echo>*** Deleting ${deployment.file}...</echo> + <delete file="${deployment.file}" /> + </tasks> + </configuration> + <goals> + <goal>run</goal> + </goals> + </execution> + + <execution> + <id>deploy-jar-meta-inf</id> + <phase>package</phase> + <configuration> + <tasks> + <property name="deployment.file" location="${rhq.deploymentDir}/${project.build.finalName}.jar" /> + <echo>*** Updating META-INF dir in ${deployment.file}...</echo> + <unjar src="${project.build.directory}/${project.build.finalName}.jar" dest="${project.build.outputDirectory}"> + <patternset> + <include name="META-INF/**" /> + </patternset> + </unjar> + <jar destfile="${deployment.file}" manifest="${project.build.outputDirectory}/META-INF/MANIFEST.MF" update="true"> + </jar> 
+ </tasks> + </configuration> + <goals> + <goal>run</goal> + </goals> + </execution> + + </executions> + </plugin> + + </plugins> + </build> + </profile> + + </profiles> +</project> diff --git a/modules/enterprise/server/plugins/cloud/src/main/java/org/rhq/enterprise/server/plugins/cloud/CloudServerPluginComponent.java b/modules/enterprise/server/plugins/cloud/src/main/java/org/rhq/enterprise/server/plugins/cloud/CloudServerPluginComponent.java new file mode 100644 index 0000000..88eb3ba --- /dev/null +++ b/modules/enterprise/server/plugins/cloud/src/main/java/org/rhq/enterprise/server/plugins/cloud/CloudServerPluginComponent.java @@ -0,0 +1,193 @@ +package org.rhq.enterprise.server.plugins.cloud; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import javax.persistence.EntityManager; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; + +import org.rhq.core.domain.cloud.Server; +import org.rhq.core.domain.configuration.Configuration; +import org.rhq.core.domain.configuration.PropertySimple; +import org.rhq.core.domain.criteria.ResourceCriteria; +import org.rhq.core.domain.resource.Agent; +import org.rhq.core.domain.resource.Resource; +import org.rhq.enterprise.server.auth.SubjectManagerLocal; +import org.rhq.enterprise.server.cloud.CloudManagerLocal; +import org.rhq.enterprise.server.operation.OperationManagerLocal; +import org.rhq.enterprise.server.operation.ResourceOperationSchedule; +import org.rhq.enterprise.server.plugin.pc.ControlFacet; +import org.rhq.enterprise.server.plugin.pc.ControlResults; +import org.rhq.enterprise.server.plugin.pc.ScheduledJobInvocationContext; +import org.rhq.enterprise.server.plugin.pc.ServerPluginComponent; +import org.rhq.enterprise.server.plugin.pc.ServerPluginContext; +import org.rhq.enterprise.server.util.LookupUtil; + +public class 
CloudServerPluginComponent implements ServerPluginComponent, ControlFacet { + + private static Log log = LogFactory.getLog(CloudServerPluginComponent.class); + + public void initialize(ServerPluginContext context) throws Exception { + } + + public void start() { + } + + public void stop() { + } + + public void shutdown() { + } + + public ControlResults invoke(String name, Configuration parameters) { + if ("syncServerEndpoint".equals(name)) { + String serverName = parameters.getSimpleValue("name", null); + String serverAddr = parameters.getSimpleValue("address", null); + + if (log.isDebugEnabled()) { + log.debug("Invoked syncServerEndpoint with [name: " + serverName + ", address: " + serverAddr + "]"); + } + + ControlResults results = new ControlResults(); + + CloudManagerLocal cloudMgr = LookupUtil.getCloudManager(); + Server server = cloudMgr.getServerByName(serverName); + + if (server == null) { + log.warn("Failed to locate server. No address sync will be performed."); + results.setError("No update performed. 
Failed to find server " + serverName); + return results; + } + + if (serverAddr != null) { + SubjectManagerLocal subjectMgr = LookupUtil.getSubjectManager(); + + server.setAddress(serverAddr); + cloudMgr.updateServer(subjectMgr.getOverlord(), server); + } + + int updateCount = notifyAgents(server); + + Configuration complexResults = results.getComplexResults(); + complexResults.put(new PropertySimple("results", updateCount + " agents have been updated.")); + + if (log.isDebugEnabled()) { + log.debug("Notified " + updateCount + " agents of the address change."); + } + + return results; + } + + return null; + } + + public void syncServerEndpoints(ScheduledJobInvocationContext context) { + log.debug("Preparing to sync server endpoints."); + + CloudManagerLocal cloudMgr = LookupUtil.getCloudManager(); + List<Server> servers = cloudMgr.getAllServers(); + + purgeStaleServers(context, servers); + + for (Server server : servers) { + if (!context.containsKey("server:" + server.getName())) { + log.debug("Adding server [" + server.getName() + "] to sync list."); + context.put("server:" + server.getName(), server.getAddress()); + } else if (addressChanged(context, server)) { + if (log.isDebugEnabled()) { + log.debug("Detected address change for " + server); + log.debug("Old address was " + context.get("server:" + server.getName()) + ", new address is " + + server.getAddress()); + } + context.put("server:" + server.getName(), server.getAddress()); + notifyAgents(server); + } + } + } + + private void purgeStaleServers(ScheduledJobInvocationContext context, List<Server> servers) { + List<String> purgeList = new ArrayList<String>(); + + Set<String> serverNames = new HashSet<String>(); + for (Server server : servers) { + serverNames.add(server.getName()); + } + + for (String key : context.getJobData().keySet()) { + if (key.startsWith("server:")) { + String serverName = parseServerName(key); + if (!serverNames.contains(serverName)) { + log.debug("Detected a stale server: " +
serverName); + log.debug(serverName + " will be removed from the sync list."); + purgeList.add(key); + } + } + } + + for (String staleServer : purgeList) { + context.remove(staleServer); + } + } + + private String parseServerName(String key) { + return key.substring("server:".length()); + } + + private boolean addressChanged(ScheduledJobInvocationContext context, Server server) { + String lastKnownAddr = context.get("server:" + server.getName()); + return !server.getAddress().equals(lastKnownAddr); + } + + @SuppressWarnings("unchecked") + private int notifyAgents(Server server) { + EntityManager entityMgr = LookupUtil.getEntityManager(); + String queryString = "select r " + + "from Resource r " + + "where r.resourceType.plugin = :pluginName and " + + "r.resourceType.name = :resourceTypeName and " + + "r.agent in (select a " + + "from Agent a " + + "where a.server = :server)"; + + List<Resource> agents = entityMgr.createQuery(queryString) + .setParameter("pluginName", "RHQAgent") + .setParameter("resourceTypeName", "RHQ Agent") + .setParameter("server", server) + .getResultList(); + + if (log.isDebugEnabled()) { + log.debug("Found " + agents.size() + " agents to be updated with new server endpoint for " + server); + } + + int numUpdated = 0; + for (Resource agent : agents) { + updateAgent(agent, server); + numUpdated++; + } + + return numUpdated; + } + + private void updateAgent(Resource agent, Server server) { + OperationManagerLocal operationMgr = LookupUtil.getOperationManager(); + SubjectManagerLocal subjectMgr = LookupUtil.getSubjectManager(); + + Configuration params = new Configuration(); + params.put(new PropertySimple("server", server.getAddress())); + + ResourceOperationSchedule schedule = operationMgr.scheduleResourceOperation(subjectMgr.getOverlord(), + agent.getId(), "switchToServer", 0, 0, 0, 0, params, "Cloud Plugin: syncing server endpoint address"); + + if (log.isDebugEnabled()) { + log.debug("Scheduled address sync for agent [name: " + agent.getName() +
"]."); + log.debug("Operation schedule is " + schedule); + } + } +} diff --git a/modules/enterprise/server/plugins/cloud/src/main/resources/META-INF/rhq-serverplugin.xml b/modules/enterprise/server/plugins/cloud/src/main/resources/META-INF/rhq-serverplugin.xml new file mode 100644 index 0000000..e6944c0 --- /dev/null +++ b/modules/enterprise/server/plugins/cloud/src/main/resources/META-INF/rhq-serverplugin.xml @@ -0,0 +1,57 @@ +<generic-plugin name="CloudServerPlugin" + displayName="Cloud" + description="" + package="org.rhq.enterprise.server.plugins.cloud" + disabledOnDiscovery="false" + xmlns="urn:xmlns:rhq-serverplugin.generic" + xmlns:serverplugin="urn:xmlns:rhq-serverplugin" + xmlns:c="urn:xmlns:rhq-configuration"> + <serverplugin:plugin-component class="CloudServerPluginComponent"> + <serverplugin:control name="syncServerEndpoint" description=""> + serverplugin:parameters + <c:simple-property name="name" + required="true" + description="The server name"/> + <c:simple-property name="address" + required="false" + description="If an address is specified, it will overwrite the server's current value + in the database. 
If an address is not specified then the server's + current address will be sent down to its agents."/> + </serverplugin:parameters> + <serverplugin:results> + <c:simple-property name="results" description="Contains a status or error message"/> + </serverplugin:results> + </serverplugin:control> + </serverplugin:plugin-component> + + <serverplugin:scheduled-jobs> + <c:map-property name="syncServerEndpoints"> + <c:simple-property name="scheduleType" + type="string" + required="true" + default="periodic" + summary="true" + description="Indicates the type of trigger this synchronize job will use"> + <c:property-options> + <c:option value="periodic" default="true"/> + <c:option value="cron"/> + </c:property-options> + </c:simple-property> + <c:simple-property name="scheduleTrigger" default="300000"/> + <c:simple-property name="concurrent" + type="boolean" + required="true" + default="false" + summary="false" + readOnly="true" + description="This must always be false - only want one sync job running at a time"/> + <c:simple-property name="clustered" + type="boolean" + required="true" + default="true" + summary="false" + readOnly="true" + description="This must always be true"/> + </c:map-property> + </serverplugin:scheduled-jobs> +</generic-plugin> \ No newline at end of file diff --git a/modules/plugins/rhq-agent/src/main/java/org/rhq/plugins/agent/AgentServerComponent.java b/modules/plugins/rhq-agent/src/main/java/org/rhq/plugins/agent/AgentServerComponent.java index 2c83140..d352ae9 100644 --- a/modules/plugins/rhq-agent/src/main/java/org/rhq/plugins/agent/AgentServerComponent.java +++ b/modules/plugins/rhq-agent/src/main/java/org/rhq/plugins/agent/AgentServerComponent.java @@ -194,6 +194,9 @@ public class AgentServerComponent extends JMXServerComponent implements JMXCompo throw eie; } } + } else if (name.equals("switchToServer")) { + String server = params.getSimpleValue("server", null); + getAgentBean().getOperation(name).invoke(server); + } else { // this should really
never happen throw new IllegalArgumentException("Operation [" + name + "] does not support params"); diff --git a/modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml b/modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml index 2361c2d..438cbd5 100644 --- a/modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml +++ b/modules/plugins/rhq-agent/src/main/resources/META-INF/rhq-plugin.xml @@ -170,6 +170,17 @@ </results> </operation>
+ <operation name="switchToServer" + description="Tell the agent to immediately switch to another server. The given server can be a simple + hostname, in which case the current transport, port and transport parameters being used to + talk to the current server will stay the same. Otherwise, it will be assumed the server + is a full endpoint URL."> + <parameters> + <c:simple-property name="server" description="A simple hostname or a full endpoint URL" + required="true"/> + </parameters> + </operation> + <metric property="Trait.SigarVersion" displayName="SIGAR Version" dataType="trait" diff --git a/pom.xml b/pom.xml index ba31907..3c49e2d 100644 --- a/pom.xml +++ b/pom.xml @@ -147,6 +147,18 @@
<rhq.server.enable.ws>false</rhq.server.enable.ws>
+ <!-- + When this property is set to true, the server will compare its endpoint address + that is stored in the database against the actual host name/IP address + returned by the host system. If they differ, the address stored in the + database will be updated to the value found on the host machine. While + host name changes are/should be uncommon in a typical deployment, they + are more common in a cloud deployment such as EC2. And in a cloud + deployment like EC2, we want to turn this behavior on to ensure that + the server endpoint accurately reflects the current machine address. + --> + <rhq.sync.endpoint-address>false</rhq.sync.endpoint-address> + <!-- NOTE: The below line is a workaround for a Maven bug, where it does not expand settings.* properties used in the distributionManagement section of the POM. --> + <localRepository>${user.home}/.m2/repository</localRepository>
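The `rhq.sync.endpoint-address` comment above describes the intended behavior: compare the endpoint address stored in the database with the address the host system actually reports, and update the stored value on a mismatch. A minimal standalone sketch of that check follows; the class name, the in-memory field standing in for the database column, and the caller-supplied host address are all illustrative stand-ins, not the RHQ `ServerManagerBean` API:

```java
// Sketch of the endpoint-address sync described in the pom.xml comment.
// A plain field stands in for the server's address column in the database.
public class EndpointSyncSketch {
    private String storedAddress; // address currently persisted for this server

    public EndpointSyncSketch(String storedAddress) {
        this.storedAddress = storedAddress;
    }

    /**
     * Compare the stored address against the address reported by the host
     * system; update the stored value if they differ.
     *
     * @return true if an update was performed
     */
    public boolean syncWithHost(String hostReportedAddress) {
        if (!hostReportedAddress.equals(storedAddress)) {
            storedAddress = hostReportedAddress; // a database update in the real server
            return true;
        }
        return false;
    }

    public String getStoredAddress() {
        return storedAddress;
    }
}
```

This matters on EC2 because an instance typically gets a new hostname on every boot, so the first sync after a restart would take the update branch and subsequent syncs would be no-ops.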
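The cloud plugin in this patch tracks known servers in the scheduled job's data map under `server:<name>` keys and purges entries whose server has since been removed. The bookkeeping can be sketched in isolation as follows; a plain `Map` stands in for the `ScheduledJobInvocationContext` job data, and the class name is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Standalone sketch of the "server:<name>" job-data bookkeeping used by
// CloudServerPluginComponent.purgeStaleServers, with a plain Map standing
// in for the scheduled job's data.
public class SyncListSketch {
    static final String PREFIX = "server:";

    /** Strip the "server:" prefix to recover the server name. */
    static String parseServerName(String key) {
        return key.substring(PREFIX.length());
    }

    /** Remove tracked entries whose server no longer exists; leave other keys alone. */
    static void purgeStale(Map<String, String> jobData, Set<String> liveServers) {
        List<String> purgeList = new ArrayList<String>();
        for (String key : jobData.keySet()) {
            if (key.startsWith(PREFIX) && !liveServers.contains(parseServerName(key))) {
                purgeList.add(key); // collect first to avoid concurrent modification
            }
        }
        for (String stale : purgeList) {
            jobData.remove(stale);
        }
    }
}
```

Collecting keys into a purge list before removal mirrors the patch's own approach and avoids a `ConcurrentModificationException` from mutating the map while iterating its key set.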