NOTE: This document is for information on starting a Cloudsoft AMP Server. For information on using the AMP Client CLI to access an already running AMP Server, refer to Client CLI Reference.
If you are using the .rpm or .deb package of Cloudsoft AMP, then AMP will integrate with your OS service management. Commands such as service AMP start will work as expected, and AMP’s PID file will be stored in the normal location for your OS, such as /var/run/brooklyn.pid.
The platform-independent distributions are packaged in .tar.gz and .zip files.
To launch AMP, from the directory where AMP is unpacked, run:
% bin/amp launch > /dev/null 2>&1 & disown
With no configuration, this will launch the AMP web console and REST API on http://localhost:8081/.
No password is set, but the server is listening only on the loopback network interface for security.
Once security is configured, AMP will listen on all network interfaces by default.
See the Server CLI Reference for more information about the AMP server process.
The AMP startup script will create a file named pid_java at the root of the AMP directory, which contains the PID of the last AMP process to be started.
To stop AMP, simply send a TERM signal to the AMP process. The PID of the most recently run AMP process can be found in the pid_java file at the root of the AMP directory.
For example:
% kill $( cat pid_java )
For .tar.gz and .zip distributions of AMP, the AMP startup script will create a file named pid_java at the root of the AMP directory, which contains the PID of the last AMP process to be started. You can examine this file to discover the PID, and then test that the process is still running.
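For example, a quick liveness check run from the AMP directory might look like this (a sketch; it assumes the pid_java file of the platform-independent distribution described above):
% PID=$( cat pid_java )
% ps -p "$PID" > /dev/null && echo "AMP is running (PID $PID)" || echo "AMP is not running"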
.rpm and .deb distributions of AMP will use the normal mechanism that your OS uses, such as writing to /var/run/brooklyn.pid.
This should lead to a fairly straightforward integration with many monitoring tools - the monitoring tool can discover the expected PID, and can execute the start or stop commands shown above as necessary.
For example, here is a fragment of a monitrc file as used by Monit, for an AMP .tar.gz distribution unpacked and installed at /opt/apache-brooklyn:
check process apachebrooklyn with pidfile /opt/apache-brooklyn/pid_java
start program = "/bin/bash -c '/opt/apache-brooklyn/bin/amp launch --persist auto & disown'" with timeout 10 seconds
stop program = "/bin/bash -c 'kill $( cat /opt/apache-brooklyn/pid_java )'"
In addition to monitoring the AMP process itself, you will almost certainly want to monitor resource usage of AMP. In particular, please see the Requirements section for a discussion on AMP’s disk space requirements.
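For disk space in particular, a simple check such as the one below can be run from cron or wired into your monitoring tool (a sketch; it assumes AMP's data lives on the filesystem holding /opt/apache-brooklyn, and the 80% threshold is an arbitrary example):
% df -P /opt/apache-brooklyn | awk 'NR==2 { gsub("%","",$5); if ($5+0 > 80) print "WARNING: disk usage at " $5 "%" }'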
NOTE: This document is for information on starting an AMP Server. For information on using the AMP Client CLI to access an already running AMP Server, refer to Client CLI Reference.
To launch AMP, from the directory where AMP is unpacked, run:
% nohup bin/amp launch > /dev/null 2>&1 &
With no configuration, this will launch the AMP web console and REST API on http://localhost:8081/.
No password is set, but the server is listening only on the loopback network interface for security.
Once security is configured, AMP will listen on all network interfaces by default.
By default, AMP will write log messages at the INFO level or above to brooklyn.info.log and messages at the DEBUG level or above to brooklyn.debug.log. Redirecting the output to /dev/null prevents the default console output being written to nohup.out.
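To follow what the server is doing during startup, you can tail the debug log from the directory where AMP was launched (log file names as described above):
% tail -f brooklyn.debug.log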
You may wish to add AMP to your path; assuming you’ve done this, to get information on the supported CLI options at any time, just run brooklyn help:
% bin/brooklyn help
usage: AMP [(-q | --quiet)] [(-v | --verbose)] <command> [<args>]
The most commonly used AMP commands are:
help Display help information about brooklyn
info Display information about brooklyn
launch Starts a AMP application. Note that a BROOKLYN_CLASSPATH environment variable needs to be set up beforehand to point to the user application classpath.
See 'brooklyn help <command>' for more information on a specific command.
It is important that AMP is launched with either nohup ... & or ... & disown, to ensure it keeps running after the shell terminates.
The Server CLI arguments for persistence and HA and the catalog are described separately.
In order to have easy access to the server cli it is useful to configure the PATH environment variable to also point to the cli’s bin directory:
BROOKLYN_HOME=/path/to/brooklyn/
export PATH=$PATH:$BROOKLYN_HOME/usage/dist/target/brooklyn-dist/bin/
The amount of memory required by the AMP process depends on the usage - for example the number of entities/VMs under management.
For a standard AMP deployment, the defaults are to start with 256m, and to grow to 1g of memory.
These numbers can be overridden by setting the environment variable JAVA_OPTS before launching the brooklyn script:
JAVA_OPTS=-Xms1g -Xmx1g -XX:MaxPermSize=256m
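For example, a minimal sketch of setting these options in the shell before launching (values as above):
% export JAVA_OPTS="-Xms1g -Xmx1g -XX:MaxPermSize=256m"
% nohup bin/amp launch > /dev/null 2>&1 &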
AMP stores a task history in-memory using soft references. This means that, once the task history is large, AMP will continually use the maximum allocated memory. It will only expunge tasks from memory when this space is required for other objects within the AMP process.
The web console will by default bind to 0.0.0.0. It’s restricted to 127.0.0.1 if the --noConsoleSecurity flag is used.
To specify a local interface or the local loopback (127.0.0.1) for the web console to bind to, use:
--bindAddress <IP>
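For example, to bind the web console only to the loopback interface (a sketch using the flag above):
% bin/amp launch --bindAddress 127.0.0.1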
AMP reads configuration from a variety of places. It aggregates the configuration. The list below shows increasing precedence (i.e. the later ones will override values from earlier ones, if exactly the same property is specified multiple times):
- classpath://brooklyn/location-metadata.properties is shipped as part of AMP, containing generic metadata such as jurisdiction and geographic information about Cloud providers.
- ~/.brooklyn/location-metadata.properties (unless --noGlobalAMPProperties is specified). This is intended to contain custom metadata about additional locations.
- ~/.brooklyn/brooklyn.properties (unless --noGlobalAMPProperties is specified).
- The properties file specified with --localAMPProperties <local brooklyn.properties file>, if specified.
- Properties set with -D on the AMP (Java) command-line.
These properties are described in more detail here.
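For example, a sketch of overriding the global files with a local properties file at launch (the flag follows the list above; the path is a placeholder):
% bin/amp launch --localAMPProperties /path/to/custom-brooklyn.properties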
The default AMP directory structure includes:
- ./conf/: for configuration resources.
- ./lib/patch/: for Jar files containing patches.
- ./lib/brooklyn/: for the AMP libraries.
- ./lib/dropins/: for additional Jars.
Resources added to conf/ will be available on the classpath.
A patch can be applied by adding a Jar to the lib/patch/ directory, and restarting AMP. All jars in this directory will be at the head of the classpath.
Additional Jars should be added to lib/dropins/, prior to starting AMP. These jars will be at the end of the classpath.
The initial classpath, as set in the brooklyn script, is:
conf:lib/patch/*:lib/brooklyn/*:lib/dropins/*
Additional entries can be added at the head of the classpath by setting the environment variable BROOKLYN_CLASSPATH before running the brooklyn script.
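For example, a minimal sketch (the jar directory is a placeholder):
% export BROOKLYN_CLASSPATH=/path/to/my/extra-jars/*
% bin/brooklyn launch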
Work in progress.
The AMP web-console is loaded from the classpath as the resource classpath://brooklyn.war
.
To replace this, an alternative WAR with that name can be added at the head of the classpath. However, this approach is likely to change in a future release - consider this feature as “beta”.
The brooklyn
command line tool includes support for querying (and managing) cloud
compute resources and blob-store resources.
For example, brooklyn cloud-compute list-instances --location aws-ec2:eu-west-1
will use the AWS credentials from brooklyn.properties
and list the VM instances
running in the given EC2 region.
Use brooklyn help
and brooklyn help cloud-compute
to find out more information.
This functionality is not intended as a generic cloud management CLI, but instead
solves specific AMP use-cases. The main use-case is discovering the valid
configuration options on a given cloud, such as for imageId
and hardwareId
.
The command brooklyn cloud-compute
has the following options:
list-images
: lists VM images within the given cloud, which can be chosen when
provisioning new VMs.
This is useful for finding the possible values for the imageId
configuration.
get-image <imageId1> <imageId2> ...
: retrieves metadata about the specific images.
list-hardware-profiles
: lists the ids and the details of the hardware profiles
available when provisioning.
This is useful for finding the possible values for the hardwareId
configuration.
default-template
: retrieves metadata about the image and hardware profile that will
be used by AMP for that location, if no additional configuration options
are supplied.
list-instances
: lists the VM instances within the given cloud.
terminate-instances <instanceId1> <instanceId2> ...
: Terminates the instances with
the given ids.
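For example, a sketch of finding image and hardware options for an AWS region (assuming the same --location syntax shown for list-instances above; the region is a placeholder):
% brooklyn cloud-compute list-images --location aws-ec2:eu-west-1
% brooklyn cloud-compute list-hardware-profiles --location aws-ec2:eu-west-1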
The command brooklyn cloud-blobstore
is used to access a given object store, such as S3
or Swift. It has the following options:
list-containers
: lists the containers (i.e. buckets in S3 terminology) within the
given object store.
list-container <containerName>
: lists all the blobs (i.e. objects) contained within
the given container.
blob --container <containerName> --blob <blobName>
: retrieves the given blob
(i.e. object), including metadata and its contents.
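For example, a sketch of listing the contents of an object store (assuming the same --location syntax as the compute commands; the container name is a placeholder):
% brooklyn cloud-blobstore list-containers --location aws-ec2:eu-west-1
% brooklyn cloud-blobstore list-container my-container --location aws-ec2:eu-west-1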
NOTE: These documents are for using the AMP Client CLI tool to access a running AMP Server. For information on starting an AMP Server, refer to Server CLI Reference.
A selection of distributions of the CLI tool, br, is available to download from the download site.
The CLI can be downloaded using the most appropriate link for your OS:
Operating System | Download links |
---|---|
Windows | 64-bit 32-bit |
Linux | 64-bit 32-bit |
Mac | 64-bit 32-bit |
For Linux/Unix based systems, ensure you have execute permissions for it: chmod u+x ./br
. When using br
in your Terminal, refer to it as ./br
.
The binary is completely self-contained so you can either copy it to your bin/
directory
or add the appropriate directory above to your path:
PATH=$PATH:$HOME/apache-brooklyn/bin/brooklyn-client-cli/linux.amd64/
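For example, on Linux you might make the binary executable and check that it runs (a sketch; the usage summary it prints is shown below):
$ chmod u+x ./br
$ ./br --help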
NAME:
br - A AMP command line client application
USAGE:
br [global options] command [command options] [arguments...]
Commands whose description begins with a *
character are particularly experimental
and likely to change in upcoming releases.
COMMANDS:
access Show access control
activity Show the activity for an application / entity
add-catalog * Add a new catalog item from the supplied YAML
add-children * Add a child or children to this entity from the supplied YAML
application Show the status and location of running applications
catalog * List the available catalog applications
config Show the config for an application or entity
delete * Delete (expunge) a AMP application
deploy Deploy a new application from the given YAML (read from file or stdin)
destroy-policy Destroy a policy
effector Show the effectors for an application or entity
entity Show the entities of an application or entity
env Show the ENV stream for a given activity
invoke Invoke an effector of an application and entity
locations * List the available locations
login Login to brooklyn
policy Show the policies for an application or entity
rename Rename an application or entity
restart Invoke restart effector on an application and entity
sensor Show values of all sensors or named sensor for an application or entity
set Set config for an entity
spec Get the YAML spec used to create the entity, if available
start Invoke start effector on an application and entity
start-policy Start or resume a policy
stderr Show the STDERR stream for a given activity
stdin Show the STDIN stream for a given activity
stdout Show the STDOUT stream for a given activity
stop Invoke stop effector on an application and entity
stop-policy Suspends a policy
tree * Show the tree of all applications
version Display the version of the connected AMP
help
GLOBAL OPTIONS:
--help, -h show help
--version, -v print the version
Many commands require a “scope” expression to indicate the target on which they operate.
Where this
is required the usage statements below will use the shorthand nomenclature of <X-scope>
.
The various scopes should be replaced on the command line as:
<app-scope>
application <Name|AppID>
<entity-scope>
application <Name|AppID> entity <Name|EntityID>
<effector-scope>
application <Name|AppID> effector <Name>
application <Name|AppID> entity <Name|EntityID> effector <Name>
<config-scope>
application <Name|AppID> entity <Name|EntityID> config <ConfigID>
<activity-scope>
activity <ActivityID>
application <Name|AppID> entity <Name|EntityID> activity <ActivityID>
Many of the commands and scopes have shortened aliases:
activity act
application app
entity ent
policy pol
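For example, using the aliases above, these two commands are equivalent (application and entity names are placeholders):
br application MyApp entity MyEntity sensor
br app MyApp ent MyEntity sensor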
br login <URL> [username [password]]
Login to AMP. The CLI will prompt for a password if it is not provided. If the AMP server is running on
localhost with no security enabled, the username and password may be omitted.
On successful login, the version of the connected AMP server is shown.
br version
Show the version of the connected AMP server.
br deploy ( <FILE> | - )
Deploy an application based on the supplied YAML file or read from STDIN when -
is given instead of a file name.
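For example, a sketch of deploying a blueprint from stdin using the - form described above (the file name is a placeholder):
cat myapp.yaml | br deploy -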
br application
List the running applications.
br application <Name|AppID>
Show the detail for an application.
br <app-scope> config
Show the configuration details for an application.
br <app-scope> config <ConfigID>
Show the value for a configuration item.
br <app-scope> spec
Show the YAML specification used to create the application.
br <app-scope> rename <Name>
Rename the application to <Name>.
br <app-scope> stop
Stop an application. See below for further information on the stop
effector.
br <app-scope> start
Start an application. See below for further information on the start
effector.
br <app-scope> restart
Restart an application. See below for further information on the restart
effector.
br <app-scope> delete
Delete an application from AMP.
NOTE: Use this command with care. Even if the application / entities are still running, AMP will drop all
knowledge of them and they will be left running in an ‘orphaned’ state.
br <app-scope> entity
List the child entities for an application.
br <entity-scope> entity
List the child entities for an entity.
br <app-scope> entity <Name|EntityID>
Show the detail of an entity.
br <app-scope> entity -c <Name|EntityID>
List the child entities for an entity.
br <entity-scope> config
Show the configuration details for an entity.
br <entity-scope> config <ConfigID>
Show the value for a configuration item.
br <config-scope> set <ConfigValue>
Set the value of a configuration item.
br <entity-scope> spec
Show the YAML specification used to create the entity.
br <entity-scope> rename <Name>
Rename the entity to <Name>.
br <entity-scope> stop
Stop an entity. See below for further information on the stop
effector.
br <entity-scope> start
Start an entity. See below for further information on the start
effector.
br <entity-scope> restart
Restart an entity. See below for further information on the restart
effector.
br <app-scope> sensor
List the sensors and values for an application.
br <app-scope> sensor <SensorID>
Show the value for a sensor.
br <entity-scope> sensor
List the sensors and values for an entity.
br <entity-scope> sensor <SensorID>
Show the value for a sensor.
br <app-scope> effector
List the effectors for an application.
br <app-scope> effector <EffectorID>
Show the detail for an application effector.
br <app-scope> effector <EffectorID> invoke
Invoke the effector without any parameters.
br <app-scope> effector <EffectorID> invoke [-P <param>=<value> ...]
Invoke the effector with one or more parameters.
br <entity-scope> effector
List the effectors for an entity.
br <entity-scope> effector <EffectorID>
Show the detail for an entity effector.
br <entity-scope> effector <EffectorID> invoke
Invoke the effector without any parameters.
br <entity-scope> effector <EffectorID> invoke [-P <param>=<value> ...]
Invoke the effector with one or more parameters. If the parameter value is complex or multi-line, it may be provided in a file and referenced as:
<param>=@<FILE>
NOTE Shortcut commands have been provided for the standard start, restart and stop effectors. For example:
br <app-scope> stop
br <entity-scope> restart restartChildren=true
br <entity-scope> policy
List the policies for an entity.
br <entity-scope> policy <PolicyID>
Show the detail for an entity policy.
br <entity-scope> start-policy <PolicyID>
Start an entity policy.
br <entity-scope> stop-policy <PolicyID>
Stop an entity policy.
br <entity-scope> destroy-policy <PolicyID>
Destroy an entity policy.
br <app-scope> activity
List the activities for an application.
br <entity-scope> activity
List the activities for an entity.
br <activity-scope> activity
List the activities for an activity (i.e. its children).
br activity <ActivityID>
Show the detail for an activity.
br activity -c <ActivityID>
List the child activities of an activity.
br <activity-scope> stdin
Show the <STDIN>
stream for an activity.
br <activity-scope> stdout
Show the <STDOUT>
stream for an activity.
br <activity-scope> stderr
Show the <STDERR>
stream for an activity.
br <activity-scope> env
Show the Environment for an activity.
These commands are likely to change significantly or be removed in later versions of the AMP CLI.
br tree
br <entity-scope> add-children <FILE>
br catalog
List the application catalog.
br add-catalog <FILE>
Add a catalog entry from a YAML file.
br locations
List the location catalog.
br access
Show if you have access to provision locations.
This document provides a brief overview of using the most common AMP CLI commands, by using the CLI to deploy an application then examine various aspects of it.
The YAML blueprint for the application that will be deployed is shown at the end of this document.
NOTE: In the sample output, some additional line-wrapping has been used to aid readability.
First, login to the running AMP server. This example assumes that the AMP server
is running on localhost
; change the URL and credentials as necessary.
$ br login http://localhost:8081 admin
Enter Password: *
Connected to AMP version 0.9.0-SNAPSHOT at http://localhost:8081
The version of the connected AMP server may be viewed with the version
command:
$ br version
0.9.0-SNAPSHOT
Deploy the application; on success the Id of the new application is displayed:
$ br deploy webapp-policy.yaml
Id: lmOcZbsT
Name: WebCluster
Status: In progress
The application
command can be used to list a summary of all the running applications.
After all of the entities have been started, the application status changes to RUNNING
:
$ br application
Id Name Status Location
YeEQHwgW AppCluster RUNNING CNTBOtjI
lmOcZbsT WebCluster RUNNING CNTBOtjI
Further details of an application can be seen by using the ApplicationID or Name as a
parameter for the application
command:
$ br application WebCluster
Id: lmOcZbsT
Name: WebCluster
Status: RUNNING
ServiceUp: true
Type: org.apache.brooklyn.entity.stock.BasicApplication
CatalogItemId: null
LocationId: CNTBOtjI
LocationName: FixedListMachineProvisioningLocation:CNTB
LocationSpec: byon
LocationType: org.apache.brooklyn.location.byon.FixedListMachineProvisioningLocation
The configuration details of an application can be seen with the config
command:
$ br application WebCluster config
Key Value
camp.template.id TYWVroRz
brooklyn.wrapper_app true
The entities of an application can be viewed with the entity
command:
$ br app WebCluster entity
Id Name Type
xOcMooka WebApp org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
thHnLFkP WebDB org.apache.brooklyn.entity.database.mysql.MySqlNode
It is common for an entity to have child entities; these can be listed by providing an
entity-scope for the entity
command:
$ br app WebCluster entity WebApp entity
Id Name Type
e5pWAiHf Cluster of TomcatServer org.apache.brooklyn.entity.webapp.DynamicWebAppCluster
CZ8QUVgX NginxController:CZ8Q org.apache.brooklyn.entity.proxy.nginx.NginxController
or by using the -c (or --children) flag with the entity command:
$ br app WebCluster entity -c e5pWAiHf
Id Name Type
x0P2LRxZ quarantine org.apache.brooklyn.entity.group.QuarantineGroup
QK6QjmrW TomcatServer:QK6Q org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
As for applications, the configuration details of an entity can be seen with the config
command:
$ br app WebCluster entity thHnLFkP config
Key Value
install.unique_label MySqlNode_5.6.26
brooklyn.wrapper_app true
datastore.creation.script.url https://bit.ly/brooklyn-visitors-creation-script
camp.template.id dnw3GqN0
camp.plan.id db
onbox.base.dir /home/vagrant/brooklyn-managed-processes
onbox.base.dir.resolved true
The value of a single configuration item can be displayed by using the configuration key
as a parameter for the config
command:
$ br app WebCluster entity thHnLFkP config datastore.creation.script.url
https://bit.ly/brooklyn-visitors-creation-script
The value of a configuration item can be changed by using the set
command:
$ br app WebCluster entity thHnLFkP config datastore.creation.script.url set \"https://bit.ly/new-script\"
The sensors associated with an application or entity can be listed with the sensor
command:
$ br app WebCluster entity CZ8QUVgX sensor
Name Value
download.addon.urls: {"stickymodule":"https://bitbucket.org/nginx-goodies/n
ginx-sticky-module-ng/get/${addonversion}.tar.gz","pcr
e":"ftp://ftp.csx.cam.ac.uk/pub/software/programming/p
cre/pcre-${addonversion}.tar.gz"}
download.url: http://nginx.org/download/nginx-${version}.tar.gz
expandedinstall.dir: /home/vagrant/brooklyn-managed-processes/installs/Ngi
nxController_1.8.0/nginx-1.8.0
host.address: 192.168.52.102
host.name: 192.168.52.102
host.sshAddress: vagrant@192.168.52.102:22
host.subnet.address: 192.168.52.102
host.subnet.hostname: 192.168.52.102
http.port: 8000
install.dir: /home/vagrant/brooklyn-managed-processes/installs/Ngin
xController_1.8.0
log.location: /home/vagrant/brooklyn-managed-processes/apps/FoEXXwJ2
/entities/NginxController_CZ8QUVgX/console
main.uri: http://192.168.52.102:8000/
member.sensor.hostandport:
member.sensor.hostname: {"typeToken":null,"type":"java.lang.String","name":"ho
st.subnet.hostname","description":"Host name as known
internally in the subnet where it is running (if diffe
rent to host.name)","persistence":"REQUIRED"}
member.sensor.portNumber: {"typeToken":null,"type":"java.lang.Integer","name":"h
ttp.port","description":"HTTP port","persistence":"RE
QUIRED","configKey":{"name":"http.port","typeToken":nu
ll,"type":"org.apache.brooklyn.api.location.PortRange"
,"description":"HTTP port","defaultValue":{"ranges":[{
"port":8080},{"start":18080,"end":65535,"delta":1}]},"
reconfigurable":false,"inheritance":null,"constraint":
"ALWAYS_TRUE"}}
nginx.log.access: /home/vagrant/brooklyn-managed-processes/apps/FoEXXwJ2
/entities/NginxController_CZ8QUVgX/logs/access.log
nginx.log.error: /home/vagrant/brooklyn-managed-processes/apps/FoEXXwJ2
/entities/NginxController_CZ8QUVgX/logs/error.log
nginx.pid.file: /home/vagrant/brooklyn-managed-processes/apps/FoEXXwJ2
/entities/NginxController_CZ8QUVgX/pid.txt
nginx.url.answers.nicely: true
proxy.domainName:
proxy.http.port: 8000
proxy.https.port: 8443
proxy.protocol: http
proxy.serverpool.targets: {"TomcatServerImpl{id=QK6QjmrW}":"192.168.52.103:8080"}
run.dir: /home/vagrant/brooklyn-managed-processes/apps/FoEXXwJ2
/entities/NginxController_CZ8QUVgX
service.isUp: true
service.notUp.diagnostics: {}
service.notUp.indicators: {}
service.problems: {}
service.process.isRunning: true
service.state: RUNNING
service.state.expected: running @ 1449314377781 / Sat Dec 05 11:19:37 GMT 2015
softwareprocess.pid.file:
softwareservice.provisioningLocation: {"type":"org.apache.brooklyn.api.location.Location","i
d":"zhYBc6xt"}
webapp.url: http://192.168.52.102:8000/
Details for an individual sensor can be shown by providing the Sensor Name as a
parameter to the sensor
command:
$ br app WebCluster entity CZ8QUVgX sensor service.state.expected
running @ 1449314377781 / Sat Dec 05 11:19:37 GMT 2015
The effectors for an application or entity can be listed with the effector
command:
$ br app WebCluster effector
Name Description Parameters
restart Restart the process/service represented by an entity
start Start the process/service represented by an entity locations
stop Stop the process/service represented by an entity
$ br app WebCluster entity NginxController:CZ8Q effector
Name Description Parameters
deploy Deploys an archive ...
getCurrentConfiguration Gets the current ...
populateServiceNotUpDiagnostics Populates the attribute ...
reload Forces reload of ...
restart Restart the process/service ... restartChildren,restartMachine
start Start the process/service ... locations
stop Stop the process/service ... stopProcessMode,stopMachineMode
update Updates the entities ...
Details of an individual effector can be viewed by using the name as a parameter for
the effector
command:
$ br app WebCluster entity NginxController:CZ8Q effector update
Name: update
Description: Updates the entities configuration, and then forces reload of that configuration
Parameters:
An effector can be invoked by using the invoke
command with an effector-scope:
$ br app WebCluster entity NginxController:CZ8Q effector update invoke
Parameters can also be passed to the effector:
$ br app WebCluster entity NginxController:CZ8Q effector restart invoke -P restartChildren=true
If a parameter value is complex or spans multiple lines, it may be provided in a file and used like this:
$ br app WebCluster effector start invoke -P locations=@data.txt
Shortcut commands are available for the 3 standard effectors of start
, restart
and stop
.
These commands can be used directly with an app-scope or entity-scope:
$ br app WebCluster entity NginxController:CZ8Q restart
$ br app WebCluster stop
The policies associated with an application or entity can be listed with the policy
command:
$ br app WebCluster entity NginxController:CZ8Q policy
Id Name State
VcZ0cfeO Controller targets tracker RUNNING
Details of an individual policy may be viewed by using the PolicyID as a parameter to
the policy
command:
$ br app WebCluster entity NginxController:CZ8Q policy VcZ0cfeO
Name Value Description
group DynamicWebAppClusterImpl{id=TpbkaK4D} group
notifyOnDuplicates false Whether to notify listeners when
a sensor is published with the
same value as last time
sensorsToTrack [Sensor: host.subnet.hostname Sensors of members to be monitored
(java.lang.String), Sensor: http.port (implicitly adds service-up
(java.lang.Integer)] to this list, but that
behaviour may be deleted in a
subsequent release!)
The activities for an application or entity may be listed with the activity
command:
$ br app WebCluster activity
Id Task Submitted Status Streams
Wb6GV5rt start Sat Dec 19 11:08:01 GMT 2015 Completed
q2MbyyTo invoking start[locations] on 2 nodes Sat Dec 19 11:08:01 GMT 2015 Completed
$ br app WebCluster entity NginxController:CZ8Q activity
Id Task Submitted Status Streams
GVh0pyKG start Sun Dec 20 19:18:06 GMT 2015 Completed
WJm908rA provisioning (FixedListMachineProvisi... Sun Dec 20 19:18:06 GMT 2015 Completed
L0cKFBrW pre-start Sun Dec 20 19:18:06 GMT 2015 Completed
D0Ab2esP ssh: initializing on-box base dir ./b... Sun Dec 20 19:18:06 GMT 2015 Completed env,stderr,stdin,stdout
tumLAdo4 start (processes) Sun Dec 20 19:18:06 GMT 2015 Completed
YbF2czKM copy-pre-install-resources Sun Dec 20 19:18:06 GMT 2015 Completed
o3YdqxsQ pre-install Sun Dec 20 19:18:06 GMT 2015 Completed
TtGw4qMZ pre-install-command Sun Dec 20 19:18:06 GMT 2015 Completed
duPvOSDB setup Sun Dec 20 19:18:06 GMT 2015 Completed
WLtkbhgW copy-install-resources Sun Dec 20 19:18:06 GMT 2015 Completed
ZQtrImnl install Sun Dec 20 19:18:06 GMT 2015 Completed
hzi49YD6 ssh: setting up sudo Sun Dec 20 19:18:06 GMT 2015 Completed env,stderr,stdin,stdout
eEUHcpfi ssh: Getting machine details for: Ssh... Sun Dec 20 19:18:07 GMT 2015 Completed env,stderr,stdin,stdout
juTe2qLG ssh: installing NginxControllerImpl{i... Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
hXqwEZJl post-install-command Sun Dec 20 19:18:08 GMT 2015 Completed
vZliYwBI customize Sun Dec 20 19:18:08 GMT 2015 Completed
O4Wwb0bP ssh: customizing NginxControllerImpl{... Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
sDwMSkE2 copy-runtime-resources Sun Dec 20 19:18:08 GMT 2015 Completed
yDYkdkS8 ssh: create run directory Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
W7dI8r1c pre-launch-command Sun Dec 20 19:18:08 GMT 2015 Completed
OeZKwM5z launch Sun Dec 20 19:18:08 GMT 2015 Completed
y50Gne5E scheduled:nginx.url.answers.nicely @ ... Sun Dec 20 19:18:08 GMT 2015 Scheduler,
ARTninGE scheduled:service.process.isRunning @... Sun Dec 20 19:18:08 GMT 2015 Scheduler,
tvZoNUTN ssh: launching NginxControllerImpl{id... Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
YASrjA4w post-launch-command Sun Dec 20 19:18:09 GMT 2015 Completed
jgLYv8pE post-launch Sun Dec 20 19:18:09 GMT 2015 Completed
UN9OcWLS post-start Sun Dec 20 19:18:09 GMT 2015 Completed
nmiv97He reload Sun Dec 20 19:18:09 GMT 2015 Completed
FJfPbNtp ssh: restarting NginxControllerImpl{i... Sun Dec 20 19:18:10 GMT 2015 Completed env,stderr,stdin,stdout
Xm1tjvKf update Sun Dec 20 19:18:40 GMT 2015 Completed
Row67vfa reload Sun Dec 20 19:18:40 GMT 2015 Completed
r8QZXlxJ ssh: restarting NginxControllerImpl{i... Sun Dec 20 19:18:40 GMT 2015 Completed env,stderr,stdin,stdout
The detail for an individual activity can be viewed by providing the ActivityID as a parameter to the activity command (an app-scope or entity-scope is not needed for viewing the details of an activity):
$ br activity tvZoNUTN
Id: tvZoNUTN
DisplayName: ssh: launching NginxControllerImpl{id=OxPUBk1p}
Description:
EntityId: OxPUBk1p
EntityDisplayName: NginxController:OxPU
Submitted: Sun Dec 20 19:18:08 GMT 2015
Started: Sun Dec 20 19:18:08 GMT 2015
Ended: Sun Dec 20 19:18:09 GMT 2015
CurrentStatus: Completed
IsError: false
IsCancelled: false
SubmittedByTask: OeZKwM5z
Streams: stdin: 1133, stdout: 162, stderr: 0, env 0
DetailedStatus: "Completed after 1.05s
Result: 0"
The activity command output shows whether any streams were associated with it. The streams
and environment for an activity can be viewed with the commands stdin
, stdout
,
stderr
and env
:
$ br activity tvZoNUTN stdin
export RUN_DIR="/home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p"
mkdir -p $RUN_DIR
cd $RUN_DIR
cd /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p
{ which "./sbin/nginx" || { EXIT_CODE=$? && ( echo "The required executable \"./sbin/nginx\" does not exist" | tee /dev/stderr ) && exit $EXIT_CODE ; } ; }
nohup ./sbin/nginx -p /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p/ -c conf/server.conf > /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p/console 2>&1 &
for i in {1..10}
do
test -f /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p/logs/nginx.pid && ps -p `cat /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p/logs/nginx.pid` && exit
sleep 1
done
echo "No explicit error launching nginx but couldn't find process by pid; continuing but may subsequently fail"
cat /home/vagrant/brooklyn-managed-processes/apps/V5GQCpIT/entities/NginxController_OxPUBk1p/console | tee /dev/stderr
$ br activity tvZoNUTN stdout
./sbin/nginx
PID TTY TIME CMD
6178 ? 00:00:00 nginx
Executed /tmp/brooklyn-20151220-191808796-CaiI-launching_NginxControllerImpl_.sh, result 0
The child activities of an activity may be listed by providing an activity-scope for the
activity
command:
$ br activity OeZKwM5z
Id: OeZKwM5z
DisplayName: launch
Description:
EntityId: OxPUBk1p
EntityDisplayName: NginxController:OxPU
Submitted: Sun Dec 20 19:18:08 GMT 2015
Started: Sun Dec 20 19:18:08 GMT 2015
Ended: Sun Dec 20 19:18:09 GMT 2015
CurrentStatus: Completed
IsError: false
IsCancelled: false
SubmittedByTask: tumLAdo4
Streams:
DetailedStatus: "Completed after 1.06s
No return value (null)"
$ br activity OeZKwM5z activity
Id Task Submitted Status Streams
tvZoNUTN ssh: launching NginxControllerImpl{id... Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
or by using the -c
(or --children
) flag with the activity
command:
$ br activity -c OeZKwM5z
Id Task Submitted Status Streams
tvZoNUTN ssh: launching NginxControllerImpl{id... Sun Dec 20 19:18:08 GMT 2015 Completed env,stderr,stdin,stdout
This is the YAML blueprint used for this document.
name: WebCluster
location:
byon:
user: vagrant
password: vagrant
hosts:
- 192.168.52.101
- 192.168.52.102
- 192.168.52.103
- 192.168.52.104
- 192.168.52.105
services:
- type: org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
name: WebApp
brooklyn.config:
wars.root: http://search.maven.org/remotecontent?filepath=org/apache/brooklyn/example/brooklyn-example-hello-world-sql-webapp/0.8.0-incubating/brooklyn-example-hello-world-sql-webapp-0.8.0-incubating.war
java.sysprops:
brooklyn.example.db.url: >
$brooklyn:formatString("jdbc:%s%s?user=%s&password=%s",
component("db").attributeWhenReady("datastore.url"),
"visitors", "brooklyn", "br00k11n")
brooklyn.policies:
- type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
brooklyn.config:
metric: webapp.reqs.perSec.windowed.perNode
metricLowerBound: 2
metricUpperBound: 10
minPoolSize: 1
maxPoolSize: 2
resizeUpStabilizationDelay: 1m
resizeDownStabilizationDelay: 5m
- type: org.apache.brooklyn.entity.database.mysql.MySqlNode
id: db
name: WebDB
brooklyn.config:
creationScriptUrl: https://bit.ly/brooklyn-visitors-creation-script
This guide will walk you through connecting to the AMP Server Graphical User Interface and performing various tasks.
For an explanation of common AMP Concepts see the Getting Started Guide.
This guide assumes that you are using Linux or Mac OS X and that AMP Server will be running on your local system.
If you haven’t already done so, you will need to start the AMP Server using the commands shown below.
It is not necessary at this time, but depending on what you are going to do, you may wish to set up some other configuration options first.
Now start AMP with the following command:
$ cd apache-brooklyn-0.10.0-SNAPSHOT
$ bin/amp launch
Please refer to the Server CLI Reference for details of other possible command line options.
AMP will output the address of the management interface:
INFO No security provider options specified. ...
INFO Starting AMP web-console with passwordless access on localhost ...
INFO Starting AMP web-console on loopback interface because no security config is set
INFO Started AMP console at http://127.0.0.1:8081/, running classpath://brooklyn.war
Next, open the web console on http://127.0.0.1:8081. No applications have been deployed yet, so the “Create Application” dialog opens automatically.
The next section will show how to deploy a blueprint.
When you first access the web console on http://127.0.0.1:8081 you will be requested to create your first application.
We’ll start by deploying an application via a YAML blueprint consisting of the following layers.
Switch to the YAML tab and copy the blueprint below into the large text box.
But before you submit it, modify the YAML to specify the location where the application will be deployed.
name: My Web Cluster
location:
jclouds:aws-ec2:
identity: ABCDEFGHIJKLMNOPQRST
credential: s3cr3tsq1rr3ls3cr3tsq1rr3ls3cr3tsq1rr3l
services:
- type: org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
name: My Web
id: webappcluster
brooklyn.config:
wars.root: http://search.maven.org/remotecontent?filepath=org/apache/brooklyn/example/brooklyn-example-hello-world-sql-webapp/0.8.0-incubating/brooklyn-example-hello-world-sql-webapp-0.8.0-incubating.war
java.sysprops:
brooklyn.example.db.url: >
$brooklyn:formatString("jdbc:%s%s?user=%s&password=%s",
component("db").attributeWhenReady("datastore.url"),
"visitors", "brooklyn", "br00k11n")
- type: org.apache.brooklyn.entity.database.mysql.MySqlNode
name: My DB
id: db
brooklyn.config:
creationScriptUrl: https://bit.ly/brooklyn-visitors-creation-script
Replace the location:
element with values for your chosen target environment, for example to use SoftLayer rather than AWS (updating with your own credentials):
location:
jclouds:softlayer:
identity: ABCDEFGHIJKLMNOPQRST
credential: s3cr3tsq1rr3ls3cr3tsq1rr3ls3cr3tsq1rr3l
NOTE: See Locations in the Operations section of the User Guide for instructions on setting up alternate cloud providers, bring-your-own-nodes, or localhost targets, and storing credentials/locations in a file on disk rather than in the blueprint.
With the modified YAML in the dialog, click “Finish”. The dialog will close and AMP will begin deploying your application. Your application will be shown as “Starting” on the web console’s front page.
Depending on your choice of location it may take some time for the application nodes to start. The next page describes how you can monitor the progress of the application deployment and verify its successful deployment.
Instead of pasting the YAML blueprint each time, it can be added to the AMP Catalog where it will be accessible from the Catalog tab of the Create Application dialog.
See Catalog in the Operations section of the User Guide for instructions on creating a new Catalog entry from your Blueprint YAML.
So far we have touched on AMP’s ability to deploy an application blueprint to a cloud provider.
The next section will show how to Monitor and Manage Applications.
From the Home page, click on the application name or open the Applications tab.
We can explore the management hierarchy of the application, which will show us the entities it is composed of. Starting from the application use the arrows to expand out the list of entities, or hover over the arrow until a menu popup is displayed so that you can select Expand All
.
The hierarchy for the blueprint deployed earlier looks like this (entity types only):
- BasicApplication
  - MySqlNode
  - ControlledDynamicWebAppCluster
    - DynamicWebAppCluster
      - QuarantineGroup
      - TomcatServer
    - NginxController
Clicking on the “My Web Cluster” entity will show the “Summary” tab, giving a very high-level view of what that component is doing. Click on each of the child components in turn for more detail on that component. Note that the cluster of web servers includes a “quarantine group”, to which members of the cluster that fail will be added. These are excluded from the load-balancer’s targets.
The Activity tab allows us to drill down into the tasks each entity is currently executing or has recently completed. It is possible to drill down through all child tasks, and view the commands issued, along with any errors or warnings that occurred.
For example, clicking on the NginxController in the left-hand tree and opening its Activity tab, you can observe that the ‘start’ task is ‘In progress’.
(Note: You may observe different tasks depending on how far your deployment has progressed.)
Clicking on the ‘start’ task you can discover more details on the actions being carried out by that task (a task may consist of additional subtasks).
Continuing to drill down into the ‘In progress’ tasks you will eventually reach the currently active task where you can investigate the ssh command executed on the target node including the current stdin, stdout and stderr output.
Now click on the “Sensors” tab: these data feeds drive the real-time picture of the application. As you navigate in the tree at the left, you can see more targeted statistics coming in in real-time.
Explore the sensors and the tree to find the URL where the NginxController for the webapp we just deployed is running. This can be found in ‘My Web Cluster -> My Web -> NginxController -> main.uri’.
Quickly return to the ‘AMP JS REST client’ web browser tab showing the “Sensors” and observe the ‘My Web Cluster -> My Web -> Cluster of TomcatServer -> webapp.reqs.perSec.last’ sensor value increase.
To stop an application, select the application in the tree view (the top/root entity), click on the Effectors tab, and invoke the “Stop” effector. This will cleanly shutdown all components in the application and return any cloud machines that were being used.
AMP’s real power is in using Policies to automatically manage applications.
To see an example of policy based management, please deploy the following blueprint (changing the location details as for the example shown earlier):
name: My Web Cluster
location: localhost
services:
- type: org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
name: My Web
brooklyn.config:
wars.root: http://search.maven.org/remotecontent?filepath=org/apache/brooklyn/example/brooklyn-example-hello-world-sql-webapp/0.8.0-incubating/brooklyn-example-hello-world-sql-webapp-0.8.0-incubating.war
java.sysprops:
brooklyn.example.db.url: >
$brooklyn:formatString("jdbc:%s%s?user=%s&password=%s",
component("db").attributeWhenReady("datastore.url"),
"visitors", "brooklyn", "br00k11n")
brooklyn.policies:
- type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
brooklyn.config:
metric: webapp.reqs.perSec.windowed.perNode
metricLowerBound: 0.1
metricUpperBound: 10
minPoolSize: 1
maxPoolSize: 4
resizeUpStabilizationDelay: 10s
resizeDownStabilizationDelay: 1m
- type: org.apache.brooklyn.entity.database.mysql.MySqlNode
id: db
name: My DB
brooklyn.config:
creationScriptUrl: https://bit.ly/brooklyn-visitors-creation-script
The app server cluster has an AutoScalerPolicy
, and the loadbalancer has a targets
policy.
Use the Applications tab in the web console to drill down into the Policies section of the ControlledDynamicWebAppCluster. You will see that the AutoScalerPolicy
is running.
This policy automatically scales the cluster up or down to be the right size for the cluster’s current load. One server is the minimum size allowed by the policy.
The loadbalancer’s targets
policy ensures that the loadbalancer is updated as the cluster size changes.
Sitting idle, this cluster will only contain one server, but you can use a tool like jmeter pointed at the nginx endpoint to create load on the cluster. Download a jmeter test plan here.
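If you don’t have jmeter to hand, a rough way to exercise the cluster is to loop HTTP requests against the nginx endpoint (a sketch; replace the URL with the main.uri sensor value found earlier, and stop the loop with Ctrl-C):
$ while true; do curl -s -o /dev/null http://192.168.52.102:8000/ ; done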
As load is added, Cloudsoft AMP requests a new cloud machine, creates a new app server, and adds it to the cluster. As load is removed, servers are removed from the cluster, and the infrastructure is handed back to the cloud.
The AutoScalerPolicy
here is configured to respond to the sensor
reporting requests per second per node, invoking the default resize
effector.
By clicking on the policy, you can configure it to respond to a much lower threshold or set long stabilization delays (the period before it scales out or back).
An even simpler test is to manually suspend the policy, by clicking “Suspend” in the policies list.
You can then switch to the “Effectors” tab and manually trigger a resize
.
On resize, new nodes are created and configured,
and in this case a policy on the nginx node reconfigures nginx whenever the set of active
targets changes.
This guide has given a quick overview of using the Cloudsoft AMP GUI to deploy, monitor and manage applications. The GUI also allows you to perform various Advanced management tasks and to explore and use the REST API (from the Script tab). Please take some time now to become more familiar with the GUI.
The file ~/.brooklyn/brooklyn.properties
is read when AMP starts
to load server configuration values.
A different properties file can be specified either additionally or instead
through CLI options.
A template brooklyn.properties file is available, with abundant comments.
The most common properties set in this file are for access control. Without this, AMP will bind only to localhost or will create a random password written to the log for use on other networks. The simplest way to specify users and passwords is:
brooklyn.webconsole.security.users=admin,bob
brooklyn.webconsole.security.user.admin.password=AdminPassw0rd
brooklyn.webconsole.security.user.bob.password=BobPassw0rd
The properties file must have permissions 600 (i.e. readable and writable only by the file’s owner), as a basic security precaution.
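For example:
$ chmod 600 ~/.brooklyn/brooklyn.properties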
In many cases, it is preferable instead to use an external credentials store such as LDAP, or at least to have the passwords in this file hashed. Information on configuring these is below.
If coming over a network it is highly recommended additionally to use https
.
This can be configured with:
brooklyn.webconsole.security.https.required=true
More information, including setting up a certificate, is described further below.
Values in brooklyn.properties can use the Camp YAML syntax. Any value starting with $brooklyn: is parsed as a Camp YAML expression.
This allows externalized configuration to be used from brooklyn.properties. For example:
brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("vault", "aws-identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("vault", "aws-credential")
If for some reason one requires a literal value that really does start with $brooklyn:
(i.e.
for the value to not be parsed), then this can be achieved by using the syntax below. This
example returns the property value $brooklyn:myexample
:
example.property=$brooklyn:literal("$brooklyn:myexample")
Information on defining locations in the brooklyn.properties
file is available here.
Arbitrary data can be set in brooklyn.properties.
This can be accessed in java using ManagementContext.getConfig(KEY)
.
Security Providers are the mechanism by which different authentication authorities are plugged in to AMP.
These can be configured by specifying brooklyn.webconsole.security.provider
equal
to the name of a class implementing SecurityProvider
.
An implementation of this could point to Spring, LDAP, OpenID or another identity management system.
The default implementation, ExplicitUsersSecurityProvider
, reads from a list of users and passwords
which should be specified as configuration parameters e.g. in brooklyn.properties
.
This configuration could look like:
brooklyn.webconsole.security.users=admin
brooklyn.webconsole.security.user.admin.salt=OHDf
brooklyn.webconsole.security.user.admin.sha256=91e16f94509fa8e3dd21c43d69cadfd7da6e7384051b18f168390fe378bb36f9
The users
line should contain a comma-separated list. The special value *
is accepted to permit all users.
To generate this, the AMP CLI can be used:
brooklyn generate-password --user admin
Enter password:
Re-enter password:
Please add the following to your brooklyn.properies:
brooklyn.webconsole.security.users=admin
brooklyn.webconsole.security.user.admin.salt=OHDf
brooklyn.webconsole.security.user.admin.sha256=91e16f94509fa8e3dd21c43d69cadfd7da6e7384051b18f168390fe378bb36f9
Alternatively, in dev/test environments where a lower level of security is required,
the syntax brooklyn.webconsole.security.user.<username>=<password>
can be used for
each <username>
specified in the brooklyn.webconsole.security.users
list.
Other security providers available include:
brooklyn.webconsole.security.provider=org.apache.brooklyn.rest.security.provider.BlackholeSecurityProvider
will block all logins (e.g. if not using the web console)
brooklyn.webconsole.security.provider=org.apache.brooklyn.rest.security.provider.AnyoneSecurityProvider
will allow logins with no credentials (e.g. in secure dev/test environments)
brooklyn.webconsole.security.provider=org.apache.brooklyn.rest.security.provider.LdapSecurityProvider
will cause AMP to call an LDAP server to authenticate users.
The other things you need to set in brooklyn.properties are:
- brooklyn.webconsole.security.ldap.url - LDAP connection URL
- brooklyn.webconsole.security.ldap.realm - LDAP dc parameter (domain)
- brooklyn.webconsole.security.ldap.ou - LDAP ou parameter (optional; defaults to Users)
brooklyn.properties example configuration:
brooklyn.webconsole.security.provider=org.apache.brooklyn.rest.security.provider.LdapSecurityProvider
brooklyn.webconsole.security.ldap.url=ldap://localhost:10389/????X-BIND-USER=uid=admin%2cou=system,X-BIND-PASSWORD=secret,X-COUNT-LIMIT=1000
brooklyn.webconsole.security.ldap.realm=example.com
After you set up the AMP connection to your LDAP server, you can authenticate in AMP using your cn (e.g. John Smith) and your password.
org.apache.brooklyn.rest.security.provider.LdapSecurityProvider
searches in the LDAP tree in LDAP://cn=John Smith,ou=Users,dc=example,dc=com
If you want to customize the LDAP path or anything else particular to your LDAP setup, you can extend the LdapSecurityProvider class or implement the SecurityProvider interface from scratch.
In addition to login access, fine-grained permissions – including seeing entities, creating applications, seeing sensors, and invoking effectors – can be defined on a per-user and per-target (e.g. which entity/effector) basis using a plug-in Entitlement Manager.
This can be set globally with the property:
brooklyn.entitlements.global=<class>
The default entitlement manager is one which responds to per-user entitlement rules, and understands:
- root: full access, including to the Groovy console
- user: access to everything but actions that affect the server itself. Such actions include the Groovy console, stopping the server and retrieving management context configuration.
- readonly: read-only access to almost all information
- minimal: access only to server stats, for use by monitoring systems
These keywords are also understood at the global level, so to grant full access to admin, read-only access to support, limited access to metrics and regular access to user you can write:
brooklyn.entitlements.global=user
brooklyn.entitlements.perUser.admin=root
brooklyn.entitlements.perUser.support=readonly
brooklyn.entitlements.perUser.metrics=minimal
Under the covers this invokes the PerUserEntitlementManager, with a default set (if not specified, default defaults to minimal); so the above can equivalently be written:
brooklyn.entitlements.global=org.apache.brooklyn.core.mgmt.entitlement.PerUserEntitlementManager
brooklyn.entitlements.perUser.default=user
brooklyn.entitlements.perUser.admin=root
brooklyn.entitlements.perUser.support=readonly
brooklyn.entitlements.perUser.metrics=minimal
For more information, see Java: Entitlements.
To enable https, you will need a server certificate in a java keystore. To create a self-signed certificate, you can use the following command:
% keytool -genkey -keyalg RSA -alias AMP -keystore <path-to-keystore-directory>/server.key -storepass mypassword -validity 360 -keysize 2048
You will then be prompted to enter your name and organization details. This will create a keystore with the password mypassword - you should use your own secure password, which will be the same password used in your brooklyn.properties (below).
You will also need to replace <path-to-keystore-directory>
with the full path of the folder where you wish to store your
keystore.
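To confirm the certificate was created, you can list the keystore contents with keytool (same keystore path and password as above):
% keytool -list -keystore <path-to-keystore-directory>/server.key -storepass mypassword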
The certificate generated will be a self-signed certificate and will not have a CN field identifying the website server name, which will cause a warning to be displayed by the browser when viewing the page. For production servers, a valid signed certificate from a trusted certifying authority should be used instead.
To enable HTTPS in AMP, add the following to your brooklyn.properties:
brooklyn.webconsole.security.https.required=true
brooklyn.webconsole.security.keystore.url=<path-to-keystore-directory>/server.key
brooklyn.webconsole.security.keystore.password=mypassword
brooklyn.webconsole.security.keystore.certificate.alias=brooklyn
Locations are the environments to which AMP deploys applications. AMP supports a wide range of locations, including clouds and ssh to localhost for rapid testing.
Configuration can be set in ~/.brooklyn/brooklyn.properties, through the location wizard tool available within the web console, or directly in YAML when specifying a location.
On some entities, config keys determining matching selection and provisioning behavior can also be set in provisioning.properties.
For most cloud provisioning tasks, AMP uses Apache jclouds. The identifiers for some of the most commonly used jclouds-supported clouds are (or see the full list):
- jclouds:aws-ec2:<region>: Amazon EC2, where :<region> might be us-east-1 or eu-west-1 (or omitted)
- jclouds:softlayer:<region>: IBM Softlayer, where :<region> might be dal05 or ams01 (or omitted)
- jclouds:google-compute-engine: Google Compute Engine
- jclouds:openstack-nova:<endpoint>: OpenStack, where :<endpoint> is the access URL (required)
- jclouds:cloudstack:<endpoint>: Apache CloudStack, where :<endpoint> is the access URL (required)
For any of these, of course, AMP needs to be configured with an identity and a credential:
location:
jclouds:aws-ec2:
identity: ABCDEFGHIJKLMNOPQRST
credential: s3cr3tsq1rr3ls3cr3tsq1rr3ls3cr3tsq1rr3l
The above YAML can be embedded directly in blueprints, either at the root or on individual services.
If you prefer to keep the credentials separate, you can instead store them as a catalog entry or set them in brooklyn.properties
in the jclouds.<provider>
namespace:
brooklyn.location.jclouds.aws-ec2.identity=ABCDEFGHIJKLMNOPQRST
brooklyn.location.jclouds.aws-ec2.credential=s3cr3tsq1rr3ls3cr3tsq1rr3ls3cr3tsq1rr3l
And in this case you can reference the location in YAML with location: jclouds:aws-ec2
.
Alternatively, you can use the location wizard tool available within the web console to create any cloud location supported by Apache jclouds. This location will be saved as a catalog entry for easy reusability.
AMP irons out many of the differences between clouds so that blueprints run similarly in a wide range of locations, including setting up access and configuring images and machine specs. The configuration options are described in more detail below.
In some cases, cloud providers have special features or unusual requirements. These are outlined in More Details for Specific Clouds.
Once a machine is provisioned, AMP will normally attempt to log in via SSH and configure the machine sensibly.
The credentials for the initial OS log on are typically discovered from the cloud,
but in some environments this is not possible.
The keys loginUser
and either loginUser.password
or loginUser.privateKeyFile
can be used to force
AMP to use specific credentials for the initial login to a cloud-provisioned machine.
(This custom login is particularly useful when using custom image templates where the cloud-side account management logic is not enabled. For example, a vCloud (vCD) template can have guest customization that will change
the root password. This setting tells Cloudsoft AMP to only use the given password, rather than the initial
randomly generated password that vCD returns. Without this property, there is a race for such templates:
does AMP manage to create the admin user before the guest customization changes the login and reboots,
or is the password reset first (the latter means AMP can never ssh to the VM). With this property,
AMP will always wait for guest customization to complete before it is able to ssh at all. In such
cases, it is also recommended to use useJcloudsSshInit=false
.)
Following a successful logon, AMP performs the following steps to configure the machine:
- creates a new user with the same name as the user brooklyn is running as locally (this can be overridden with user, below)
- installs the local user’s ~/.ssh/id_rsa.pub as an authorized_keys on the new machine, to make it easy for the operator to ssh in (override with privateKeyFile; or if there is no id_{r,d}sa{,.pub} an ad hoc keypair will be generated for the regular AMP user; if there is a passphrase on the key, this must be supplied)
- gives sudo access to the newly created user (override with grantUserSudo: false)
- disables direct root login to the machine
These steps can be skipped or customized as described below.
The following is a subset of the most commonly used configuration keys used to customize
cloud provisioning.
For more keys and more detail on the keys below, see
JcloudsLocationConfig
(javadoc,
src)
.
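As a rough illustration of how these keys are used (values below are placeholders, and the keys themselves are described in the list that follows), a location might be defined in YAML as:
location:
  jclouds:aws-ec2:us-east-1:
    minRam: 4096
    osFamily: ubuntu
    inboundPorts: [22, 8080]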
Most providers require exactly one of either region
(e.g. us-east-1
) or endpoint
(the URL, usually for private cloud deployments)
Hardware requirements can be specified, including
minRam
, minCores
, and os64Bit
; or as a specific hardwareId
VM image constraints can be set using osFamily
(e.g. Ubuntu
, CentOS
, Debian
, RHEL
)
and osVersionRegex
, or specific VM images can be specified using imageId
or imageNameRegex
Specific Security Groups can be specified using securityGroups
, as a list of strings (the existing security group names),
or inboundPorts
can be set, as a list of numeric ports (selected clouds only)
Where a key pair is registered with a target cloud for logging in to machines,
AMP can be configured to request this when provisioning VMs by setting keyPair
(selected clouds only).
Note that if this keyPair
does not correspond your default ~/.ssh/id_rsa
, you must typically
also specify the corresponding loginUser.privateKeyFile
as a file or URL accessible from AMP.
A base name for the VM (often used as the hostname) can be specified by setting groupId
.
By default, this name is constructed based on the entity which is creating it,
including the ID of the app and of the entity.
(As many cloud portals let you filter views, this can help find a specific entity or all machines for a given application.)
For more sophisticated control over host naming, you can supply a custom
CloudMachineNamer
(javadoc,
src)
,
for example
cloudMachineNamer: CustomMachineNamer
.
CustomMachineNamer
(javadoc,
src)
will use the entity's name, or will follow a template you supply.
On many clouds, a random suffix will be appended to help guarantee uniqueness;
this can be removed by setting vmNameSaltLength: 0
(selected clouds only).
A DNS domain name where this host should be placed can be specified with domainName
(in selected clouds only)
User metadata can be attached using the syntax userMetadata: { key: value, key2: "value 2" }
(or userMetadata=key=value,key2="value 2"
in a properties file)
By default, several pieces of user metadata are set to correlate VMs with AMP entities,
prefixed with brooklyn-
.
This user metadata can be omitted by setting includeAMPUserMetadata: false
.
You can specify the number of attempts AMP should make to create
machines with machineCreateAttempts
(jclouds only). This is useful as an efficient low-level fix
for those occasions when cloud providers give machines that are dead on arrival.
You can of course also resolve it at a higher level with a policy such as
ServiceRestarter
(javadoc,
src)
.
If you want to investigate failures, set destroyOnFailure: false
to keep failed VMs around. (You'll have to manually clean them up.)
The default is true: if a VM fails to start, or is never ssh'able, then the VM will be terminated.
user
and password
can be used to configure the operating user created on cloud-provisioned machines
The loginUser
config key (and subkeys) control the initial user to log in as,
in cases where this cannot be discovered from the cloud provider
Private keys can be specified using privateKeyFile
;
these are not copied to provisioned machines, but are required if using a local public key
or a pre-defined authorized_keys
on the server.
(For more information on SSH keys, see here.)
If there is a passphrase on the key file being used, you must supply it to AMP for it to work, of course!
privateKeyPassphrase
does the trick (as in brooklyn.location.jclouds.privateKeyPassphrase
, or other places
where privateKeyFile
is valid). If you don’t like keys, you can just use a plain old password
.
Public keys can be specified using publicKeyFile
,
although these can usually be omitted if they follow the common pattern of being
the private key file with the suffix .pub
appended.
(It is useful in the case of loginUser.publicKeyFile
, where you shouldn’t need,
or might not even have, the private key of the root
user when you log in.)
Provide a list of URLs to public keys in extraSshPublicKeyUrls
,
or the data of one key in extraSshPublicKeyData
,
to have additional public keys added to the authorized_keys
file for logging in.
(This is supported in most but not all locations.)
Use dontCreateUser
to have AMP run as the initial loginUser
(usually root
),
without creating any other user.
A post-provisioning setup.script
can be specified (as a URL) to run an additional script,
before making the Location
available to entities,
optionally also using setup.script.vars
(set as key1:value1,key2:value2
)
Use openIptables: true
to automatically configure iptables
, to open the TCP ports required by
the software process. One can alternatively use stopIptables: true
to entirely stop the
iptables service.
Use installDevUrandom: true
to fall back to using /dev/urandom
rather than /dev/random
. This setting
is useful for cloud VMs where there is not enough random entropy, which can cause /dev/random
to be
extremely slow (causing ssh
to be extremely slow to respond).
Use useJcloudsSshInit: false
to disable the use of the native jclouds support for initial commands executed
on the VM (e.g. for creating new users, setting root passwords, etc.). Instead, AMP’s ssh support will
be used. Timeouts and retries are more configurable within AMP itself. Therefore this option is particularly
recommended when the VM startup is unusual (for example, if guest customizations will cause reboots and/or will
change login credentials).
Use brooklyn.ssh.config.noDeleteAfterExec: true
to keep scripts on the server after execution.
The contents of the scripts and the stdout/stderr of their execution are available in the AMP web console,
but sometimes it can also be useful to have them on the box.
This setting prevents scripts executed on the VMs from being deleted on completion.
Note that some scripts run periodically so this can eventually fill a disk; it should only be used for dev/test.
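As an illustrative sketch only (the values are placeholders, not recommendations), several of the keys described above can be combined in a single YAML location:
location:
  jclouds:aws-ec2:
    region: us-east-1
    # hardware and image constraints
    minRam: 4096
    osFamily: CentOS
    # open only these TCP ports (selected clouds only)
    inboundPorts: [22, 80, 8080]
    # retry once if the first VM is dead on arrival
    machineCreateAttempts: 2
    # open the ports required by the software process in iptables
    openIptables: true
    # extra metadata to help correlate VMs in the cloud portal
    userMetadata:
      owner: "my-team"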
jclouds supports many additional options for configuring how a virtual machine is created and deployed, many of which are for cloud-specific features and enhancements. AMP supports some of these, but if what you are looking for is not supported directly by AMP, we instead offer a mechanism to set any parameter that is supported by the jclouds template options for your cloud.
Part of the process for creating a virtual machine is the creation of a jclouds TemplateOptions
object. jclouds
providers extend this with extra options for each cloud - so when using the AWS provider, the object will be of
type AWSEC2TemplateOptions
. By examining the source code,
you can see all of the options available to you.
The templateOptions
config key takes a map. The keys to the map are method names, and AMP will find the method on
the TemplateOptions
instance; it then invokes the method with arguments taken from the map value. If a method takes a
single parameter, then simply give the argument as the value of the key; if the method takes multiple parameters, the
value of the key should be an array, containing the argument for each parameter.
For example, here is a complete blueprint that sets some AWS EC2 specific options:
location: AWS_eu-west-1
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess
  provisioningProperties:
    templateOptions:
      subnetId: subnet-041c8373
      mapNewVolumeToDeviceName: ["/dev/sda1", 100, true]
      securityGroupIds: ['sg-4db68928']
Here you can see that we set three template options:
subnetId
is an example of a single parameter method. AMP will effectively try to run the statement
templateOptions.subnetId("subnet-041c8373");
mapNewVolumeToDeviceName
is an example of a multiple parameter method, so the value of the key is an array.
AMP will effectively try to run the statement templateOptions.mapNewVolumeToDeviceName("/dev/sda1", 100, true);
securityGroupIds
demonstrates an ambiguity between the two types; AMP will first try to parse the value as
a multiple parameter method, but there is no method that matches this parameter. In this case, AMP will next try
to parse the value as a single parameter method which takes a parameter of type List
; such a method does exist so
the operation will succeed.
If the method call cannot be matched to the template options available - for example if you are trying to set an AWS EC2 specific option but your location is an OpenStack cloud - then a warning is logged and the option is ignored.
See the following resources for more information:
To connect to a Cloud, AMP requires appropriate credentials. These comprise the identity
and
credential
in AMP terminology.
For private clouds (and for some clouds being targeted using a standard API), the endpoint
must also be specified, which is the cloud’s URL. For public clouds, AMP comes preconfigured
with the endpoints, but many offer different choices of the region
where you might want to deploy.
Clouds vary in the format of the identity, credential, endpoint, and region. Some also have their own idiosyncrasies. More details for configuring some common clouds are included below. You may also find these sources helpful:
AWS has an “access key” and a “secret key”, which correspond to AMP’s identity and credential respectively.
These keys are the way for any programmatic mechanism to access the AWS API.
To generate an access key and a secret key, see jclouds instructions and AWS IAM instructions.
An example of the expected format is shown below:
brooklyn.location.jclouds.aws-ec2.identity=ABCDEFGHIJKLMNOPQRST
brooklyn.location.jclouds.aws-ec2.credential=abcdefghijklmnopqrstu+vwxyzabcdefghijklm
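If you prefer to supply the credentials inline in a YAML location rather than in brooklyn.properties, the equivalent (again with placeholder values) is:
location:
  jclouds:aws-ec2:
    region: us-east-1
    identity: ABCDEFGHIJKLMNOPQRST
    credential: abcdefghijklmnopqrstu+vwxyzabcdefghijklm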
Security groups are not always deleted by jclouds. This is due to a limitation in AWS (see https://issues.apache.org/jira/browse/JCLOUDS-207). In brief, AWS prevents the security group being deleted until there are no VMs using it. However, there is eventual consistency for recording which VMs still reference those security groups: after deleting the VM, it can sometimes take several minutes before the security group can be deleted. jclouds retries for 3 seconds, but does not block for longer.
Cloudsoft AMP can run with AWS VPC and both public and private subnets.
Simply provide the subnet ID (e.g. subnet-a1b2c3d4) as the networkName when deploying:
location:
  jclouds:aws-ec2:
    region: us-west-1
    networkName: subnet-a1b2c3d4 # use your subnet ID
Subnets are typically used in conjunction with security groups.
AMP does not attempt to open additional ports
when private subnets or security groups are supplied,
so the subnet and ports must be configured appropriately for the blueprints being deployed.
You can configure a default security group with appropriate (or all) ports opened for
access from the appropriate (or all) CIDRs and security groups,
or you can define specific securityGroups
on the location
or as provisioning.properties
on the entities.
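As an illustrative sketch (the subnet ID and security group names are placeholders), security groups can be given on the location and, where needed, per entity via provisioning properties:
location:
  jclouds:aws-ec2:
    region: us-west-1
    networkName: subnet-a1b2c3d4   # your private subnet ID
    securityGroups:
    - my-default-sg                # existing security group name
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess
  provisioningProperties:
    securityGroups:
    - my-app-sg                    # entity-specific security group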
Make sure that AMP has access to the machines under management. This includes SSH, which might be done with a public IP created with inbound access on port 22 permitted for a CIDR range including the IP from which AMP contacts it. Alternatively you can run AMP on a machine in that same subnet, or set up a VPN or jumphost which AMP will use.
If you have a pre-2014 Amazon account, it is likely configured in some regions to run in “EC2 Classic” mode
by default, instead of the more modern “VPC” default mode. This can cause failures when requesting certain hardware
configurations because many of the more recent hardware “instance types” only run in “VPC” mode.
For instance when requesting an instance with minRam: 8gb
, AMP may opt for an m4.large
,
which is a VPC-only instance type. If you are in a region configured to use “EC2 Classic” mode,
you may see a message such as this:
400 VPCResourceNotSpecified: The specified instance type can only be used in a VPC.
A subnet ID or network interface ID is required to carry out the request.
This is a limitation of “legacy” accounts. The easiest fixes are either:
use an instance type supported in "EC2 Classic" mode, such as m3.xlarge (see below)
use a different region (e.g. eu-central-1 should work as it only offers VPC mode, irrespective of the age of your AWS account)
To understand the situation, the following resources may be useful:
If you want to solve this problem with your existing account, you can create a VPC and instruct AMP to use it:
Create a VPC in your preferred region using the AWS console.
Find the default security group for that VPC.
Modify its "Inbound Rules" to allow "All traffic" from "Anywhere".
(Or for more secure options, see the instructions in the previous section, "Using Subnets".)
Note the subnet ID (e.g. subnet-a1b2c3d4) for use in AMP.
You can then deploy blueprints to the subnet, allowing VPC hardware instance types,
by specifying the subnet ID as the networkName
in your YAML blueprint.
This is covered in the previous section, “Using Subnets”.
GCE uses a service account e-mail address for the identity and a private key as the credential.
To obtain these from GCE, see the jclouds instructions.
An example of the expected format is shown below.
Note that when supplying the credential in a properties file, it should be one long line
with \n
representing the new line characters:
brooklyn.location.jclouds.google-compute-engine.identity=123456789012@developer.gserviceaccount.com
brooklyn.location.jclouds.google-compute-engine.credential=-----BEGIN RSA PRIVATE KEY-----\nabcdefghijklmnopqrstuvwxyznabcdefghijk/lmnopqrstuvwxyzabcdefghij\nabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghij+lm\nnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklm\nnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxy\nzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk\nlmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw\nxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghi\njklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu\nvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefg\nhijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrs\ntuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcde\nfghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw\n-----END RSA PRIVATE KEY-----
GCE accounts can have low default quotas.
It is easy to request a quota increase by submitting a quota increase form.
GCE accounts often have a limit to the number of networks that can be created. One work around is to manually create a network with the required open ports, and to refer to that named network in AMP’s location configuration.
To create a network, see GCE network instructions.
For example, for dev/demo purposes an “everything” network could be created that opens all ports.
Name | everything
Description | opens all tcp ports
Source IP Ranges | 0.0.0.0/0
Allowed protocols and ports | tcp:0-65535 and udp:0-65535
SoftLayer may provision VMs in different VLANs, even within the same region. Some applications require VMs to be on the same internal subnet; blueprints for these can specify this behaviour in SoftLayer in one of two ways.
The VLAN ID can be set explicitly using the fields
primaryNetworkComponentNetworkVlanId
and
primaryBackendNetworkComponentNetworkVlanId
of SoftLayerTemplateOptions
when specifying the location being used in the blueprint, as follows:
location:
  jclouds:softlayer:
    region: ams01
    templateOptions:
      # Enter your preferred network IDs
      primaryNetworkComponentNetworkVlanId: 1153481
      primaryBackendNetworkComponentNetworkVlanId: 1153483
This method requires that a VM already exist and that you look up the IDs of its VLANs, for example in the SoftLayer console UI, and that subsequently at least one VM in that VLAN is kept around. If all VMs on a VLAN are destroyed, SoftLayer may destroy the VLAN. Creating VLANs directly and then specifying them as IDs here may not work.
The second method tells AMP to discover VLAN information automatically: it will provision one VM first, and use the VLAN information from it when provisioning subsequent machines. This ensures that all VMs are on the same subnet without requiring any manual VLAN referencing, making it very easy for end-users.
To use this method, we tell AMP to use SoftLayerSameVlanLocationCustomizer
as a location customizer. This can be done on a location as follows:
location:
  jclouds:softlayer:
    region: lon02
    customizers:
    - $brooklyn:object:
        type: org.apache.brooklyn.location.jclouds.softlayer.SoftLayerSameVlanLocationCustomizer
    softlayer.vlan.scopeUid: "my-custom-scope"
    softlayer.vlan.timeout: 10m
Usually you will want the scope to be unique to a single application, but if you need multiple applications to share the same VLAN, simply configure them with the same scope identifier.
It is also possible with many blueprints to specify this as one of the
provisioning.properties
on an application:
services:
- type: org.apache.brooklyn.entity.stock.BasicApplication
  id: same-vlan-application
  brooklyn.config:
    provisioning.properties:
      customizers:
      - $brooklyn:object:
          type: org.apache.brooklyn.location.jclouds.softlayer.SoftLayerSameVlanLocationCustomizer
      softlayer.vlan.scopeUid: "my-custom-scope"
      softlayer.vlan.timeout: 10m
If you are writing an entity in Java, you can also use the helper
method forScope(String)
to create the customizer. Configure the
provisioning flags as follows:
JcloudsLocationCustomizer vlans = SoftLayerSameVlanLocationCustomizer.forScope("my-custom-scope");
flags.put(JcloudsLocationConfig.JCLOUDS_LOCATION_CUSTOMIZERS.getName(), ImmutableList.of(vlans));
The allowed configuration keys for the SoftLayerSameVlanLocationCustomizer
are:
softlayer.vlan.scopeUid The scope identifier for locations whose VMs will have the same VLAN.
softlayer.vlan.timeout The amount of time to wait for a VM to be configured before timing out without setting the VLAN ids.
softlayer.vlan.publicId A specific public VLAN ID to use for the specified scope.
softlayer.vlan.privateId A specific private VLAN ID to use for the specified scope.
An entity being deployed to a customized location will have the VLAN ids set as sensors, with the same names as the last two configuration keys.
NOTE If the SoftLayer location is already configured with specific VLANs then this customizer will have no effect.
When multiple networks are available you should indicate which ones machines should join. Do this by setting the desired network IDs as an option in the templateOptions configuration:
location:
  jclouds:openstack-nova:
    ...
    templateOptions:
      # Assign the node to all networks in the list.
      networks:
      - network-one-id
      - network-two-id
      - ...
Floating IPs are configured in the same way as networks; specify the pools to use as another template option:
location:
  jclouds:openstack-nova:
    ...
    templateOptions:
      # Pool names to use when allocating a floating IP
      floatingIpPoolNames:
      - "pool name"
Consult jclouds' Nova template options for further options when configuring OpenStack locations.
Azure is a cloud computing platform and infrastructure created by Microsoft. AMP includes support for Azure through the Apache jclouds Microsoft Azure Compute provider.
The "classic deployment" model is used, as opposed to the newer "resource manager deployment" model. See https://azure.microsoft.com/en-gb/documentation/articles/resource-manager-deployment-model/ for details.
Microsoft Azure requests are signed via an SSL certificate. You need to upload one into your account in order to use the Azure location.
# create the certificate request
mkdir -m 700 $HOME/.amp
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout $HOME/.amp/azure.pem -out $HOME/.amp/azure.pem
# create the p12 file, and note your export password. This will be your test credentials.
openssl pkcs12 -export -out $HOME/.amp/azure.p12 -in $HOME/.amp/azure.pem -name "amp :: $USER"
# create a cer file
openssl x509 -inform pem -in $HOME/.amp/azure.pem -outform der -out $HOME/.amp/azure.cer
Finally, upload the .cer file to the management console at https://manage.windowsazure.com/@myId#Workspaces/AdminTasks/ListManagementCertificates to authorize this certificate.
Please note, you can find the “myId” value for this link by looking at the URL when logged into the Azure management portal.
Note that you will need to use the .p12 format in brooklyn.properties.
First, in your brooklyn.properties
define a location as follows:
brooklyn.location.jclouds.azurecompute.identity=$HOME/.amp/azure.p12
brooklyn.location.jclouds.azurecompute.credential=<P12_EXPORT_PASSWORD>
brooklyn.location.jclouds.azurecompute.endpoint=https://management.core.windows.net/<YOUR_SUBSCRIPTION_ID>
brooklyn.location.jclouds.azurecompute.vmNameMaxLength=45
brooklyn.location.jclouds.azurecompute.jclouds.azurecompute.operation.timeout=120000
brooklyn.location.jclouds.azurecompute.user=<USER_NAME>
brooklyn.location.jclouds.azurecompute.password=<PASSWORD>
During the VM provisioning, Azure will set up the account with <USER_NAME>
and <PASSWORD>
automatically.
Notice, <PASSWORD>
must be a minimum of 8 characters and must contain 3 of the following: a lowercase character, an uppercase
character, a number, a special character.
To force AMP to use a particular image in Azure, say Ubuntu 14.04.1 64bit, one can add:
brooklyn.location.jclouds.azurecompute.imageId=b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB
From $AMP_HOME, you can list the image IDs available using the following command:
./bin/amp cloud-compute list-images --location azure-west-europe
To force AMP to use a particular hardwareSpec in Azure, one can add something like:
brooklyn.location.jclouds.azurecompute.hardwareId=BASIC_A2
From $AMP_HOME, you can list the hardware profile IDs available using the following command:
./bin/amp cloud-compute list-hardware-profiles --location azure-west-europe
At the time of writing, the classic deployment model has the possible values shown below. See https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/ for further details, though that description focuses on the new “resource manager deployment” rather than “classic”.
Basic_A0
to Basic_A4
Standard_D1
to Standard_D4
Standard_G1
to Standard_G5
ExtraSmall
, Small
, Medium
, Large
, ExtraLarge
For convenience, you can define a named location, like:
brooklyn.location.named.azure-west-europe=jclouds:azurecompute:West Europe
brooklyn.location.named.azure-west-europe.displayName=Azure West Europe
brooklyn.location.named.azure-west-europe.imageId=b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB
brooklyn.location.named.azure-west-europe.hardwareId=BASIC_A2
brooklyn.location.named.azure-west-europe.user=test
brooklyn.location.named.azure-west-europe.password=MyPassword1!
This will create a location named azure-west-europe
. It will inherit all the configuration
defined on brooklyn.location.jclouds.azurecompute
. It will also augment and override this
configuration (e.g. setting the display name, image id and hardware id).
On Linux VMs, the user
and password
will create a user with that name and set its password,
disabling the normal login user and password defined on the azurecompute
location.
The following configuration options are important for provisioning Windows VMs in Azure:
osFamily: windows
tells AMP to consider it as a Windows machine
useJcloudsSshInit: false
tells jclouds to not try to connect to the VM
vmNameMaxLength: 15
tells the cloud client to truncate the VM name to a maximum of 15 characters.
This is the maximum length supported by Azure Windows VMs.
winrm.useHttps
tells AMP to configure the WinRM client to use HTTPS.
This is currently not supported in the default configuration for other clouds, where AMP is deploying Windows VMs.
If the parameter value is false
the default WinRM port is 5985; if true
the default port
for WinRM will be 5986. Use of the default ports is strongly recommended.
winrm.useNtlm
tells AMP to configure the WinRM client to use NTLM protocol.
For Azure, this is mandatory.
For other clouds, this value is used in the cloud init script to configure WinRM on the VM.
If the value is true
then Basic Authentication will be disabled and the WinRM client will only use Negotiate plus NTLM.
If the value is false
then Basic Authentication will be enabled and the WinRM client will use Basic Authentication.
NTLM is the default Authentication Protocol.
The format of this configuration option is subject to change: WinRM supports several authentication mechanisms, so this may be changed to a prioritised list so as to provide fallback options.
user
tells AMP the user to login as (in this case using WinRM).
For Windows on Azure, the value should match that supplied in the overrideLoginUser
of
the templateOptions
.
password
: tells AMP the password to use when connecting (in this case using WinRM).
For Windows on Azure, the value should match that supplied in the overrideLoginPassword
of
the templateOptions
.
templateOptions: { overrideLoginUser: adminuser, overrideLoginPassword: Pa55w0rd! }
tells the Azure Cloud to provision a VM with the given admin username and password. Note that
no “Administrator” user will be created.
If this config is not set then the VM will have a default user named "jclouds" with password "Azur3Compute!". It is strongly recommended that these template options are set.
Notice: one cannot use Administrator
as the user in Azure.
This configuration is subject to change in future releases.
Below is an example for provisioning a Windows-based entity on Azure. Note the placeholder values for the identity, credential and password.
name: Windows Test @ Azure
location:
  jclouds:azurecompute:West Europe:
    identity: /home/users/amp/.amp/azure.p12
    credential: xxxxxxxp12
    endpoint: https://management.core.windows.net/12345678-1234-1234-1234-123456789abc
    imageId: 3a50f22b388a4ff7ab41029918570fa6__Windows-Server-2012-Essentials-20141204-enus
    hardwareId: BASIC_A2
    osFamily: windows
    useJcloudsSshInit: false
    vmNameMaxLength: 15
    winrm.useHttps: true
    user: amp
    password: secretPass1!
    templateOptions:
      overrideLoginUser: amp
      overrideLoginPassword: secretPass1!
services:
- type: org.apache.brooklyn.entity.software.base.VanillaWindowsProcess
  brooklyn.config:
    install.command: echo install phase
    launch.command: echo launch phase
    checkRunning.command: echo checkRunning phase
Below is an example named location for Azure, configured in brooklyn.properties
. Note the
placeholder values for the identity, credential and password.
brooklyn.location.named.myazure=jclouds:azurecompute:West Europe
brooklyn.location.named.myazure.displayName=Azure West Europe (windows)
brooklyn.location.named.myazure.identity=$HOME/.amp/azure.p12
brooklyn.location.named.myazure.credential=<P12_EXPORT_PASSWORD>
brooklyn.location.named.myazure.endpoint=https://management.core.windows.net/<YOUR_SUBSCRIPTION_ID>
brooklyn.location.named.myazure.vmNameMaxLength=15
brooklyn.location.named.myazure.jclouds.azurecompute.operation.timeout=120000
brooklyn.location.named.myazure.imageId=3a50f22b388a4ff7ab41029918570fa6__Windows-Server-2012-Essentials-20141204-enus
brooklyn.location.named.myazure.hardwareId=BASIC_A2
brooklyn.location.named.myazure.osFamily=windows
brooklyn.location.named.myazure.useJcloudsSshInit=false
brooklyn.location.named.myazure.winrm.useHttps=true
brooklyn.location.named.myazure.user=amp
brooklyn.location.named.myazure.password=secretPass1!
brooklyn.location.named.myazure.templateOptions={ overrideLoginUser: amp, overrideLoginPassword: secretPass1! }
As described under the configuration options, the username and password must be explicitly supplied in the configuration.
This is passed to the Azure Cloud during provisioning, to create the required user. These values
correspond to the options AdminUsername
and AdminPassword
in the Azure API.
If a hard-coded password is not desired, then within Java code a random password could be
auto-generated and passed into the call to location.obtain(Map<?,?>)
to override these values.
This approach differs from the behaviour of clouds like AWS, where the password is auto-generated by the cloud provider and is then retrieved via the cloud provider’s API after provisioning the VM.
The WinRM initialization in Azure is achieved through configuration options in the VM provisioning request. The required configuration is to enable HTTPS (if Azure is told to use http, the VM comes preconfigured with winrm encrypted over http). The default is then to support NTLM protocol.
The setup of Windows VMs on Azure differs from that on other clouds, such as AWS. In contrast, on AWS an init script is passed to the cloud API to configure WinRM appropriately.
Windows initialization scripts in Azure are unfortunately not supported in “classic deployment”
model, but are available in the newer “resource manager deployment” model as an “Azure VM Extension”.
jclouds-numergy is a jclouds provider modelled on openstack-nova.
A provider in jclouds represents a particular vendor cloud service that supports one or more APIs.
Often, a jclouds provider is simply the appropriate API together with vendor-specific instantiation values, such as the endpoint URL. In some cases, the vendor offers additional functionality that goes beyond the API; for example, Numergy implements the OpenStack Neutron FWaaS extension instead of OpenStack Nova security groups.
This provider assumes that you have configured your Numergy tenant. In particular, you need to have defined, at least:
a network that jclouds will attach VMs to
a router with an interface on the network above, that has an external gateway configured on a public network
a pool of floating IPs still available (at least one per VM you want to provision using jclouds)
A named location can then be defined in brooklyn.properties as follows:
brooklyn.location.named.Numergy=jclouds:numergy-compute:tr2
brooklyn.location.named.Numergy.identity=completeTenantName:accountName
brooklyn.location.named.Numergy.credential=password
#centos65_x86_64_LVM
brooklyn.location.named.Numergy.imageId=tr2/8145b96f-4845-4814-979c-32cb7ad12b8a
brooklyn.location.named.Numergy.templateOptions={ floatingIpPoolNames: [['PublicNetwork-02']], overrideLoginUser: stack }
where:
tr2 is the region name,
PublicNetwork-02 is the name of the floating IP pool,
stack is the suggested user to log in to the public images available in Numergy.
The identity could be something like Numergy-Demo:POC_ACME_CORP:jo.blogs@acme.com.
If there are authentication problems with AMP, it can be useful to use the Nova CLI.
First log into cloud.numergy.com with your username:password, and (from the “Access and Security” page,
on the “API Access” tab) click Download OpenStack RC File
. This gives a file like:
#!/bin/bash
# With the addition of Keystone, to use an openstack cloud you should
# authenticate against keystone, which returns a **Token** and **Service
# Catalog**. The catalog contains the endpoint for all services the
# user/tenant has access to - including nova, glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=https://cloud.numergy.com/identity/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=123456789012345678901234567890ab
export OS_TENANT_NAME="Numergy-Demo:POC_ACME_CORP"
# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME="jo.blogs@acme.net"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
Installing the nova CLI (following the instructions at http://docs.openstack.org/user-guide/common/cli_install_openstack_command_line_clients.html):
sudo easy_install pip
pip install python-novaclient
Then test out the credentials using the file downloaded above:
source /path/to/openstack.rc.txt
nova list
nova image-list
vCloud Director enables the provisioning and control of VMware-based clouds. These are supported through the Apache jclouds provider for VMware vCloud Director v1.5.
First, in your brooklyn.properties
define a location as follows:
brooklyn.location.named.my-vcloud-director.identity=<V_ORG@USERNAME>
brooklyn.location.named.my-vcloud-director.credential=<PASSWORD>
brooklyn.location.named.my-vcloud-director.endpoint=https://<YOUR_ENDPOINT>/api
To force AMP to use a particular image in vCloud Director, one can add:
brooklyn.location.named.my-vcloud-director.imageNameRegex=centos6.4x64
From $AMP_HOME, you can list the image IDs available using the following command:
./bin/amp cloud-compute list-images --location my-vcloud-director
To force AMP to use a particular hardwareSpec in vCloud Director, one can add something like:
brooklyn.location.named.my-vcloud-director.hardwareId=1CPU_1GB_RAM
From $AMP_HOME, you can list the hardware profile IDs available using the following command:
./bin/amp cloud-compute list-hardware-profiles --location my-vcloud-director
Notice that the hardware profiles are synthetically generated using the following properties:
jclouds.vcloud-director.hardware-profiles.max-cpu (by default, a maximum of 8 CPUs)
jclouds.vcloud-director.hardware-profiles.min-ram (by default, a minimum of 512 MB RAM)
jclouds.vcloud-director.hardware-profiles.max-ram (by default, a maximum of 8192 MB RAM)
By default, the following hardware profiles are generated:
{id=1CPU_0.5GB_RAM, providerId=1CPU_0.5GB_RAM, name=1CPU_0.5GB_RAM, processors=[{cores=1.0, speed=1.0}], ram=512, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=1CPU_1GB_RAM, providerId=1CPU_1GB_RAM, name=1CPU_1GB_RAM, processors=[{cores=1.0, speed=1.0}], ram=1024, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=1CPU_2GB_RAM, providerId=1CPU_2GB_RAM, name=1CPU_2GB_RAM, processors=[{cores=1.0, speed=1.0}], ram=2048, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=1CPU_4GB_RAM, providerId=1CPU_4GB_RAM, name=1CPU_4GB_RAM, processors=[{cores=1.0, speed=1.0}], ram=4096, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=1CPU_8GB_RAM, providerId=1CPU_8GB_RAM, name=1CPU_8GB_RAM, processors=[{cores=1.0, speed=1.0}], ram=8192, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=2CPU_0.5GB_RAM, providerId=2CPU_0.5GB_RAM, name=2CPU_0.5GB_RAM, processors=[{cores=2.0, speed=1.0}], ram=512, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=2CPU_1GB_RAM, providerId=2CPU_1GB_RAM, name=2CPU_1GB_RAM, processors=[{cores=2.0, speed=1.0}], ram=1024, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=2CPU_2GB_RAM, providerId=2CPU_2GB_RAM, name=2CPU_2GB_RAM, processors=[{cores=2.0, speed=1.0}], ram=2048, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=2CPU_4GB_RAM, providerId=2CPU_4GB_RAM, name=2CPU_4GB_RAM, processors=[{cores=2.0, speed=1.0}], ram=4096, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=2CPU_8GB_RAM, providerId=2CPU_8GB_RAM, name=2CPU_8GB_RAM, processors=[{cores=2.0, speed=1.0}], ram=8192, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=4CPU_0.5GB_RAM, providerId=4CPU_0.5GB_RAM, name=4CPU_0.5GB_RAM, processors=[{cores=4.0, speed=1.0}], ram=512, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=4CPU_1GB_RAM, providerId=4CPU_1GB_RAM, name=4CPU_1GB_RAM, processors=[{cores=4.0, speed=1.0}], ram=1024, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=4CPU_2GB_RAM, providerId=4CPU_2GB_RAM, name=4CPU_2GB_RAM, processors=[{cores=4.0, speed=1.0}], ram=2048, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=4CPU_4GB_RAM, providerId=4CPU_4GB_RAM, name=4CPU_4GB_RAM, processors=[{cores=4.0, speed=1.0}], ram=4096, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=4CPU_8GB_RAM, providerId=4CPU_8GB_RAM, name=4CPU_8GB_RAM, processors=[{cores=4.0, speed=1.0}], ram=8192, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=8CPU_0.5GB_RAM, providerId=8CPU_0.5GB_RAM, name=8CPU_0.5GB_RAM, processors=[{cores=8.0, speed=1.0}], ram=512, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=8CPU_1GB_RAM, providerId=8CPU_1GB_RAM, name=8CPU_1GB_RAM, processors=[{cores=8.0, speed=1.0}], ram=1024, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=8CPU_2GB_RAM, providerId=8CPU_2GB_RAM, name=8CPU_2GB_RAM, processors=[{cores=8.0, speed=1.0}], ram=2048, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=8CPU_4GB_RAM, providerId=8CPU_4GB_RAM, name=8CPU_4GB_RAM, processors=[{cores=8.0, speed=1.0}], ram=4096, hypervisor=esxi, supportsImage=ALWAYS_TRUE},
{id=8CPU_8GB_RAM, providerId=8CPU_8GB_RAM, name=8CPU_8GB_RAM, processors=[{cores=8.0, speed=1.0}], ram=8192, hypervisor=esxi, supportsImage=ALWAYS_TRUE}
If one needs to generate hardware profiles with more RAM, add something like the following to brooklyn.properties:
brooklyn.location.named.my-vcloud-director.jclouds.vcloud-director.hardware-profiles.max-ram: 20480
This will generate hardware profiles with up to 20 GB of RAM.
The same configuration can also be supplied inline as a YAML location:
location:
  jclouds:vcloud-director:
    endpoint: https://<YOUR_ENDPOINT>/api
    identity: <V_ORG@USERNAME>
    credential: <PASSWORD>
    jclouds.vcloud-director.hardware-profiles.max-cpu: 12
    jclouds.vcloud-director.hardware-profiles.max-ram: 20480
    templateOptions:
      networks: [ "<MY_NETWORK>" ]
For VMware environments, vRealize Automation (vRA) is an important target. AMP includes support for vRA, version 6.1, through a custom jclouds provider.
Below is a YAML example of a vRA location configuration.
location:
  jclouds:vcac:
    endpoint: https://<YOUR_ENDPOINT>
    identity: <USERNAME@TENANT>
    credential: <PASSWORD>
    templateOptions:
      requestFor: <REQUEST_FOR>
      networkProfileName: <NETWORK_PROFILE_NAME>
      subtenantRef: <SUBTENANT_REF>
Alternatively, this can be defined as a named location in brooklyn.properties
as follows:
brooklyn.location.named.my-vcac=jclouds:vcac
brooklyn.location.named.my-vcac.endpoint=https://<YOUR_ENDPOINT>
brooklyn.location.named.my-vcac.identity=<USERNAME@TENANT>
brooklyn.location.named.my-vcac.credential=<PASSWORD>
brooklyn.location.named.my-vcac.templateOptions={ requestFor: <REQUEST_FOR>, networkProfileName: <NETWORK_PROFILE_NAME>, subtenantRef: <SUBTENANT_REF> }
Based on experimentation and on the vRA Programming Guide, e.g. in the “Request a Machine” section, these parameters have the following descriptions:
requestedFor
is a String that “Specifies the ID of the user for whom this request is logged.”
The networkProfileName
is undocumented for requesting a machine, but provisioning seems to fail
without it. It is equivalent to the Virtual Machine profile name
in the vRA web-console.
The subtenantRef
is the “ID of the business group.” It corresponds to the key
provider-provisioningGroupId
in the request payload (RequestData
).
To force AMP to use a particular image in vRA, one can add a line like:
brooklyn.location.named.my-vcac.imageId=225be644-a0f0-4d7b-85d5-a399b86ab5ad
From $AMP_HOME, you can list the image IDs available using the following command:
./bin/amp cloud-compute list-images --location my-vcac
To force AMP to use a particular hardwareSpec in vRA, one can add something like:
brooklyn.location.named.my-vcac.hardwareId=medium
From $AMP_HOME, you can list the hardware profile IDs available using the following command:
./bin/amp cloud-compute list-hardware-profiles --location my-vcac
By default, this will require a manual approval step to approve the request created by jclouds on behalf of the user.
To enable the auto-approval workflow, please add the following options, in addition to the previous ones (obviously substituting the correct approver name and password):
brooklyn.location.named.my-vcac.templateOptions={ ..., shouldAutoApprove:true, approverName: "tcaapprover@dtacorp@tca", approverPassword: "pa55word" }
Named locations can be defined for commonly used groups of properties,
with the syntax brooklyn.location.named.your-group-name.
followed by the relevant properties.
These can be accessed at runtime using the syntax named:your-group-name
as the deployment location.
Some illustrative examples using named locations and showing the syntax and properties above are as follows:
# Production pool of machines for my application (deploy to named:prod1)
brooklyn.location.named.prod1=byon:(hosts="10.9.1.1,10.9.1.2,produser2@10.9.2.{10,11,20-29}")
brooklyn.location.named.prod1.user=produser1
brooklyn.location.named.prod1.privateKeyFile=~/.ssh/produser_id_rsa
brooklyn.location.named.prod1.privateKeyPassphrase=s3cr3tCOMPANYpassphrase
# AWS using my company's credentials and image standard, then labelling images so others know they're mine
brooklyn.location.named.company-jungle=jclouds:aws-ec2:us-west-1
brooklyn.location.named.company-jungle.identity=BCDEFGHIJKLMNOPQRSTU
brooklyn.location.named.company-jungle.privateKeyFile=~/.ssh/public_clouds/company_aws_id_rsa
brooklyn.location.named.company-jungle.imageId=ami-12345
brooklyn.location.named.company-jungle.minRam=2048
brooklyn.location.named.company-jungle.userMetadata=application=my-jungle-app,owner="Bob Johnson"
brooklyn.location.named.company-jungle.machineCreateAttempts=2
brooklyn.location.named.AWS\ Virginia\ Large\ Centos = jclouds:aws-ec2
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.region = us-east-1
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.imageId=us-east-1/ami-7d7bfc14
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.user=root
brooklyn.location.named.AWS\ Virginia\ Large\ Centos.minRam=4096
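A blueprint can then reference any of these named locations as its deployment target; for example, using the prod1 location defined above:
location: named:prod1
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess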
Named locations can refer to other named locations using named:xxx
as their value.
These will inherit the configuration and can override selected keys.
Properties set in the namespace of the provider (e.g. b.l.jclouds.aws-ec2.KEY=VALUE
)
will be inherited by everything which extends AWS.
Sub-prefix strings are also inherited up to brooklyn.location.*
,
except that they are filtered for single-word and other
known keys
(so that we exclude provider-scoped properties when looking at sub-prefix keys).
The precedence for configuration defined at different levels is that the value
defined in the most specific context will apply.
This is rather straightforward and powerful to use,
although it sounds rather more complicated than it is!
The examples below should make it clear.
You could use the following to install
a public key on all provisioned machines,
an additional public key in all AWS machines,
and no extra public key in prod1
:
brooklyn.location.extraSshPublicKeyUrls=http://me.com/public_key
brooklyn.location.jclouds.aws-ec2.extraSshPublicKeyUrls="[ \"http://me.com/public_key\", \"http://me.com/aws_public_key\" ]"
brooklyn.location.named.prod1.extraSshPublicKeyUrls=
And in the example below, a config key is repeatedly overridden.
Deploying location: named:my-extended-aws
will result in an aws-ec2
machine in us-west-1
(by inheritance)
with VAL6
for KEY
:
brooklyn.location.KEY=VAL1
brooklyn.location.jclouds.KEY=VAL2
brooklyn.location.jclouds.aws-ec2.KEY=VAL3
brooklyn.location.jclouds.aws-ec2@us-west-1.KEY=VAL4
brooklyn.location.named.my-aws=jclouds:aws-ec2:us-west-1
brooklyn.location.named.my-aws.KEY=VAL5
brooklyn.location.named.my-extended-aws=named:my-aws
brooklyn.location.named.my-extended-aws.KEY=VAL6
“Bring-your-own-nodes” mode is useful in production, where machines have been provisioned by someone else, and during testing, to cut down provisioning time.
Your nodes must meet the following prerequisites:
To deploy to machines with known IPs in a blueprint, use the following syntax:
location:
  byon:
    user: brooklyn
    privateKeyFile: ~/.ssh/brooklyn.pem
    hosts:
    - 192.168.0.18
    - 192.168.0.19
Some of the login properties as described above for jclouds are supported,
but not loginUser
(as no users are created), and not any of the
VM creation parameters such as minRam
and imageId
.
(These clearly do not apply in the same way, and they are not
by default treated as constraints, although an entity can confirm these
where needed.)
As before, if the AMP user and its default key are authorized for the hosts,
those fields can be omitted.
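For example, if the AMP user's default SSH key is already authorized on the machines, a minimal in-line form (with illustrative IP addresses) is sufficient:
location: byon:(hosts="192.168.0.18,192.168.0.19")
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess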
Named locations can also be configured in your brooklyn.properties
,
using the format byon:(key=value,key2=value2)
.
For convenience, wildcard globs are supported in hosts.
brooklyn.location.named.On-Prem\ Iron\ Example=byon:(hosts="10.9.1.1,10.9.1.2,produser2@10.9.2.{10,11,20-29}")
brooklyn.location.named.On-Prem\ Iron\ Example.user=produser1
brooklyn.location.named.On-Prem\ Iron\ Example.privateKeyFile=~/.ssh/produser_id_rsa
brooklyn.location.named.On-Prem\ Iron\ Example.privateKeyPassphrase=s3cr3tpassphrase
Alternatively, you can create a specific BYON location through the location wizard tool available within the web console. This location will be saved as a catalog entry for easy reusability.
For more complex host configuration, one can define custom config values per machine. In the example
below, there will be two machines. The first will be a machine reachable on
ssh -i ~/.ssh/brooklyn.pem -p 8022 myuser@50.51.52.53
. The second is a Windows machine, reachable
over WinRM. Each machine also has a private address (e.g. for use within a private network).
location:
  byon:
    hosts:
    - ssh: 50.51.52.53:8022
      privateAddresses: [10.0.0.1]
      privateKeyFile: ~/.ssh/brooklyn.pem
      user: myuser
    - winrm: 50.51.52.54:8985
      privateAddresses: [10.0.0.2]
      password: mypassword
      user: myuser
      osFamily: windows
The BYON location also supports a machine chooser, using the config key byon.machineChooser
.
This allows one to plug in logic to choose from the set of available machines in the pool. For
example, additional config could be supplied for each machine. This could be used (during the call
to location.obtain()
) to find the config that matches the requirements of the entity being
provisioned. See FixedListMachineProvisioningLocation.MACHINE_CHOOSER
.
SSH keys are one of the simplest and most secure ways to access remote servers. They consist of two parts:
A private key (e.g. id_rsa
) which is known only to one party or group
A public key (e.g. id_rsa.pub
) which can be given to anyone and everyone,
and which can be used to confirm that a party has a private key
(or has signed a communication with the private key)
In this way, someone – such as you – can have a private key,
and can install a public key on a remote machine (in an authorized_keys
file)
for secure automated access.
Commands such as ssh
(and AMP) can log in without
revealing the private key to the remote machine,
the remote machine can confirm it is you accessing it (if no one else has the private key),
and no one snooping on the network can decrypt any of the traffic.
If you don’t have an SSH key, create one with:
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
If you want to deploy to localhost
, ensure that you have a public and private key,
and that your key is authorized for ssh access:
# _Appends_ id_rsa.pub to authorized_keys. Other keys are unaffected.
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Now verify your setup by running the command: ssh localhost echo hello world
If your setup is correct, you should see hello world
printed back at you.
On the first connection, you may see a message similar to this:
The authenticity of host 'localhost (::1)' can't be established. RSA key fingerprint is 7b:e3:8e:c6:5b:2a:05:a1:7c:8a:cf:d1:6a:83:c2:ad. Are you sure you want to continue connecting (yes/no)?
Simply answer ‘yes’ and then repeat the command again.
If this isn’t the case, see below.
MacOS user? In addition to the above, enable “Remote Login” in “System Preferences > Sharing”.
Got a passphrase? Set brooklyn.location.localhost.privateKeyPassphrase
as described here.
If you’re not sure, or you don’t know what a passphrase is, you can test this by executing ssh-keygen -y
.
If it does not ask for a passphrase, then your key has no passphrase.
If your key does have a passphrase, you can remove it by running ssh-keygen -p
.
Check that you have an ~/.ssh/id_rsa
file (or id_dsa
) and a corresponding public key with a .pub
extension;
if not, create one as described above.
The ~/.ssh/
directory, or files in that directory, may have permissions they shouldn't:
they should be visible only to the user (apart from public keys),
both on the source machine and the target machine.
You can verify this with ls -l ~/.ssh/
: lines should start with -rw-------
or -r--------
(or -rwx------
for directories).
If it does not, execute chmod go-rwx ~/.ssh ~/.ssh/*
.
Sometimes machines are configured with different sets of supported SSL/TLS versions and ciphers;
if command-line ssh
and scp
work, but AMP/java does not, check the versions enabled in Java and on both servers.
Missing entropy: creating and using ssh keys requires randomness available on the servers,
usually in /dev/random
; see here for more information
If passwordless ssh login to localhost
and passwordless sudo
is enabled on your
machine, you should be able to deploy blueprints with no special configuration,
just by specifying location: localhost
in YAML.
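For example, this minimal sketch deploys a no-op entity to localhost:
location: localhost
services:
- type: org.apache.brooklyn.entity.software.base.EmptySoftwareProcess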
If you use a passphrase or prefer a different key, these can be configured as follows:
brooklyn.location.localhost.privateKeyFile=~/.ssh/brooklyn_key
brooklyn.location.localhost.privateKeyPassphrase=s3cr3tPASSPHRASE
Alternatively, you can create a specific localhost location through the location wizard tool available within the web console. This location will be saved as a catalog entry for easy reusability.
If you encounter issues or for more information, see SSH Keys Localhost Setup.
If you are normally prompted for a password when executing sudo
commands, passwordless sudo
must also be enabled. To enable passwordless sudo
for your account, a line must be added to the system /etc/sudoers
file. To edit the file, use the visudo
command:
sudo visudo
Add this line at the bottom of the file, replacing username
with your own user:
username ALL=(ALL) NOPASSWD: ALL
If executing the following command does not ask for your password, then sudo
should be setup correctly:
sudo ls
Some additional location types are supported for specialized situations:
The spec host
, taking a string argument (the address) or a map (host
, user
, password
, etc.),
provides a convenient syntax when specifying a single host.
For example:
location: host:(192.168.0.1)
services:
- type: org.apache.brooklyn.entity.webapp.jboss.JBoss7Server
Or, in brooklyn.properties
, set brooklyn.location.named.host1=host:(192.168.0.1)
.
The spec multi
allows multiple locations, specified as targets
,
to be combined and treated as one location.
In its simplest form, this will use the first target location where possible, and will then switch to the second and subsequent locations when there are no machines available.
In the example below, it provisions the first node to 192.168.0.1, then it provisions subsequent nodes into the AWS us-east-1 region (because the bring-your-own-nodes location will have run out of machines).
location:
  multi:
    targets:
    - byon:(hosts=192.168.0.1)
    - jclouds:aws-ec2:us-east-1
services:
- type: org.apache.brooklyn.entity.group.DynamicCluster
  brooklyn.config:
    initialSize: 3
    memberSpec:
      $brooklyn:entitySpec:
        type: org.apache.brooklyn.entity.machine.MachineEntity
The multi
location also supports the “availability zone” location extension: it presents each
target location as an “availability zone”. This means that a cluster can be configured to
round-robin across the targets.
For example, in the blueprint below the cluster will request VMs round-robin across the three zones
(where zone1
etc are locations already added to the catalog, or defined in brooklyn.properties).
The configuration option dynamiccluster.zone.enable
on DynamicCluster
tells it to query the
given location for the AvailabilityZoneExtension
. If available, it will query for the list of
zones (in this case the list of targets), and then use them round-robin. Custom alternatives to
round-robin are also possible using the configuration option dynamiccluster.zone.placementStrategy
on DynamicCluster
.
location:
  multi:
    targets:
    - zone1
    - zone2
    - zone3
services:
- type: org.apache.brooklyn.entity.group.DynamicCluster
  brooklyn.config:
    dynamiccluster.zone.enable: true
    initialSize: 4
    memberSpec:
      $brooklyn:entitySpec:
        type: org.apache.brooklyn.entity.machine.MachineEntity
An entity type also exists that allows defining an entity which becomes available as a location.
AMP can be configured to persist its state so that the AMP server can be restarted, or so that a high availability standby server can take over.
AMP can persist its state to one of two places: the file system, or to an Object Store of your choice.
To configure AMP, the relevant command line options for the launch
command are:
--persist
--persistenceDir
--persistenceLocation
For the persistence mode, the possible values are:
disabled means that no state will be persisted or read; when AMP stops all state is lost.
rebind means that it will read existing state, and recreate entities, locations and policies from that. If there is no existing state, startup will fail.
clean means that any existing state will be deleted, and AMP will be started afresh.
auto means AMP will rebind if there is any existing state, or will start afresh if there is no state.
The persistence directory and location can instead be specified from brooklyn.properties
using
the following config keys:
brooklyn.persistence.dir
brooklyn.persistence.location.spec
To persist to the file system, start AMP with:
amp launch --persist auto --persistenceDir /path/to/myPersistenceDir
If there is already data at /path/to/myPersistenceDir
, then a backup of the directory will
be made. This will have a name like /path/to/myPersistenceDir.20140701-142101345.bak
.
The state is written to the given path. The file structure under that path is:
./entities/
./locations/
./policies/
./enrichers/
In each of those directories, an XML file will be created per item - for example a file per
entity in ./entities/
. This file will capture all of the state - for example, an
entity’s: id; display name; type; config; attributes; tags; relationships to locations, child
entities, group membership, policies and enrichers; and dynamically added effectors and sensors.
If using the default persistence dir (i.e. no --persistenceDir
was specified), then AMP will
write its state to ~/.brooklyn/brooklyn-persisted-state/data
. Copies of this directory
will be automatically created in ~/.brooklyn/brooklyn-persisted-state/backups/
each time AMP
is restarted (or if a standby AMP instance takes over as master).
A custom directory for AMP state can also be configured in brooklyn.properties
using:
# For all AMP files
brooklyn.base.dir=/path/to/base/dir
# Sub-directory of base.dir for writing persisted state (if relative). If directory
# starts with "/" (or "~/", or something like "c:\") then assumed to be absolute.
brooklyn.persistence.dir=data
# Sub-directory of base.dir for creating backup directories (if relative). If directory
# starts with "/" (or "~/", or something like "c:\") then assumed to be absolute.
brooklyn.persistence.backups.dir=backups
This base.dir
will also include temporary files such as the OSGi cache.
If persistence.dir
is not specified then it will use the sub-directory
brooklyn-persisted-state/data
of the base.dir
. If the backups.dir
is not specified
the backup directories will be created in the sub-directory backups
of the persistence dir.
AMP can persist its state to any Object Store API that jclouds supports including S3, Swift and Azure. This gives access to any compatible Object Store product or cloud provider including AWS-S3, SoftLayer, Rackspace, HP and Microsoft Azure. For a complete list of supported providers, see jclouds.
To configure the Object Store, add the credentials to ~/.brooklyn/brooklyn.properties
such as:
brooklyn.location.named.aws-s3-eu-west-1=aws-s3:eu-west-1
brooklyn.location.named.aws-s3-eu-west-1.identity=ABCDEFGHIJKLMNOPQRSTU
brooklyn.location.named.aws-s3-eu-west-1.credential=abcdefghijklmnopqrstuvwxyz1234567890ab/c
or:
brooklyn.location.named.softlayer-swift-ams01=jclouds:swift:https://ams01.objectstorage.softlayer.net/auth/v1.0
brooklyn.location.named.softlayer-swift-ams01.identity=ABCDEFGHIJKLM:myname
brooklyn.location.named.softlayer-swift-ams01.credential=abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqrstuvwxyz12
Start AMP pointing at this target object store, e.g.:
nohup amp launch --persist auto --persistenceDir myContainerName --persistenceLocation named:softlayer-swift-ams01 &
The following brooklyn.properties
options can also be used:
# Location spec string for an object store (e.g. jclouds:swift:URL) where persisted state
# should be kept; if blank or not supplied, the file system is used.
brooklyn.persistence.location.spec=<location>
# Container name for writing persisted state
brooklyn.persistence.dir=/path/to/dataContainer
# Location spec string for an object store (e.g. jclouds:swift:URL) where backups of persisted
# state should be kept; defaults to the local file system.
brooklyn.persistence.backups.location.spec=<location>
# Container name for writing backups of persisted state;
# defaults to 'backups' inside the default persistence container.
brooklyn.persistence.backups.dir=/path/to/backupContainer
When AMP starts up pointing at existing state, it will recreate the entities, locations and policies based on that persisted state.
Once all have been created, AMP will "manage" the entities. This will bind to the underlying entities under management to update each entity's sensors (e.g. to poll over HTTP or JMX). This new state will be reported in the web-console and can also trigger any registered policies.
AMP includes a command to copy persistence state easily between two locations.
The copy-state
CLI command takes the following arguments:
--persistenceDir
--persistenceLocation
--destinationDir
--destinationLocation
--transformations
If rebind fails for any reason, details of the underlying failures will be reported
in the brooklyn.debug.log
. There are several approaches to resolving problems.
The problems reported in brooklyn.debug.log will indicate where the problem lies - which entities, locations or policies, and in what way it failed.
The ~/.brooklyn/brooklyn.properties
has several configuration options:
rebind.failureMode.danglingRef=continue
rebind.failureMode.loadPolicy=continue
rebind.failureMode.addPolicy=continue
rebind.failureMode.rebind=fail_at_end
rebind.failureMode.addConfig=fail_at_end
For each of these configuration options, the possible values are:
- fail_fast: stop rebind immediately upon errors; do not try to rebind other entities
- fail_at_end: continue rebinding all entities, but then fail so that all errors encountered are reported
- continue: log a warning, but ignore the error to continue rebinding. Depending on the type of error, this can cause serious problems later (e.g. if the state of an entity was entirely missing, then all its children would be orphaned).

The meaning of the configuration options is:
- rebind.failureMode.danglingRef: if there is a reference to an entity, location or policy that is missing… whether to continue (discarding the reference) or fail.
- rebind.failureMode.loadPolicy: if there is an error instantiating or reconstituting the state of a policy or enricher… whether to continue (discarding the policy or enricher) or fail.
- rebind.failureMode.addPolicy: if there is an error re-adding the policy or enricher to its associated entity… whether to continue (discarding the policy or enricher) or fail.
- rebind.failureMode.addConfig: if there is an invalid config value, or some other error occurs when adding a config.
- rebind.failureMode.rebind: any errors on rebind not covered by the more specific error cases described above.

Help can be found at dev@brooklyn.apache.org, where folk will be able to investigate issues and suggest work-arounds.
By sharing the persisted state (with credentials removed), AMP developers will be able to reproduce and debug the problem.
The state of each entity, location, policy and enricher is persisted in XML. It is thus human readable and editable.
After first taking a backup of the state, it is possible to modify the state. For example, an offending entity could be removed, or references to that entity removed, or its XML could be fixed to remove the problem.
The final (powerful and dangerous!) tool is to execute Groovy code on the running AMP instance. If authorized, the REST api allows arbitrary Groovy scripts to be passed in and executed. This allows the state of entities to be modified (and thus fixed) at runtime.
If used, it is strongly recommended that Groovy scripts are run against a disconnected AMP instance. After fixing the entities, locations and/or policies, the AMP instance’s new persisted state can be copied and used to fix the production instance.
The most common problem on rebind is that custom entity code has not been written in a way that can be persisted and/or rebound.
The rule of thumb when implementing new entities, locations, policies and enrichers is that all state must be persistable. All state must be stored as config or as attributes, and must be serializable. For making backwards compatibility simpler, the persisted state should be clean.
Below are tips and best practices for when implementing an entity in Java (or any other JVM language).
How to store entity state:
- Do not mutate attribute values in place (e.g. getAttribute(MY_LIST).add("a") is bad); the value may not be persisted unless setAttribute() is called.
- To request that an entity be persisted immediately, call entity.requestPersist(), which will trigger asynchronous persistence of the entity.
- Use of getRebindSupport() is discouraged - this will change in a future version.

How to store policy/enricher/location state:
- A field must be annotated with @SetFromFlag for it to be persisted. When you call requestPersist(), the values of these fields will be scheduled to be persisted. Warning: the @SetFromFlag functionality may change in future versions.

Persistable state:
- Ensure that config and attribute values are Serializable.
- Avoid referencing non-serializable runtime objects such as the ManagementContext.

Behaviour on rebind:
- By extending SoftwareProcess, entities get a lot of the rebind logic for free. For example, the default rebind() method will call connectSensors(). See SoftwareProcess Lifecycle for more details.
- entity.rebind() is called automatically by the AMP framework on rebind, after configuring the entity’s config/attributes but before the entity is managed. Note that init() will not be called on rebind.
- Feeds are persisted only if entity.addFeed(...) was called. Otherwise the feed needs to be re-registered on rebind. Warning: this behaviour may change in a future version.
- Subscriptions (e.g. subscribe(...) for sensor events) are not persisted. They must be re-registered on rebind. Warning: this behaviour may change in a future version.

Below are tips to make backwards-compatibility easier for persisted state:
- Mark a field as transient whenever you don’t want it persisted. Note that field names are part of the persisted state.

When using the file system it is important to ensure it is backed up regularly.
One could use rsync
to regularly backup the contents to another server.
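As a sketch, with placeholder paths and host name, such an rsync job might look like:
# mirror the persisted state to another server, excluding the backup sub-directories
rsync -az --delete --exclude 'backups/' /path/to/base/dir/data/ backupuser@backuphost:/backups/amp-persisted-state/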
It is also recommended to periodically create a complete archive of the state. A simple mechanism is to run a CRON job periodically (e.g. every 30 minutes) that creates an archive of the persistence directory, and uploads that to a backup facility (e.g. to S3).
Optionally, to avoid excessive load on the AMP server, the archive-generation could be done
on another “data” server. This could get a copy of the data via an rsync
job.
An example script to be invoked by CRON is shown below:
DATE=`date "+%Y%m%d.%H%M.%S"`
BACKUP_FILENAME=/path/to/archives/back-${DATE}.tar.gz
DATA_DIR=/path/to/base/dir/data
tar --exclude '*/backups/*' -czvf $BACKUP_FILENAME $DATA_DIR
# For s3cmd installation see http://s3tools.org/repositories
s3cmd put $BACKUP_FILENAME s3://mybackupbucket
rm $BACKUP_FILENAME
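Assuming the script above is saved as an executable file such as /path/to/archive-persisted-state.sh (the path is a placeholder), a crontab entry to run it every 30 minutes would be:
*/30 * * * * /path/to/archive-persisted-state.sh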
Object Stores will normally handle replication. However, many such object stores do not handle versioning (i.e. to allow access to an old version, if an object has been incorrectly changed or deleted).
The state can be downloaded periodically from the object store, archived and backed up.
An example script to be invoked by CRON is shown below:
DATE=`date "+%Y%m%d.%H%M.%S"`
BACKUP_FILENAME=/path/to/archives/back-${DATE}.tar.gz
TEMP_DATA_DIR=/path/to/tempdir
bin/amp copy-state \
--persistenceLocation named:my-persistence-location \
--persistenceDir /path/to/bucket \
--destinationDir $TEMP_DATA_DIR
tar --exclude '*/backups/*' -czvf $BACKUP_FILENAME $TEMP_DATA_DIR
# For s3cmd installation see http://s3tools.org/repositories
s3cmd put $BACKUP_FILENAME s3://mybackupbucket
rm $BACKUP_FILENAME
rm -r $TEMP_DATA_DIR
AMP will automatically run in HA mode if multiple AMP instances are started pointing at the same persistence store. One AMP node (e.g. the first one started) is elected as HA master: all write operations against AMP entities, such as creating an application or invoking an effector, should be directed to the master.
Once one node is running as MASTER
, other nodes start in either STANDBY
or HOT_STANDBY
mode:
In STANDBY
mode, an AMP instance will monitor the master and will be a candidate
to become MASTER
should the master fail. Standby nodes do not attempt to rebind
until they are elected master, so the state of existing entities is not available at
the standby node. However a standby server consumes very little resource until it is
promoted.
In HOT_STANDBY
mode, an AMP instance will read and make available the live state of
entities. Thus a hot-standby node is available as a read-only copy.
As with the standby node, if a hot-standby node detects that the master fails,
it will be a candidate for promotion to master.
In HOT_BACKUP
mode, an AMP instance will read and make available the live state of
entities, as a read-only copy. However this node is not able to become master,
so it can safely be used to test compatibility across different versions.
To explicitly specify what HA mode a node should be in, the following CLI options are available
for the parameter --highAvailability
:
- disabled: management node works in isolation; it will not cooperate with any other standby/master nodes in the management plane
- auto: will look for other management nodes, and will allocate itself as standby or master based on other nodes’ states
- master: will start up as master; if there is already a master then fails immediately
- standby: will start up as lukewarm standby; if there is not already a master then fails immediately
- hot_standby: will start up as hot standby; if there is not already a master then fails immediately
- hot_backup: will start up as hot backup; this can be done even if there is not already a master; this node will not be a master

The REST API offers live detection and control of the HA mode, including setting priority to control which nodes will be promoted on master failure:
- /server/ha/state: Returns the HA state of a management node (GET), or changes the state (POST)
- /server/ha/states: Returns the HA states and detail for all nodes in a management plane
- /server/ha/priority: Returns the HA node priority for MASTER failover (GET), or sets that priority (POST)

Note that when POSTing to a non-master server it is necessary to pass an AMP-Allow-Non-Master-Access: true header.
For example, the following cURL command could be used to change the state of a STANDBY
node on localhost:8082
to HOT_STANDBY
:
curl -v -X POST -d mode=HOT_STANDBY -H "AMP-Allow-Non-Master-Access: true" http://localhost:8082/v1/server/ha/state
This document supplements the High Availability documentation and provides an example of how to configure a pair of Cloudsoft AMP servers to run in master-standby mode with a shared NFS datastore. It assumes that the shared NFS mount is available on both machines at /mnt/brooklyn-persistence and that both machines can write to the folder. AMP can be configured to use either an object store such as S3, or a shared NFS mount. The recommended option is to use an object store, as described in the Object Store Persistence documentation. For simplicity, a shared NFS folder is assumed in this example.
To start, download and install the latest Cloudsoft AMP release on both VMs following the instructions in Running Cloudsoft AMP
On the first VM, which will be the master node, run the following to start AMP in high availability mode:
$ bin/brooklyn launch --highAvailability master --https --persist auto --persistenceDir /mnt/brooklyn-persistence
If you are using RPMs/deb to install, please see the Running Cloudsoft AMP documentation for the appropriate launch commands
Once AMP has launched, on the second VM, run the following command to launch AMP in standby mode:
$ bin/brooklyn launch --highAvailability auto --https --persist auto --persistenceDir /mnt/brooklyn-persistence
When running as a HA standby node, each standby AMP server (in this case there is only one standby) will check the shared persisted state
every second to determine the state of the HA master. If no heartbeat has been recorded for 30 seconds, then an election will be performed
and one of the standby nodes will be promoted to master. At this point all requests should be directed to the new master node.
If the master is terminated gracefully, the secondary will be immediately promoted to master. Otherwise, the secondary will be promoted after
heartbeats have been missed for a given length of time. This defaults to 30 seconds, and is configured in brooklyn.properties using
brooklyn.ha.heartbeatTimeout.
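For example, to lengthen the timeout to one minute you could add the following to brooklyn.properties; the duration value shown is illustrative and should be adjusted to your own failover requirements:
brooklyn.ha.heartbeatTimeout=1m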
In the event that tasks - such as the provisioning of a new entity - are running when a failover occurs, the new master will display the current state of the entity, but will not resume its provisioning or re-run any partially completed tasks. In this case it may be necessary to remove the entity and reprovision it. In the case of a failover whilst executing a task called by an effector, it may be possible to simply call the effector again.
It is the responsibility of the client to connect to the master AMP server. This can be accomplished in a variety of ways:
To allow the client application to automatically fail over in the event of a master server becoming unavailable, or the promotion of a new master,
a reverse proxy can be configured to route traffic depending on the response returned by https://<ip-address>:8443/v1/server/ha/state
(see above).
If a server returns "MASTER"
, then traffic should be routed to that server, otherwise it should not be. The client software should be configured
to connect to the reverse proxy server and no action is required by the client in the event of a failover. It can take up to 30 seconds for the
standby to be promoted, so the reverse proxy should retry for at least this period, or the failover time should be reconfigured to be shorter
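As an illustrative sketch of the health check such a proxy or external monitor could perform, the following command (the address and credentials are placeholders) treats a node as routable only when its HA state is "MASTER":
# note: the REST API returns the state with surrounding quotation characters
STATE=$(curl -s -k -u myusername:mypassword https://<ip-address>:8443/v1/server/ha/state)
if [ "$STATE" = '"MASTER"' ]; then
  echo "route traffic to this node"
fi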
If the cloud provider you are using supports Elastic or Floating IPs, then the IP address should be allocated to the HA master, and the client
application configured to connect to the floating IP address. In the event of a failure of the master node, the standby node will automatically
be promoted to master, and the floating IP will need to be manually re-allocated to the new master node. No action is required by the client
in the event of a failover. It is possible to automate the re-allocation of the floating IP if the AMP servers are deployed and managed
by AMP using the entity org.apache.brooklyn.entity.brooklynnode.AMPCluster
In this scenario, the responsibility for determining the AMP master server falls on the client application. When configuring the client application, a list of all servers in the cluster is passed in at application startup. On first connection, the client application connects to any of the members of the cluster to retrieve the HA states (see above). The JSON object returned is used to determine the addresses of all members of the cluster, and also to determine which node is the HA master.
In the event of a failure of the master node, the client application should then retrieve the HA states of the cluster from any of the other cluster members. This is the same process as when the application first connects to the cluster. The client should refresh its list of cluster members and determine which node is the HA master.
It is also recommended that the client application periodically checks the status of the cluster and updates its list of addresses. This will ensure that failover is still possible if the standby server(s) have been replaced. It also allows additional standby servers to be added at any time.
You can confirm that AMP is running in high availability mode on the master by logging into the web console at https://<ip-address>:8443
.
Similarly you can log into the web console on the standby VM where you will see a warning that the server is not the high availability master.
To test a failover, you can simply terminate the process on the first VM and log into the web console on the second VM. Upon launch, AMP will
output its PID to the file pid.txt
; you can force an immediate (non-graceful) termination of the process by running the following command
from the same directory from which you launched AMP:
$ kill -9 $(cat pid.txt)
It is also possible to check the high availability state of a running AMP server using the following curl command:
$ curl -k -u myusername:mypassword https://<ip-address>:8443/v1/server/ha/state
This will return one of the following states:
"INITIALIZING"
"STANDBY"
"HOT_STANDBY"
"HOT_BACKUP"
"MASTER"
"FAILED"
"TERMINATED"
Note: The quotation characters will be included in the reply
To obtain information about all of the nodes in the cluster, run the following command against any of the nodes in the cluster:
$ curl -k -u myusername:mypassword https://<ip-address>:8443/v1/server/ha/states
This will return a JSON document describing the AMP nodes in the cluster. An example of two HA AMP nodes is as follows (whitespace formatting has been added for clarity):
{
  "ownId": "XkJeXUXE",
  "masterId": "yAVz0fzo",
  "nodes": {
    "yAVz0fzo": {
      "nodeId": "yAVz0fzo",
      "nodeUri": "https://<server1-ip-address>:8443/",
      "status": "MASTER",
      "localTimestamp": 1466414301065,
      "remoteTimestamp": 1466414301000
    },
    "XkJeXUXE": {
      "nodeId": "XkJeXUXE",
      "nodeUri": "https://<server2-ip-address>:8443/",
      "status": "STANDBY",
      "localTimestamp": 1466414301066,
      "remoteTimestamp": 1466414301000
    }
  },
  "links": {}
}
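For example, the address of the current master can be extracted from this response using jq (discussed further below); the address and credentials here are placeholders:
curl -s -k -u myusername:mypassword https://<ip-address>:8443/v1/server/ha/states | jq -r '.masterId as $m | .nodes[$m].nodeUri'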
The examples above show how to use curl
to manually check the status of AMP via its REST API. The same REST API calls can also be used by
automated third party monitoring tools such as Nagios
Cloudsoft AMP provides a catalog, which is a persisted collection of versioned blueprints and other resources.
A set of blueprints is loaded from the default.catalog.bom
in the AMP folder by default and additional ones can be added through the web console or CLI.
Blueprints in the catalog can be deployed directly, via the AMP CLI or the web console,
or referenced in other blueprints using their id
.
An item or items to be added to the catalog is defined by a YAML file, specifying the catalog metadata for the items and the actual blueprint or resource definition.
A single catalog item can be defined following this general structure:
brooklyn.catalog:
  <catalog-metadata>
  item:
    <blueprint-or-resource-definition>
To define multiple catalog items in a single YAML, where they may share some metadata, use the following structure:
brooklyn.catalog:
  <catalog-metadata>
  items:
  - <additional-catalog-metadata>
    item:
      <blueprint-or-resource-definition>
  - <additional-catalog-metadata>
    item:
      <blueprint-or-resource-definition>
Catalog metadata fields supply the additional information required in order to register an item in the catalog.
These fields can be supplied as key: value
entries
where either the <catalog-metadata>
or <additional-catalog-metadata>
placeholders are,
with the latter overriding the former unless otherwise specified below.
The following metadata is required for all items:
- id: a human-friendly unique identifier for how this catalog item will be referenced from blueprints
- version: multiple versions of a blueprint can be installed and used simultaneously; this field disambiguates between blueprints of the same id. Note that this is typically not the version of the software being installed, but rather the version of the blueprint. For more information on versioning, see below. (Also note YAML treats numbers differently to Strings. Explicit quotes may sometimes be required.)

To reference a catalog item in another blueprint, simply reference its ID and optionally its version number.
For instance, if we’ve added an item with metadata { id: datastore, version: "1.0" }
(such as the example below),
we could refer to it in another blueprint with:
services:
- type: datastore:1.0
In addition to the above fields, exactly one of the following is also required:
- item: the YAML for a service or policy or location specification (a map containing type and optional brooklyn.config), or a full application blueprint (in the usual YAML format) for a template; or
- items: a list of catalog items, where each entry in the map follows the same schema as the brooklyn.catalog value, and the keys in these maps override any metadata specified as a sibling of this items key (or, in the case of brooklyn.libraries, they add to the list); if there are references between items, then order is important: items are processed in order, depth-first, and forward references are not supported. Entries can be a URL to another catalog file to include, inheriting the meta from the current hierarchy. Libraries defined so far in the meta will be used to load classpath entries. For example:

brooklyn.catalog:
  displayName: Foo
  brooklyn.libraries:
  - http://some.server.or.other/path/my.jar
  items:
  - classpath://my-catalog-entries-inside-jar.bom
  - some-property: value
    include: classpath://more-catalog-entries-inside-jar.bom
  - id: use-from-my-catalog
    item:
      type: some-type-defined-in-my-catalog-entries
The following optional catalog metadata is supported:
- itemType: the type of the item being defined. When adding a template (see below) this must be set. In most other cases it can be omitted and the type will be inferred. The supported item types are:
  - entity
  - template
  - policy
  - location
- name: a nicely formatted display name for the item, used when presenting it in a GUI
- description: supplies an extended textual description for the item
- iconUrl: points to an icon for the item, used when presenting it in a GUI. The URL prefix classpath is supported, but these URLs may not refer to items in any OSGi bundle in the brooklyn.libraries section (to prevent requiring all OSGi bundles to be loaded at launch). Icons are instead typically installed either at the server from which the OSGi bundles or catalog items are supplied, or in the conf folder of the AMP distro.
- scanJavaAnnotations [experimental]: if provided (as true), this will scan any locally provided library URLs for types annotated @Catalog and extract metadata to include them as catalog items. If no libraries are specified this will scan the default classpath. This feature is experimental and may change or be removed. Also note that external OSGi dependencies are not supported and other metadata (such as versions, etc) may not be applied.
- brooklyn.libraries: a list of pointers to OSGi bundles required for the catalog item. This can be omitted if blueprints are pure YAML and everything required is included in the classpath and catalog. Where custom Java code or bundled resources is needed, however, OSGi JARs supply a convenient packaging format and a very powerful versioning format. Libraries should be supplied in the form brooklyn.libraries: [ "http://...", "http://..." ], or as brooklyn.libraries: [ { name: symbolic-name, version: 1.0, url: http://... }, ... ] if symbolic-name:1.0 might already be installed from a different URL and you want to skip the download. Note that these URLs should point at immutable OSGi bundles; if the contents at any of these URLs change, the behaviour of the blueprint may change whenever a bundle is reloaded in an AMP server, and if entities have been deployed against that version, their behavior may change in subtle or potentially incompatible ways. To avoid this situation, it is highly recommended to use OSGi version stamps as part of the URL.
- include: a URL to another catalog file to include, inheriting the meta from the current hierarchy. Libraries defined so far in the meta will be used to load classpath entries. include must be used when you have sibling properties. If it’s the only property it may be skipped by having the URL as the value - see the items example above.

The following example installs the RiakNode entity, making it also available as an application template, with a nice display name, description, and icon. It can be referred to in other blueprints as datastore:1.0, and its implementation will be the Java class org.apache.brooklyn.entity.nosql.riak.RiakNode included with AMP.
brooklyn.catalog:
  id: datastore
  version: 1.0
  itemType: template
  iconUrl: classpath://org/apache/brooklyn/entity/nosql/riak/riak.png
  name: Datastore (Riak)
  description: Riak is an open-source NoSQL key-value data store.
  item:
    type: org.apache.brooklyn.entity.nosql.riak.RiakNode
    name: Riak Node
This YAML will install three items:
brooklyn.catalog:
  version: 1.1
  iconUrl: classpath://org/apache/brooklyn/entity/nosql/riak/riak.png
  description: Riak is an open-source NoSQL key-value data store.
  items:
  - id: riak-node
    item:
      type: org.apache.brooklyn.entity.nosql.riak.RiakNode
      name: Riak Node
  - id: riak-cluster
    item:
      type: org.apache.brooklyn.entity.nosql.riak.RiakCluster
      name: Riak Cluster
  - id: datastore
    name: Datastore (Riak Cluster)
    itemType: template
    item:
      services:
      - type: riak-cluster
        location:
          jclouds:softlayer:
            region: sjc01
            # identity and credential must be set unless they are specified in your brooklyn.properties
            # identity: XXX
            # credential: XXX
        brooklyn.config:
          # the default size is 3 but this can be changed to suit your requirements
          initial.size: 3
          provisioning.properties:
            # you can also define machine specs
            minRam: 8gb
The items this will install are:
- riak-node, as before, but with a different name
- riak-cluster as a convenience short name for the org.apache.brooklyn.entity.nosql.riak.RiakCluster class
- datastore, now pointing at the riak-cluster blueprint, in SoftLayer and with the given size and machine spec, as the default implementation for anyone requesting a datastore (and if installed atop the previous example, new references to datastore will access this version because it is a higher number); because it is a template, users will have the opportunity to edit the YAML (see below). (This must be supplied after riak-cluster, because it refers to riak-cluster.)

In addition to blueprints, locations can be added to the Cloudsoft AMP catalog. The example below shows a location for the vagrant configuration used in the getting started guide, formatted as a catalog entry.
brooklyn.catalog:
  id: vagrant
  version: 1.0
  itemType: location
  name: Vagrant getting started location
  item:
    type: byon
    brooklyn.config:
      user: vagrant
      password: vagrant
      hosts:
      - 10.10.10.101
      - 10.10.10.102
      - 10.10.10.103
      - 10.10.10.104
Once this has been added to the catalog it can be used as a named location in yaml blueprints using:
location: vagrant
The following legacy and experimental syntax is also supported:
<blueprint-definition>
brooklyn.catalog:
<catalog-metadata>
In this format, the brooklyn.catalog
block is optional;
and an id
in the <blueprint-definition>
will be used to determine the catalog ID.
This is primarily supplied for OASIS CAMP 1.1 compatibility,
where the same YAML blueprint can be POSTed to the catalog endpoint to add to a catalog
or POSTed to the applications endpoint to deploy an instance.
(This syntax is discouraged as the latter usage,
POSTing to the applications endpoint,
will ignore the brooklyn.catalog
information;
this means references to any item
blocks in the <catalog-metadata>
will not be resolved,
and any OSGi brooklyn.libraries
defined there will not be loaded.)
When a template is added to the catalog, the blueprint will appear in the ‘Create Application’ dialog of the web console.
The Catalog tab in the web console will show all versions of catalog items, and allow you to add new items.
On the UI the “add” button at the top of the menu panel allows the
addition of new Applications to the catalog, via YAML, and of new Locations.
In addition to the GUI, items can be added to the catalog via the REST API
with a POST
of the YAML file to /v1/catalog
endpoint.
To do this using curl
:
curl http://127.0.0.1:8081/v1/catalog --data-binary @/path/to/riak.catalog.bom
On the UI, if an item is selected, a ‘Delete’ button in the detail panel can be used to delete it from the catalog.
Using the REST API, you can delete a versioned item from the catalog using the corresponding endpoint.
For example, to delete the item with id datastore
and version 1.0
with curl
:
curl -X DELETE http://127.0.0.1:8081/v1/catalog/applications/datastore/1.0
Note: Catalog items should not be deleted if there are running apps which were created using the same item. During rebinding the catalog item is used to reconstruct the entity.
If you have running apps which were created using the item you wish to delete, you should instead deprecate the catalog item. Deprecated catalog items will not appear in the add application wizard, or in the catalog list but will still be available to AMP for rebinding. The option to display deprecated catalog items in the catalog list will be added in a future release.
Deprecation applies to a specific version of a catalog item, so the full id including the version number is passed to the REST API as follows:
curl -X POST http://127.0.0.1:8081/v1/catalog/entities/MySQL:1.0/deprecated/true
Version numbers follow the OSGi convention. This can have a major, minor, micro and qualifier part.
For example: 1.0, 1.0.1, or 1.0.1-20150101.
The combination of id:version
strings must be unique across the catalog.
It is an error to deploy the same version of an existing item:
to update a blueprint, it is recommended to increase its version number;
alternatively in some cases it is permitted to delete an id:version
instance
and then re-deploy.
If no version is specified, re-deploying will automatically
increment an internal version number for the catalog item.
When referencing a blueprint, if a version number is not specified the latest non-snapshot version will be loaded when an entity is instantiated.
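For example, whereas the earlier snippet pinned datastore:1.0 explicitly, omitting the version as below will load the latest non-snapshot version of the datastore item:
services:
- type: datastore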
The brooklyn
CLI includes several commands for working with the catalog.
- --catalogAdd <file.bom> will add the catalog items in the bom file
- --catalogReset will reset the catalog to the initial state (based on brooklyn/default.catalog.bom on the classpath, by default in a dist in the conf/ directory)
- --catalogInitial <file.bom> will set the catalog items to use on first run, on a catalog reset, or if persistence is off

If --catalogInitial is not specified, the default initial catalog at brooklyn/default.catalog.bom will be used.
As scanJavaAnnotations: true
is set in default.catalog.bom
, AMP will scan the classpath for catalog items,
which will be added to the catalog.
To launch AMP without initializing the catalog, use --catalogInitial classpath://brooklyn/empty.catalog.bom
If persistence is enabled, catalog additions will remain between runs. If items that were
previously added based on items in brooklyn/default.catalog.bom
or --catalogInitial
are
deleted, they will not be re-added on subsequent restarts of brooklyn. I.e. --catalogInitial
is ignored
if persistence is enabled and persistent state has already been created.
For more information on these commands, run brooklyn help launch
.
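For instance, a minimal sketch of launching with a custom initial catalog would be (the file path is a placeholder):
bin/amp launch --catalogInitial /path/to/my-initial.catalog.bom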
Cloudsoft AMP exposes a powerful REST API, allowing it to be scripted from bash or integrated with other systems.
For many commands, the REST call follows the same structure as the web console URL
scheme, but with the #
at the start of the path removed; for instance the catalog
item cluster
in the web console is displayed at:
http://localhost:8081/#v1/catalog/entities/cluster:0.10.0-SNAPSHOT
And in the REST API it is accessed at:
http://localhost:8081/v1/catalog/entities/cluster:0.10.0-SNAPSHOT
A full reference for the REST API is automatically generated by the server at runtime. It can be found in the AMP web console, under the Script tab.
Here we include some of the most common REST examples and other advice for working with the REST API.
For command-line access, we recommend curl
, with tips below.
For navigating in a browser we recommend getting a plugin for working with REST; these are available for most browsers and make it easier to authenticate, set headers, and see JSON responses.
For manipulating JSON responses on the command-line,
the library jq
from stedolan’s github
is very useful, and available in most package repositories, including port
and brew
on Mac.
Here are some useful snippets:
List applications
curl http://localhost:8081/v1/applications
Deploy an application from __FILE__
curl http://localhost:8081/v1/applications --data-binary @__FILE__
Get details of a task with ID __ID__
(where the id
is returned by the above,
optionally piped to jq .id
)
curl http://localhost:8081/v1/activities/__ID__
Get the value of sensor service.state
on entity e1
in application app1
(note you can use either the entity’s ID or its name)
curl http://localhost:8081/v1/applications/app1/entities/e1/sensors/service.state
Get all sensor values (using the pseudo-sensor current-state)
curl http://localhost:8081/v1/applications/app1/entities/e1/sensors/current-state
Invoke an effector eff
on e1
, with argument arg1
equal to hi
(note if no arguments, you must specify -d ""
; for multiple args, just use multiple -d
entries,
or a JSON file with --data-binary @...
)
curl http://localhost:8081/v1/applications/app1/entities/e1/effectors/eff -d arg1=hi
Add an item to the catalog from __FILE__
curl http://localhost:8081/v1/catalog --data-binary @__FILE__
Some useful curl options include:
- --user username:password
- -v
- -X POST or -X DELETE
- -d key=value
- to upload a file __FILE__, use --data-binary @__FILE__ (implies a POST) or -T __FILE__ -X POST
- -H "key: value", for example -H "AMP-Allow-Non-Master-Access: true"
- -H "Content-Type: application/json" (or application/yaml)
- -H "Accept: application/json" (or application/yaml, or for sensor values, text/plain)

Sometimes it is useful that configuration in a blueprint, or in AMP itself, is not given explicitly, but is instead replaced with a reference to some other storage system. For example, it is undesirable for a blueprint to contain a plain-text password for a production system, especially if (as we often recommend) the blueprints are kept in the developer’s source code control system.
To handle this problem, Cloudsoft AMP supports externalized configuration. This allows a blueprint to refer to
a piece of information that is stored elsewhere. brooklyn.properties
defines the external suppliers of configuration
information. At runtime, when AMP finds a reference to externalized configuration in a blueprint, it consults
brooklyn.properties
for information about the supplier, and then requests that the supplier return the information
required by the blueprint.
Take, as a simple example, a web app which connects to a database. In development, the developer is running a local
instance of PostgreSQL with a simple username and password. But in production, an enterprise-grade cluster of PostgreSQL
is used, and a dedicated service is used to provide passwords. The same blueprint can be used to service both groups
of users, with brooklyn.properties
changing the behaviour depending on the deployment environment.
Here is the blueprint:
name: MyApplication
services:
- type: brooklyn.entity.webapp.jboss.JBoss7Server
  name: AppServer HelloWorld
  brooklyn.config:
    wars.root: http://search.maven.org/remotecontent?filepath=org/apache/brooklyn/example/brooklyn-example-hello-world-sql-webapp/0.8.0-incubating/brooklyn-example-hello-world-sql-webapp-0.8.0-incubating.war
    http.port: 8080+
    java.sysprops:
      brooklyn.example.db.url: $brooklyn:formatString("jdbc:postgresql://%s/myappdb?user=%s\\&password=%s",
        external("servers", "postgresql"), external("credentials", "postgresql-user"), external("credentials", "postgresql-password"))
You can see that when we are building up the JDBC URL, we are using the external
function. This takes two parameters:
the first is the name of the configuration supplier, the second is the name of a key that is stored by the configuration
supplier. In this case we are using two different suppliers: servers
to store the location of the server, and
credentials
which is a security-optimized supplier of secrets.
Developers would add lines like this to the brooklyn.properties
file on their workstation:
brooklyn.external.servers=org.apache.brooklyn.core.config.external.InPlaceExternalConfigSupplier
brooklyn.external.servers.postgresql=127.0.0.1
brooklyn.external.credentials=org.apache.brooklyn.core.config.external.InPlaceExternalConfigSupplier
brooklyn.external.credentials.postgresql-user=admin
brooklyn.external.credentials.postgresql-password=admin
In this case, all of the required information is included in-line in the local brooklyn.properties
.
Whereas in production, brooklyn.properties
might look like this:
brooklyn.external.servers=org.apache.brooklyn.core.config.external.PropertiesFileExternalConfigSupplier
brooklyn.external.servers.propertiesUrl=https://ops.example.com/servers.properties
brooklyn.external.credentials=org.apache.brooklyn.core.config.external.vault.VaultAppIdExternalConfigSupplier
brooklyn.external.credentials.endpoint=https://vault.example.com
brooklyn.external.credentials.path=secret/enterprise-postgres
brooklyn.external.credentials.appId=MyApp
In this case, the list of servers is stored in a properties file located on an Operations Department web server, and the credentials are stored in an instance of Vault.
External configuration suppliers are defined in brooklyn.properties
. The minimal definition is of the form:
brooklyn.external.supplierName = className
This defines a supplier named supplierName. AMP will attempt to instantiate className; it is this class which will provide the behaviour of how to retrieve data from the supplier. AMP includes a number of supplier implementations; see below for more details.
Suppliers may require additional configuration options. These are given as additional properties in
brooklyn.properties
:
brooklyn.external.supplierName = className
brooklyn.external.supplierName.firstConfig = value
brooklyn.external.supplierName.secondConfig = value
Externalized configuration adds a new function to the AMP blueprint language DSL, $brooklyn:external. This function takes two parameters: the name of the configuration supplier, and the name of a key stored by that supplier. When resolving the external reference, AMP will first identify the supplier of the information, then it will give the supplier the key. The returned value will be substituted into the blueprint.
You can use $brooklyn:external
directly:
name: MyApplication
brooklyn.config:
  example: $brooklyn:external("supplier", "key")
or embed the external
function inside another $brooklyn
DSL function, such as $brooklyn:formatString
:
name: MyApplication
brooklyn.config:
  example: $brooklyn:formatString("%s", external("supplier", "key"))
The same blueprint language DSL can be used from brooklyn.properties
. For example:
brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("mysupplier", "aws-identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("mysupplier", "aws-credential")
The same blueprint language DSL can be used within YAML catalog items. For example:
brooklyn.catalog:
  id: com.example.myblueprint
  version: 1.2.3
  brooklyn.libraries:
  - >
    $brooklyn:formatString("https://%s:%s@repo.example.com/libs/myblueprint-1.2.3.jar",
    external("mysupplier", "username"), external("mysupplier", "password"))
  item:
    type: com.example.MyBlueprint
Note the >
in the example above is used to split across multiple lines.
AMP ships with a number of external configuration suppliers ready to use.
InPlaceExternalConfigSupplier embeds the configuration keys and values as properties inside brooklyn.properties
.
For example:
brooklyn.external.servers=org.apache.brooklyn.core.config.external.InPlaceExternalConfigSupplier
brooklyn.external.servers.postgresql=127.0.0.1
Then, a blueprint which referred to $brooklyn:external("servers", "postgresql")
would receive the value 127.0.0.1
.
PropertiesFileExternalConfigSupplier loads a properties file from a URL, and uses the keys and values in this file to respond to configuration lookups.
Given this configuration:
brooklyn.external.servers=org.apache.brooklyn.core.config.external.PropertiesFileExternalConfigSupplier
brooklyn.external.servers.propertiesUrl=https://ops.example.com/servers.properties
This would cause the supplier to download the given URL. Assuming that the file contained this entry:
postgresql=127.0.0.1
Then, a blueprint which referred to $brooklyn:external("servers", "postgresql")
would receive the value 127.0.0.1
.
Vault is a server-based tool for managing secrets. AMP provides suppliers that are able to query the Vault REST API for configuration values. The different suppliers implement alternative authentication options that Vault provides.
For all of the authentication methods, you must always set these properties in brooklyn.properties
:
brooklyn.external.supplierName.endpoint=<Vault HTTP/HTTPs endpoint>
brooklyn.external.supplierName.path=<path to a Vault object>
For example, if the path is set to secret/brooklyn
, then attempting to retrieve the key foo
would cause AMP
to retrieve the value of the foo
key on the secret/brooklyn
object. This value can be set using the Vault CLI
like this:
vault write secret/brooklyn foo=bar
The userpass
plugin for Vault allows authentication with username and password.
brooklyn.external.supplierName=org.apache.brooklyn.core.config.external.vault.VaultUserPassExternalConfigSupplier
brooklyn.external.supplierName.username=fred
brooklyn.external.supplierName.password=s3kr1t
The app_id
plugin for Vault allows you to specify an “app ID”, and then designate particular “user IDs” to be part
of the app. Typically the app ID would be known and shared, but user ID would be autogenerated on the client in some
way. AMP implements this by determining the MAC address of the server running AMP (expressed as 12 lower
case hexadecimal digits without separators) and passing this as the user ID.
brooklyn.external.supplierName=org.apache.brooklyn.core.config.external.vault.VaultAppIdExternalConfigSupplier
brooklyn.external.supplierName.appId=MyApp
If you do not wish to use the MAC address as the user ID, you can override it with your own choice of user ID:
brooklyn.external.supplierName.userId=server3.cluster2.europe
If you have a fixed token string, then you can use the VaultTokenExternalConfigSupplier class and provide the token
in brooklyn.properties
:
brooklyn.external.supplierName=org.apache.brooklyn.core.config.external.vault.VaultTokenExternalConfigSupplier
brooklyn.external.supplierName.token=1091fc84-70c1-b266-b99f-781684dd0d2b
This supplier is suitable for “smoke testing” the Vault supplier using the Initial Root Token or similar. However it is not suitable for production use as it is inherently insecure - should the token be compromised, an attacker could have complete access to your Vault, and the cleanup operation would be difficult. Instead you should use one of the other suppliers.
Supplier implementations must conform to the brooklyn.config.external.ExternalConfigSupplier interface, which is very simple:
String getName();
String get(String key);
Classes implementing this interface can be placed in the lib/dropins
folder of AMP, and then the supplier
defined in brooklyn.properties
as normal.
The size of server required by AMP depends on the amount of activity, in particular the number of entities and VMs being managed.
For dev/test or when there are only a handful of VMs being managed, a small VM is sufficient. For example, an AWS m3.medium with one vCPU, 3.75GiB RAM and 4GB disk.
For larger production uses, a more appropriate machine spec would be two or more cores, at least 8GB RAM and 100GB disk. The disk is just for logs, a small amount of persisted state, and any binaries for custom blueprints/integrations.
There are three main consumers of disk space:
- the AMP distribution itself, along with any additional binaries added to its lib directory. Note that AMP requires that Java is installed, which you may have to consider when calculating disk space requirements.
- persisted state
- log files

The Cloudsoft AMP distribution itself, when unpacked, consumes approximately 75MB of disk space. This includes everything needed to run AMP except for a Java VM. The space consumed by additional binaries for custom blueprints and integrations is application-specific.
Persisted state, excluding catalog data, is relatively small, starting at approximately 300KB for a clean, idle AMP server. Deploying blueprints will add to this - how much depends exactly on the entities involved and is therefore application specific, but as a guideline, a 3-node Riak cluster adds approximately 500KB to the persistence store.
Log data can be a large consumer of disk space. By default AMP generates two logfiles, one which logs notable information only, and another which logs at a debug level. Each logfile rotates when it hits a size of 100MB; a maximum of 10 log files are retained for each type. The two logging streams combined, therefore, can consume up to 2GB of disk space.
In the default configuration of AMP’s .tar.gz
and .zip
distributions,
logs are saved to the AMP installation directory. You will most likely want
to reconfigure AMP’s logging to save logs to a location
elsewhere. In the .rpm
and .deb
packaging, logging files will be located
under /var/log
. You can further reconfigure the logging detail level and log rotation according to your organisation’s policy.
The recommended operating system is CentOS 6.x or RedHat 6.x.
AMP has also been tested on Ubuntu 14.04 and OS X.
AMP requires Java (JRE or JDK) minimum version 1.7. OpenJDK is recommended. AMP has also been tested on IBM J9 and Oracle’s JVM.
The main ports used by AMP are 8081 for HTTP and 8443 for HTTPS access to the web console and REST API.
Whether to use https rather than http is configurable using the CLI option --https;
the port to use is configurable using the CLI option --port <port>.
To enable remote AMP access, ensure these ports are open in the firewall. For example, to open port 8443 in iptables, use the command:
/sbin/iptables -I INPUT -p TCP --dport 8443 -j ACCEPT
AMP expects a sensible set of locale information and time zones to be available; without this, some time-and-date handling may be surprising.
AMP parses and reports times according to the time zone set at the server. If AMP is targetting geographically distributed users, it is normally recommended that the server’s time zone be set to UTC.
It is normally recommended that AMP run as a non-root user with keys installed to ~/.ssh/id_rsa{,.pub}
.
Check that the linux kernel entropy is sufficient.
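On Linux, one quick check is to inspect the kernel's available entropy estimate; persistently low values (in the low hundreds or less) suggest you may want an entropy daemon such as haveged or rng-tools:
cat /proc/sys/kernel/random/entropy_avail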
To install Cloudsoft AMP on a production server, follow the steps below. This guide covers the basics; related topics such as persistence, high availability and security are covered elsewhere in this document.
Check that the server meets the requirements. Then configure the server as follows:
- create the ~/.brooklyn directory on the host with $ mkdir ~/.brooklyn
- check iptables or any other firewall service, making sure that incoming connections on port 8443 are not blocked

Download AMP and obtain a binary build as described on the download page.
Expand the tar.gz
archive:
% tar -zxf apache-brooklyn-0.9.0-dist.tar.gz
This will create an apache-brooklyn-0.9.0
folder.
Let’s setup some paths for easy commands.
% cd apache-brooklyn-0.9.0
% BROOKLYN_DIR="$(pwd)"
% export PATH=$PATH:$BROOKLYN_DIR/bin/
Set up brooklyn.properties. It may be useful to use the following script to install an initial brooklyn.properties:
% mkdir -p ~/.brooklyn
% wget -O ~/.brooklyn/brooklyn.properties http://0.0.0.0:4000/guide/start/brooklyn.properties
% chmod 600 ~/.brooklyn/brooklyn.properties
By default AMP loads the catalog of available application components and services from
default.catalog.bom
on the classpath. The initial catalog is in conf/brooklyn/
in the dist.
If you have a preferred catalog, simply replace that file.
More information on the catalog is available here.
Launch AMP in a disconnected session so it will remain running after you have logged out:
% nohup bin/amp launch > /dev/null 2>&1 &
Cloudsoft AMP should now be running on port 8081 (or other port if so specified).
Users are strongly encouraged to use HTTPS, rather than HTTP.
The use of LDAP is encouraged, rather than basic auth.
Configuration of “entitlements” is encouraged, to lock down access to the REST api for different users.
Users are strongly discouraged from running AMP as root.
For production use-cases (i.e. where AMP will never deploy to “localhost”), the user under
which AMP is running should not have sudo
rights.
Use of an object store is recommended (e.g. using S3 compliant or Swift API) - thus making use of the security features offered by the chosen object store.
File-based persistence is also supported. Permissions of the files will automatically be 600 (i.e. read-write only by the owner). Care should be taken for permissions of the relevant mount points, disks and directories.
For credential storage, users are strongly encouraged to consider using the “externalised configuration” feature. This allows credentials to be retrieved from a store managed by you, rather than being stored within YAML blueprints or brooklyn.properties.
A secure credential store is strongly recommended, such as use of
HashiCorp’s Vault - see
org.apache.brooklyn.core.config.external.vault.VaultExternalConfigSupplier
.
Users are strongly encouraged to create separate cloud credentials for AMP’s API access.
Users are also encouraged to (where possible) configure the cloud provider for only minimal API access (e.g. using AWS IAM).
Users are strongly discouraged from using hard-coded passwords within VM images. Most cloud
providers/APIs provide a mechanism to instead set an auto-generated password or to create an
entry in ~/.ssh/authorized_keys
(prior to the VM being returned by the cloud provider).
If a hard-coded credential is used, then AMP can be configured with this “loginUser” and “loginUser.password” (or “loginUser.privateKeyData”), and can change the password and disable root login.
It is strongly discouraged to use the root user on VMs being created or managed by AMP.
Users are strongly encouraged to use SSH keys for VM access, rather than passwords.
This SSH key could be a file on the AMP server. However, a better solution is to use the “externalised configuration” to return the “privateKeyData”. This better supports upgrading of credentials.
When AMP executes scripts on remote VMs to install software, it often requires downloading
the install artifacts. For example, this could be from an RPM repository or to retrieve .zip
installers.
By default, the RPM repositories will be whatever the VM image is configured with. For artifacts to be downloaded directly, these often default to the public site (or mirror) for that software product.
Where users have a private RPM repository, it is strongly encouraged to ensure the VMs are configured to point at this.
For other artifacts, users should consider hosting these artifacts in their own web-server and
configuring AMP to use this. See the documentation for
org.apache.brooklyn.core.entity.drivers.downloads.DownloadProducerFromProperties
.
Each REST api operation is authenticated to check if the user has the required privileges.
There is a plugin architecture to allow different entitlement mechanisms to be used.
A new entitlements checker implementation can be supplied by implementing
brooklyn.management.entitlement.EntitlementManager
.
Role entitlements can be set globally with the property:
brooklyn.webconsole.security.users=staff1,staff2,itil1,itil2
brooklyn.entitlements.global=io.cloudsoft.amp.entitlements.rbac.PerRoleEntitlementManager
io.cloudsoft.amp.entitlements.rbac.perRole.ContentRole=staff1,staff2
io.cloudsoft.amp.entitlements.rbac.perRole.PlatformRole=itil1,itil2
Each REST api operation (to list/view items, or to perform changes) is authenticated to check if the user has the required privileges.
There is a plugin architecture to allow different entitlement mechanisms to be used. One mechanism available is io.cloudsoft.amp.entitlements.rbac.PerRoleEntitlementManager. This allows plugins for the various decision points.
Note: these package names may change if the code moves to org.apache.brooklyn.

Configuration
This reads from brooklyn.properties, such as:
brooklyn.entitlements.global=io.cloudsoft.amp.entitlements.rbac.PerRoleEntitlementManager
io.cloudsoft.amp.entitlements.rbac.roleCacheExpiryDuration=15m
io.cloudsoft.amp.entitlements.rbac.userToRole=com.acme.amp.rbac.MyCustomRoleResolver
io.cloudsoft.amp.entitlements.rbac.perRole.adminstaff=root
io.cloudsoft.amp.entitlements.rbac.perRole.supportstaff=readonly
io.cloudsoft.amp.entitlements.rbac.perRole.automatons=minimal
io.cloudsoft.amp.entitlements.rbac.perRole.specialpeople=com.acme.amp.rbac.MyCustomEntitlements
The userToRole refers to a class of type io.cloudsoft.amp.entitlements.rbac.RoleResolver
, which maps from a user to the role(s) for that user. If a user is in multiple roles, then the user has permission if any of the roles grant that permission.
The roleCacheExpiryDuration is the duration that the roles of a user will be cached for. Note that an extreme (!) way to flush the cache is to “reload properties”, which will replace this EntitlementManager with a new instance.
The perRole has an entry per role name. This value can be a pre-defined built-in role (i.e. "root", "readonly" and "minimal"). Alternatively, it can point to a custom brooklyn.management.entitlement.EntitlementManager class.

RoleResolver
The io.cloudsoft.amp.entitlements.rbac.RoleResolver is used to map a user to the list of roles for that user.
An instance of the class will be instantiated reflectively. The constructor should have a signature that is one of:
(ManagementContext mgmt, AMPProperties properties)
(ManagementContext mgmt)
(AMPProperties properties)
()
If the class also implements ManagementContextInjectable, then the management context will be injected immediately after construction.
The RBAC configuration allows one to plugin a custom entitlement manager to be associated with a given role, to meet your exact needs.
The EntitlementManager interface has a single method: isEntitled. This is passed details of the what is being done, and to what, allowing a boolean to be returned to indicate if it is permitted.
AMP supports LDAP integration for entitlements - i.e. the entitlements rules are stored in LDAP.
To use this, you will require the Amp-of-Amps project compiled in (or the JARs in your drop-ins folder),
and you must set in your brooklyn.properties
:
# requires LDAP used for authorization
brooklyn.webconsole.ldap.url=ldap://LDAP_SERVER/
brooklyn.webconsole.ldap.realm=AMP
brooklyn.webconsole.ldap.password=PASSWORD
# and set the entitlements to be this implementation (or a subclass, if necessary)
brooklyn.entitlements.global=io.cloudsoft.amp.entitlements.LdapEntitlementManager
In the LDAP schema, this entitlements scheme requires a new objectClass,
which in this guide we will call acmePermission
, with the following attributes (all marked optional):
- entityTagRegexesForNavigating: means you can navigate to entities with any tag matching any regex (multi-valued LDAP attribute)
- entityTagRegexesForReading: means you can see sensors+config on entities with any tag matching any regex (multi-valued LDAP attribute)
- entityTagRegexesForWriting: means you can invoke effectors on entities with any tag matching any regex (multi-valued LDAP attribute)
- deployAllowed: means you are allowed to deploy new applications (boolean / present or absent)
- serverInfoAllowed: means you are allowed to see AMP information (boolean / present or absent)
- root: means you are root, having all the permissions above and all others (boolean / present or absent)

The DIT will contain:
- group: defines user members, and an acmePermission object defining permissions for all members of the group
- user: defines a password attribute, and an acmePermission object defining specific permissions for that user (in addition to all permissions from all groups)

This structure allows a single user account to have (1) permissions defined specific to that user, and (2) permissions defined on groups of which he/she is a member.
As is often recommended for entitlements, these are purely additive:
a user will be entitled to access anything which is entitled by any acmePermission
object on the user or any of his/her groups.
An example of (1) is a user with an entityTagRegexesForNavigating: acme.tenant.entity:${user} attribute, while an example of (2) is:
- the administrators group has 2 values for the entityTagRegexesForNavigating attribute: acme.tenant.entity.master and acme.tenant.entity:${user}
- user2 is a member of the administrators group, so user2 inherits acme.tenant.entity.master to see the AMP blueprint and acme.tenant.entity:${user} to see his own child AMP.

Please see the LDAP command reference section for instructions for configuring LDAP with the acmePermission schema.
Using the LDAP structure described above with the new AMP feature to manage entitlements, it is possible to cover the scenarios of interest described below.
In order to enforce entitlement on the master AMP, we need to add tags to the entities so that when we create
Tenant-Foo
AMP at master, we tag Tenant (AMPNode)
and Service (AMPMirror)
entities as acme.tenant.entity:${user}
(where ${user}
is replaced with the tenant name).
Also we add tag acme.tenant.entity.master
to the master blueprint so that all users can navigate through it
(to access their tenant) and so that controllers can invoke effectors there.
These tags are done by the AMP Master
blueprint (no manual steps needed).
In LDAP, we define the following permissions:
- tenant group (an illustrative LDIF sketch of this group is shown after this section):
  - entityTagRegexesForNavigating: acme.tenant.master, acme.tenant.entity:${user} (where ${user} is literal, the substitution done by the permissions engine)
  - entityTagRegexesForReading: acme.tenant.entity:${user}
  - entityTagRegexesForWriting: acme.tenant.entity:${user}
- admin group:
  - entityTagRegexesFor...: .* (all permissions)
  - deployAllowed
  - serverInfoAllowed
  - root (with this permission the others are redundant)
- controller group (WIP):
  - entityTagRegexesFor...: acme.tenant.master, acme.tenant.entity:.* (controllers are given all rights to access the master root node and all tenant entities at master, but not webapps)

Then in AMP, when a tenant logs in to the master, it will enforce:
- the acmePermission attributes attached to the user and to those groups of which he/she is a member
- the tags associated to the tenant entity

A tenant can manage everything tagged with acme.tenant.entity:${USER}.
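For illustration only, the tenant group described above might be represented by an LDAP entry along the following lines. This is a hypothetical sketch: the DN, the groupOfNames structural class and the member attribute are assumptions about your directory layout, and only the acmePermission attributes come from this guide.
dn: cn=tenant,ou=groups,dc=example,dc=com
objectClass: groupOfNames
objectClass: acmePermission
cn: tenant
member: uid=user1,ou=people,dc=example,dc=com
entityTagRegexesForNavigating: acme.tenant.master
entityTagRegexesForNavigating: acme.tenant.entity:${user}
entityTagRegexesForReading: acme.tenant.entity:${user}
entityTagRegexesForWriting: acme.tenant.entity:${user}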
This is deferred to the second iteration. Two options are:
- Updating LDAP on Tenant AMP creation and installing this at the tenant AMP
- A signed secret that the master AMP returns to the user to login to the tenant AMP that she claims to own
In the first iteration access to Tenant AMPs is driven by credentials stored in the Master AMP.
The entitlements for the entity/sensors/effectors on the tenant AMP will be addressed on the second iteration.
Currently, we have added to the LDAP schema an acmePermission
objectClass with 6 attributes:
entityTagRegexesForNavigating
entityTagRegexesForReading
entityTagRegexesForWriting
deployAllowed
serverInfoAllowed
root
Starting from a new openldap 2.4 instance, it is possible to add the acmePermission
objectClass by issuing the following commands:
ldapadd -Q -Y EXTERNAL -H ldapi:/// -f acme.ldif
where acme.ldif is the LDIF file produced by the conversion described below.
This LDIF file has been created starting from the following acme.schema
(placed in /etc/ldap/schema/acme.schema):
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.30
NAME 'root'
DESC 'root'
EQUALITY booleanMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 )
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.31
NAME 'deployAllowed'
DESC 'deployAllowed'
EQUALITY booleanMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 )
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.32
NAME 'serverInfoAllowed'
DESC 'serverInfoAllowed'
EQUALITY booleanMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 )
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.20
NAME 'entityTagRegexesForNavigating'
DESC 'regex to match entity tag that allows browsing entities'
EQUALITY caseExactMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.21
NAME 'entityTagRegexesForReading'
DESC 'regex to match entity tag that allows reading entities'
EQUALITY caseExactMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
attributetype ( 1.3.6.1.4.1.42.2.27.4.1.22
NAME 'entityTagRegexesForWriting'
DESC 'regex to match entity tag that allows writing entities'
EQUALITY caseExactMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
objectclass ( 1.1.2.2.1 NAME 'acmePermission'
DESC 'permissions for Acme'
SUP top
AUXILIARY
MAY ( root $ deployAllowed $ serverInfoAllowed $ entityTagRegexesForNavigating $
entityTagRegexesForReading $ entityTagRegexesForWriting ) )
The conversion from acme.schema to cn=acme.ldif was done using the slaptest utility:
cd /tmp/ldap
cat > schema_convert.conf <<EOT
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/collective.schema
include /etc/ldap/schema/corba.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/duaconf.schema
include /etc/ldap/schema/dyngroup.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/java.schema
include /etc/ldap/schema/misc.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/openldap.schema
include /etc/ldap/schema/ppolicy.schema
include /etc/ldap/schema/ldapns.schema
include /etc/ldap/schema/pmi.schema
include /etc/ldap/schema/acme.schema
EOT
mkdir ldif_output
slapd -f schema_convert.conf -F .
slapcat -f schema_convert.conf -F ldif_output -n 0 | grep acme,cn=schema
# get the output
slapcat -f schema_convert.conf -F ldif_output -n0 -H \
ldap:///<output> -l cn=acme.ldif
# Edit cn=acme.ldif to arrive at the following attributes:
dn: cn=acme,cn=schema,cn=config
...
cn: acme
Also remove the following lines from the bottom:
structuralObjectClass: olcSchemaConfig
entryUUID: 52109a02-66ab-1030-8be2-bbf166230478
creatorsName: cn=config
createTimestamp: 20110829165435Z
entryCSN: 20110829165435.935248Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20110829165435Z
and finally:
sudo ldapadd -Q -Y EXTERNAL -H ldapi:/// -f cn\=acme.ldif
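To check that the new schema is now visible, you can list the schema entries again (using the same root access as the ldapadd above):

sudo ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config dn | grep acme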
AMP can be used to deploy and manage other AMP instances. This gives a scalable and secure architecture.
The remainder of this chapter focuses on the use-case of multiple enterprise customers.
It is increasingly common for a service provider to offer to their enterprise customers a set of service blueprints. These could be single-VM applications or multi-VM applications that the customer can order and that are automatically deployed. Some of these applications may also be managed automatically (i.e. offered as a dedicated SaaS, rather than allowing direct access to the underlying VMs).
The customer’s applications (and thus their VMs and services) are in an isolated network. Each tenant will have an isolated network in each cloud + region that they use. The VMs running in these isolated networks will be private by default (i.e. not reachable from outside that isolated network), with options for using NAT rules, security groups, and cloud load-balancers to expose particular ports / services.
The points where end-users interact with their blueprints are:
The customer never directly interacts with the management plane. Usage of the AMP web console is often limited to second-line support by the service provider. The AMP REST API is used to interact with AMP, and to invoke AMP operations from the customer dashboard or from ticketing systems etc.
Term | Definition |
---|---|
AMP Overmaster | AMP management plane that looks after the AMP master(s). For example, there could be separate master AMP clusters for production, staging and development. |
AMP Master | AMP management plane that looks after the AMPs for each tenant. Provisioning requests and queries can be routed through the AMP master, which will forward them to the correct tenant AMP. |
Tenant AMP | The AMP node(s) responsible for a given tenant in a given cloud/region. |
AMP Cluster | An AMP management plane, which usually consists of two AMP nodes where the second is a stand-by to take over automatically if the first node fails. |
Service provider | The company/organization offering the service to the enterprise customers. |
Tenant | A customer of the service provider. Often an enterprise customer who may wish to run many applications in their isolated network. |
Service blueprint | The blueprint of a service to be provisioned and optionally managed; normally represents an application which could be single-VM or a multi-VM (e.g. a Java enterprise application with JBoss app-server and MySQL database, or a MongoDB cluster). |
Customer marketplace/dashboard | The customer-facing UI for ordering services; the customer portal. |
Controller WAR | A webapp that is specific to a type of blueprint, used as a dashboard for instances of that blueprint. It can pick out specific attributes and operations to be exposed to the end-customer. |
Isolated network | A network, associated with one single tenant, that isolates the VMs within it from the outside world. |
Each tenant’s isolated network has a set of tenant AMPs, which run within the isolated network. The tenant AMPs are responsible for deploying and managing all applications within this isolated network. The set of tenant AMPs can be grown (and shrunk) as the load changes for this customer. The set of customer applications is easily sharded across the tenant AMPs, as each application is (currently) deployed and managed independently.
The master AMP cluster manages all the tenant AMPs. The master AMP is responsible for provisioning new tenant AMPs as required, and for routing traffic to them.
When a tenant orders a new application through the customer dashboard, a REST API call is made to the master AMP. This is then routed to the least-loaded tenant AMP within the appropriate isolated network.
When a new tenant is being created, the master AMP creates a new isolated network for that customer. When the first application is ordered by that tenant, a new tenant AMP is created which will then handle the application provisioning.
The following steps illustrate the interactions.
To simplify the deployment and management (e.g. upgrading, or replacing failed servers) of the master AMP clusters, an Overmaster AMP can optionally be used.
Deployment of a master AMP cluster involves deploying the appropriate blueprint at the Overmaster AMP. This blueprint specifies where to deploy the master AMP cluster, the size of the cluster (i.e. how many standby AMP instances), and the persistence options (e.g. the Object Store to use).
Upgrade of a master AMP cluster is performed through an operation on the blueprint instance within the Overmaster AMP. This will create new AMP instances for the new version, test that the persisted state is still valid, and then switch over to the new AMP instances.
AMP-of-AMPs is in the process of being generalised, from the customer-specific project in which it was originally created.
The current code supports deploying a master AMP cluster (i.e. standby nodes for HA), and having that deploy AMP tenants on-demand. Currently a single AMP tenant is deployed per tenant.
The recommended setup for evaluations is:
There are a number of features that are either still being extracted from customer-specific code, still under development, or that could be added based on customer requirements:
A common deployment pattern is for the service provider to develop (or reuse) their own customer dashboard. This dashboard is the “marketplace” (i.e. the on-line store where one can browse from a catalog of service blueprints, and choose what should be deployed). The dashboard is also used by customers to view the list of running applications, to view their status, and to manage those applications.
New versions of the blueprints can be uploaded. A blueprint update adds a new item in the catalog which will be available for launching instances. The old versions are still available. New instances will be launched with the latest stable version only, with the option to override it (i.e. testing snapshot versions or rolling back to a previous stable version). Once an instance is launched it is fixed to the blueprint used for creation. Blueprint versions can be deleted.
AMP can be configured to persist its state to an object store, or to the file system (e.g. to an NFS mount). See the AMP documentation. Any object store supported by the jclouds blobstore abstraction is supported.
AMP can run with one or more standby nodes. The standby nodes monitor the health of the AMP that is currently “master”. If it fails, the standby nodes elect a new master and this takes over management, reading the persisted state.
For a given tenant, there can be one or more tenant AMP instances.
The load on AMP is easily split across instances - sharded with an independent set of apps on each AMP instance. The number of AMP instances per tenant can be increased over time as required, based on the number of apps/VMs for that tenant.
The AMP master can choose the least-loaded tenant AMP instance when a new application is to be deployed.
A policy can trigger adding of tenant AMP instances automatically, based on metrics such as the number of applications or VMs under management.
Scaling down (i.e. reducing the number of tenant AMPs) is more difficult because the applications remaining on a given instance must be moved to a new tenant AMP instance on shutdown.
Scaling down is not yet supported; the state is persisted so this is certainly feasible, but is not yet implemented.
The tenant AMPs do not need to run in HA mode (i.e. with a standby) if a management outage of a few minutes is acceptable. A policy could be used that automatically started a replacement Tenant AMP, binding to the persisted state.
Alternatively, there could be a smaller number of “stem cell AMPs” for a tenant. AMP itself could monitor+manage the running AMP instances, and on failure launch one of the standby AMPs configured to point at the correct persisted state. This would greatly decrease the number of standbys required for a large deployment.
AMP can maintain multiple versions of a “service blueprint” (i.e. of the AMP services, including the class or YAML and the OSGi bundles).
Semantic versioning (semver.org) is recommended for versioning of blueprints within AMP. However, the blueprint author is responsible for choosing the version numbers.
For an AMP-of-AMPs, the new blueprint is added to all tenant AMPs by calling the deploy_service
effector on the AMP master.
Note that under-the-covers, this uses the AMP REST API of each tenant AMP to add the blueprints to the catalog (by posting to https://<endpoint>/v1/catalog
).
Many blueprints may make use of external artifacts such as Chef recipes, VM images (base OS installs), RPM files, and other resources. These will typically have their own versioning schemes. In order to ensure consistency in blueprint behaviour, it is recommended that:
This ensures that a blueprint version never changes. If an updated recipe or artifact is desired as part of a blueprint, a new blueprint should be registered. As a separate feature, existing blueprints can also be updated to that version. This allows us to track exactly which artifacts/versions are in use as part of which service instances.
Download the amp-of-amps tar.gz from FIXME: download URL?, and unpack it.
All setup can be done by creating a brooklyn.properties
containing the runtime configuration.
This can be in ~/.brooklyn/brooklyn.properties
(the default) or another file specified
on the command line with --localAMPProperties FILE
.
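For example (the file path shown is illustrative):

bin/amp.sh launch --localAMPProperties /path/to/amp.properties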
For more information on the contents of this file, see http://brooklyncentral.github.io/use/guide/quickstart/brooklyn.properties and http://brooklyncentral.github.io/use/guide/management/.
The recommended settings are described below.
The AMP-of-AMPs redistributable archive must be available at runtime in order to set up the tenants. The location of this archive can be set using:
VERSION=0.2.0-SNAPSHOT
ampofamps.tenant.amp.download_url=http://path/to/ampofamps-${VERSION}-dist.tar.gz
If this key is not specified, the master AMP blueprint will default to looking in
/tmp/ampofamps-${VERSION}-dist.tar.gz
(on the master AMP machine, where it will be
uploaded to the tenant AMP machine).
Additional archives can be added within the archive in lib/dropins
.
For localhost-only dev/test deployments, see the “Debug” section. One can include in the master AMP blueprint:
use_localhost: true
Alternatively, a lightweight form of authorization using statically defined users
and passwords is possible, as follows
(note that brooklyn.webconsole.security.provider
must not be set):
brooklyn.webconsole.security.users=admin,myname
brooklyn.webconsole.security.user.admin.password=P5ssW0rd
# See https://github.com/apache/incubator-brooklyn/blob/master/docs/use/guide/management/index.md
# for details of generating hashed password, using `brooklyn generate-password --user myname`
brooklyn.webconsole.security.user.myname.salt=Qshb
brooklyn.webconsole.security.user.myname.sha256=72fa67c29ceec55858fdbd9df171733236aea00f556f3ccf92566a21f36ded19
To require SSL encryption, set:
brooklyn.webconsole.security.https.required=true
If using HTTPS, set the certificate which the AMP server will serve by including:
brooklyn.webconsole.security.keystore.url=...
brooklyn.webconsole.security.keystore.password=...
# alias is optional, if keystore has multiple certs
brooklyn.webconsole.security.keystore.certificate.alias=...
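If you do not yet have a keystore, one way to create a self-signed keystore for testing is with the JDK keytool; the file name, alias, password and hostname below are illustrative assumptions:

# Create a self-signed certificate in a new keystore (test environments only)
keytool -genkeypair -keystore /etc/amp/amp.jks -storepass changeme -keypass changeme \
  -alias amp -keyalg RSA -validity 365 -dname "CN=amp.example.com"

The resulting file and password can then be referenced from the keystore.url and keystore.password properties above.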
To use LDAP for authorization, set:
# Substitute the IP, password and realm for those in your own LDAP
brooklyn.webconsole.security.ldap.url=ldap://1.2.3.4:389
brooklyn.webconsole.security.ldap.password=mypassword
brooklyn.webconsole.security.ldap.realm=ou\=users,dc\=acme,dc\=com
brooklyn.webconsole.security.provider=brooklyn.rest.security.provider.LdapSecurityProvider
brooklyn.entitlements.global=io.cloudsoft.amp.entitlements.LdapEntitlementManager
Ensure that the master AMP machine has an ssh key in the default location of ~/.ssh/id_rsa
or
~/.ssh/id_dsa
. If necessary, generate a new key with ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
.
Optionally, to use a different key location then set:
# The publicKeyFile can be omitted if it is just the private key + .pub
brooklyn.location.jclouds.privateKeyFile=~/.ssh/amp_rsa
brooklyn.location.jclouds.publicKeyFile=~/.ssh/amp_rsa.pub
Note that if using non-jclouds locations (e.g. bring-your-own-nodes), then use brooklyn.location.privateKeyFile=
instead of the brooklyn.location.jclouds
prefix.
To run with high-availability, you must also turn on persistence and set the high-availability mode.
All instances should point at the same persistence store (e.g. a blobstore),
with one server nominated as the initial master and the others as failover servers
(or alternatively you can use auto
for all to let the servers elect the master,
but in that case you will have to manually start the master AMP blueprint at the master node).
First you must configure the HA object store on each machine by placing this in ~/.brooklyn/brooklyn.properties
,
replacing the object store target and credentials with your own:
brooklyn.location.named.amp-master-persistence-store=jclouds:swift:https://ams01.objectstorage.softlayer.net/auth/v1.0
brooklyn.location.named.amp-master-persistence-store.identity=abcd
brooklyn.location.named.amp-master-persistence-store.credential=0123456789abcdef...
You can then set the following brooklyn.properties
:
brooklyn.persistence.location.spec=named:amp-master-persistence-store
brooklyn.persistence.dir=amp-master-debug
The dir
string can be anything you like, to uniquely identify the bucket that a
given AMP plane uses inside the store; the -debug
suffix is given as an example.
(Alternatively you can apply these as command-line arguments as
--persistenceLocation named:amp-master-persistence-store
and --persistenceDir amp-master-debug
.
If you prefer to run without persistence, pass --persist disabled
, or if you need to wipe the persistence store,
with all other servers stopped, run --persist clean
.)
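Putting the persistence options together, a launch command might look like the following sketch (using the named store and directory configured above):

bin/amp.sh launch --persistenceLocation named:amp-master-persistence-store --persistenceDir amp-master-debug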
In a high-availability deployment, all the AMP servers must have access to the same
SSH public and private key. The easiest way to do this is to ensure that
~/.ssh/id_rsa
and ~/.ssh/id_rsa.pub
are the same across all master AMP servers.
For more information on configuring persistence, see the AMP persistence docs.
_TODO: not yet supported_
To set the persistence configuration that will be used for the tenant AMPs…
To setup an HA cluster for a tenant AMP…
We will launch an AMP instance, which will be the master AMP. We will deploy
the master AMP application into this AMP instance. Unlike most entities, deploying
the MasterAmp
entity does not deploy a separate software process. Instead, it
just runs code in the local AMP instance that exposes effectors for creating tenants,
deploying services, etc. When we subsequently invoke those effectors, the MasterAmp
will create more entities for the tenants - these really will launch new VMs and
processes for the tenants.
Start the AMP instance with the following command:
server1% bin/amp.sh launch
This will launch the AMP console, by default on localhost:8081
,
or if security has been configured then on *.*.*.*:8443
.
In the console, you can access the catalog and deploy applications to
configured locations.
If using an HA cluster, then after a short pause run the same command on additional servers:
server2% bin/amp.sh launch
server3% bin/amp.sh launch
The above sequence will force server1
to be the master. The selection of the master can also be
triggered explicitly using the --highAvailability [master|standby|auto]
CLI argument.
(When this argument is omitted, it runs in auto-detect
mode, where the first
server will become the master and subsequent servers will become standby servers.)
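For example, to pin the roles explicitly rather than relying on auto-detection:

server1% bin/amp.sh launch --highAvailability master
server2% bin/amp.sh launch --highAvailability standby
server3% bin/amp.sh launch --highAvailability standby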
Many other CLI options are available, identical to the AMP CLI options.
These are described by running bin/amp.sh help launch
.
The first time master AMP is run, you will need to deploy the MasterAmp
blueprint
into that AMP node (i.e. to turn that AMP node into the “Master AMP”). On subsequent
runs if restoring from the persisted state this should not be done as the
application will be restored automatically.
From the web-console (of the primary, if running in HA mode), add a new application with the YAML below:
location: localhost
name: Master AMP
services:
- type: io.cloudsoft.ampofamps.master.MasterAmp
Alternatively, the first time master AMP is run, you can pass --master-amp
at the CLI
(to the instance chosen as primary for HA, e.g. server1
above).
The master AMP entity exposes effectors, including add_blueprint
, create_tenant
and
deploy_service
.
These take a number of parameters, documented fully in the GUI (click on the Effectors tab).
Some brief examples of invoking these via curl
are shown below:
The master AMP entity exposes an effector that registers new blueprints with the master and all tenant AMPs.
The effector expects a JSON object with a single field, named “blueprint”, containing the YAML blueprint to be registered. For example, to invoke it with curl:
# These constants should be set for all curl commands.
# The id is that of the relevant entities within the master AMP blueprint instance.
# Note: the command below assumes node.js is installed
export MASTER_BASE_URL=https://localhost:8443
export AMP_MASTER_APP_ID=`curl -s $MASTER_BASE_URL/v1/applications | xargs -0 node -e 'console.log(JSON.parse(process.argv[1])[0].id);'`
export AMP_MASTER_ENTITY_ID=$AMP_MASTER_APP_ID
curl \
--data-urlencode blueprint@/path/to/app.yaml \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/add_blueprint
Where /path/to/app.yaml
contains a blueprint, for example:
brooklyn.catalog:
id: jboss
version: 1.0
services:
- type: brooklyn.entity.webapp.jboss.JBoss7Server
The new blueprint will be registered with all subsequently deployed tenant AMPs.
The YAML can also reference OSGi bundles that will be loaded when the blueprint is used. For example, it could look like:
brooklyn.catalog:
id: cassandra
version: 1.0.0
services:
- type: acme.amp.Cassandra
brooklyn.libraries:
- http://acme.com/osgi/acme/amp/cassandra/1.0.0/acme_amp_cassandra_1.0.0.jar
To create a new tenant:
curl \
--insecure \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"tenant_id": "00001",
"tenant_name": "mpstanley",
"tenant_location": "softlayer:ams01",
"tenant_location_account_identity": "yourIdHere",
"tenant_location_account_credential": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
}' \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/create_tenant\?timeout=0
This returns an AMP “activities” object as JSON including an id
field whose status and result can be
tracked at /v1/activities/ID
. (If timeout=0
is omitted, the request blocks until completed and returns the result,
but that is not recommended here as provisioning can take several minutes and the http connection will probably time out first!)
Usage notes:

* tenant_name is only needed for operator convenience.
* tenant_location says where the tenant AMP instance should be created.
* The tenant_location_account_identity and tenant_location_account_credential give the credentials for this location. These credentials are not stored by the tenant AMP.

To deploy a service (i.e. a new application instance) for a given tenant:
curl \
--insecure \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"tenant_id": "00001",
"tenant_name": "mpstanley",
"tenant_service_id": "mpstanley-jboss1",
"service_display_name": "jboss 1",
"tenant_location": "localhost",
"tenant_location_account_identity": "yourIdHere",
"tenant_location_account_credential": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
"blueprint_type": "simulated-jboss-entity"
}' \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/deploy_service\?timeout=0
Again, this returns an AMP “activities” object as JSON including an id
field whose status and result can be
tracked at /v1/activities/ID
.
Usage notes:

* tenant_id, tenant_location and blueprint_type are required.
* If no tenant AMP exists for the given tenant_id when deploy_service is invoked, the tenant AMP will be created using the same credentials.
* tenant_name is only needed for operator convenience.
* tenant_location determines where the application will be deployed. This (and the credentials) are not required to match those of the create_tenant call. The credentials are not cached.
* The tenant_service_id and service_display_name parameters are optional and are for convenience. tenant_service_id in particular is useful to query subsequently, as it is set in the config of the service entity. (Alternatively you can poll the task response and you will get the tenant or service entity ID in the result once completed.)

There is a lookup effector on the master AMP entity which takes a single argument,
the id
to search for among the internal IDs of all entities and the caller-supplied
tenant_id
and tenant_service_id
fields. This can be a regular expression.
The result is a list of summary info on all matching entities, including fields
tenant_url
, tenant_url_user
and tenant_url_password
for tenant AMP instances,
and mirrored_entity_url{,_user,_password}
for mirrored service instances.
For example, this will lookup everything associated with the tenant_id “00001”:
curl \
--insecure \
--user admin:password \
-H "Content-Type: application/json" \
-d '{ "id": "00001" }' \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/lookup
The stop
effector exists on all nodes. However stopping a tenant is possible only when there
are no services running. Likewise stopping everything (the root entity) on the master AMP works
only if there are no tenants running. To forcibly shut down a tenant there are two effectors
that can be used:
* stop_node_but_leave_apps - shuts down the tenant but leaves all services running
* stop_node_and_kill_apps - shuts down the tenant and stops all running services

To apply the effectors at once for all running tenants use the Tenants entity (which is a child of master AMP).
Analogously the effectors there are:

* stop_tenants_but_leave_services - shuts down all tenants but leaves their services running
* stop_tenants_and_kill_services - shuts down all tenants and stops all running services

_TODO: not yet supported_
The load for a particular tenant can be sharded across multiple independent AMP instances.
To scale out the number of AMP instances for a given tenant…
To scale back…
The following is a very simple illustration of deploying and upgrading a versioned blueprint.
The YAML for the blueprint could look like:
brooklyn.catalog:
id: machine-entity-with-version
version: 1.0
services:
- type: brooklyn.entity.machine.MachineEntity
brooklyn.config:
postLaunchCommand: echo "launched v1.0" >> /tmp/amp-provisioning-log.txt
This could be deployed and used by a tenant:
curl \
--data-urlencode blueprint@/path/to/app-v1.0.yaml \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/add_blueprint
curl \
--insecure \
--user admin:password \
-H "Content-Type: application/json" \
-d '{
"tenant_id": "00001",
"tenant_name": "mpstanley",
"tenant_location": "localhost",
"tenant_location_account_identity": "yourIdHere",
"tenant_location_account_credential": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
"blueprint_type": "machine-entity-with-version",
"service_display_name": "machine 2" }' \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/deploy_service\?timeout=0
The blueprint might then be updated, for example to the YAML shown below (note the change
in version
number, and the change in the postLaunchCommand
):
brooklyn.catalog:
id: machine-entity-with-version
version: 1.1
services:
- type: brooklyn.entity.machine.MachineEntity
brooklyn.config:
postLaunchCommand: echo "launched v1.1" >> /tmp/amp-provisioning-log.txt
The new YAML could be deployed:
curl \
--data-urlencode blueprint@/path/to/app-v1.1.yaml \
$MASTER_BASE_URL/v1/applications/${AMP_MASTER_APP_ID}/entities/${AMP_MASTER_ENTITY_ID}/effectors/add_blueprint
A subsequent deploy_service
call (identical to that above) will use the updated YAML. The contents of
/tmp/amp-provisioning-log.txt
would be “launched v1.0” and then “launched v1.1”.
The “Overmaster” AMP plane looks after individual AMP master clusters.
You must follow steps 1 AND 2 in the instructions detailed below:
Instructions for updating and other information is described subsequently.
The Overmaster can be started by running the same amp.sh script as used for starting the masters.
However, do not deploy any blueprints into it directly; follow the steps below to deploy blueprints.
Select one of the options below.
Before beginning:

* ensure that ~/.ssh/sample_rsa and ~/.ssh/sample_rsa.pub exist, containing the right key for the nodes in the cluster to use when creating tenants.

The blueprint is then:
name: AMP Overmaster - sample clustered
services:
- type: brooklyn.entity.brooklynnode.AMPNode
downloadUrl: <CHANGEME - URL where ampofamps-0.2.0-SNAPSHOT-dist.tar.gz lives>
subpathInArchive: ampofamps-0.2.0-SNAPSHOT
launchCommand: bin/amp.sh
extraCustomizationScript: |
[ -f ~/.ssh/id_rsa ] || ( mkdir -p ~/.ssh ; chmod 700 ~/.ssh ; ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa )
copyToRundir:
file:/~/.ssh/sample_rsa: sample_rsa
file:/~/.ssh/sample_rsa_pub: sample_rsa_pub
brooklynLocalPropertiesContents: |
# LDAP
brooklyn.webconsole.security.ldap.url=ldap://<IP-CHANGEME>:389
brooklyn.webconsole.security.ldap.password=<CHANGEME>
brooklyn.webconsole.security.ldap.realm=ou\=users,dc\=<acme-CHANGEME>,dc\=com
brooklyn.webconsole.security.provider=brooklyn.rest.security.provider.LdapSecurityProvider
# HTTPS
brooklyn.webconsole.security.https.required=true
# ssh key for connecting to remote machines
brooklyn.location.jclouds.privateKeyFile=sample_rsa
# cert for load balancer which fronts the controllers
ampofamps.cert.url=<CHANGEME e.g. classpath://sample/server.crt>
ampofamps.cert.key=<CHANGEME e.g. classpath://sample/server.key>
# persistence
brooklyn.location.named.amp-master-persistence-store=<CHANGEME e.g. jclouds:swift:https://ams01.objectstorage.softlayer.net/auth/v1.0>
brooklyn.location.named.amp-master-persistence-store.identity=<CHANGEME e.g. 1234:account>
brooklyn.location.named.amp-master-persistence-store.credential=<CHANGEME e.g. 0123456789abcdef...>
brooklyn.persistence.location.spec=named:amp-master-persistence-store
brooklyn.persistence.dir=amp-master-debug
# MISC
# disable localhost as a deployment target
brooklyn.location.localhost.enabled=false
location:
jclouds:softlayer:
identity: <softlayer username>
credential: <softlayer key>
region: <softlayer datacenter, eg dal05>
This simple example assumes that the tarball to set up the child node exists on the local server, at
/tmp/ampofamps-0.2.0-SNAPSHOT-dist.tar.gz
, and localhost
is a valid deployment target
(with passwordless ssh access configured).
location: localhost
name: AMP Overmaster - dev, single-node localhost
services:
- name: AMP Master Plane 01 Node
type: brooklyn.entity.brooklynnode.AMPNode
downloadUrl: file:///tmp/ampofamps-0.2.0-SNAPSHOT-dist.tar.gz
subpathInArchive: ampofamps-0.2.0-SNAPSHOT
launchCommand: bin/amp.sh
onExistingProperties: do_not_use
brooklynLocalPropertiesContents: |
brooklyn.persistence.dir=/tmp/amp-master-plane-01
It is suggested in step 2 below to deploy the debug configuration described in the section “Running in Debug Mode”.
Once the control plane nodes are running, you must deploy the blueprint for the AMP master application into that control plane. This should be done once for each AMP master control plane set up in the previous section.
* Connect to the remote master node and ensure there are no applications running there.
* Deploy the YAML for the master to run in this plane, e.g.
name: Master AMP
services:
- type: io.cloudsoft.ampofamps.master.MasterAmp
download_url:
The preferred way to upgrade (or roll back) the version of AMP is to drive this via an AMP management plane for that cluster. In other words, to upgrade a Tenant AMP, drive it from the Master AMP; and to upgrade a Master AMP, drive it from the Overmaster.
There are two types of upgrades:
For a single node, in-place, use the upgrade
effector on the node entity:
this spawns a new node on the same machine, ensuring that it can rebind in HOT STANDBY mode,
then stops both nodes, replaces the older version files with the newer files,
and restarts the node.
For a cluster, use the upgradeCluster
effector on the node entity:
this creates a new node in the cluster, then additional new nodes,
elects the first new node as the master, then stops the old nodes.
Note that persistence must be configured for both, either in the start script
or with brooklyn.config
of the form
brooklynnode.launch.parameters.extra: --persist auto --persistenceDir /tmp/brooklyn-persistence-example/.
The known issues include:
WARN Setting AmpNodeImpl{id=xMZU6UaN} on-fire due to problems when expected running, up=false, not-up-indicators: {service.process.isRunning=The software process for this entity does not appear to be running}
The Master AMP writes its logs to ./*.debug.log
.
The Master AMP web-console is the easiest way to see the overall status, and to drill into activity history. The activity history will show the effector history, including success/fail. Selecting a row will show details of that activity including failure info.
In the master AMP web console, select the tenant AMP and click on the Sensors tab.
The brooklynnode.webconsole.url
sensor gives the URL.
To get the web console login credentials of the tenant AMP, click on the Summary tab and
expand the Config section. The brooklynnode.managementPassword
config key gives the
password to use. The username will be “admin”.
You will need to ssh to the machine where the tenant AMP is running, and go to the run directory. There you will find the log files.
In the master AMP web console, select the tenant AMP entity and look at its Sensors.
Get the values for the host.sshAddress
and the run.dir
.
Use these to ssh
to the tenant AMP machine, and look in the run directory for the
console
and *.debug.log
files.
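For example, if host.sshAddress were amp@10.0.0.5 and run.dir were /home/amp/amp-tenant (both values illustrative):

ssh amp@10.0.0.5
cd /home/amp/amp-tenant
less console *.debug.log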
If the run.dir
sensor has not been set, then it means it failed at an earlier stage
(e.g. during install). You can look at the install.dir
sensor to find where the
install artifacts should be.
Unfortunately there is not yet a simple way to retrieve the list of deployed blueprints programmatically.
In the web console (of master AMP or a tenant AMP), click on the Catalog tab, and then on Entities (on the left). This will expand to show the list of deployed blueprints. Click a blueprint’s name to see additional details. The “Plan” shows the YAML that was deployed.
Effectors invoked (via REST api) with \?timeout=0
are asynchronous.
The response will be the json for a task being executed. It will include the id
,
a description such as "Invoking effector create_tenant..."
, and a
"currentStatus"
(most likely of "In progress"
).
You can use the Master AMP web-console to view the task (by going to the Applications tab, selecting the Master AMP top-level in the tree, selecting the Activities tab, and clicking on the appropriate row).
Alternatively, you can use the REST api to retrieve the task details:
TASK_ID=<id returned in json>
curl $MASTER_BASE_URL/v1/activities/${TASK_ID}
Effectors invoked without \?timeout=0
are synchronous (i.e. blocking).
The json response will be the result of the effector. For example, add_blueprint
returns []
.
On failure, a json response describing the failure will be returned.
If the yaml is invalid, you’ll get a response such as:
{"message":"Invalid YAML: null; mapping values are not allowed here; in 'reader', line 6, column 13:\n - type: type: brooklyn.qa.load.SimulatedJBos ... \n ^"}
The following curl arguments can also be useful to diagnose problems. It will show the http status code, and will follow redirects:
curl -sL -w "%{http_code} %{url_effective}\n" ...
If you run Master AMP without the configuration use_localhost: true
, then it does not expect
you to deploy Tenant AMPs to localhost. If you are using a global ~/.brooklyn/brooklyn.properties
then the Tenant AMP will also pick this up. It will therefore use the same persistence store.
With default settings, it will therefore become a standby node for the Master AMP!
There are three ways you can see this:
If the Tenant AMP fails to start (i.e. it shows “on-fire” in the Master AMP web-console), the following should be investigated.
If the application created by deploy_service
failed to start correctly, you can
first use the master AMP web-console to view the “mirrored entity” that represents
the application instance. This shows the sensors of the top-level application.
To drill into the activities and further details of the application, you’ll need to connect to the tenant AMP’s web-console (see “How do I connect to web console of Tenant AMP?”). From there, you can select the relevant application and its child entities, view the Sensors tab, and view the Activity tab to see the commands executed when launching the application.
A special debug mode where all deployments are localhost is available. It is not necessary to supply any cloud credentials, and all tenants and services will be created on localhost. (This requires passwordless localhost ssh access.)
This is configured on the master AMP by setting the config key ampofamps.debug.use_localhost
set to true. (The short name is use_localhost
.)
For example, the YAML could look like:
location: localhost
name: Master AMP
services:
- type: io.cloudsoft.ampofamps.master.MasterAmp
brooklyn.config:
use_localhost: true
skip_controllers: true
# Additional optional config includes:
tenant_amp_use_https: false
download_url: file:///tmp/ampofamps-0.2.0-SNAPSHOT-dist.tar.gz
brooklyn.mirror.poll_period: 5s
You can launch this using the command below (where the specified file contains the YAML above):
bin/amp.sh launch --app amp-master-debug.yaml
Note that the effectors are then available on the child of the blueprint, not at the root. This will impact the IDs needed in the curl commands. The following command will be useful (to find the id of the first child of the app):
export AMP_MASTER_ENTITY_ID=`curl -s $MASTER_BASE_URL/v1/applications/$AMP_MASTER_APP_ID/entities/ | xargs -0 node -e 'console.log(JSON.parse(process.argv[1])[0].id);'`
Alternatively, one can launch AMP without any applications, and then deploy via the REST API or web-console.
To run the project from within Eclipse, first ensure that the download_url
file exists, and
then create a launch configuration for io.cloudsoft.ampofamps.AmpOfAmpsMain
with the arguments
launch --persist disabled --app src/test/resources/master-amp-localhost.yaml
and,
recommended, with JVM arguments -Xmx1024m -Dbrooklyn.location.localhost.address=127.0.0.1
.
rm -rf /tmp/brooklyn-`whoami`/installs/AmpNode_*
For a blueprintType which starts up very quickly (doing nothing) you can use: blueprintType: io.cloudsoft.ampofamps.blueprints.NoOp
A recommended pattern for production is to have a custom “controller WAR” for each type of blueprint. This would include a set of custom pages for provisioning the blueprint, and for viewing the status of instances of that blueprint.
The customer dashboard would infer which controller is responsible for each service. When users need to interact with existing instances or launch new ones they are redirected to the controller in charge. One controller may handle instances from different blueprint versions and different types of blueprints.
When new versions of the blueprints are uploaded, one can also specify which controller is responsible for it.
If there is a “controller WAR” associated with the blueprint (i.e. a WAR within the customer dashboard that is tied to this blueprint version), then that WAR will also have to be updated.
The following is a recommendation of how this can be achieved.
In brief, this proposal versions the two types of resource independently, using semantic versioning (semver.org) and a “context path” (string) to determine the version of controller webapp used for each version of the service blueprints.
This allows webapps to be updated, with the update used for existing blueprints, and for blueprints to be updated, using existing webapps.
The context path serves to prevent confusion when backwards-incompatible changes are made: this proposal requires all controller versions and all blueprint versions to explicitly specify the “context path” of the WAR which will be used. The “context path” is the path segment in the URL where the WAR is accessible. An update to a WAR at a given context path is applied immediately, replacing any WAR previously installed under that context path, and used for all service instances which specified that context path.
It is recommended that semantic versioning be followed, and that the “context path” include the “major”
and “minor” version numbers of the WAR file, but NOT the “patch” version. Thus a webapp v1.0.1
cassandra_1.0.1.war
and a v1.0.2 cassandra_1.0.2.war
would both specify a context path of cassandra_1.0
.
A patch version update should maintain compatibility and so such a replacement is safe and desirable.
(The WAR versions assigned to a context path are applied using most-recent-wins, not highest-version-wins,
so in the case of a bug in a WAR, rollback is possible simply by registering the previous WAR version.)
A major or minor version update may break backwards compatibility and so should get a distinct context
path.
Reasons services and controllers may become incompatible include changes in the effectors and sensors API exposed by a service as well as changes in AMP itself.
Existing instances are not updated as part of the blueprint update. This will lead to running instances based on a range of blueprint versions. Each running instance must have a corresponding controller which can handle its API.
A breaking change from the point of view of a controller is when the controller can’t talk to a blueprint version any more (i.e. removal of deprecated versions).
A breaking change from the point of view of a blueprint is when there is a change in its entities’ API which requires updates to the controller.
On the master AMP blueprint, there is a configuration option to include a web-app cluster:
ampofamps.debug.skip_controllers: false
The master AMP can be configured to deploy a cluster of custom web-app controllers for the various blueprints. The following keys allow you to provide an SSL certificate for installation at the nginx instance(s) for these controllers (shown below with their default values):
ampofamps.cert.url=classpath://sample/server.crt
ampofamps.cert.key=classpath://sample/server.key
When deploying a blueprint, it can reference the webapp for that blueprint version. For example, it could look like:
webapp:
url: http://server/cassandra/webapp/1.0.0/cassandra.war
version: 1.0.0
context_path: cassandra_1.0
blueprint:
id: my.Cassandra
version: 1.0.0
services:
- type: acme.amp.Cassandra
brooklyn.libraries:
- http://acme.com/osgi/acme/amp/cassandra/1.0.0/acme_amp_cassandra_1.0.0.jar
By using the AMP Jumphost, one can provision apps into a private network where there is no direct access to the VMs.
Each private network is (normally) dedicated to a single tenant. Within that private network, there is a jumphost that can access the other VMs within the private network. No network access is required to the jumphost from outside. This jumphost runs an agent (the “AMP Jumphost” product).
A clustered message broker (e.g. RabbitMQ) is used to send requests to the jumphost, and to receive responses from it. Through this mechanism, commands are executed on VMs within the private network.
The sequence for command execution (e.g. SSH or WinRM on a VM in the private network) is:
A RabbitMQ cluster is recommended, configured to ensure durability and high availability. This could be a centralised service, or could (if desired) be a separate RabbitMQ service per jumphost. It is assumed that the RabbitMQ cluster pre-exists and is pre-configured according to the enterprise’s security standards.
The AMP instances and the jumphost access the message broker via AMQP.
The jumphost must be able to reach the RabbitMQ broker’s AMQP port. The jumphost must be able to reach each VM on which commands are to be executed (normally on port 22 or 5985 for ssh and WinRM respectively).
Each AMP instance must be able to reach the RabbitMQ broker’s AMQP port.
The AMP jumphost is a stand-alone product.
Given the assumption that AMP does not have direct access to any VMs within the private network, it is assumed that the Jumphost is installed manually.
The size of server required by AMP Jumphost depends on the amount of activity: the number of WinRM and SSH commands to be executed concurrently, and the size of these commands (e.g. large file uploads are more expensive than simple commands).
For dev/test or when there are only a handful of VMs being managed, a small VM is sufficient. For example, an AWS m3.medium with one vCPU, 3.75GiB RAM and 4GB disk.
For larger production uses, a more appropriate machine spec would be two or more cores, at least 8GB RAM and 100GB disk. The disk is primarily just for logs, and for some temporary files while uploading/downloading streams of data to/from VMs.
The recommended operating system is CentOS 6.x or RedHat 6.x.
The AMP Jumphost requires Java (JRE or JDK) minimum version 1.7. OpenJDK is recommended.
No inbound connections are required to the AMP Jumphost. It will make outbound connections to the RabbitMQ broker to subscribe and publish messages.
The AMP Jumphost will also make SSH and WinRM connections directly to the other machines within the private network.
AMP Jumphost expects a sensible set of locale information and time zones to be available; without this, some time-and-date reporting may be surprising.
It is recommended that the AMP Jumphost run as a non-root user.
No sudo
permissions are required.
Check that the Linux kernel entropy is sufficient. See https://brooklyn.apache.org/documentation/increase-entropy.html.
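A quick check is to read the kernel’s available entropy; a persistently low value (e.g. only a few hundred) indicates you should follow the guidance above:

cat /proc/sys/kernel/random/entropy_avail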
We recommend ensuring nproc and nofile are reasonably high (e.g. higher than 1024, which is often the default). We recommend setting these limits to a value above 16000.
If you want to check the current limits run ulimit -a
.
Here are instructions for how to increase the limits for RHEL-like distributions.
Run sudo vi /etc/security/limits.conf
and add the following (shown here for a user named “amp” running Cloudsoft AMP):
amp soft nproc 16384
amp hard nproc 16384
amp soft nofile 16384
amp hard nofile 16384
Generally you do not have to reboot to apply ulimit values. They are set per session. So after you have the correct values, quit the ssh session and log back in.
For more details, see one of the many posts such as http://tuxgen.blogspot.co.uk/2014/01/centosrhel-ulimit-and-maximum-number-of.html
Install the AMP Jumphost by unpacking the .tar.gz
or .zip
installer.
Configure the jumphost using either the default properties file at ~/.brooklyn/jumphost.properties
,
or by supplying the path to a properties file via the command line argument --globalProperties
or --localProperties
.
The properties file should include configuration like that below (but with the values changed):
jumphost.id=JUMPHOST_123
tenant.id=TENANT_123
messageManager.rabbitmq.host=129.185.160.37
messageManager.rabbitmq.port=5672
messageManager.rabbitmq.username=myname
messageManager.rabbitmq.password=pa55w0rd
messageManager.crypto.secretKey=UmFuZG9tRW5jcnlwdEtleQ==
messageManager.crypto.initVector=UmFuZG9tSW5pdFZlY3Rvcg==
A full description of the configuration options is shown below:
* jumphost.id: a unique id for this jumphost; recommended to be used as part of the queue name, and also in logging.
* tenant.id: a unique id for this tenant; recommended to be used as part of the queue name, and also in logging.
* messageManager.rabbitmq.host: the IP or hostname of the RabbitMQ broker
* messageManager.rabbitmq.port: the AMQP port of the RabbitMQ broker
* messageManager.rabbitmq.username: the username for authenticating with the RabbitMQ broker, to publish and to subscribe to messages.
* messageManager.rabbitmq.password: the password for authenticating with the RabbitMQ broker.
* messageManager.crypto.secretKey: the base-64 encoded secret key, used to encrypt and decrypt messages (see the separate message encryption section).
* messageManager.crypto.initVector: the base-64 encoded initialization vector, used when encrypting and decrypting messages.
* messageManager.crypto.enabled: if false, disables all message encryption.
* messageManager.crypto.secretKeyAlgorithm: the algorithm for encrypting/decrypting messages; defaults to “AES”, which is 128 bit.
* messageManager.crypto.transformation: the Cipher transformation to use for padding; defaults to “AES/CBC/PKCS5Padding”.

To launch the jumphost, run:
nohup ./bin/jumphost launch > /dev/null &
Note the redirect to /dev/null
is optional - it reduces the disk usage, versus it writing to
nohup.out.
A full list of the CLI options can be obtained by running ./bin/jumphost help
, or help on a
particular option such as .bin/jumphost help launch
.
The pid of the jumphost process will be written to the file pid_java
.
Logging is configured in ./conf/logback*
. See Logback Documentation
for further details.
By default, logs are written to ./jumphost.debug.log
and ./jumphost.info.log
.
Within AMP, location(s) can be configured to use the jumphost for all SSH-based and WinRM-based access to the VMs.
Example configuration for a location is shown below (focusing on the configuration relating to the jumphost; other cloud-specific configuration is omitted for brevity):
brooklyn.location.named.MyPrivateLocation=jclouds:vcloud-director:https://acme.com/api
brooklyn.location.named.MyPrivateLocation.useJcloudsSshInit=false
brooklyn.location.named.MyPrivateLocation.pollForFirstReachableAddress=false
brooklyn.location.named.MyPrivateLocation.sshToolClass=io.cloudsoft.amp.jumphost.ssh.client.SshProxiedTool
brooklyn.location.named.MyPrivateLocation.sshToolClass.jumphost.id=JUMPHOST_123
brooklyn.location.named.MyPrivateLocation.sshToolClass.tenant.id=TENANT_123
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.rabbitmq.host=129.185.160.37
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.rabbitmq.port=5672
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.rabbitmq.username=myname
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.rabbitmq.password=pa55w0rd
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.crypto.secretKey=UmFuZG9tRW5jcnlwdEtleQ==
brooklyn.location.named.MyPrivateLocation.sshToolClass.messageManager.crypto.initVector=UmFuZG9tSW5pdFZlY3Rvcg==
brooklyn.location.named.MyPrivateLocation.winrmToolClass=io.cloudsoft.amp.jumphost.winrm.client.WinRmProxiedTool
brooklyn.location.named.MyPrivateLocation.winrmToolClass.jumphost.id=JUMPHOST_123
brooklyn.location.named.MyPrivateLocation.winrmToolClass.tenant.id=TENANT_123
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.rabbitmq.host=129.185.160.37
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.rabbitmq.port=5672
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.rabbitmq.username=myname
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.rabbitmq.password=pa55w0rd
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.crypto.secretKey=UmFuZG9tRW5jcnlwdEtleQ==
brooklyn.location.named.MyPrivateLocation.winrmToolClass.messageManager.crypto.initVector=UmFuZG9tSW5pdFZlY3Rvcg==
Note that the configuration values must match those of the jumphost (i.e. the same jumphost.id
,
tenant.id
, RabbitMQ broker, and cryptography configuration).
The previous amp.id
is now deprecated. If not supplied, the unique id of the AMP instance will
be used.
The useJcloudsSshInit=false
will disable any attempts by jclouds to execute SSH commands (which
would thus not have gone over the jumphost).
The pollForFirstReachableAddress=false
ensures that AMP will not try to reach the ip:port
of the VM, to determine which of the multiple possible addresses is reachable from the AMP
server. Instead, the first address returned by the Cloud provider is taken, and that is
subsequently passed to the AMP Jumphost for executing commands. Note this may cause strange
behaviour or fail in clouds that give the VM multiple IP addresses, where only one of those
IPs is accessible from the AMP jumphost.
With this configuration, VM provisioning will be done in the normal way. However, when attempting to execute SSH or WinRM commands on the VMs, the execution will be done via the AMP Jumphost.
It is also possible to use the AMP Jumphost for a BYON location. For example, in YAML:
location:
byon:
hosts:
- ssh: 10.0.0.1:22
privateKeyFile: ~/.ssh/brooklyn.pem
user: myuser
- winrm: 10.0.0.2:5985
password: mypassword
user: myuser
osFamily: windows
sshToolClass: io.cloudsoft.amp.jumphost.ssh.client.SshProxiedTool
sshToolClass.jumphost.id: JUMPHOST_123
sshToolClass.tenant.id: TENANT_123
sshToolClass.messageManager.rabbitmq.host: 129.185.160.37
sshToolClass.messageManager.rabbitmq.port: 5672
sshToolClass.messageManager.rabbitmq.username: myname
sshToolClass.messageManager.rabbitmq.password: pa55w0rd
sshToolClass.messageManager.crypto.secretKey: UmFuZG9tRW5jcnlwdEtleQ==
sshToolClass.messageManager.crypto.initVector: UmFuZG9tSW5pdFZlY3Rvcg==
winrmToolClass: io.cloudsoft.amp.jumphost.winrm.client.WinRmProxiedTool
winrmToolClass.jumphost.id: JUMPHOST_123
winrmToolClass.tenant.id: TENANT_123
winrmToolClass.messageManager.rabbitmq.host: 129.185.160.37
winrmToolClass.messageManager.rabbitmq.port: 5672
winrmToolClass.messageManager.rabbitmq.username: myname
winrmToolClass.messageManager.rabbitmq.password: pa55w0rd
winrmToolClass.messageManager.crypto.secretKey: UmFuZG9tRW5jcnlwdEtleQ==
winrmToolClass.messageManager.crypto.initVector: UmFuZG9tSW5pdFZlY3Rvcg==
The Jumphost only accepts SSH and WinRM commands. This means that no other types of interaction with the VMs inside the private network are possible from AMP (given that the primary use-case for the jumphost is to provision and manage applications within an isolated private network).
The entities within AMP must be configured to turn off other connection mechanisms (e.g. management over JMX, HTTP, etc).
For example:
* httpMonitoring.enabled: false
* jmx.enabled: false
* jmx.enabled: false
* clientMonitoring.enabled: false (though this will not work for clustered MongoDB)
* httpMonitoring.enabled: false
* thriftMonitoring.enabled: false and jmx.enabled: false

All messages sent and received via the message broker are encrypted/decrypted by AMP and by the
AMP Jumphost (unless crypto.enabled
is set to false).
By default, this uses the “AES” algorithm, which uses 128 bit keys.
For AES-256, it may require the Java Cryptography Extension (JCE) Unlimited Strength
Jurisdiction Policy, depending on the JVM’s provider and version. This must be installed
on the AMP Jumphost and on each AMP instance. Appropriate length private keys and initialization
vectors must be used.
The mechanism used to generate the secret key and the initialization vector is up to the enterprise.
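As one possible illustration (not a recommendation of any particular process), random 16-byte values suitable for a 128-bit AES key and initialization vector can be generated and base-64 encoded with openssl:

openssl rand -base64 16    # e.g. use as messageManager.crypto.secretKey
openssl rand -base64 16    # e.g. use as messageManager.crypto.initVector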
It is assumed that the RabbitMQ cluster already exists, and has been configured according to the enterprise’s security standards.
The processes for operations staff (e.g. end-users or support staff ssh’ing to the VMs) is out-of-scope. Such considerations are dependent on the security model within the enterprise.
There are a number of enhancements planned.
Longevity and stress testing is still on-going. Testing of failure scenarios is also on-going (e.g. temporary loss of connectivity to the RabbitMQ broker).
WinRM execution from the AMP Jumphost is intermittently unreliable. The underlying cause is a problem in the WinRM client (pywinrm, using jython).
The long-term fix is to switch to a pure Java WinRM client implementation. Work is underway for this, but is unfortunately not ready for this release.
On AMP Jumphost restart, it will automatically re-subscribe to its queues and execute messages on those queues. However, the tasks that were being executed with the Jumphost stopped will have been lost. Worst case is that no response message is sent for the request, causing the AMP server to continually wait. The AMP server will (by default) timeout after waiting three hours for the response.
The behaviour of pollForFirstReachableAddress=false
may be changed in future versions, to
better handle clouds that give multiple addresses to the VM. An improved design would be for
AMP to query the Jumphost, for which ip:port is reachable.
If there is a single location that can deploy both SSH and Windows machines, the need to duplicate the jumphost configuration for both the sshToolClass and winrmToolClass is sub-optimal. This may be changed in a future version.
The auto-populated amp.id
will change when the AMP server is restarted. This means the
response queue name will change. See https://issues.apache.org/jira/browse/BROOKLYN-202.
The SSH connections and WinRM connections are not re-used. Each new execution will create a new connection on the AMP Jumphost. This is CPU-intensive.
The only monitoring performed by AMP on entities within the private network is over SSH or WinRM (via the Jumphost, because that is all the Jumphost supports).
One approach to monitoring is to use a third party monitoring service such as NewRelic: have each VM push to that, and then have AMP query the monitoring (central) service to get metrics about each entity.
A future feature could be to support VM-provisioning via the jumphost. This would be useful for clouds where the cloud endpoint is private - i.e. where it can be accessed by the jumphost, but not directly by AMP.
AMP uses the SLF4J logging facade, which allows use of many popular frameworks including logback, java.util.logging and log4j.
The convention for log levels is as follows:

- ERROR and above: exceptional situations which indicate that something has unexpectedly failed or some other problem has occurred which the user is expected to attend to
- WARN: exceptional situations which the user may wish to know about but which do not necessarily indicate failure or require a response
- INFO: a synopsis of activity, but which should not generate large volumes of events nor overwhelm a human observer
- DEBUG and lower: detail of activity which is not normally of interest, but which might merit closer inspection under certain circumstances.

Loggers follow the package.ClassName naming standard.
The default logging is to write INFO and above to the console. AMP also writes INFO+
messages to amp.info.log
, and DEBUG+ to amp.debug.log
. Each is a rolling log file,
where the past 10 files will be kept.
A logback.xml
file is included in the conf/
directory of the AMP distro;
this is read by AMP
at launch time. Changes to the logging configuration,
such as new appenders or different log levels, can be made directly in this file
or in a new file included from this.
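For example, to change the level for a particular package, a logger element such as the following could be added inside the top-level configuration element (the package name here is purely illustrative):

<!-- illustrative override: log the named package at DEBUG level -->
<logger name="org.apache.brooklyn.entity" level="DEBUG"/>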
The default logback.xml
file references a collection of other log configuration files
included in the AMP jars. It is necessary to understand the source structure
in the logback-includes project.
For example, to change the debug log inclusions, create a folder brooklyn
under conf
and create a file logback-debug.xml
based on the
brooklyn/logback-debug.xml
from that project.
Logback is highly configurable. For example, the syslog appender can be used. This provides a simple way to integrate with tools such as logstash.
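A minimal sketch of such an appender is shown below; the syslog host, facility and message pattern are placeholders to adapt to your environment, and the appender must also be referenced from the root logger (or a specific logger) with an appender-ref element:

<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
  <syslogHost>localhost</syslogHost>
  <facility>USER</facility>
  <suffixPattern>[%thread] %logger %msg</suffixPattern>
</appender>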
This sub-section is a work in progress; feedback from the community is extremely welcome.
The default rolling log files can be backed up periodically, e.g. using a CRON job.
Note however that the rolling log file naming scheme will rename the historic zipped log files
such that brooklyn.debug-1.log.zip
is the most recent zipped log file. When the current
brooklyn.debug.log
is to be zipped, the previous zip file will be renamed
brooklyn.debug-2.log.zip
. This renaming of files can make RSYNC or backups tricky.
An option is to convert/move the file to a name that includes the last-modified timestamp. For example (on Mac):
LOG_FILE=brooklyn.debug-1.log.zip
TIMESTAMP=`stat -f '%Um' $LOG_FILE`
mv $LOG_FILE /path/to/archive/brooklyn.debug-$TIMESTAMP.log.zip
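On Linux (GNU coreutils), the equivalent last-modified timestamp can be obtained with stat -c instead:

LOG_FILE=brooklyn.debug-1.log.zip
TIMESTAMP=`stat -c '%Y' $LOG_FILE`
mv $LOG_FILE /path/to/archive/brooklyn.debug-$TIMESTAMP.log.zip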
Integration with systems like Logstash and Splunk is possible using standard logback configuration. Logback can be configured to write to the syslog, which can then feed its logs to Logstash.
There are several different areas of requirements, described below: access to usage information, audit trails, and integration with a monitoring dashboard.
The AMP REST api provides access to usage information. It lists the applications (including those now terminated), showing the start/end time and state transitions for each. It also lists the machines used (including those now terminated), linking each machine back to an application id.
Documentation for the REST api can be found within AMP itself (in the web-console, go to the Script -> REST API tab, and then browse the API). The annotations on the Java interfaces also make it easy to browse the API: UsageApi.java.
An example of retrieving all applications is shown below. For each application, it shows start/end time for each phase (e.g. when it was starting, when it was running, and when it stopped).
curl http://localhost:8081/v1/usage/applications
[
{
"statistics": [
{
"status": "STARTING",
"id": "htStRkN7",
"applicationId": "htStRkN7",
"start": "2014-10-09T11:00:13+0100",
"end": "2014-10-09T11:00:15+0100",
"duration": 2313,
"metadata": {}
},
{
"status": "RUNNING",
"id": "htStRkN7",
"applicationId": "htStRkN7",
"start": "2014-10-09T11:00:15+0100",
"end": "2014-10-09T11:00:22+0100",
"duration": 6495,
"metadata": {}
}
],
"links": {}
},
{
"statistics": [
{
"status": "STARTING",
"id": "Z3TTK4sM",
"applicationId": "Z3TTK4sM",
"start": "2014-10-09T10:59:55+0100",
"end": "2014-10-09T10:59:55+0100",
"duration": 33,
"metadata": {}
},
{
"status": "UNKNOWN",
"id": "Z3TTK4sM",
"applicationId": "Z3TTK4sM",
"start": "2014-10-09T10:59:55+0100",
"end": "2014-10-09T11:00:22+0100",
"duration": 26634,
"metadata": {}
}
],
"links": {}
}
]
An example of retrieving all machines is shown below, with each machine giving the associated application id:
curl http://localhost:8081/v1/usage/machines
[
{
"statistics": [
{
"status": "ACCEPTED",
"id": "yqqA9Moy",
"applicationId": "rhsLVvJs",
"start": "2014-10-09T11:11:13+0100",
"end": "2014-10-09T11:13:45+0100",
"duration": 151750,
"metadata": {
"id": "yqqA9Moy",
"displayName": "159.8.33.134",
"provider": "softlayer",
"account": "cloudsoft",
"serverId": "6488686",
"imageId": "4343926",
"instanceTypeName": "br-ked-aled-rhsl-j8mq-fe",
"instanceTypeId": "6488686",
"ram": "1024",
"cpus": "1",
"osName": "ubuntu",
"osArch": "x86_64",
"64bit": "true"
}
}
],
"links": {}
}
]
The start/end time to be retrieved can also be constrained. It is also possible to retrieve information about a specific machine or specific location. An example is shown below:
curl "http://localhost:8081/v1/usage/applications/rhsLVvJs?start=2014-10-09T11:07:18+0100&end=2014-10-09T11:11:14+0100"
{
"statistics": [
{
"status": "STARTING",
"id": "rhsLVvJs",
"applicationId": "rhsLVvJs",
"start": "2014-10-09T11:07:18+0100",
"end": "2014-10-09T11:11:14+0100",
"duration": 236354,
"metadata": {},
{
"status": "RUNNING",
"id": "rhsLVvJs",
"applicationId": "rhsLVvJs",
"start": "2014-10-09T11:11:14+0100",
"end": "2014-10-09T11:16:13+0100",
"duration": 298443,
"metadata": {}
}
],
"links": {}
}
Audit trails are essential for determining why an event happened, and who was responsible.
Persisting the AMP logs is one approach to storing the audit trail. Subsequent offline analysis can then be performed. For example, logstash (via syslog) could be used to collect all logs.
AMP provides a web-console for monitoring the applications, and drilling into the current state and actions being performed. However, this is more a debug console. Most enterprises are keen to use their existing pane-of-glass for their operations staff. The AMP REST api provides access to all information shown in the web-console.
Integration with the monitoring dashboard could involve the dashboard making REST api calls into AMP to retrieve the required information.
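For example, a dashboard could periodically poll the list of applications (the host and credentials below are placeholders):

curl \
  --insecure \
  --user admin:password \
  https://$AMP_HOST:8443/v1/applications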
Alternatively, AMP could push events to a given endpoint. This requires wiring up a listener for the desired events, and potentially bespoke code for pushing to the given endpoint (depending on the technology used).
AMP performance, load and longevity testing breaks down into three categories:
There are a range of performance tests against specific pieces of functionality. For example:
- SshjToolPerformanceTest and SshMachineLocationPerformanceTest for ssh
- EntityPersistencePerformanceTest for persistence / HA
- EntityPerformanceTest for effector invocations, setting sensors, and event subscriptions.

Examples of AMP acceptance tests include:

- brooklyn/qa/load/LoadTest.java, which deploys a large number of applications.
- brooklyn.qa.longevity.webcluster.WebClusterApp, which deploys a cluster that cycles through a sinusoidal load pattern, causing repeated scaling out and scaling back.

The easiest way to script AMP tests with strong assertions is to write tests that run with
AMP in-memory, allowing full access to the AMP ManagementContext
. However, to make tests
more realistic requires running AMP in the same mode as one would in production (i.e.
stand-alone process with the same persistence / HA features configured).
Use of the REST api makes it simple to drive AMP, to test a range of scenarios. The following
curl
commands are a useful starting point for testing.
To deploy an application:
curl \
--insecure \
--user admin:password \
-H "Content-Type: application/json" \
--data-binary @myblueprint.yaml \
https://$AMP_HOST:8443/v1/applications
To invoke an effector:
curl \
-H "Content-Type: application/json" \
-d '{ "desiredSize": 3 }' \
https://$AMP_HOST:8443/v1/applications/${APP_ID}/entities/${ENTITY_ID}/effectors/resize\?timeout=0
The load on AMP is very dependent on the type of application being deployed and managed. Considerations include:
- bash over ssh is expensive: establishing connections is CPU intensive; it requires a thread per ssh command being executed; and it is network intensive.
- ssh polling is very expensive when done for many entities.

Recommended JVM settings include:
- -verbose:gc, to quickly determine if memory usage is an issue.
- -XX:MaxPermSize=256m. The default level can result in OutOfMemoryErrors when there are many applications.
- -Xms and -Xmx set to the same value, with at least 512m. The amount of memory affects the number of entities that can be managed by a single AMP node. If many entities are required, then at least 1024m is recommended.
- -Dlogback.configurationFile=/path/to/logback.xml, if you have custom logback settings.

The desired configuration is not always available for testing. For example, there may be insufficient resources to run 100s of JBoss app-servers, or one may be experimenting with possible configurations such as use of an external monitoring tool that is not yet available.
It is then possible to simulate aspects of the behaviour, for performance and load testing
purposes. The AMP usage/qa
project includes entities for this purpose. There are entities
for JBoss 7 app-server, MySQL, Nginx and a three-tier app using these entities. Each entity has
configuration options for:
- simulateEntity: if true, a sleep 100000 job will be run and monitored in place of the real process.
- simulateExternalMonitoring: if simulateEntity is true it will execute comparable commands (e.g. execute a command of the same size over ssh or do a comparable number of http GET requests); if simulateEntity is false then normal monitoring will be done.
- skipSshOnStart: if true (and simulateEntity is true), then no ssh commands will be executed at deploy-time. This is useful for speeding up load testing, to get to the desired number of entities.

Example yaml to deploy one of these entities is:
location: localhost
services:
- type: brooklyn.qa.load.SimulatedJBoss7ServerImpl
brooklyn.config:
simulateEntity: true
simulateExternalMonitoring: true
skipSshOnStart: false
Warning: this is a beta feature; efforts will be made to preserve backwards compatibility; however the configuration options and required database schema may change in future releases.
AMP can be configured to record application and location lifecycle events to a SQL database:
- location created and destroyed events
- application starting, running, stopping, stopped, destroyed and on-fire events

Metering is enabled and configured via the brooklyn.properties file, usually located at ~/.brooklyn/brooklyn.properties.

To enable metering to a database, add the following lines to your brooklyn.properties file:
# MYSQL Metering DB Listener
brooklyn.usageManager.listeners=io.cloudsoft.metering.MeteringDbListener
amp.metering.listener.db.jdbcDriverClass=com.mysql.jdbc.Driver
amp.metering.listener.db.jdbcConnectionString=jdbc:mysql://localhost/test
amp.metering.listener.db.jdbcUsername=mysqluser
amp.metering.listener.db.jdbcPassword=letmein
amp.metering.listener.db.init=true
The brooklyn.usageManager.listeners
key takes a comma-delimited list of listener classes. In this case we are using
the io.cloudsoft.metering.MeteringDbListener
, which is used when recording events to a SQL database.
The following lines are configuration options specific to the MeteringDbListener, and should be set as follows:
- amp.metering.listener.db.jdbcDriverClass: the fully qualified classname of the JDBC driver to be used. For MYSQL, this should be set to com.mysql.jdbc.Driver, and for H2, use org.h2.Driver. The JDBC driver will need to be added to AMP’s classpath by copying the relevant provider jar to AMP’s lib/dropins folder. The driver for MYSQL can be downloaded here and the H2 driver can be downloaded here.
- amp.metering.listener.db.jdbcConnectionString: the JDBC connection string which AMP will use to connect to the database. Note: this should point to an existing database instance, and the database must have been created in advance. The schema (tables) can be automatically generated (see db.init below), but the server and database must be created manually.
- amp.metering.listener.db.jdbcUsername: the username that AMP will use to connect to the database.
- amp.metering.listener.db.jdbcPassword: the password that AMP will use to connect to the database.
- amp.metering.listener.db.init: if true, AMP will attempt to initialize the database tables. This must be done the first time that the database is used. The operation is idempotent - if the tables exist, the operation is a no-op (even if the existing schema is different from that expected).

A sample MYSQL script to create a database, username, and password is as follows:
CREATE SCHEMA foo;
USE foo;
CREATE USER 'myuser' identified by 'L3tM3!n';
GRANT USAGE ON *.* TO 'myuser'@'%' IDENTIFIED BY 'L3tM3!n';
GRANT USAGE ON *.* TO 'myuser'@'localhost' IDENTIFIED BY 'L3tM3!n';
GRANT ALL PRIVILEGES ON foo.* TO 'myuser'@'%';
FLUSH PRIVILEGES;
This guide describes sources of information for understanding when things go wrong.
Whether you’re customizing out-of-the-box blueprints, or developing your own custom blueprints, you will inevitably have to deal with entity failure. Thankfully AMP provides plenty of information to help you locate and resolve any issues you may encounter.
The AMP web-console includes a tree view of the entities within an application. Errors within the application are represented visually, showing a “fire” image on the entity.
When an error causes an entire application to be unexpectedly down, the error is generally propagated to the top-level entity - i.e. marking it as “on fire”. To find the underlying error, one should expand the entity hierarchy tree to find the specific entities that have actually failed.
Many entities have some common sensors (i.e. attributes) that give details of the error status:
- service.isUp (often referred to as “service up”) is a boolean, saying whether the service is up. For many software processes, this is inferred from whether the “service.notUp.indicators” is empty. It is also possible for some entities to set this attribute directly.
- service.notUp.indicators is a map of errors. This often gives much more information than the single service.isUp attribute. For example, there may be many health-check indicators for a component: is the root URL reachable, is the management api reporting healthy, is the process running, etc.
- service.problems is a map of namespaced indicators of problems with a service.
- service.state is the actual state of the service - e.g. CREATED, STARTING, RUNNING, STOPPING, STOPPED, DESTROYED and ON_FIRE.
- service.state.expected indicates the state the service is expected to be in (and when it transitioned to that). For example, is the service expected to be starting, running, stopping, etc.

These sensor values are shown in the “sensors” tab - see below.
The “Sensors” tab in the AMP web-console shows the attribute values of a particular entity. This gives lots of runtime information, including about the health of the entity - the set of attributes will vary between different entity types.
Note that null (or not set) sensors are hidden by default. You can click on the Show/hide empty records
icon (highlighted in yellow above) to see these sensors as well.
The sensors view is also tabulated. You can configure the numbers of sensors shown per page (at the bottom). There is also a search bar (at the top) to filter the sensors shown.
The activity view shows the tasks executed by a given entity. The top-level tasks are the effectors (i.e. operations) invoked on that entity. This view allows one to drill into the task, to see details of errors.
Select the entity, and then click on the Activities
tab.
In the table showing the tasks, each row is a link - clicking on the row will drill into the details of that task, including sub-tasks:
For ssh tasks, this allows one to drill down to see the env, stdin, stdout and stderr. That is, you can see the commands executed (stdin) and environment variables (env), and the output from executing that (stdout and stderr).
For tasks that did not fail, one can still drill into the tasks to see what was done.
It’s always worth looking at the Detailed Status section as sometimes that will give you the information you need. For example, it can show the exception stack trace in the thread that was executing the task that failed.
AMP’s logging is configurable, for the files created, the logging levels, etc.
With out-of-the-box logging, brooklyn.info.log
and brooklyn.debug.log
files are created. These are by default
rolling log files: when the log reaches a given size, it is compressed and a new log file is started.
Therefore check the timestamps of the log files to ensure you are looking in the correct file for the
time of your error.
With out-of-the-box logging, info, warnings and errors are written to the brooklyn.info.log
file. This gives
a summary of the important actions and errors. However, it does not contain full stacktraces for errors.
To find the exception, we’ll need to look in AMP’s debug log file. By default, the debug log file
is named brooklyn.debug.log
. You can use your favourite tools for viewing large text files.
One possible tool is less
, e.g. less brooklyn.debug.log
. We can quickly find the last exception
by navigating to the end of the log file (using Shift-G
), then performing a reverse-lookup by typing ?Exception
and pressing Enter
. Sometimes an error results in multiple exceptions being logged (e.g. first for the
entity, then for the cluster, then for the app). If you know the text of the error message (e.g. copy-pasted
from the Activities view of the web-console) then one can search explicitly for that text.
The grep command is also extremely helpful. Useful things to grep for include warnings and errors, e.g. grep -E "WARN|ERROR" brooklyn.info.log. Grep’ing for particular log messages is also useful.
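Some illustrative examples are shown below (the entity id is just a placeholder; see also the useful log messages described later in this guide):

# warnings and errors in the info log
grep -E "WARN|ERROR" brooklyn.info.log

# all activity relating to a particular entity id
grep "e1HP2s8x" brooklyn.debug.log

# effector invocations
grep "Invoking effector" brooklyn.debug.log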
This guide describes common problems encountered when deploying applications.
The error Invalid YAML: Plan not in acceptable format: Cannot convert ...
means that the text is not
valid YAML. Common reasons include that the indentation is incorrect, or that there are non-matching
brackets.
The error Unrecognized application blueprint format: no services defined
means that the services:
section is missing.
An error like Deployment plan item Service[name=<null>,description=<null>,serviceType=com.acme.Foo,characteristics=[],customAttributes={}] cannot be matched
means that the given entity type (in this case com.acme.Foo) is not in the catalog or on the classpath.
An error like Illegal parameter for 'location' (aws-ec3); not resolvable: java.util.NoSuchElementException: Unknown location 'aws-ec3': either this location is not recognised or there is a problem with location resolver configuration
means that the given location (in this case aws-ec3)
was unknown. This means it does not match any of the named locations in brooklyn.properties, nor any of the
clouds enabled in the jclouds support, nor any of the locations added dynamically through the catalog API.
There are many stages at which VM provisioning can fail! An error Failure running task provisioning
means there was some problem obtaining or connecting to the machine.
An error like ... Not authorized to access cloud ...
usually means the wrong identity/credential was used.
An error like Unable to match required VM template constraints
means that a matching image (e.g. AMI in AWS terminology) could not be found. This
could be because an incorrect explicit image id was supplied, or because the match-criteria could not
be satisfied using the given images available in the given cloud. The first time this error is
encountered, a listing of all images in that cloud/region will be written to the debug log.
Failure to form an ssh connection to the newly provisioned VM can be reported in several different ways, depending on the nature of the error. This breaks down into failures at different points:
- failure to establish the initial network connection (reported with an error like ... could not connect to any ip address port 22 on node ...).
- failure to authenticate (reported with an error like ... Exhausted available authentication methods ...).

There are many possible reasons for this ssh failure, which include:
- a problem with the VM itself; the machineCreateAttempts configuration option can be used to automatically retry with a new VM.
- the wrong login user being used; this can be overridden with the loginUser configuration option. An example of this is with some Ubuntu VMs, where the “ubuntu” user should be used. However, on some clouds it defaults to trying to ssh as “root”.

A very useful debug configuration is to set destroyOnFailure
to false. This will allow ssh failures to
be more easily investigated.
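As an illustrative sketch only (the provider, region and values are examples, not recommendations), such keys can be set on a jclouds-based location in a blueprint:

location:
  jclouds:aws-ec2:
    region: us-east-1
    loginUser: ubuntu
    machineCreateAttempts: 2
    destroyOnFailure: false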
A common generic error message is that there was a timeout waiting for service-up.
This just means that the entity did not get to service-up in the pre-defined time period (the default is
two minutes, and can be configured using the start.timeout
config key; the timer begins after the
start tasks are completed).
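For example, a sketch of raising the timeout for a slow-starting entity (the duration shown is arbitrary):

services:
- type: org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
  brooklyn.config:
    start.timeout: 20m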
See the overview for where to find additional information, especially the section on “Entity’s Error Status”.
A common problem when setting up an application in the cloud is getting the basic connectivity right - how do I get my service (e.g. a TCP host:port) publicly accessible over the internet?
This varies a lot - e.g. Is the VM public or in a private network? Is the service only accessible through a load balancer? Should the service be globally reachable or only to a particular CIDR?
This guide gives some general tips for debugging connectivity issues, which are applicable to a range of different service types. Choose those that are appropriate for your use-case.
If the VM is supposed to be accessible directly (e.g. from the public internet, or if in a private network then from a jump host)…
Can you ping
the VM from the machine you are trying to reach it from?
However, ping is over ICMP. If the VM is unreachable, it could be that the firewall forbids ICMP but still lets TCP traffic through.
You can check if a given TCP port is reachable and listening using telnet <host> <port>
, such as
telnet www.google.com 80
, which gives output like:
Trying 31.55.163.219...
Connected to www.google.com.
Escape character is '^]'.
If this is very slow to respond, it can be caused by a firewall blocking access. If it is fast, it could be that the server is just not listening on that port.
If using a hostname rather than IP, then is it resolving to a sensible IP?
Is the route to the server sensible? (e.g. one can hit problems with proxy servers in a corporate network, or ISPs returning a default result for unknown hosts).
The following commands can be useful:
- host is a DNS lookup utility, e.g. host www.google.com.
- dig stands for “domain information groper”, e.g. dig www.google.com.
- traceroute prints the route that packets take to a network host, e.g. traceroute www.google.com.

Depending on the type of location, AMP might use HTTP to provision machines (clocker, jclouds). If the host environment defines proxy settings, these might interfere with the reachability of the respective HTTP service.
One such case is using VirtualBox with host-only or private internal network settings, while using an external proxy for accessing the internet. It is clear that the external proxy won’t be able to route HTTP calls properly, but that might not be clear when reading the logs (although AMP will present the failing URL).
Try accessing the web-service URLs from a browser via the proxy, or perhaps try running AMP with proxy disabled:
export http_proxy=
bin/brooklyn launch
Try connecting to the service from the VM itself. For example, curl http://localhost:8080
for a
web-service.
On dev/test VMs, don’t be afraid to install the utilities you need such as curl
, telnet
, nc
,
etc. Cloud VMs often have a very cut-down set of packages installed. For example, execute
sudo apt-get update; sudo apt-get install -y curl
or sudo yum install -y curl
.
Check that the service is listening on the port, and on the correct NIC(s).
Execute netstat -antp
(or on OS X netstat -antp TCP
) to list the TCP ports in use (or use
-anup
for UDP). You should expect to see the something like the output below for a service.
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 8276/java
In this case a Java process with pid 8276 is listening on port 8080. The local address :::8080
format means all NICs (in IPv6 address format). You may also see 0.0.0.0:8080
for IPv4 format.
If it says 127.0.0.1:8080 then your service will most likely not be reachable externally.
Use ip addr show
(or the obsolete ifconfig -a
) to see the network interfaces on your server.
For netstat
, run with sudo
to see the pid for all listed ports.
On Linux, check if iptables
is preventing the remote connection. On Windows, check the Windows Firewall.
If it is acceptable (e.g. it is not a server in production), try turning off the firewall temporarily,
and testing connectivity again. Remember to re-enable it afterwards! On CentOS, this is sudo service
iptables stop
. On Ubuntu, use sudo ufw disable
. On Windows, press the Windows key and type ‘Windows
Firewall with Advanced Security’ to open the firewall tools, then click ‘Windows Firewall Properties’
and set the firewall state to ‘Off’ in the Domain, Public and Private profiles.
If you cannot temporarily turn off the firewall, then look carefully at the firewall settings. For
example, execute sudo iptables -n --list
and iptables -t nat -n --list
.
Some clouds offer a firewall service, where ports need to be explicitly listed to be reachable.
For example, [security groups for EC2-classic](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups) have rules for the protocols and ports to be reachable from specific CIDRs.
Check these settings via the cloud provider’s web-console (or API).
It can be useful to start listening on a given port, and to then check if that port is reachable. This is useful for testing basic connectivity when your service is not yet running, or to a different port to compare behaviour, or to compare with another VM in the network.
The nc
netcat tool is useful for this. For example, nc -l 0.0.0.0 8080
will listen on port
TCP 8080 on all network interfaces. On another server, you can then run echo hello from client
| nc <hostname> 8080
. If all works well, this will send “hello from client” over the TCP port 8080,
which will be written out by the nc -l
process before exiting.
Similarly for UDP, you use -lU
.
You may first have to install nc
, e.g. with sudo yum install -y nc
or sudo apt-get install netcat
.
For some use-cases, it is good practice to use the load balancer service offered by the cloud provider (e.g. ELB in AWS or the [Cloudstack Load Balancer](http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/network_setup.html#management-server-load-balancing)).
The VMs can all be isolated within a private network, with access only through the load balancer service.
Debugging techniques here include ensuring connectivity from another jump server within the private network, and careful checking of the load-balancer configuration from the Cloud Provider’s web-console.
Use of DNAT is appropriate for some use-cases, where a particular port on a particular VM is to be made available.
Debugging connectivity issues here is similar to the steps for a cloud load balancer. Ensure connectivity from another jump server within the private network. Carefully check the NAT rules from the Cloud Provider’s web-console.
It is common for guest wifi to restrict access to only specific ports (e.g. 80 and 443, restricting ssh over port 22 etc).
Normally your best bet is then to abandon the guest wifi (e.g. to tether to a mobile phone instead).
There are some unconventional workarounds such as configuring sshd to listen on port 80 so you can use an ssh tunnel. However, the firewall may well inspect traffic so sending non-http traffic over port 80 may still fail.
There are many possible causes for an AMP server becoming slow or unresponsive. This guide describes some possible reasons, and some commands and tools that can help diagnose the problem.
Possible reasons include:
See AMP Requirements for details of server requirements.
The following commands will collect OS-level diagnostics about the machine, and about the AMP process. The commands below assume use of CentOS 6.x. Minor adjustments may be required for other platforms.
To display system information, run:
uname -a
To show details of the CPU and memory available to the machine, run:
cat /proc/cpuinfo
cat /proc/meminfo
To display information about user limits, run the command below (while logged in as the same user who runs AMP):
ulimit -a
If AMP is run as a different user (e.g. with user name “adalovelace”), then instead run:
ulimit -a -u adalovelace
Of particular interest is the limit for “open files”.
The command below will list the disk size for each partition, including the amount used and available. If the AMP base directory, persistence directory or logging directory are close to 0% available, this can cause serious problems:
df -h
To view the CPU and memory usage of all processes, and of the machine as a whole, one can use the
top
command. This runs interactively, updating every few seconds. To collect the output once
(e.g. to share diagnostic information in a bug report), run:
top -n 1 -b > top.txt
To count the number of open files for the AMP process (which includes open socket connections):
BROOKLYN_HOME=/home/users/brooklyn/apache-brooklyn-0.9.0-bin
BROOKLYN_PID=$(cat $BROOKLYN_HOME/pid_java)
lsof -p $BROOKLYN_PID | wc -l
To count (or view) the number of “established” internet connections, run:
netstat -an | grep ESTABLISHED | wc -l
A lack of entropy can cause random number generation to be extremely slow. This can cause tasks like ssh to also be extremely slow. See linux kernel entropy for details of how to work around this.
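On Linux, a quick way to check the available entropy is shown below; persistently low values (a few hundred or less) suggest the entropy pool is running low:

cat /proc/sys/kernel/random/entropy_avail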
To get memory and thread usage for the AMP (Java) process, two useful tools are jstack
and jmap
. These require the “development kit” to also be installed
(e.g. yum install java-1.7.0-openjdk-devel
). Some useful commands are shown below:
BROOKLYN_HOME=/home/users/brooklyn/apache-brooklyn-0.9.0-bin
BROOKLYN_PID=$(cat $BROOKLYN_HOME/pid_java)
jstack $BROOKLYN_PID
jmap -histo:live $BROOKLYN_PID
jmap -heap $BROOKLYN_PID
The jstack-active
script is a convenient light-weight way to quickly see which threads of a running AMP
server are attempting to consume the CPU. It filters the output of jstack
, to show only the
“really-runnable” threads (as opposed to those that are blocked).
BROOKLYN_HOME=/home/users/brooklyn/apache-brooklyn-0.9.0-bin
BROOKLYN_PID=$(cat $BROOKLYN_HOME/pid_java)
curl -O https://raw.githubusercontent.com/apache/brooklyn-dist/master/scripts/jstack-active.sh
jstack-active $BROOKLYN_PID
If an in-depth investigation of the CPU usage (and/or object creation) of an AMP Server is required, there are many profiling tools designed specifically for this purpose. These generally require that the process be launched in such a way that a profiler can attach, which may not be appropriate for a production server.
If the AMP Server was originally run to allow a remote debugger to connect (strongly discouraged in production!), then this provides a convenient way to investigate why AMP is being slow or unresponsive.
Cloudsoft AMP will by default create brooklyn.info.log and brooklyn.debug.log files. See the Logging docs for more information.
The following are useful log messages to search for (e.g. using grep
). Note the wording of
these messages (or their very presence) may change in future versions of AMP.
The lines below are commonly logged, and can be useful to search for when finding the start of a section of logging.
2016-05-30 17:05:51,458 INFO o.a.b.l.AMPWebServer [main]: Started AMP console at http://127.0.0.1:8081/, running classpath://brooklyn.war
2016-05-30 17:06:04,098 INFO o.a.b.c.m.h.HighAvailabilityManagerImpl [main]: Management node tF3GPvQ5 running as HA MASTER autodetected
2016-05-30 17:06:08,982 INFO o.a.b.c.m.r.InitialFullRebindIteration [brooklyn-execmanager-rvpnFTeL-0]: Rebinding from /home/compose/compose-amp-state/brooklyn-persisted-state/data for master rvpnFTeL...
2016-05-30 17:06:11,105 INFO o.a.b.c.m.r.RebindIteration [brooklyn-execmanager-rvpnFTeL-0]: Rebind complete (MASTER) in 2s: 19 apps, 54 entities, 50 locations, 46 policies, 704 enrichers, 0 feeds, 160 catalog items
The debug log includes (every minute) a log statement about the memory usage and task activity. For example:
2016-05-27 12:20:19,395 DEBUG o.a.b.c.m.i.AMPGarbageCollector [brooklyn-gc]: AMP gc (before) - using 328 MB / 496 MB memory (5.58 kB soft); 69 threads; storage: {datagrid={size=7, createCount=7}, refsMapSize=0, listsMapSize=0}; tasks: 10 active, 33 unfinished; 78 remembered, 1696906 total submitted)
2016-05-27 12:20:19,395 DEBUG o.a.b.c.m.i.AMPGarbageCollector [brooklyn-gc]: AMP gc (after) - using 328 MB / 496 MB memory (5.58 kB soft); 69 threads; storage: {datagrid={size=7, createCount=7}, refsMapSize=0, listsMapSize=0}; tasks: 10 active, 33 unfinished; 78 remembered, 1696906 total submitted)
These can be extremely useful if investigating a memory or thread leak, or to determine whether a surprisingly high number of tasks are being executed.
One source of high CPU in AMP is when a subscription (e.g. for a policy or enricher) is being triggered many times (i.e. handling many events). A log message like that below will be logged on every 1000 events handled by a given single subscription.
2016-05-30 17:29:09,125 DEBUG o.a.b.c.m.i.LocalSubscriptionManager [brooklyn-execmanager-rvpnFTeL-8]: 1000 events for subscriber Subscription[SCFnav9g;CanopyComposeApp{id=gIeTwhU2}@gIeTwhU2:webapp.url]
If a subscription is handling a huge number of events, there are a couple of common reasons:

- first, it could be subscribing to too much activity - e.g. a wildcard subscription for all events from all entities.
- second, it could be an infinite loop (e.g. where an enricher responds to a sensor-changed event by setting that same sensor, thus triggering another sensor-changed event).
All activity triggered by the REST API or web-console will be logged. Some examples are shown below:
2016-05-19 17:52:30,150 INFO o.a.b.r.r.ApplicationResource [brooklyn-jetty-server-8081-qtp1058726153-17473]: Launched from YAML: name: My Example App
location: aws-ec2:us-east-1
services:
- type: org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
2016-05-30 14:46:19,516 DEBUG brooklyn.REST [brooklyn-jetty-server-8081-qtp1104967201-20881]: Request Tisj14 starting: POST /v1/applications/NiBy0v8Q/entities/NiBy0v8Q/expunge from 77.70.102.66
If investigating the behaviour of a particular entity (e.g. on failure), it can be very useful to
grep
the info and debug log for the entity’s id. For a software process, the debug log will
include the stdout and stderr of all the commands executed by that entity.
It can also be very useful to search for all effector invocations, to see where the behaviour has been triggered:
2016-05-27 12:45:43,529 DEBUG o.a.b.c.m.i.EffectorUtils [brooklyn-execmanager-gvP7MuZF-14364]: Invoking effector stop on TomcatServerImpl{id=mPujYmPd}
If you wish to send a detailed report, then depending on the nature of the problem, consider collecting the following information.
See the AMP Slow or Unresponsive docs for details of these commands.
BROOKLYN_HOME=/home/users/brooklyn/apache-brooklyn-0.9.0-bin
BROOKLYN_PID=$(cat $BROOKLYN_HOME/pid_java)
REPORT_DIR=/tmp/brooklyn-report/
DEBUG_LOG=${BROOKLYN_HOME}/brooklyn.debug.log
uname -a > ${REPORT_DIR}/uname.txt
df -h > ${REPORT_DIR}/df.txt
cat /proc/cpuinfo > ${REPORT_DIR}/cpuinfo.txt
cat /proc/meminfo > ${REPORT_DIR}/meminfo.txt
ulimit -a > ${REPORT_DIR}/ulimit.txt
cat /proc/${BROOKLYN_PID}/limits >> ${REPORT_DIR}/ulimit.txt
top -n 1 -b > ${REPORT_DIR}/top.txt
lsof -p ${BROOKLYN_PID} > ${REPORT_DIR}/lsof.txt
netstat -an > ${REPORT_DIR}/netstat.txt
jmap -histo:live ${BROOKLYN_PID} > ${REPORT_DIR}/jmap-histo.txt
jmap -heap ${BROOKLYN_PID} > ${REPORT_DIR}/jmap-heap.txt
for i in {1..10}; do
jstack ${BROOKLYN_PID} > ${REPORT_DIR}/jstack.${i}.txt
sleep 1
done
grep "brooklyn gc" ${DEBUG_LOG} > ${REPORT_DIR}/brooklyn-gc.txt
grep "events for subscriber" ${DEBUG_LOG} > ${REPORT_DIR}/events-for-subscriber.txt
tar czf brooklyn-report.tgz ${REPORT_DIR}
Also consider providing your log files and persisted state, though extreme care should be taken if these might contain cloud or machine credentials (especially if Externalised Configuration is not being used for credential storage).
The troubleshooting overview in AMP gives information for how to find more information about errors.
If that doesn’t give enough information to diagnose, fix or work around the problem, then it may be necessary to log in to the machine, to investigate further. This guide applies to entities that are types of “SoftwareProcess” in AMP, or that follow those conventions.
The ssh connection details for an entity are published to a sensor host.sshAddress
. The login
credentials will depend on the AMP configuration. The default is to use the ~/.ssh/id_rsa
or ~/.ssh/id_dsa
on the AMP host (uploading the associated ~/.ssh/id_rsa.pub
to the machine’s
authorized_keys). However, this can be overridden (e.g. with specific passwords etc) in the
location’s configuration.
For Windows, there is a similar sensor with the name host.winrmAddress
.
For ssh-based software processes, the install directory and the run directory are published as sensors
install.dir
and run.dir
respectively.
For some entities, files are unpacked into the install dir; configuration files are written to the run dir along with log files. For some other entities, these directories may be mostly empty - e.g. if installing RPMs, and that software writes its logs to a different standard location.
Most entities have a sensor log.location
. It is generally worth checking this, along with other files
in the run directory (such as console output).
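Sensor values such as these can also be read over the REST API rather than the web-console; for example (the host, credentials and ids below are placeholders):

curl \
  --insecure \
  --user admin:password \
  https://$AMP_HOST:8443/v1/applications/${APP_ID}/entities/${ENTITY_ID}/sensors/log.location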
It is worth checking that the process is running, e.g. using ps aux
to look for the desired process.
Some entities also write the pid of the process to pid.txt
in the run directory.
It is also worth checking if the required port is accessible. This is discussed in the guide
“Troubleshooting Server Connectivity Issues in the Cloud”, including listing the ports in use:
execute netstat -antp
(or on OS X netstat -antp TCP
) to list the TCP ports in use (or use
-anup
for UDP).
It is also worth checking the disk space on the server, e.g. using df -m
, to check that there
is sufficient space on each of the required partitions.
This guide takes a deep look at the Java and log messages for some failure scenarios, giving common steps used to identify the issues.
Many blueprints run bash scripts as part of the installation. This section highlights how to identify a problem with a bash script.
First let’s take a look at the customize()
method of the Tomcat server blueprint:
@Override
public void customize() {
newScript(CUSTOMIZING)
.body.append("mkdir -p conf logs webapps temp")
.failOnNonZeroResultCode()
.execute();
copyTemplate(entity.getConfig(TomcatServer.SERVER_XML_RESOURCE), Os.mergePaths(getRunDir(), "conf", "server.xml"));
copyTemplate(entity.getConfig(TomcatServer.WEB_XML_RESOURCE), Os.mergePaths(getRunDir(), "conf", "web.xml"));
if (isProtocolEnabled("HTTPS")) {
String keystoreUrl = Preconditions.checkNotNull(getSslKeystoreUrl(), "keystore URL must be specified if using HTTPS for " + entity);
String destinationSslKeystoreFile = getHttpsSslKeystoreFile();
InputStream keystoreStream = resource.getResourceFromUrl(keystoreUrl);
getMachine().copyTo(keystoreStream, destinationSslKeystoreFile);
}
getEntity().deployInitialWars();
}
Here we can see that it’s running a script to create four directories before continuing with the customization. Let’s
introduce an error by changing mkdir
to mkrid
:
newScript(CUSTOMIZING)
.body.append("mkrid -p conf logs webapps temp") // `mkdir` changed to `mkrid`
.failOnNonZeroResultCode()
.execute();
Now let’s try deploying this using the following YAML:
name: Tomcat failure test
location: localhost
services:
- type: org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
Shortly after deployment, the entity fails with the following error:
Failure running task ssh: customizing TomcatServerImpl{id=e1HP2s8x} (HmyPAozV):
Execution failed, invalid result 127 for customizing TomcatServerImpl{id=e1HP2s8x}
By selecting the Activities
tab, we can drill into the task that failed. The list of tasks shown (where the
effectors are shown as top-level tasks) are clickable links. Selecting that row will show the details of
that particular task, including its sub-tasks. We can eventually get to the specific sub-task that failed:
By clicking on the stderr
link, we can see the script failed with the following error:
/tmp/brooklyn-20150721-132251052-l4b9-customizing_TomcatServerImpl_i.sh: line 10: mkrid: command not found
This tells us what went wrong, but doesn’t tell us where. In order to find that, we’ll need to look at the stack trace that was logged when the exception was thrown.
It’s always worth looking at the Detailed Status section as sometimes this will give you the information you need. In this case, the stack trace is limited to the thread that was used to execute the task that ran the script:
Failed after 40ms
STDERR
/tmp/brooklyn-20150721-132251052-l4b9-customizing_TomcatServerImpl_i.sh: line 10: mkrid: command not found
STDOUT
Executed /tmp/brooklyn-20150721-132251052-l4b9-customizing_TomcatServerImpl_i.sh, result 127: Execution failed, invalid result 127 for customizing TomcatServerImpl{id=e1HP2s8x}
java.lang.IllegalStateException: Execution failed, invalid result 127 for customizing TomcatServerImpl{id=e1HP2s8x}
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper.logWithDetailsAndThrow(ScriptHelper.java:390)
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper.executeInternal(ScriptHelper.java:379)
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper$8.call(ScriptHelper.java:289)
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper$8.call(ScriptHelper.java:287)
at org.apache.brooklyn.core.util.task.DynamicSequentialTask$DstJob.call(DynamicSequentialTask.java:343)
at org.apache.brooklyn.core.util.task.BasicExecutionManager$SubmissionCallable.call(BasicExecutionManager.java:469)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
In order to find the exception, we’ll need to look in AMP’s debug log file. By default, the debug log file
is named brooklyn.debug.log
. Usually the easiest way to navigate the log file is to use less
, e.g.
less brooklyn.debug.log
. We can quickly find the stack trace by first navigating to the end of the log file
with Shift-G
, then performing a reverse-lookup by typing ?Tomcat
and pressing Enter
. If searching for the
blueprint type (in this case Tomcat) simply matches tasks unrelated to the exception, you can also search for
the text of the error message, in this case ? invalid result 127
. You can make the search case-insensitive by
typing -i
before performing the search. To skip the current match and move to the next one (i.e. ‘up’ as we’re
performing a reverse-lookup), simply press n.
In this case, the ?Tomcat
search takes us directly to the full stack trace (Only the last part of the trace
is shown here):
... at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:63) ~[guava-17.0.jar:na]
at org.apache.brooklyn.core.util.task.BasicTask.get(BasicTask.java:343) ~[classes/:na]
at org.apache.brooklyn.core.util.task.BasicTask.getUnchecked(BasicTask.java:352) ~[classes/:na]
... 9 common frames omitted
Caused by: brooklyn.util.exceptions.PropagatedRuntimeException:
at org.apache.brooklyn.util.exceptions.Exceptions.propagate(Exceptions.java:97) ~[classes/:na]
at org.apache.brooklyn.core.util.task.BasicTask.getUnchecked(BasicTask.java:354) ~[classes/:na]
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper.execute(ScriptHelper.java:339) ~[classes/:na]
at org.apache.brooklyn.entity.webapp.tomcat.TomcatSshDriver.customize(TomcatSshDriver.java:72) ~[classes/:na]
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver$8.run(AbstractSoftwareProcessDriver.java:150) ~[classes/:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_71]
at org.apache.brooklyn.core.util.task.DynamicSequentialTask$DstJob.call(DynamicSequentialTask.java:343) ~[classes/:na]
... 5 common frames omitted
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Execution failed, invalid result 127 for customizing TomcatServerImpl{id=e1HP2s8x}
at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.7.0_71]
at java.util.concurrent.FutureTask.get(FutureTask.java:188) [na:1.7.0_71]
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:63) ~[guava-17.0.jar:na]
at org.apache.brooklyn.core.util.task.BasicTask.get(BasicTask.java:343) ~[classes/:na]
at org.apache.brooklyn.core.util.task.BasicTask.getUnchecked(BasicTask.java:352) ~[classes/:na]
... 10 common frames omitted
Caused by: java.lang.IllegalStateException: Execution failed, invalid result 127 for customizing TomcatServerImpl{id=e1HP2s8x}
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper.logWithDetailsAndThrow(ScriptHelper.java:390) ~[classes/:na]
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper.executeInternal(ScriptHelper.java:379) ~[classes/:na]
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper$8.call(ScriptHelper.java:289) ~[classes/:na]
at org.apache.brooklyn.entity.software.base.lifecycle.ScriptHelper$8.call(ScriptHelper.java:287) ~[classes/:na]
... 6 common frames omitted
AMP’s use of tasks and helper classes can make the stack trace a little harder than usual to follow, but a good
place to start is to look through the stack trace for the node’s implementation or ssh driver classes (usually
named FooNodeImpl
or FooSshDriver
). In this case we can see the following:
at org.apache.brooklyn.entity.webapp.tomcat.TomcatSshDriver.customize(TomcatSshDriver.java:72) ~[classes/:na]
Combining this with the error message of mkrid: command not found
we can see that indeed mkdir
has been
misspelled mkrid
on line 72 of TomcatSshDriver.java
.
The section above gives an example of a failure that occurs when a script is run. In this section we will look at
a failure in a non-script related part of the code. We’ll use the customize()
method of the Tomcat server again,
but this time, we’ll correct the spelling of ‘mkdir’ and add a line that attempts to copy a nonexistent resource
to the remote server:
newScript(CUSTOMIZING)
.body.append("mkdir -p conf logs webapps temp")
.failOnNonZeroResultCode()
.execute();
copyTemplate(entity.getConfig(TomcatServer.SERVER_XML_RESOURCE), Os.mergePaths(getRunDir(), "conf", "server.xml"));
copyTemplate(entity.getConfig(TomcatServer.WEB_XML_RESOURCE), Os.mergePaths(getRunDir(), "conf", "web.xml"));
copyTemplate("classpath://nonexistent.xml", Os.mergePaths(getRunDir(), "conf", "nonexistent.xml")); // Resource does not exist!
Let’s deploy this using the same YAML from above. Here’s the resulting error in the AMP debug console:
Again, this tells us what the error is, but we need to find where the code is that attempts to copy this file. In this case it’s shown in the Detailed Status section, and we don’t need to go to the log file:
Failed after 221ms: Error getting resource 'classpath://nonexistent.xml' for TomcatServerImpl{id=PVZxDKU1}: java.io.IOException: Error accessing classpath://nonexistent.xml: java.io.IOException: nonexistent.xml not found on classpath
java.lang.RuntimeException: Error getting resource 'classpath://nonexistent.xml' for TomcatServerImpl{id=PVZxDKU1}: java.io.IOException: Error accessing classpath://nonexistent.xml: java.io.IOException: nonexistent.xml not found on classpath
at org.apache.brooklyn.core.util.ResourceUtils.getResourceFromUrl(ResourceUtils.java:297)
at org.apache.brooklyn.core.util.ResourceUtils.getResourceAsString(ResourceUtils.java:475)
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver.getResourceAsString(AbstractSoftwareProcessDriver.java:447)
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver.processTemplate(AbstractSoftwareProcessDriver.java:469)
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver.copyTemplate(AbstractSoftwareProcessDriver.java:390)
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver.copyTemplate(AbstractSoftwareProcessDriver.java:379)
at org.apache.brooklyn.entity.webapp.tomcat.TomcatSshDriver.customize(TomcatSshDriver.java:79)
at org.apache.brooklyn.entity.software.base.AbstractSoftwareProcessDriver$8.run(AbstractSoftwareProcessDriver.java:150)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at org.apache.brooklyn.core.util.task.DynamicSequentialTask$DstJob.call(DynamicSequentialTask.java:343)
at org.apache.brooklyn.core.util.task.BasicExecutionManager$SubmissionCallable.call(BasicExecutionManager.java:469)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Error accessing classpath://nonexistent.xml: java.io.IOException: nonexistent.xml not found on classpath
at org.apache.brooklyn.core.util.ResourceUtils.getResourceFromUrl(ResourceUtils.java:233)
... 14 more
Caused by: java.io.IOException: nonexistent.xml not found on classpath
at org.apache.brooklyn.core.util.ResourceUtils.getResourceViaClasspath(ResourceUtils.java:372)
at org.apache.brooklyn.core.util.ResourceUtils.getResourceFromUrl(ResourceUtils.java:230)
... 14 more
Looking for Tomcat
in the stack trace, we can see in this case the problem lies at line 79 of TomcatSshDriver.java.
Sometimes an entity will fail outside the direct commands issued by AMP. When installing and launching an entity, AMP will check the return code of scripts that were run to ensure that they completed successfully (i.e. the return code of the script is zero). It is possible, for example, that a launch script completes successfully, but the entity fails to start.
We can simulate this type of failure by launching Tomcat with an invalid configuration file. As seen in the previous
examples, AMP copies two xml configuration files to the server: server.xml
and web.xml
The first few non-comment lines of server.xml
are as follows (you can see the full file here):
<Server port="${driver.shutdownPort?c}" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JasperListener" />
Let’s add an unmatched XML element, which will make this XML file invalid:
<Server port="${driver.shutdownPort?c}" shutdown="SHUTDOWN">
<unmatched-element> <!-- This is invalid XML as we won't add </unmatched-element> -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JasperListener" />
As AMP doesn’t know how these types of resources are used, they’re not validated as they’re copied to the remote machine. As far as AMP is concerned, the file will have copied successfully.
Let’s deploy Tomcat again, using the same YAML as before. This time, the deployment runs for a few minutes before failing
with Timeout waiting for SERVICE_UP
:
If we drill down into the tasks in the Activities
tab, we can see that all of the installation and launch tasks
completed successfully, and stdout of the launch
script is as follows:
Executed /tmp/brooklyn-20150721-153049139-fK2U-launching_TomcatServerImpl_id_.sh, result 0
The task that failed was the post-start
task, and the stack trace from the Detailed Status section is as follows:
Failed after 5m 1s: Timeout waiting for SERVICE_UP from TomcatServerImpl{id=BUHgQeOs}
java.lang.IllegalStateException: Timeout waiting for SERVICE_UP from TomcatServerImpl{id=BUHgQeOs}
at org.apache.brooklyn.core.entity.Entities.waitForServiceUp(Entities.java:1073)
at org.apache.brooklyn.entity.software.base.SoftwareProcessImpl.waitForServiceUp(SoftwareProcessImpl.java:388)
at org.apache.brooklyn.entity.software.base.SoftwareProcessImpl.waitForServiceUp(SoftwareProcessImpl.java:385)
at org.apache.brooklyn.entity.software.base.SoftwareProcessDriverLifecycleEffectorTasks.postStartCustom(SoftwareProcessDriverLifecycleEffectorTasks.java:164)
at org.apache.brooklyn.entity.software.base.lifecycle.MachineLifecycleEffectorTasks$7.run(MachineLifecycleEffectorTasks.java:433)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at org.apache.brooklyn.core.util.task.DynamicSequentialTask$DstJob.call(DynamicSequentialTask.java:343)
at org.apache.brooklyn.core.util.task.BasicExecutionManager$SubmissionCallable.call(BasicExecutionManager.java:469)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
This doesn’t really tell us what we need to know, and looking in the brooklyn.debug.log
file yields no further
clues. The key here is the error message Timeout waiting for SERVICE_UP
. After running the installation and
launch scripts, assuming all scripts completed successfully, AMP will periodically check the health of the node
and will set the node on fire if the health check does not pass within a pre-defined period (the default is
two minutes, and can be configured using the start.timeout
config key). The periodic health check also continues
after the successful launch in order to check continued operation of the node, but in this case it fails to pass
at all.
The first thing we need to do is to find out how AMP determines the health of the node. The health-check is
often implemented in the isRunning()
method in the entity’s ssh driver. Tomcat’s implementation of isRunning()
is as follows:
@Override
public boolean isRunning() {
return newScript(MutableMap.of(USE_PID_FILE, "pid.txt"), CHECK_RUNNING).execute() == 0;
}
The newScript
method has conveniences for default scripts to check if a process is running based on its PID. In this
case, it will look for Tomcat’s PID in the pid.txt
file and check if the PID is the PID of a running process
It’s worth a quick sanity check at this point to check if the PID file exists, and if the process is running.
By default, the pid file is located in the run directory of the entity. You can find the location of the entity’s run
directory by looking at the run.dir
sensor. In this case it is /tmp/brooklyn-martin/apps/jIzIHXtP/entities/TomcatServer_BUHgQeOs
.
To find the pid, you simply cat the pid.txt file in this directory:
$ cat /tmp/brooklyn-martin/apps/jIzIHXtP/entities/TomcatServer_BUHgQeOs/pid.txt
73714
In this case, the PID in the file is 73714. You can then check if the process is running using ps
. You can also
pipe the output to fold
so the full launch command is visible:
$ ps -p 73714 | fold -w 120
PID TTY TIME CMD
73714 ?? 0:08.03 /Library/Java/JavaVirtualMachines/jdk1.8.0_51.jdk/Contents/Home/bin/java -Dnop -Djava.util.logg
ing.manager=org.apache.juli.ClassLoaderLogManager -javaagent:/tmp/brooklyn-martin/apps/jIzIHXtP/entities/TomcatServer_BU
HgQeOs/brooklyn-jmxmp-agent-shaded-0.8.0-SNAPSHOT.jar -Xms200m -Xmx800m -XX:MaxPermSize=400m -Dcom.sun.management.jmxrem
ote -Dbrooklyn.jmxmp.rmi-port=1099 -Dbrooklyn.jmxmp.port=31001 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.manage
ment.jmxremote.authenticate=false -Djava.endorsed.dirs=/tmp/brooklyn-martin/installs/TomcatServer_7.0.56/apache-tomcat-7
.0.56/endorsed -classpath /tmp/brooklyn-martin/installs/TomcatServer_7.0.56/apache-tomcat-7.0.56/bin/bootstrap.jar:/tmp/
brooklyn-martin/installs/TomcatServer_7.0.56/apache-tomcat-7.0.56/bin/tomcat-juli.jar -Dcatalina.base=/tmp/brooklyn-mart
in/apps/jIzIHXtP/entities/TomcatServer_BUHgQeOs -Dcatalina.home=/tmp/brooklyn-martin/installs/TomcatServer_7.0.56/apache
-tomcat-7.0.56 -Djava.io.tmpdir=/tmp/brooklyn-martin/apps/jIzIHXtP/entities/TomcatServer_BUHgQeOs/temp org.apache.catali
na.startup.Bootstrap start
This confirms that the process is running. The next thing we can look at is the service.notUp.indicators sensor, which reads as follows:
{"service.process.isRunning":"The software process for this entity does not appear to be running"}
This indicator confirms that the problem is indeed due to the service.process.isRunning sensor. We assumed earlier that this was set by the isRunning() method in TomcatSshDriver.java, but this isn’t always the case. The service.process.isRunning sensor is wired up by the connectSensors() method in the node’s implementation class, in this case TomcatServerImpl.java. Tomcat’s implementation of connectSensors() is as follows:
@Override
public void connectSensors() {
    super.connectSensors();
    if (getDriver().isJmxEnabled()) {
        String requestProcessorMbeanName = "Catalina:type=GlobalRequestProcessor,name=\"http-*\"";
        Integer port = isHttpsEnabled() ? getAttribute(HTTPS_PORT) : getAttribute(HTTP_PORT);
        String connectorMbeanName = format("Catalina:type=Connector,port=%s", port);
        jmxWebFeed = JmxFeed.builder()
                .entity(this)
                .period(3000, TimeUnit.MILLISECONDS)
                .pollAttribute(new JmxAttributePollConfig<Integer>(ERROR_COUNT)
                        .objectName(requestProcessorMbeanName)
                        .attributeName("errorCount"))
                .pollAttribute(new JmxAttributePollConfig<Integer>(REQUEST_COUNT)
                        .objectName(requestProcessorMbeanName)
                        .attributeName("requestCount"))
                .pollAttribute(new JmxAttributePollConfig<Integer>(TOTAL_PROCESSING_TIME)
                        .objectName(requestProcessorMbeanName)
                        .attributeName("processingTime"))
                .pollAttribute(new JmxAttributePollConfig<String>(CONNECTOR_STATUS)
                        .objectName(connectorMbeanName)
                        .attributeName("stateName"))
                .pollAttribute(new JmxAttributePollConfig<Boolean>(SERVICE_PROCESS_IS_RUNNING)
                        .objectName(connectorMbeanName)
                        .attributeName("stateName")
                        .onSuccess(Functions.forPredicate(Predicates.<Object>equalTo("STARTED")))
                        .setOnFailureOrException(false))
                .build();
        jmxAppFeed = JavaAppUtils.connectMXBeanSensors(this);
    } else {
        // if not using JMX
        LOG.warn("Tomcat running without JMX monitoring; limited visibility of service available");
        connectServiceUpIsRunning();
    }
}
We can see here that if JMX is not enabled, the method calls connectServiceUpIsRunning(), which uses the default PID-based method of determining whether a process is running. However, as JMX is enabled in this deployment, the service.process.isRunning sensor (denoted here by the SERVICE_PROCESS_IS_RUNNING variable) is set to true if and only if the stateName JMX attribute equals STARTED. We can see from the previous call to .pollAttribute that this attribute is also published to the CONNECTOR_STATUS sensor. The CONNECTOR_STATUS sensor is defined as follows:
AttributeSensor<String> CONNECTOR_STATUS =
        new BasicAttributeSensor<String>(String.class, "webapp.tomcat.connectorStatus", "Catalina connector state name");
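As an aside, if you are writing your own entity, the Sensors convenience factory in brooklyn-core offers a more compact way to declare such sensors. The following sketch is an equivalent definition under that assumption:

import org.apache.brooklyn.api.sensor.AttributeSensor;
import org.apache.brooklyn.core.sensor.Sensors;

// Equivalent sensor definition using the Sensors convenience factory.
AttributeSensor<String> CONNECTOR_STATUS =
        Sensors.newStringSensor("webapp.tomcat.connectorStatus", "Catalina connector state name");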
Let’s go back to the AMP debug console and look for the webapp.tomcat.connectorStatus sensor:
As the sensor is not shown, it’s likely that it is simply null or not set. We can check this by clicking the “Show/hide empty records” icon:
We know from previous steps that the installation and launch scripts completed, and we know the process is running, but we can see here that the server is not responding to JMX requests. A good thing to check here is that the JMX port is not being blocked by iptables, firewalls or security groups (see the troubleshooting connectivity guide); a minimal connectivity check is sketched below. Let’s assume that we’ve checked all of that and the ports are open. There is still one more thing that AMP can tell us.
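Before moving on, here is a quick way to test whether the JMX port is even reachable from the AMP server, using only the plain JDK. This is an illustrative sketch: the hostname is a placeholder, and the port (31001) is taken from the brooklyn.jmxmp.port system property visible in the ps output above, so substitute the values for your own deployment:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class JmxPortProbe {
    public static void main(String[] args) {
        String host = "target-host.example.com";  // placeholder: the machine running Tomcat
        int port = 31001;                         // brooklyn.jmxmp.port from the launch command above

        try (Socket socket = new Socket()) {
            // A successful connect only proves the port is reachable; a timeout or
            // "connection refused" points at firewalls or security groups.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        } catch (IOException e) {
            System.out.println("Could not connect to " + host + ":" + port + ": " + e);
        }
    }
}

A successful TCP connection does not prove that JMX itself is healthy, but a timeout or connection-refused error strongly suggests a network-level problem.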
Still on the Sensors tab, let’s take a look at the log.location sensor:
/tmp/brooklyn-martin/apps/c3bmrlC3/entities/TomcatServer_C1TAjYia/logs/catalina.out
This is the location of Tomcat’s own log file. The location of the log file will differ from process to process, and when writing a custom entity you will need to check the software’s own documentation. If your blueprint’s ssh driver extends JavaSoftwareProcessSshDriver, the value returned by the getLogFileLocation() method will automatically be published to the log.location sensor. Otherwise, you can publish the value yourself by calling entity.setAttribute(Attributes.LOG_FILE_LOCATION, getLogFileLocation()); in your ssh driver.
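For example, a custom driver might publish its log location as follows. This is a minimal sketch only: the class name and log path are illustrative assumptions, and the usual install/customize/launch/stop lifecycle methods are omitted:

import org.apache.brooklyn.api.entity.EntityLocal;
import org.apache.brooklyn.entity.java.JavaSoftwareProcessSshDriver;
import org.apache.brooklyn.location.ssh.SshMachineLocation;
import org.apache.brooklyn.util.os.Os;

// Illustrative sketch of a custom ssh driver; declared abstract because the
// remaining lifecycle methods (install, customize, launch, stop, isRunning) are omitted.
public abstract class MyServerSshDriver extends JavaSoftwareProcessSshDriver {

    public MyServerSshDriver(EntityLocal entity, SshMachineLocation machine) {
        super(entity, machine);
    }

    // Published automatically to the log.location sensor by JavaSoftwareProcessSshDriver.
    @Override
    protected String getLogFileLocation() {
        return Os.mergePathsUnix(getRunDir(), "logs/server.log");
    }
}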
Note: The log file will be on the server to which you have deployed Tomcat, and not on the AMP server. Let’s take a look in the log file:
$ less /tmp/brooklyn-martin/apps/c3bmrlC3/entities/TomcatServer_C1TAjYia/logs/catalina.out
Jul 21, 2015 4:12:12 PM org.apache.tomcat.util.digester.Digester fatalError
SEVERE: Parse Fatal Error at line 143 column 3: The element type "unmatched-element" must be terminated by the matching end-tag "</unmatched-element>".
org.xml.sax.SAXParseException; systemId: file:/tmp/brooklyn-martin/apps/c3bmrlC3/entities/TomcatServer_C1TAjYia/conf/server.xml; lineNumber: 143; columnNumber: 3; The element type "unmatched-element" must be terminated by the matching end-tag "</unmatched-element>".
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1437)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1749)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2973)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1561)
at org.apache.catalina.startup.Catalina.load(Catalina.java:615)
at org.apache.catalina.startup.Catalina.start(Catalina.java:677)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:321)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:455)
Jul 21, 2015 4:12:12 PM org.apache.catalina.startup.Catalina load
WARNING: Catalina.start using conf/server.xml: The element type "unmatched-element" must be terminated by the matching end-tag "</unmatched-element>".
Jul 21, 2015 4:12:12 PM org.apache.catalina.startup.Catalina start
SEVERE: Cannot start server. Server instance is not configured.
As expected, we can see here that the unmatched-element element has not been terminated in the server.xml file