Building a Maven repository by combining S3, Nexus, and Lambda
Originally published: 2016-10-28
Last updated: 2016-10-28
Organizations that use Maven (or other tools that consume Maven-style repositories, like Gradle and Leiningen) have a quandary: how to deploy locally-produced artifacts. There are a few standard approaches:
- Shared directory as “local” repository
I don't know of any teams that actually do this, perhaps because sharing directories on Linux is non-trivial. This approach also doesn't scale well for teams where multiple people are working on the same project, because each person's snapshot builds will overwrite the others'.
- scp to a remote directory, which is served with Apache or another webserver
This was the standard approach to maintaining a local repository in the early days (before repository servers became common). It has the benefit that each user has a unique set of credentials, his or her private key. But it's a lot of work to set up, and leaves open the possibility of accidentally overwriting artifacts.
- Amazon S3, using a third-party wagon
Maven (and its relatives) abstracts file management to a “wagon” implementation. So you can replace the built-in HTTP- or SCP-based access to a repository with whatever you want. There are several wagons available that will read and write to the Amazon S3 storage system. For this post I'm using the Spring wagon.
Like an SCP-based solution, S3 provides built-in credentialing: each user has a unique access key. Unlike the SCP-based solutions, you don't need a webserver, and the access key also controls who can download the files — which may or may not be a benefit.
- A repository manager
Repository managers provide the most flexibility: not only can they store the artifacts that you build locally, but they can also proxy remote repositories. This means that you no longer need a long list of repositories in each POM; instead, you specify the repository server as a mirror of Maven Central in your settings.xml. Moreover, a repository manager can be used to control what artifacts are allowed in the build: rather than simply proxying the rest of the world, the repository manager can serve all artifacts from curated local storage. This can be very useful in a regulated corporate environment.
The big downside to a repository manager, at least for small teams, is authentication. While the popular managers support LDAP, if that's not your team's mechanism of choice you'll have to maintain a separate set of credentials for each user.
An optimal solution would combine the “already there” credentialing of SCP or S3 with the flexibility of a repository manager. This post describes one approach to that, using S3 to deploy artifacts and an AWS Lambda process to push them to a repository manager.
I will say up front that the deployment process that I describe is only useful for organizations that already deploy to S3 and want to integrate a local repository server. If you don't already have a deployment process, there's an easier way; you'll have to scroll to the end of the article to see it.
The bigger audience for this article is people who are interested in integrating AWS Lambda with their development process. While it's very easy to get started with one of the Lambda “blueprints,” there are some gotchas when implementing a real application (especially with regard to private and public network access).
Step 1: Configure AWS
I'm going to assume that you're already familiar with AWS services, so I won't give detailed directions. At each step I link to the appropriate page in the AWS console, and follow that with what I consider to be best practices. You'll find links to the relevant AWS documentation at the end of this article.
Create a bucket to use as a deployment target
Individual developers will publish to this bucket using the Maven Deploy and Release plugins. I used the name “maven-deployment” for this article; you'll need to pick a different name, because bucket names are globally unique. I recommend a DNS-based name, such as “com-example-maven-deployment”.
By default, deployed artifacts will stay in the bucket forever. This can be a good (and relatively cheap) way to back up your release artifacts. However, if you deploy snapshots you'll be paying to store obsolete files. To avoid this problem, add a lifecycle rule that deletes files with the prefix “snapshots/” after they've been in the bucket for a day (or longer, if you think you'll ever want to manually retrieve old snapshots).
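If you'd rather script the lifecycle rule than click through the console, something along these lines should work. This is a sketch using the AWS CLI; the rule ID is arbitrary, and the bucket name and one-day expiration are just the values discussed above:

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-snapshots",
      "Filter": { "Prefix": "snapshots/" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket maven-deployment \
    --lifecycle-configuration file://lifecycle.json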
Create policies to control access to this bucket
One policy will be for your developers, and will allow them to list the bucket and to store and retrieve objects. The other will be for the “republisher” process, and is read-only.
To create the policies, you need to know the ARN of the bucket, which has the form “arn:aws:s3:::BUCKETNAME”. Normally an ARN includes the account ID, but because buckets have global names, it's omitted for bucket ARNs.
Here is my policy for the developers, adapted from this AWS blog post and adjusted for the permissions that the S3 wagon needs. I've named it “maven-deployer”, as it will be used by the developers when deploying.
Note that there are two sections to the policy: the first allows access to the bucket, while the second allows access to the objects held in the bucket. Also note that the allowed actions are specified with wildcards: there are several operations around getting and putting objects, and rather than analyze the wagon code (which I had to do for GetBucketLocation) it's easier to grant full access.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::maven-deployment"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:Put*"
      ],
      "Resource": [
        "arn:aws:s3:::maven-deployment/*"
      ]
    }
  ]
}
The policy for the republisher process is almost the same, but omits the “s3:Put*” action. I've named it “maven-republisher”.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::maven-deployment"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::maven-deployment/*"
      ]
    }
  ]
}
Note that these policies are independent from any other bucket access policies. For example, you could grant full access to all of your users using one of the Amazon managed policies, or via direct bucket permissions, which would make these policies irrelevant.
Create IAM users for each of your developers and put them in a developer group
When creating the users you have the option to create and download access keys; do this, and hand them out to your users. Assuming that you allow logins to the AWS console, your users can always generate access keys; it's just easier to download them now, in a batch.
It may be tempting to create a single “deployment” user — or even hand out the root access keys to everyone — but in the long run it's much better to have individual users. For one thing, when one of your developers leaves, it's easy to disable their access.
When creating the group, attach the “maven-deployer” policy to it, and add all of the users that you created in the previous step.
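If you prefer to script user and group creation, a rough AWS CLI equivalent looks like this. The group and user names are examples, and the policy ARN depends on your account ID:

# create the group and attach the deployment policy to it
aws iam create-group --group-name maven-deployers
aws iam attach-group-policy --group-name maven-deployers \
    --policy-arn arn:aws:iam::123456789012:policy/maven-deployer

# create a user, add them to the group, and generate their access key
aws iam create-user --user-name jdoe
aws iam add-user-to-group --group-name maven-deployers --user-name jdoe
aws iam create-access-key --user-name jdoe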
Configure VPC
If you signed up for AWS after December 4, 2013, you already have a VPC. If you signed up before that date, you might still be running “EC2 Classic,” and will need to create a VPC for this article. If that's the case, I recommend following the “scenario 2” docs, creating a VPC with public and private subnets.
For this article I created a new subnet, “nexus-lambda”, which will be used for both the Nexus server and the Lambda process. I also created a route table for this subnet (note that you must explicitly assign the route table to the subnet).
Since the Nexus server will be accessed from the Internet, this subnet has to be public. That means that, in addition to the “local” route that is part of every new route table, I added a route for 0.0.0.0/0, with my VPC's Internet Gateway as its target.
Since a Lambda function doesn't have a public IP, it can't use the Internet Gateway, and therefore wouldn't be able to access S3. Fortunately, AWS allows you to create an endpoint for S3 within your VPC and attach it to the route table. Unfortunately, not all AWS services can be exposed as endpoints within a VPC; depending on what services you use, you might need to put your Lambda function in a private subnet with a NAT.
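Creating the endpoint can also be done from the command line. Here's a sketch; the VPC ID, route table ID, and region are placeholders that you'd replace with your own values:

# create an S3 endpoint and attach it to the subnet's route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-12345678 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-12345678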
Create an EC2 instance for the Nexus server
I'm using EC2 for the repository server because this is an AWS-centric article. You could run the server within your premises and save on costs, but you'd need to configure your firewall to allow access for Lambda (and also add a NAT to the Lambda subnet).
Here are some things to think about when launching this instance (ordered by the page in the launch wizard where they apply):
- I recommend using the Amazon Linux AMI, because it has the latest command-line tools. If you choose a different AMI, plan to install the tools manually rather than downloading from the package manager (Ubuntu, in particular, installs a very outdated toolset).
- The “t2.small” instance type (with 2 GB of memory and one virtual CPU) is sufficient to run Nexus. It currently costs $228 per year (less if you reserve the instance).
- In the “Configure Instance Details” page, be sure to select the subnet that you created above. You also want the instance to get a public IP address; verify that the subnet's auto-assign setting is “Enable”, or enable it explicitly for this instance. Lastly for this page, enable termination protection: you don't want one bad click to take down your repository.
- In the “Add Storage” page, add another volume to hold the repository. The size of this volume will depend on how many artifacts you use; it's OK to start low (say 30 GB), because you can replace this volume with a larger one in the future.
Do not check the “Delete on Termination” flag for this volume. Since it will be the sole copy of your company-produced artifacts, it needs to stay around forever. And sometimes instances do need to be terminated.
Note that you'll need to format and mount this drive once the instance starts (example commands follow this list). Also be sure to add it to /etc/fstab so that it will be mounted whenever the instance reboots.
- It's a good idea to create a new security group for this instance (and remember to change the name so that it indicates where it's used!). At a minimum this security group must allow access from within the VPC as well as from your office. I tend to allow all traffic from those sites, rather than mucking with ports. Also, think about how you want your developers to access the machine: it's easy to allow access to port 8081 from everywhere, but that means that anybody can see your corporate artifacts. Better is to individually add developers' home IP addresses, but those can change. Best is to use a VPN so that your developers do everything from an office IP.
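For reference, here's roughly what formatting and mounting the repository volume looks like on the Amazon Linux AMI. This is a sketch: the device name (/dev/xvdb) and mount point (/data) are assumptions that depend on how you attached the volume and where you want the repository to live:

sudo mkfs -t ext4 /dev/xvdb        # format the new volume (first boot only)
sudo mkdir /data                   # create the mount point
sudo mount /dev/xvdb /data         # mount it now
# remount automatically after a reboot
echo "/dev/xvdb /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab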
Once the instance launches, add a DNS entry for its public IP. For this article I use the (unresolvable) name “nexus.example.com”. If you're not able to update your company's DNS records, you could use the raw IP address or the Amazon-generated DNS name. Just be aware that these will change if you ever stop the instance (while you may not intentionally stop the instance, hardware sometimes fails).
I'm going to leave post-startup configuration until the next section, but at this point you should verify that you can log in to the instance and, using the credentials for one of your development users, access the bucket.
echo "foo" > /tmp/foo.txt
aws s3 cp /tmp/foo.txt s3://maven-deployment/test/
aws s3 ls s3://maven-deployment/test/
aws s3 cp s3://maven-deployment/test/foo.txt /tmp/bar.txt
If you don't get any errors, you're good to move on. If you do get errors, check the user/group assignment first, and then check the policy contents and whether it's been assigned to the developer group.
Step 2: Configure Nexus
There are several repository managers available. I prefer Nexus because it's the repository manager used for Maven Central. There is a paid version and a free version; the former gives you more features, but the latter should be sufficient for a small team (and you can upgrade later if you decide it's worthwhile).
As of this writing, there are also two supported versions of Nexus: 2.x and 3.x. The latter has more features, but requires JDK 1.8. The former runs with JDK 1.7, which is what the Amazon AMI uses by default. Since it has full support for Maven, that's what I'll use; if you choose 3.x, some of the steps/links below might not be relevant.
Download here, and unpack in the /nexus directory. Then perform the following steps for basic configuration:
- Configure Nexus to run as a service, so that it will automatically restart if the machine reboots. While the documentation says to create a new user, you can use “ec2-user”; this service will be the only thing running on the machine, and it will be easier to manage if you have only one login.
- Start the service, by running sudo service nexus start from the command-line (this is included in the directions for setting up Nexus as a service, but I want to reiterate it here — none of the following steps will work if Nexus isn't running). Verify that it's running by connecting to http://nexus.example.com:8081/nexus (changing the hostname to whatever you're using).
- Change the admin and deployment user passwords. It's not immediately apparent how to change a user's password: go to the user list, right-click on the username, and you'll get a popup with the option to set/reset the user's password. Nexus has well-known default passwords, and you're exposing it to the Internet; changing passwords should be the first thing that you do.
- Configure each development user's settings.xml to refer to Nexus (a sketch appears after this list). To explain what this does: Maven searches a list of repositories for dependencies, combining the repositories specified in the POM with the built-in “central” repository. The <mirror> section in settings.xml says that all user-specified repositories should go to Nexus, while the <repositories> and <pluginRepositories> sections override the built-in default.
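Here is a minimal sketch of that settings.xml, following the standard Nexus-as-mirror configuration that Sonatype documents; adjust the hostname to match your server. The repository URL inside the profile is a dummy, because the mirror intercepts every request anyway; the profile exists only to enable snapshots for the overridden “central” definition.

<settings>
  <mirrors>
    <mirror>
      <!-- send all repository requests to the Nexus public group -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com:8081/nexus/content/groups/public/</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <!-- override the built-in central repository so that snapshots are enabled;
           the URL is never contacted because the mirror takes precedence -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>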
At this point you have a fully functional Nexus repository. If you try building one of your projects, rather than seeing downloads from https://repo.maven.apache.org/maven2, the artifacts should come from http://nexus.example.com:8081/nexus/content/groups/public/. And if you log into the Nexus instance, you should see that these artifacts now reside in the “sonatype-work” directory.
Create two new hosted repositories, “s3-snapshots” and “s3-releases”. Nexus comes configured with the hosted repositories “snapshots” and “releases”, and you could use them, but you'll have to change all URLs in the rest of this article. I think it's easier to delete the existing repositories and create new ones.
Once you've created the repositories, add them to the public group. Hosted repositories should appear in the group list before any proxy repositories; this will prevent Nexus from making pointless requests to the remote repositories.
Now it's time to test your repository. Deployments are a simple HTTP PUT command, and the directory structure of your hosted repository exactly matches the structure of your local repository. Which means that you can use curl to deploy. So, assuming that you have a project with group ID “com.example”, artifact ID “deployment”, and version “1.0.0-SNAPSHOT”, the following commands will deploy the most recent POM and JAR for this artifact (again, you'll have to replace “nexus.example.com” with your hostname).
curl -u deployment:deployment123 \
     -T $HOME/.m2/repository/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar \
     http://nexus.example.com:8081/nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar

curl -u deployment:deployment123 \
     -T $HOME/.m2/repository/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.pom \
     http://nexus.example.com:8081/nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.pom
If successful, curl won't return anything. If the repository isn't available at the URL you used, it will return an error page. And if the security group isn't set up properly, the command will appear to do nothing for five minutes, and then time out.
You can verify that the deployment worked by selecting the snapshots repository in the Nexus dashboard and clicking the “Browse Storage” tab (you may need to refresh the tree). Or, you can go into a project that depends on that snapshot, clear the snapshot from the user's local repository, and run a build. You should see the snapshot artifact being downloaded from Nexus.
Step 3: Deploy to S3
There are many wagons that you can use to deploy to S3; for this example I'm using the Spring wagon. This plugin is easy to use, and requires a few minor changes to your project POMs (or better, a cross-project parent POM). I've created a sample POM to get you started; here are the important points:
- Add the wagon into the set of build extensions. Note that I use a property to identify the wagon's version; as of this writing it's 5.0.0.RELEASE.
<project xmlns="http://maven.apache.org/POM/4.0.0">
    ...
    <build>
        <extensions>
            <extension>
                <groupId>org.springframework.build</groupId>
                <artifactId>aws-maven</artifactId>
                <version>${spring-s3-wagon.version}</version>
            </extension>
            ...
- Ensure that the Maven Release Plugin is in the build configuration. Again I'm using a property for the plugin version; it's currently 2.5.3. This plugin configuration should not be necessary, but I've found that I'm unable to deploy releases without it.
<project xmlns="http://maven.apache.org/POM/4.0.0">
    ...
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-release-plugin</artifactId>
                <version>${maven-release-plugin.version}</version>
            </plugin>
            ...
- Add a distributionManagement section that specifies the deployment bucket. Note that there are different paths for snapshots and releases.
<project xmlns="http://maven.apache.org/POM/4.0.0">
    ...
    <distributionManagement>
        <repository>
            <id>aws-release</id>
            <name>AWS Release Repository</name>
            <url>s3://maven-deployment/releases</url>
        </repository>
        <snapshotRepository>
            <id>aws-snapshot</id>
            <name>AWS Snapshot Repository</name>
            <url>s3://maven-deployment/snapshots</url>
        </snapshotRepository>
    </distributionManagement>
    ...
- Add an scm section that specifies the shared source repository URL. The release plugin will make several commits to the repository, including a tag, and needs to be able to write those to a place that others can see. For this example I'm using Git, and forked from a repository on my local machine.
<project xmlns="http://maven.apache.org/POM/4.0.0">
    ...
    <scm>
        <developerConnection>scm:git:ssh://localhost/tmp/example</developerConnection>
    </scm>
    ...
- Ensure that all of your developers have configured their AWS credentials. While the aws-maven docs suggest that credentials be added to settings.xml, I believe that a much better approach is to store them in the environment, as described in the sidebar above (and sketched after this list). Doing so means that there's only one place for the developer to update credentials (should that be necessary), and it also lessens the chance of accidentally sharing credentials when helping someone to configure their own settings.xml.
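A minimal sketch of the environment-based approach, assuming the wagon picks up the standard AWS SDK environment variables; the key values here are the well-known AWS documentation placeholders. Put these lines in each developer's shell profile:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE                              # access key from the IAM console
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY      # the matching secret key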
At this point you should be able to deploy the snapshot version of your project. From the command line, run mvn deploy and verify that the last lines of the output (before “BUILD SUCCESS”) consist of uploading or downloading files with the correct name.
...
[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
...
Uploaded: s3://maven-deployment/snapshots/com/example/deployment/maven-metadata.xml (281 B at 0.6 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
...
If you don't have your credentials or policy properly configured, you'll get a 403 (Access Denied) error when you attempt to retrieve the metadata file.
...
[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
Downloading: s3://maven-deployment/snapshots/com/example/deployment/1.0.2-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata com.example:deployment:1.0.2-SNAPSHOT/maven-metadata.xml from/to aws-snapshot (s3://maven-deployment/snapshots): Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 79BF84BF644C4BC2, AWS Error Code: AccessDenied, AWS Error Message: Access Denied
...
The first thing to check if this happens is that the user's credentials are correct, and that the user is part of the “maven-deployers” group. If those check out, look at the policy document: verify that you spelled the bucket name correctly in the Resource section, and that the get and put statement has a wildcard at the end of the bucket name.
You will get a different message if the environment isn't correctly set up:
...
[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
Downloading: s3://maven-deployment/snapshots/com/example/deployment/1.0.2-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata com.example:deployment:1.0.2-SNAPSHOT/maven-metadata.xml from/to aws-snapshot (s3://maven-deployment/snapshots): Unable to load AWS credentials from any provider in the chain
...
Step 4: Republishing via Lambda
At this point we have two pieces of the puzzle in place: a way to deploy artifacts to S3, and a repository server that will accept new artifacts via PUT requests. The last piece of the puzzle is to trigger a PUT whenever a new artifact is uploaded to S3. This is where AWS Lambda comes in: one of its common use cases is responding to S3 bucket events.
For this article I decided to write the republisher function in Python; Lambda also supports Java and JavaScript. I rejected Java due to JVM startup time: I expect these scripts to have a very short runtime, so startup time is significant overhead. And I rejected JavaScript because I wanted to keep the example in a single file: in the real world I'd use the async library to avoid callback hell, but that would mean creating a deployment package for Lambda. With Python, everything can be handled using either the standard library or the AWS SDK, so the function remains a simple script.
Here, then, is the code:
from __future__ import print_function

import base64
import boto3
import httplib
import re
import tempfile
import urllib

NEXUS_HOST = '172.30.0.120'
NEXUS_PORT = 8081
NEXUS_BASE_PATH = '/nexus/content/repositories/'
NEXUS_SNAPSHOT_PATH = NEXUS_BASE_PATH + 's3-snapshots/'
NEXUS_RELEASE_PATH = NEXUS_BASE_PATH + 's3-releases/'

AUTH_HEADER = "Basic " + base64.b64encode('deployment:deployment123')

print('Loading function')

s3 = boto3.resource('s3')


def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    if should_process(key):
        print('processing: ' + key)
        staging_file = tempfile.TemporaryFile()
        try:
            download_to_staging(event, bucket, key, staging_file)
            upload_to_nexus(staging_file, key)
            return True
        except Exception as e:
            print('Error processing ' + key)
            print(e)
            raise e
        finally:
            staging_file.close()
    else:
        print('ignoring: ' + key)


def should_process(key):
    return (key.startswith('snapshots/') or key.startswith('releases/')) \
           and (not key.endswith('/')) \
           and (key.find('maven-metadata.xml') == -1)


def download_to_staging(event, bucket, key, staging_file):
    s3.Object(bucket, key).download_fileobj(staging_file)
    staging_file.flush()
    print("downloaded {} bytes; reported size in event is {}"
          .format(staging_file.tell(), event['Records'][0]['s3']['object']['size']))


def upload_to_nexus(staging_file, key):
    request_path = get_destination_url(key)
    print("uploading file to: http://{}:{}{}".format(NEXUS_HOST, NEXUS_PORT, request_path))
    cxt = httplib.HTTPConnection(NEXUS_HOST, NEXUS_PORT)
    try:
        staging_file.seek(0)
        cxt.request("PUT", request_path, staging_file, { "Authorization": AUTH_HEADER })
        response = cxt.getresponse()
        print("response status: {}".format(response.status))
    finally:
        cxt.close()


def get_destination_url(key):
    if key.startswith("snapshots/"):
        return NEXUS_SNAPSHOT_PATH + re.sub(r'-\d{8}\.\d{6}-\d+', '-SNAPSHOT', key[10:])
    elif key.startswith("releases/"):
        return NEXUS_RELEASE_PATH + key[9:]
    else:
        raise Exception("invalid key (should not get here): " + key)
I think that's mostly self-documenting, but here are a few key points:
- The general approach is to download the artifact from S3 into a staging file, then upload the staging file to Nexus. Lambda provides a limited amount of space in /tmp, and it simplifies the code to use it. In a production application I might think about copying using a loop (the S3 GetObject operation returns a byte range, and may require multiple requests to return the entire file).
- To avoid any concurrency issues, I use a temporary staging file. This file will be deleted when the file handle is closed, so I have to jump through some hoops in order to use it both as the destination for the download and the source for the upload. I would prefer to just use os.tmpnam() to generate the filename, but that would fill the logs with (in this case spurious) security warnings.
- AWS recommends that you do not use public hostnames in your functions, because they add time for DNS lookup. Instead, I used the variable NEXUS_HOST to hold the repository server's internal IP. EC2 will retain the internal IP if you stop and restart a server, but not if you terminate and re-create it; you'll need to edit the script if the address changes.
- That brings up the whole issue of runtime configuration. The short answer is that you're on your own: Lambda does not provide a standard way for you to store configuration properties. One approach is to store a config file in S3, perhaps encrypted using KMS. For this article that would be distracting, so the deployment credentials are likewise stored in the script, unencrypted. Don't do this in your production scripts.
- The S3 wagon creates objects representing the intermediate directories in the artifact path. I'm not sure why it does this — S3 keys are not actual file paths — but I want to ignore any events where the key represents a directory. The should_process function checks for this, as well as verifying that we're processing an actual snapshot or release.
- The maven-metadata.xml file is another thing that we don't want to process: it's used internally by the repository, and Nexus will build its own copy (albeit after some delay from the upload).
- That regular expression in get_destination_url removes the timestamp-based unique version identifier that the Maven deploy plugin adds to snapshot artifacts (there's a worked example after this list). Maven should be able to retrieve the latest version based on the available metadata, but as I noted above the metadata is not always up-to-date. As the unique snapshot identifier exists to support repeatable builds, which I think are a moot point with snapshots, I'm OK with just removing it.
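To make that mapping concrete, here's a small worked example using a hypothetical snapshot key (the group, artifact, and timestamp are made up; the transformation is exactly what get_destination_url does):

import re

NEXUS_SNAPSHOT_PATH = '/nexus/content/repositories/s3-snapshots/'

# hypothetical key written by the S3 wagon for a timestamped snapshot artifact
key = 'snapshots/com/example/deployment/1.0.2-SNAPSHOT/deployment-1.0.2-20161024.123456-3.jar'

# strip the "snapshots/" prefix and replace the timestamped version with "-SNAPSHOT"
dest = NEXUS_SNAPSHOT_PATH + re.sub(r'-\d{8}\.\d{6}-\d+', '-SNAPSHOT', key[10:])

print(dest)
# /nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.2-SNAPSHOT/deployment-1.0.2-SNAPSHOT.jar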
With that covered, let's create the function. Start with the AWS-provided “blueprint” and make the following changes as you step through the creation wizard.
- We want the function to run for any operation that adds a file to the bucket, so choose the bucket name and the “Object Created (All)” event type. Leave the “Prefix” and “Suffix” fields empty; these are useful if you have a bunch of stuff in the bucket, but we only expect deployed files (and reject anything else in the function).
Remember to check the “Enable Trigger” checkbox before moving to the next step!
- Name it whatever you want (I call mine “mavenRedeployer”) and update the description field to something more meaningful. Make sure that the “Runtime” field is showing Python 2.x, and replace the example function code with that shown above.
Scrolling down, leave the “Handler” field alone; it specifies the entry point for the Python code.
We're going to start with a generated role for this function, so ensure that the “Role” field has “Create new role from template” selected. Pick whatever name you want, and leave the predefined policy template list alone (we'll change it later, in the role).
In the “Advanced settings” section, you can leave the memory alone, but should increase the timeout to 10 seconds to ensure enough time to open network connections and copy the file. In normal operation it will take far less time to run; if it does time out, that's a good indication that a network connection is blocked.
The last part of this page is VPC configuration: select whatever VPC and subnet you configured above. You'll get a “high availability” warning if you only specify one subnet, but that's only relevant when you need high availability; feel free to create another subnet if you feel the need.
When running in a VPC you also need to pick a security group. You can use the “nexus-repository” group that we created for the EC2 instance, or the default security group for the VPC.
That's it for function creation. The next step is to update the role configuration for the generated role, replacing the generated “AWSLambdaS3ExecutionRole” policy with the “maven-republisher” policy that we created earlier. The generated policy gives the function permissions on all buckets, while the explicit policy limits it to the deployment bucket. Note that detaching a policy from a role does not delete the policy; you have to do that manually from the policy list.
And now you can test. I recommend an initial smoke test from the Lambda console; you can use this event as test data. For this initial test you're looking to see that the function is loaded and compiles without a problem, and that you have access to the S3 bucket. With the key in the sample event, you should expect a “missing file” error from S3; if you update the key with a real path in your S3 bucket, it should run.
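The linked event isn't reproduced here, but the function only reads the bucket name, object key, and object size, so a hand-built test event along these lines should exercise it (the key is an example; substitute a real path from your bucket for a full test, and note that real S3 notifications contain many more fields):

{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "maven-deployment" },
        "object": {
          "key": "snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar",
          "size": 12345
        }
      }
    }
  ]
}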
I've also noticed that this function tends to time out on its first test, while trying to access S3. I'm not sure if that's due to the VPC endpoint not being fully up, or some other cause. Normally it works when I click “Test” a second time. If it still times out, you'll need to check your network connection.
The real test is to deploy one of your projects (I recommend deploying both snapshot and release artifacts), and watch it appear in Nexus. If this doesn't happen in relatively short order, you'll need to look at the CloudWatch logs for the function. Hopefully, though, things will go right, because CloudWatch logs are a pain: each log stream contains at most a few function invocations, so you'll be clicking through lots of log files to find the actual uploads.
Things That Can Go Wrong
OK, your artifacts are now making their way into Nexus after being deployed to S3. Except when they aren't. As with any distributed application, there are multiple points of failure, any one of which can disrupt the system as a whole. Here are a few of the common problems, with suggestions for debugging and workarounds.
This only works going forward
If you've already been deploying artifacts into S3, they're not going to go through the
Lambda process: it's only triggered by new objects in the bucket. Rather than
try to fool it by crafting events, simply use curl
to upload the
artifacts to Nexus.
Eventual Consistency is not your friend
Artifacts will typically appear in Nexus within seconds after they've been pushed to S3. But that won't happen 100% of the time; it depends on capacity within AWS, and events might be delayed. That may cause a problem if you have a continuous integration server that triggers downstream builds whenever an upstream artifact has changed.
This isn't a problem with releases, since you'll manually update the dependency list for each downstream project. It is, however, a problem with snapshots, made worse because it's a silent failure: your build script will happily use the old snapshot build, so you won't get notification that your changes broke the downstream (at least until the next downstream build).
A partial solution to this problem is to change your build server's configuration so that it has a “quiet period” between builds. This is actually a good choice in any case, as there can be multiple build triggers within a short amount of time.
Lambda functions fail
Again, this is a rare problem, but it does happen. This is a simple function, but it still depends on two external resources, as well as an assumption that it has enough space to spool the artifact between download and upload. If you hit it with a large-enough artifact, you'll get an out-of-space error. If the network is misbehaving (or the Nexus server isn't running) you'll get a timeout. Lambda will retry the function several times, and transient problems should go away, but the possibility of an unrecovered failure always exists.
The recourse here is to set up a Cloudwatch alarm on any invocation error for your function. If you see that the failure was persistent, you can manually deploy the artifacts.
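Creating that alarm can be scripted as well. A rough sketch with the AWS CLI; the function name matches the one used earlier in this article, and the SNS topic ARN is a placeholder for wherever you want notifications sent:

aws cloudwatch put-metric-alarm \
    --alarm-name maven-redeployer-errors \
    --namespace AWS/Lambda \
    --metric-name Errors \
    --dimensions Name=FunctionName,Value=mavenRedeployer \
    --statistic Sum --period 300 --evaluation-periods 1 \
    --threshold 0 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-notifications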
There's gotta be a better way!
There is: don't let your developers do deploys. Instead, set up a continuous integration server (such as Jenkins), have it do snapshot deploys after every successful build, and use manually-invoked release jobs. You can get all of the benefits of Nexus, not have to manage credentials for every developer, and move one step closer to a continuous-deployment environment (where every build is a release).
I'm not completely opposed to developers making releases: in a complex build, where you have to propagate a release through many semi-related projects, it may be the only way to maintain sanity (especially if multiple developers are making updates to the projects). But in most cases, I think it's simply a holdover from a time when single developers were responsible for projects and carefully guarded their releases. Today, we work as teams.
For More Information
Example code for this project, including sample policy documents, is available at GitHub. It's all licensed under the Apache 2.0 license, so feel free to copy-n-paste.
Here are links to relevant AWS documentation (yep, there are a lot of them; I've ordered them by the configuration steps above):
- Creating S3 buckets, overview of S3 lifecycle management, and how to configure lifecycle policies.
- Overview of IAM identities, creating users, creating groups, and creating roles.
- Overview of IAM policies and reference for policy components.
- Overview of Virtual Private Clouds (VPC).
- VPC endpoints and route tables.
- Overview of EC2 security groups.
- Overview of Lambda functions and Lambda permissions management.
Copyright © Keith D Gregory, all rights reserved