Building a Maven repository by combining S3, Nexus, and Lambda

Originally published: 2016-10-28

Last updated: 2016-10-28

Organizations that use Maven (or other tools that consume Maven-style repositories, like Gradle and Leiningen) have a quandary: how to deploy locally-produced artifacts. The standard approaches range from copying artifacts to a shared location (via SCP or S3) to running a full-blown repository manager.

An optimal solution would combine the “already there” credentialing of SCP or S3 with the flexibility of a repository manager. This post describes one approach to that, using S3 to deploy artifacts and an AWS Lambda process to push them to a repository manager.

I will say up front that the deployment process that I describe is only useful for organizations that already deploy to S3 and want to integrate a local repository server. If you don't already have a deployment process, there's an easier way; you'll have to scroll to the end of the article to see it.

The bigger audience for this article is people who are interested in integrating AWS Lambda with their development process. While it's very easy to get started with one of the Lambda “blueprints,” there are some gotchas when implementing a real application (especially with regard to private and public network access).

Step 1: Configure AWS

I'm going to assume that you're already familiar with AWS services, so I won't give detailed directions. At each step I link to the appropriate page in the AWS console, and follow that with what I consider to be best practices. You'll find links to the relevant AWS documentation at the end of this article.

Create a bucket to use as a deployment target

Individual developers will publish to this bucket using the Maven Deploy and Release plugins. I used the name “maven-deployment” for this article; you'll need to pick a different name, because bucket names are globally unique. I recommend a DNS-based name, such as “com-example-maven-deployment”.

By default, deployed artifacts will stay in the bucket forever. This can be a good (and relatively cheap) way to backup your release artifacts. However, if you deploy snapshots you'll be paying to store obsolete files. To avoid this problem, add a lifecycle rule that deletes files with the prefix “snapshots/” after they've been in the bucket for a day (or longer, if you think you'll ever want to manually retrieve old snapshots).
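As a sketch, that lifecycle rule looks like the following configuration (the rule ID is arbitrary, and you'd apply it to your own bucket with `aws s3api put-bucket-lifecycle-configuration`):

```json
{
    "Rules": [
        {
            "ID": "delete-old-snapshots",
            "Filter": { "Prefix": "snapshots/" },
            "Status": "Enabled",
            "Expiration": { "Days": 1 }
        }
    ]
}
```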

Create policies to control access to this bucket

One policy will be for your developers, and will allow them to store objects in the bucket as well as list and retrieve. The other will be for the “republisher” process, and is read-only.

To create the policies, you need to know the ARN of the bucket, which will have the form “arn:aws:s3:::BUCKETNAME”. Normally an ARN includes the account ID, but because buckets have global names, it's omitted for bucket ARNs.

Here is my policy for the developers, adapted from an AWS blog post and adjusted for the permissions that the S3 wagon needs. I've named it “maven-deployer”, as it will be used by the developers when deploying.

Note that there are two sections to the policy: the first allows access to the bucket, while the second allows access to the objects held in the bucket. Also note that the allowed actions are specified with wildcards: there are several operations around getting and putting objects, and rather than analyze the wagon code (which I had to do for GetBucketLocation) it's easier to grant full access.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::maven-deployment"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:Get*",
                    "s3:Put*"
                ],
                "Resource": [
                    "arn:aws:s3:::maven-deployment/*"
                ]
            }
        ]
    }

The policy for the republisher process is almost the same, but omits the “s3:Put*” action. I've named it “maven-republisher”.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::maven-deployment"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:Get*"
                ],
                "Resource": [
                    "arn:aws:s3:::maven-deployment/*"
                ]
            }
        ]
    }

Note that these policies are independent from any other bucket access policies. For example, you could grant full access to all of your users using one of the Amazon managed policies, or via direct bucket permissions, which would make these policies irrelevant.

Create IAM users for each of your developers and put them in a developer group

When creating the users you have the option to create and download access keys; do this, and hand them out to your users. Assuming that you allow logins to the AWS console, your users can always generate access keys; it's just easier to download them now, in a batch.

It may be tempting to create a single “deployment” user — or even hand out the root access keys to everyone — but in the long run it's much better to have individual users. For one thing, when one of your developers leaves, it's easy to disable their access.

When creating the group, attach the “maven-deployer” policy to it, and add all of the users that you created in the previous step.

Configure VPC

If you signed up for AWS after December 4, 2013, you already have a VPC. If you signed up before that date, you might still be running “EC2 Classic,” and will need to create a VPC for this article. If that's the case, I recommend following the “scenario 2” docs, creating a VPC with public and private subnets.

For this article I created a new subnet, “nexus-lambda”, which will be used for both the Nexus server and the Lambda process. I also created a route table for this subnet (note that you must explicitly assign the route table to the subnet).

Since the Nexus server will be accessed from the Internet, this subnet has to be public. That means that, in addition to the “local” route that is part of every new route table, I added a route for, with my VPC's Internet Gateway as its target.

Since a Lambda function doesn't have a public IP, it can't use the Internet Gateway, and therefore wouldn't be able to access S3. Except that AWS allows you to create an endpoint for S3 within your VPC, and attach it to the routing table. Unfortunately, not all AWS services can be exposed as endpoints within a VPC; depending on what services you use, you might need to put your Lambda function in a private subnet with a NAT.

Create an EC2 instance for the Nexus server

I'm using EC2 for the repository server because this is an AWS-centric article. You could run the server on your own premises and save on costs, but you'd need to configure your firewall to allow access from Lambda (and also add a NAT to the Lambda subnet).

Here are some things to think about when launching this instance (ordered by the page in the launch wizard where they apply):

Once the instance launches, add a DNS entry for its public IP. For this article I use an unresolvable example name; substitute your own hostname throughout. If you're not able to update your company's DNS records, you could use the raw IP address or the Amazon-generated DNS name. Just be aware that these will change if you ever stop the instance (while you may not intentionally stop the instance, hardware sometimes fails).

I'm going to leave post-startup configuration until the next section, but at this point you should verify that you can login to the instance and, using the credentials for one of your development users, access the bucket.

echo "foo" > /tmp/foo.txt

aws s3 cp /tmp/foo.txt s3://maven-deployment/test/

aws s3 ls s3://maven-deployment/test/

aws s3 cp s3://maven-deployment/test/foo.txt /tmp/bar.txt

If you don't get any errors, you're good to move on. If you do get errors, check the user/group assignment first, and then check the policy contents and whether it's been assigned to the developer group.

Step 2: Configure Nexus

There are several repository managers available. I prefer Nexus because it's the repository manager used for Maven Central. There is a paid version and a free version; the former gives you more features, but the latter should be sufficient for a small team (and you can upgrade later if you decide it's worthwhile).

As of this writing, there are two supported versions of Nexus: 2.x and 3.x. The latter has more features, but requires JDK 1.8. The former runs with JDK 1.7, which is what the Amazon AMI uses by default. Since 2.x has full support for Maven, that's what I'll use; if you choose 3.x, some of the steps/links below might not be relevant.

Download Nexus, and unpack it in the /nexus directory. Then perform the following steps for basic configuration:

  1. Configure Nexus to run as a service, so that it will automatically restart if the machine reboots. While the documentation says to create a new user, you can use “ec2-user”; this service will be the only thing running on the machine, and it will be easier to manage if you have only one login.

  2. Start the service, by running sudo service nexus start from the command-line (this is included in the directions for setting up Nexus as a service, but I want to reiterate it here — none of the following steps will work if Nexus isn't running). Verify that it's running by connecting to http://nexus.example.com:8081/nexus/ (changing the hostname to whatever you're using).

  3. Change the admin and deployment user passwords. It's not immediately apparent how to change a user's password: go to the user list, right-click on the username, and you'll get a popup with option to set/reset the user's password. Nexus has well-known default passwords, and you're exposing it to the Internet; changing passwords should be the first thing that you do.

  4. Perform the rest of the steps on the post-install checklist.

  5. Configure each development user's settings.xml to refer to Nexus. To explain what this does: Maven searches a list of repositories for dependencies, combining the repositories specified in the POM with the built-in “central” repository. The <mirror> section in settings.xml says that all user-specified repositories should go to Nexus, while the <repositories> and <pluginRepositories> sections override the built-in default.
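The settings.xml described in item 5 can be sketched as follows; the hostname is an assumption (substitute your own), and the repository URLs point at Nexus's “public” group. The `central` entries override Maven's built-in definition so that everything is routed through the mirror.

```xml
<settings>
    <mirrors>
        <!-- send all user-specified repository requests to Nexus -->
        <mirror>
            <id>nexus</id>
            <mirrorOf>*</mirrorOf>
            <url>http://nexus.example.com:8081/nexus/content/groups/public/</url>
        </mirror>
    </mirrors>
    <profiles>
        <profile>
            <id>nexus</id>
            <!-- override the built-in "central" repository; the URL is never
                 used directly because the mirror intercepts all requests -->
            <repositories>
                <repository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases><enabled>true</enabled></releases>
                    <snapshots><enabled>true</enabled></snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>central</id>
                    <url>http://central</url>
                    <releases><enabled>true</enabled></releases>
                    <snapshots><enabled>true</enabled></snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>nexus</activeProfile>
    </activeProfiles>
</settings>
```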

At this point you have a fully functional Nexus repository. If you try building one of your projects, the artifacts should come from your Nexus server rather than Maven Central. And if you log into the Nexus instance, you should see that these artifacts now reside in the “sonatype-work” directory.

Create two new hosted repositories, “s3-snapshots” and “s3-releases”. Nexus comes configured with the hosted repositories “snapshots” and “releases”, and you could use them, but you'll have to change all URLs in the rest of this article. I think it's easier to delete the existing repositories and create new ones.

Once you've created the repositories, add them to the public group. Hosted repositories should appear in the group list before any proxy repositories; this will prevent Nexus from making pointless requests to the remote repositories.

Now it's time to test your repository. Deployments are a simple HTTP PUT command, and the directory structure of your hosted repository exactly matches the structure of your local repository. Which means that you can use curl to deploy. So, assuming that you have a project with group ID “com.example”, artifact ID “deployment”, and version “1.0.0-SNAPSHOT”, the following commands will deploy the most recent POM and JAR for this artifact (again, you'll have to replace the hostname with your own).

curl -u deployment:deployment123 -T $HOME/.m2/repository/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar http://nexus.example.com:8081/nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar

curl -u deployment:deployment123 -T $HOME/.m2/repository/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.pom http://nexus.example.com:8081/nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.pom

If successful, curl won't return anything. If the repository isn't available at the URL you used, it will return an error page. And if the security group isn't set up properly, the command will appear to do nothing for five minutes, and then time out.

You can verify that the deployment worked by selecting the snapshots repository in the Nexus dashboard and clicking the “Browse Storage” tab (you may need to refresh the tree). Or, you can go into a project that depends on that snapshot, clear the snapshot from the user's local repository, and run a build. You should see the snapshot artifact being downloaded from Nexus.

Step 3: Deploy to S3

There are many wagons that you can use to deploy to S3; for this example I'm using the Spring wagon. This plugin is easy to use, and requires a few minor changes to your project POMs (or better, a cross-project parent POM). I've created a sample POM to get you started; here are the important points:

  1. Add the wagon into the set of build extensions. Note that I use a property to identify the wagon's version; as of this writing it's 5.0.0.RELEASE.

    <project>
        ...
        <build>
            <extensions>
                <extension>
                    <groupId>org.springframework.build</groupId>
                    <artifactId>aws-maven</artifactId>
                    <version>${aws-maven.version}</version>
                </extension>
            </extensions>
        </build>
    </project>
  2. Ensure that the Maven Release Plugin is in the build configuration. Again I'm using a property for the plugin version; it's currently 2.5.3. This plugin configuration should not be necessary, but I've found that I'm unable to deploy releases without it.

    <project>
        ...
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-release-plugin</artifactId>
                    <version>${maven-release-plugin.version}</version>
                </plugin>
            </plugins>
        </build>
    </project>
  3. Add a distributionManagement section that specifies the deployment bucket. Note that there are different paths for snapshots and releases.

    <project>
        ...
        <distributionManagement>
            <repository>
                <id>aws-release</id>
                <name>AWS Release Repository</name>
                <url>s3://maven-deployment/releases</url>
            </repository>
            <snapshotRepository>
                <id>aws-snapshot</id>
                <name>AWS Snapshot Repository</name>
                <url>s3://maven-deployment/snapshots</url>
            </snapshotRepository>
        </distributionManagement>
    </project>
  4. Add an scm section that specifies the shared source repository URL. The release plugin will make several commits to the repository, including a tag, and needs to be able to write those to a place that others can see. For this example I'm using Git, and forked from a repository on my local machine.

    <project>
        ...
        <scm>
            <!-- replace with the URL of your shared source repository -->
            <connection>scm:git:ssh://git@example.com/example/deployment.git</connection>
            <developerConnection>scm:git:ssh://git@example.com/example/deployment.git</developerConnection>
        </scm>
    </project>
  5. Ensure that all of your developers have configured their AWS credentials.

    While the aws-maven docs suggest that credentials be added to settings.xml, I believe that a much better approach is to store them in the environment, as described in the sidebar above. Doing so means that there's only one place for the developer to update credentials (should that be necessary), and it also lessens the chance of accidentally sharing credentials when helping someone to configure their own settings.xml.
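For illustration, storing the credentials in the environment looks like this (the values shown are AWS's documented example credentials, not real keys — substitute the access key you downloaded when creating the user):

```shell
# example credentials only; replace with the developer's own access key
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Putting these lines in the developer's shell profile means the same credentials serve the AWS CLI, the wagon, and anything else that uses the default provider chain.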

At this point you should be able to deploy the snapshot version of your project. From the command line, run mvn deploy and verify that the last lines of the output (before “Build Success”) consist of uploading or downloading files with the correct name.

[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
Uploaded: s3://maven-deployment/snapshots/com/example/deployment/maven-metadata.xml (281 B at 0.6 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

If you don't have your credentials or policy properly configured, you'll get a 403 (Access Denied) error when you attempt to retrieve the metadata file.

[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
Downloading: s3://maven-deployment/snapshots/com/example/deployment/1.0.2-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata com.example:deployment:1.0.2-SNAPSHOT/maven-metadata.xml from/to aws-snapshot (s3://maven-deployment/snapshots): Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 79BF84BF644C4BC2, AWS Error Code: AccessDenied, AWS Error Message: Access Denied

The first thing to check if this happens is that the user's credentials are correct, and that the user is part of the “maven-deployers” group. If those check out, look at the policy document, verify that you spelled the bucket name correctly in the Resource section, and that the get and put statement has a wildcard at the end of the bucket name.

You will get a different message if the environment isn't correctly set up:

[INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ deployment ---
Downloading: s3://maven-deployment/snapshots/com/example/deployment/1.0.2-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata com.example:deployment:1.0.2-SNAPSHOT/maven-metadata.xml from/to aws-snapshot (s3://maven-deployment/snapshots): Unable to load AWS credentials from any provider in the chain

Step 4: Republishing via Lambda

At this point we have two pieces of the puzzle in place: a way to deploy artifacts to S3, and a repository server that will accept new artifacts via PUT requests. The last piece of the puzzle is to trigger a PUT whenever a new artifact is uploaded to S3. This is where AWS Lambda comes in: one of its common use cases is responding to S3 bucket events.

For this article I decided to write the republisher function using Python; Lambda also supports Java and JavaScript. I rejected Java due to JVM startup time: I expect these scripts to have a very short runtime, so startup time is significant overhead. And I rejected JavaScript because I wanted to keep the example in a single file: in the real world I'd use the async library to avoid callback hell, but that would mean creating a deployment package for Lambda. With Python, everything can be handled using either the standard library or the AWS SDK, so the function remains a simple script.

Here, then, is the code:

from __future__ import print_function

import base64
import boto3
import httplib
import re
import tempfile
import urllib

NEXUS_HOST = 'nexus.example.com'    # replace with your repository server's hostname
NEXUS_PORT = 8081

NEXUS_BASE_PATH = '/nexus/content/repositories/'
NEXUS_SNAPSHOT_PATH = NEXUS_BASE_PATH + 's3-snapshots/'
NEXUS_RELEASE_PATH = NEXUS_BASE_PATH + 's3-releases/'

AUTH_HEADER = "Basic " + base64.b64encode('deployment:deployment123')

print('Loading function')

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    if should_process(key):
        print('processing: ' + key)
        staging_file = tempfile.TemporaryFile()
        try:
            download_to_staging(event, bucket, key, staging_file)
            upload_to_nexus(staging_file, key)
            return True
        except Exception as e:
            print('Error processing ' + key)
            raise e
    else:
        print('ignoring:   ' + key)

def should_process(key):
    return (key.startswith('snapshots/') or key.startswith('releases/')) \
       and (not key.endswith('/')) \
       and (key.find('maven-metadata.xml') == -1)

def download_to_staging(event, bucket, key, staging_file):
    s3.Object(bucket, key).download_fileobj(staging_file)
    print("downloaded {} bytes; reported size in event is {}"
          .format(staging_file.tell(), event['Records'][0]['s3']['object']['size']))

def upload_to_nexus(staging_file, key):
    request_path = get_destination_url(key)
    print("uploading file to: http://{}:{}{}".format(NEXUS_HOST, NEXUS_PORT, request_path))
    staging_file.seek(0)    # rewind so the PUT sends the file from the beginning
    cxt = httplib.HTTPConnection(NEXUS_HOST, NEXUS_PORT)
    try:
        cxt.request("PUT", request_path, staging_file, { "Authorization": AUTH_HEADER })
        response = cxt.getresponse()
        print("response status: {}".format(response.status))
    finally:
        cxt.close()

def get_destination_url(key):
    if key.startswith("snapshots/"):
        return NEXUS_SNAPSHOT_PATH + re.sub(r'-\d{8}\.\d{6}-\d+', '-SNAPSHOT', key[10:])
    elif key.startswith("releases/"):
        return NEXUS_RELEASE_PATH + key[9:]
    else:
        raise Exception("invalid key (should not get here): " + key)
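The trickiest part of the function is the key-to-path translation, so here's a standalone sketch of the same mapping with a worked example (the repository names match the “s3-snapshots”/“s3-releases” repositories created earlier; the timestamped filename is made up). Maven deploys snapshots with a timestamped version, but Nexus stores them under the -SNAPSHOT name, so the regex rewrites it:

```python
import re

# same mapping as get_destination_url() above, extracted for experimentation
NEXUS_SNAPSHOT_PATH = '/nexus/content/repositories/s3-snapshots/'
NEXUS_RELEASE_PATH = '/nexus/content/repositories/s3-releases/'

def destination_path(key):
    if key.startswith('snapshots/'):
        # replace the timestamped version (eg, -20161028.123456-1) with -SNAPSHOT
        return NEXUS_SNAPSHOT_PATH + re.sub(r'-\d{8}\.\d{6}-\d+', '-SNAPSHOT', key[len('snapshots/'):])
    elif key.startswith('releases/'):
        return NEXUS_RELEASE_PATH + key[len('releases/'):]
    else:
        raise ValueError('unexpected key: ' + key)

print(destination_path('snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-20161028.123456-1.jar'))
# -> /nexus/content/repositories/s3-snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-SNAPSHOT.jar
```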

I think that's mostly self-documenting, but the gist is this: the handler filters incoming object keys, downloads matching artifacts to a temporary file, and PUTs them to the appropriate Nexus repository; metadata files and directory markers are ignored, because Nexus maintains its own metadata.

With that covered, let's create the function. Start with the AWS-provided “blueprint” and make the following changes as you step through the creation wizard.

  1. Configure triggers

    We want the function to run for any operation that adds a file to the bucket, so choose the bucket name and the “Object Created (All)” event type. Leave the “Prefix” and “Suffix” fields empty; these are useful if you have a bunch of stuff in the bucket, but we only expect deployed files (and reject anything else in the function).

    Remember to check the “Enable Trigger” checkbox before moving to the next step!

  2. Configure function

    Name it whatever you want (I call mine “mavenRedeployer”) and update the description field to something more meaningful. Make sure that the “Runtime” field is showing Python 2.x, and replace the example function code with that shown above.

    Scrolling down, leave the “Handler” field alone; it specifies the entry point for the Python code.

    We're going to start with a generated role for this function, so ensure that the “Role” field has “Create new role from template” selected. Pick whatever name you want, and leave the predefined policy template list alone (we'll change it later, in the role).

    In the “Advanced settings” section, you can leave the memory alone, but should increase the timeout to 10 seconds to ensure enough time to open network connections and copy the file. In normal operation it will take far less time to run; if it does time-out, that's a good indication that a network connection is blocked.

    The last part of this page is VPC configuration: select whatever VPC and subnet that you configured above. You'll get a “high availability” warning if you only specify one subnet, but that's only relevant when you need high availability; feel free to create another subnet if you feel the need.

    When running in a VPC you also need to pick a security group. You can use the “nexus-repository” group that we created for the EC2 instance, or the default security group for the VPC.

That's it for function creation. The next step is to update the configuration of the generated role, replacing the generated “AWSLambdaS3ExecutionRole” policy with the “maven-republisher” policy that we created earlier. The generated policy gives the function permissions on all buckets, while the explicit policy limits it to the deployment bucket. Note that detaching a policy from a role does not delete the policy; you have to do that manually from the policy list.

And now you can test. I recommend an initial smoketest from the Lambda console, using its sample S3 Put event as test data. For this initial test you're looking to see that the function loads and compiles without a problem, and that you have access to the S3 bucket. With the key in the sample event, you should expect a “missing file” error from S3; if you update the key with a real path in your S3 bucket, it should run.
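For reference, here is a minimal sketch of the event the function receives (the bucket name and key are made up, and only the fields the handler reads are shown); the handler's first lines simply dig the bucket and key out of this structure:

```python
# minimal S3 "object created" notification, trimmed to the fields the handler uses
event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'maven-deployment'},
                'object': {
                    'key': 'snapshots/com/example/deployment/1.0.0-SNAPSHOT/deployment-1.0.0-20161028.123456-1.jar',
                    'size': 4096
                }
            }
        }
    ]
}

bucket = event['Records'][0]['s3']['bucket']['name']
key = event['Records'][0]['s3']['object']['key']
print(bucket, key)
```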

I've also noticed that this function tends to time out on its first test, while trying to access S3. I'm not sure if that's due to the VPC endpoint not being fully up, or some other cause. Normally it works when I click “Test” a second time. If it still times out, you'll need to check your network configuration.

The real test is to deploy one of your projects (I recommend deploying both snapshot and release artifacts) and watch the artifacts appear in Nexus. If this doesn't happen in relatively short order, you'll need to look at the Cloudwatch logs for the function. Hopefully, though, things will go right, because digging through Cloudwatch logs is a pain: each log stream contains at most a few function invocations, so you'll be clicking through lots of streams to find the actual uploads.

Things That Can Go Wrong

OK, your artifacts are now making their way into Nexus after being deployed to S3. Except when they aren't. As with any distributed application, there are multiple points of failure, any one of which can disrupt the system as a whole. Here are a few of the common problems, with suggestions for debugging and workarounds.

This only works going forward

If you've already been deploying artifacts into S3, they're not going to go through the Lambda process: it's only triggered by new objects in the bucket. Rather than try to fool it by crafting events, simply use curl to upload the artifacts to Nexus.

Eventual Consistency is not your friend

Artifacts will typically appear in Nexus within seconds after they've been pushed to S3. But that won't happen 100% of the time; it depends on capacity within AWS, and events might be delayed. That may cause a problem if you have a continuous integration server that triggers downstream builds whenever an upstream artifact has changed.

This isn't a problem with releases, since you'll manually update the dependency list for each downstream project. It is, however, a problem with snapshots, made worse because it's a silent failure: your build script will happily use the old snapshot build, so you won't get a notification that your changes broke the downstream project (at least until the next downstream build).

A partial solution to this problem is to change your build server's configuration so that it has a “quiet period” between builds. This is actually a good choice in any case, as there can be multiple build triggers within a short amount of time.

Lambda functions fail

Again, this is a rare problem, but it does happen. This is a simple function, but it still depends on two external resources, as well as an assumption that it has enough space to spool the artifact between download and upload. If you hit it with a large-enough artifact, you'll get an out-of-space error. If the network is misbehaving (or the Nexus server isn't running) you'll get a timeout. Lambda will retry the function several times, and transient problems should go away, but the possibility of an unrecovered failure always exists.

The recourse here is to set up a Cloudwatch alarm on any invocation error for your function. If you see that the failure was persistent, you can manually deploy the artifacts.

There's gotta be a better way!

There is: don't let your developers do deploys. Instead, set up a continuous integration server (such as Jenkins), have it do snapshot deploys after every successful build, and use manually-invoked release jobs. You can get all of the benefits of Nexus, not have to manage credentials for every developer, and move one step closer to a continuous-deployment environment (where every build is a release).

I'm not completely opposed to developers making releases: in a complex build, where you have to propagate a release through many semi-related projects, it may be the only way to maintain sanity (especially if multiple developers are making updates to the projects). But in most cases, I think it's simply a holdover from a time when single developers were responsible for projects, and carefully guarded their releases. Today, we work as teams.

For More Information

Example code for this project, including sample policy documents, is available at GitHub. It's all licensed under the Apache 2.0 license, so feel free to copy-n-paste.

Here are links to relevant AWS documentation (yep, there are a lot of them; I've ordered them by the configuration steps above):

Copyright © Keith D Gregory, all rights reserved
