
Publish Maven site with Amazon S3 and CloudFront

Amazon S3 now supports static website hosting. As a ten-year Maven user, I wondered how easy it would be to deploy a Maven-generated site to Amazon S3 and let the rock-solid storage provider host my project websites.

There are several existing S3 wagon providers, but they all seem to share the same limitation: no support for directory copy. This is understandable, since before S3 introduced the website hosting feature, I guess people mostly expected to deploy artifacts rather than websites to S3. So my first task was to write an AWS S3 wagon that supports directory copy.

With the AWS Java SDK, the task boils down to a single class. I made my S3 wagon available in the Maven central repository as org.cyclopsgroup:awss3-maven-wagon:0.1. The source code is hosted at github:jiaqi/cym2/awss3.
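The core of that class is just a recursive walk that maps local files to S3 object keys. Below is a minimal, hypothetical sketch of that mapping step (the class and method names are mine for illustration, not the wagon's actual source); the wagon would then call putObject on the AWS SDK's AmazonS3Client for each entry:

```java
import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

public class DirectoryWalk {
    // Collect every file under dir, keyed by the S3 object key it should get.
    // The wagon would then call s3.putObject(bucket, key, file) on each entry,
    // where s3 is an AmazonS3Client from the AWS Java SDK.
    public static Map<String, File> collect(File dir, String prefix) {
        Map<String, File> result = new LinkedHashMap<String, File>();
        for (File f : dir.listFiles()) {
            String key = prefix.isEmpty() ? f.getName() : prefix + "/" + f.getName();
            if (f.isDirectory()) {
                result.putAll(collect(f, key)); // recurse into subdirectories
            } else {
                result.put(key, f);             // one S3 object per file
            }
        }
        return result;
    }
}
```

Keeping the walk separate from the upload call is what makes directory copy simple once the SDK handles single-object puts.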

The next step is to create an S3 bucket in the console. To avoid trouble, the bucket name is set to the future website domain name, according to this discussion. The website feature needs to be explicitly enabled. I also created an IAM account with limited permissions just for website management.
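As an illustration of what "limited permissions" could mean here, a hypothetical IAM policy for the website-management account might look like the following (the bucket name is a placeholder; PutObjectAcl is included because the wagon also sets object ACLs):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket",
        "arn:aws:s3:::my-s3-bucket/*"
      ]
    }
  ]
}
```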

Configuring a project to use this wagon is much the same as with other S3 wagons. Add the awss3-maven-wagon extension:
<extensions>
  <extension>
    <groupId>org.cyclopsgroup</groupId>
    <artifactId>awss3-maven-wagon</artifactId>
  </extension>
</extensions>

Set distribution management with s3 as the protocol:
<distributionManagement>
  <site>
    <server>
      <id>my-server-id</id>
      <url>s3://my-s3-bucket/project/path</url>
    </server>
  </site>
</distributionManagement>

And configure AWS credentials in settings.xml:
<settings>
  <servers>
    <server>
      <id>my-server-id</id>
      <username>AWS_ACCESS_KEY_ID</username>
      <password>AWS_SECRET_KEY</password>
    </server>
    ...
  </servers>
</settings>

Now, after running site:deploy, the entire site is uploaded to the S3 bucket, and the website is available under the default domain name http://<bucket name>.s3-website-us-east-1.amazonaws.com. S3 doesn't let me configure a CNAME explicitly, which is why the bucket name needs to match the domain name: I plan to create a friendly CNAME under my own domain. My bucket obviously shares the same IP addresses in the Amazon cloud with other buckets, so without explicit configuration, the only way S3 can figure out which bucket serves a request is to match the requested host name against the bucket name.
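For example, with a hypothetical bucket named www.example.com, the DNS record would simply be a CNAME pointing the friendly name at the bucket's website endpoint:

```
; illustrative zone file entry; the bucket must be named www.example.com
www.example.com.  IN  CNAME  www.example.com.s3-website-us-east-1.amazonaws.com.
```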

The last piece is CloudFront. There's no reason not to take advantage of it: it's easy to set up, and it accepts CNAME configuration (unlike the S3 website endpoint).

One problem with CloudFront is that it's not designed to work as a website: a request for a path like http://mysite.com/a/dir is not mapped to the content at a/dir/index.html. S3 has no real directory concept; a/dir/index.html and a/dir can be two different objects coexisting in the same bucket, so a request for http://mysite.com/a/dir is mapped to the object a/dir instead of a/dir/index.html. The S3 static website endpoint handles this mapping, but the behavior is lost when I hook CloudFront up to the S3 bucket directly, since that connection bypasses the website feature entirely. In the end, I had to create an object a/dir containing an HTML redirect page pointing to a/dir/index.html to work around this problem.
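The redirect object is just a tiny HTML page. A sketch of what gets uploaded under the key a/dir (the exact markup here is illustrative, and the path is an example):

```html
<html>
  <head>
    <meta http-equiv="refresh" content="0; url=/a/dir/index.html">
  </head>
  <body>
    Redirecting to <a href="/a/dir/index.html">a/dir/index.html</a>
  </body>
</html>
```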

Comments

kirti said…
Amazon provides many services and creates a boost in the market, whether it's the S3 website, CDN, SNS and many more... and tools like Bucket Explorer are really helpful to manage and use S3 services directly... it's like click and done... it's amazing...
Unknown said…
I'm trying to use your plugin and described configuration but I get the following output from maven (version 3.0.3) when trying to run site:deploy

http://pastebin.com/76uxdc94

I'm not sure what is causing this error.
Jiaqi Guo said…
To Eric,

According to the source code of AbstractAWSSigner.java, it's likely the secret key is not configured correctly. The secret key should be configured in the <password> tag of settings.xml.

I should probably add some validation in plugin before empty input goes that far.
can you please answer this
https://github.com/jiaqi/cym2/issues/1
hi;

you have hardcoded read-all permission;

any chance to make it configurable?

##################

// Upload file and allow everyone to read
s3.putObject( bucketName, key, in, meta );
s3.setObjectAcl( bucketName, key, CannedAccessControlList.PublicRead );
