A Grunt interface into the Amazon Web Services Node.js SDK, aws-sdk
This plugin requires Grunt 0.4.x
If you haven't used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command:
npm install --save-dev grunt-aws
Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:
grunt.loadNpmTasks('grunt-aws');
This plugin aims to provide a task for each service on AWS. Currently, however, it only supports the services documented below: S3, Route 53, CloudFront and SNS.
The s3 task features:
- Fast
- Simple
- Auto Gzip
- Smart Local Caching
To upload all files inside build/ into my-awesome-bucket:
grunt.initConfig({
aws: grunt.file.readJSON("credentials.json"),
s3: {
options: {
accessKeyId: "<%= aws.accessKeyId %>",
secretAccessKey: "<%= aws.secretAccessKey %>",
bucket: "my-awesome-bucket"
},
build: {
cwd: "build/",
src: "**"
}
}
});
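You can then run the task against all targets, or just the single target defined above, like any other Grunt task:
grunt s3
grunt s3:build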
See the complete example here
Amazon access key id
Amazon secret access key
Bucket name
Amazon session token, required if you're using temporary access keys
Default US Standard
For all possible values, see Location constraints.
Default true
Whether SSL is enabled
Default 3
Number of retries for a request
Default "public-read"
File permissions, must be one of:
- "private"
- "public-read"
- "public-read-write"
- "authenticated-read"
- "bucket-owner-read"
- "bucket-owner-full-control"
Default true
Gzips the file before uploading and sets the appropriate headers
Note: the default is true because this task assumes you're uploading content to be consumed by browsers developed after 1999. On the terminal, you can retrieve a file using curl --compressed <url>.
Default false
Performs a preview run displaying what would be modified
Default 20
Number of S3 operations that may be performed concurrently
Default true
Upload files whether or not they already exist (set to false if you never update existing files).
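For example, these behavioural options could sit in a task-level options block alongside the credentials (a sketch; the values shown are illustrative, not recommendations):
s3: {
  options: {
    accessKeyId: "<%= aws.accessKeyId %>",
    secretAccessKey: "<%= aws.secretAccessKey %>",
    bucket: "my-awesome-bucket",
    access: "private",   //one of the canned ACLs listed above
    gzip: false,         //skip gzipping if your files are already compressed
    dryRun: true,        //preview what would be modified without uploading
    concurrency: 5,      //limit parallel S3 operations
    overwrite: false     //never re-upload files that already exist
  },
  build: {
    cwd: "build/",
    src: "**"
  }
}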
Default None
Path to copy a file within S3, e.g. my-bucket2/output/d.txt
Default None
Path to copy all files within S3, e.g. my-bucket2/output/
Default true
Skip uploading files which have already been uploaded (same ETag). Each target has its own options cache, so if you change the options object, files will be forced to re-upload.
Default 60*60*1000 (1hr)
Number of milliseconds to wait before retrieving the object list from S3. If you only modify this bucket from grunt-aws on one machine then it can be Infinity if you like. To disable the cache, set it to 0.
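For instance, to keep the ETag comparison but always fetch a fresh object listing, something like this should work (illustrative values):
options: {
  cache: true, //still skip files whose ETag is unchanged
  cacheTTL: 0  //always re-fetch the object list from S3
}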
Set HTTP headers, please see the putObject docs.
The following are allowed:
- ContentLength
- ContentType (will override mime type lookups)
- ContentDisposition
- ContentEncoding
- CacheControl (accepts a string, or converts a number into the header as max-age=<num>, public)
- Expires (converts dates to strings with toUTCString())
- GrantFullControl
- GrantRead
- GrantReadACP
- GrantWriteACP
- ServerSideEncryption ("AES256")
- StorageClass ("STANDARD" or "REDUCED_REDUNDANCY")
- WebsiteRedirectLocation
The properties not listed are still available as:
- ACL - the access option above
- Body - the file to be uploaded
- Key - the calculated file path
- Bucket - the bucket option above
- Metadata - the meta option below
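As a sketch, a target that marks its files as downloads and sets an explicit cache lifetime might look like this (the target name and header values are illustrative):
downloads: {
  options: {
    headers: {
      ContentDisposition: "attachment",     //hint browsers to download rather than render
      CacheControl: "max-age=86400, public" //passed through as a plain string
    }
  },
  src: "files/**"
}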
Set custom HTTP headers
All custom headers will be prefixed with x-amz-meta-. For example, {Foo:"42"} becomes x-amz-meta-foo:42.
Add a charset to each Content-Type header, for example: utf-8. If this is not set, all text files will get a charset of UTF-8 by default.
Define your own mime types
This object will be passed into mime.define()
Default "application/octet-stream"
The default mime type used when mime.lookup() fails
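Put together, the content-type related options might look like this (a sketch; the extension mapping is only an example):
options: {
  charset: "utf-8",                       //appended to text Content-Type headers
  mime: {
    "text/x-handlebars-template": ["hbs"] //passed into mime.define()
  },
  mimeDefault: "application/octet-stream" //used when mime.lookup() fails
}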
Default false
Create the bucket if it does not exist. Use the bucket option to name the bucket, and the access and region options as parameters when creating the bucket.
Default false
Configure static web hosting for the bucket. Set to true to enable the default hosting with the IndexDocument set to index.html. Otherwise, set the value to an object that matches the parameters required for WebsiteConfiguration in the putBucketWebsite docs.
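A sketch of a target that creates its bucket and enables website hosting (the bucket name is illustrative; the enableWeb object follows the WebsiteConfiguration shape from the putBucketWebsite docs):
site: {
  options: {
    bucket: "my-new-site-bucket",
    createBucket: true,
    //or simply enableWeb: true to default the IndexDocument to index.html
    enableWeb: {
      IndexDocument: { Suffix: "index.html" },
      ErrorDocument: { Key: "error.html" }
    }
  },
  cwd: "build/",
  src: "**"
}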
First run will deploy like:
Running "s3:uat" (s3) task
Retrieving list of existing objects...
>> Put 'public/vendor/jquery.rest.js'
>> Put 'index.html'
>> Put 'scripts/app.js'
>> Put 'styles/app.css'
>> Put 'public/img/loader.gif'
>> Put 'public/vendor/verify.notify.js'
>> Put 6 files
Subsequent runs should look like:
Running "s3:uat" (s3) task
>> No change 'index.html'
>> No change 'public/vendor/jquery.rest.js'
>> No change 'styles/app.css'
>> No change 'scripts/app.js'
>> No change 'public/img/loader.gif'
>> No change 'public/vendor/verify.notify.js'
>> Put 0 files
s3: {
//provide your options...
options: {
accessKeyId: "<%= aws.accessKeyId %>",
secretAccessKey: "<%= aws.secretAccessKey %>",
bucket: "my-bucket"
},
//then create some targets...
//upload all files within build/ to root
build: {
cwd: "build/",
src: "**"
},
//upload all files within build/ to output/
move: {
cwd: "build/",
src: "**",
dest: "output/"
},
//upload and rename an individual file
specificFile: {
src: "build/a.txt",
dest: "output/b.txt"
},
//upload and rename many individual files
specificFiles: {
files: [{
src: "build/a.txt",
dest: "output/b.txt"
},{
src: "build/c.txt",
dest: "output/d.txt"
}]
},
//upload and rename many individual files (shorter syntax)
specificFilesShort: {
"output/b.txt": "build/a.txt"
"output/d.txt": "build/c.txt"
},
//upload the img/ folder and all its files
images: {
src: "img/**"
},
//upload the docs/ folder and its pdf and txt files
documents: {
src: "docs/**/*.{pdf,txt}"
},
//upload the secrets/ folder and all its files to a different bucket
secrets: {
//override options
options: {
bucket: "my-secret-bucket"
},
src: "secrets/**"
},
//upload the public/ folder with a custom Cache-control header
customCacheControl: {
options: {
headers: {
CacheControl: 'max-age=900, public, must-revalidate'
}
},
src: "public/**"
},
//upload the public/ folder with a 2 year cache time
longTym: {
options: {
headers: {
CacheControl: 630720000 //max-age=630720000, public
}
},
src: "public/**"
},
//upload the public/ folder with a specific expiry date
beryLongTym: {
options: {
headers: {
Expires: new Date('2050') //Sat, 01 Jan 2050 00:00:00 GMT
}
},
src: "public/**"
},
//Copy file directly from s3 bucket to a different bucket
copyFile: {
src: "build/c.txt",
dest: "output/d.txt",
options: {
copyFile: "my-bucket2/output/d.txt"
}
},
//Copy all files in directory
copyFiles: {
src: "public/**",
options: {
copyFrom: 'my-bucket2/public'
}
}
}
Todo for the s3 task:
- Download operation
- Delete unmatched files
The route53 task features:
- Create DNS records using simple configuration
- Smart Local Caching
To create two new records, the first resolving to an IP address and the second resolving to the domain name of a bucket:
grunt.initConfig({
aws: grunt.file.readJSON("credentials.json"),
route53: {
options: {
accessKeyId: "<%= aws.accessKeyId %>",
secretAccessKey: "<%= aws.secretAccessKey %>",
zones: {
'mydomain.org': [{
name: 'record1.mydomain.org',
type: 'A',
value: ['1.1.1.1']
},{
name: 'record2.mydomain.org',
type: 'CNAME',
value: ['record2.mydomain.org.s3-website-ap-southeast-2.amazonaws.com']
}]
}
}
}
});
Amazon access key id
Amazon secret access key
Use AWS IAM Role instead of credentials
An object containing names of zones and a list of DNS records to be created for this zone in Route 53.
Each record requires name, type and value to be set. The name property is the new domain to be created. The type is the DNS record type, e.g. A, CNAME, etc. The value is a list of domain names or IP addresses that the DNS entry will resolve to.
It is also possible to specify any of the additional options described in the ResourceRecordSet section of the changeResourceRecordSets method. For example, AliasTarget could be used to set up an alias record.
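For example, an alias record pointing at a CloudFront distribution might be declared like this (a sketch; the HostedZoneId and DNSName are placeholders, and AliasTarget is simply passed through as described above):
zones: {
  'mydomain.org': [{
    name: 'cdn.mydomain.org',
    type: 'A',
    AliasTarget: {
      HostedZoneId: 'ZXXXXXXXXXXXXX',           //hosted zone id of the alias target (placeholder)
      DNSName: 'dxxxxxxxxxxxxx.cloudfront.net', //placeholder distribution domain
      EvaluateTargetHealth: false
    }
  }]
}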
Default 300
Default TTL of any new Route 53 records.
Default false
Performs a preview run displaying what would be modified
Default 20
Number of Route53 operations that may be performed concurrently
Default true
Cache data returned from Route 53. Once records
Todo for the route53 task:
- Better support for alias records
- Create zones?
The cloudfront task can:
- Invalidate a list of files, up to the maximum allowed by CloudFront, like /index.html and /pages/whatever.html
- Update CustomErrorResponses
- Update OriginPath on the first origin in the distribution, other origins will stay the same
- Update DefaultRootObject
A sample configuration is below. Each property must follow the requirements from the CloudFront updateDistribution Docs.
grunt.initConfig({
aws: grunt.file.readJSON("credentials.json"),
cloudfront: {
options: {
accessKeyId: "<%= aws.accessKeyId %>",
secretAccessKey: "<%= aws.secretAccessKey %>",
distributionId: '...',
},
html: {
options: {
invalidations: [
'/index.html',
'/pages/whatever.html'
],
customErrorResponses: [ {
ErrorCode: 0,
ErrorCachingMinTTL: 0,
ResponseCode: 'STRING_VALUE',
ResponsePagePath: 'STRING_VALUE'
} ],
originPath: 'STRING_VALUE',
defaultRootObject: 'STRING_VALUE'
}
}
}
});
Amazon access key id
Amazon secret access key
The CloudFront Distribution ID to be acted on
An array of strings that are each a root relative path to a file to be invalidated
An array of objects with the properties shown above
A string to set the origin path for the first origin in the distribution
A string to set the default root object for the distribution
The sns task features:
- Publish to an SNS topic
To publish a message:
grunt.initConfig({
aws: grunt.file.readJSON("credentials.json"),
sns: {
options: {
accessKeyId: "<%= aws.accessKeyId %>",
secretAccessKey: "<%= aws.secretAccessKey %>",
region: '<%= aws.region %>',
target: 'AWS:ARN:XXXX:XXXX:XXXX',
message: 'You got it',
subject: 'A Notification'
}
}
});
Amazon access key id
Amazon secret access key
The region that the Topic is hosted under
The AWS ARN for the topic
The message content for the notification
The subject to use for the notification
Todo for the sns task:
- Add other SNS functionality
Copyright © 2013 Jaime Pillora <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.