This project can serve as a sample for getting started with the new nested stacks implementation of the Amplify CLI API category.
This project only works when using the nested-stacks branch of this repository:
https://github.com/mikeparisstuff/amplify-cli/tree/nested-stacks
To get started, clone that repo and then run npm run setup-dev from the root of the project directory. When that command finishes, the amplify command on your machine will map to the nested-stacks build.
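For example, the setup might look roughly like this (the repository URL and branch name come from the link above; the directory name depends on where you clone it):
git clone -b nested-stacks https://github.com/mikeparisstuff/amplify-cli.git
cd amplify-cli
npm run setup-dev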
The Amplify CLI exposes the GraphQL Transform libraries to help create APIs with common patterns and best practices baked in, but it also provides a number of escape hatches for those situations where you need a bit more control. Here are a few common use cases you might find useful.
Overwrite a resolver generated by the GraphQL Transform
Let's say you have a simple schema.graphql...
type Todo @model {
id: ID!
name: String!
description: String
}
and you want to change the behavior of the request mapping template of the Query.getTodo resolver that will be generated when the project compiles. To do this, create a file named Query.getTodo.req.vtl in the resolvers directory of your API project. The next time you run amplify push or amplify api gql-compile, your resolver template will be used instead of the auto-generated template. You may similarly create a Query.getTodo.res.vtl file to change the behavior of the resolver's response mapping template.
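For example, a custom resolvers/Query.getTodo.req.vtl could look like the sketch below. This is a minimal DynamoDB GetItem request template and assumes the table's primary key is the default id field; adapt the body to whatever behavior you actually need.
## Query.getTodo.req.vtl
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}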
Add a custom resolver that targets a DynamoDB table from @model
This is useful if you want to write a more specific query against a DynamoDB table that was created by @model. For example, assume you had this schema with two @model types and a pair of @connection directives.
type Todo @model {
id: ID!
name: String!
description: String
comments: [Comment] @connection(name: "TodoComments")
}
type Comment @model {
id: ID!
content: String
todo: Todo @connection(name: "TodoComments")
}
This schema will generate resolvers for Query.getTodo, Query.listTodos, Query.getComment, and Query.listComments at the top level, as well as for Todo.comments and Comment.todo to implement the @connection. Under the hood, the transform will create a GlobalSecondaryIndex on the Comment table in DynamoDB, but it will not generate a top-level query field that queries the GSI because you can already fetch the comments for a given todo object via the Query.getTodo.comments query path. If you want to fetch all comments for a todo object via a top-level query field, i.e. Query.commentsForTodo, then do the following:
- Add the desired field to your schema.graphql.
// ... Todo and Comment types from above
type CommentConnection {
items: [Comment]
nextToken: String
}
type Query {
commentsForTodo(todoId: ID!, limit: Int, nextToken: String): CommentConnection
}
- Add a resolver resource to a stack in the stacks/ directory.
{
// ... The rest of the template
"Resources": {
"QueryCommentsForTodoResolver": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "CommentTable",
"TypeName": "Query",
"FieldName": "commentsForTodo",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
}
}
- Write the resolver templates.
## Query.commentsForTodo.req.vtl
#set( $limit = $util.defaultIfNull($context.args.limit, 10) )
{
"version": "2017-02-28",
"operation": "Query",
"query": {
"expression": "#connectionAttribute = :connectionAttribute",
"expressionNames": {
"#connectionAttribute": "commentTodoId"
},
"expressionValues": {
":connectionAttribute": {
"S": "$context.args.todoId"
}
}
},
"scanIndexForward": true,
"limit": $limit,
"nextToken": #if( $context.args.nextToken ) "$context.args.nextToken" #else null #end,
"index": "gsi-TodoComments"
}
## Query.commentsForTodo.res.vtl
$util.toJson($ctx.result)
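With the resolver and templates in place, a query against the new field might look like this (the todoId value is a placeholder for a real Todo id from your table):
query CommentsForTodo {
  commentsForTodo(todoId: "<todo-id>", limit: 10) {
    items {
      id
      content
    }
    nextToken
  }
}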
Add a custom resolver that targets an AWS Lambda function
Velocity is useful as a fast, secure environment to run arbitrary code, but when it comes to writing complex business logic you can just as easily call out to an AWS Lambda function. Here is how:
- First create a function by running amplify add function. The rest of the example assumes you created a function named "echofunction" via the amplify add function command. If you already have a function then you may skip this step.
- Add a field to your schema.graphql that will invoke the AWS Lambda function.
type Query {
echo(msg: String): String
}
- Add the function as an AppSync data source in the stack's Resources block.
"EchoLambdaDataSource": {
"Type": "AWS::AppSync::DataSource",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Type": "AWS_LAMBDA",
"Name": "EchoFunction",
"ServiceRoleArn": {
"Fn::GetAtt": [
"EchoLambdaDataSourceRole",
"Arn"
]
},
"LambdaConfig": {
"LambdaFunctionArn": {
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
{ "env": { "Ref": "env" } }
]
}
}
}
}
- Add an AWS IAM role to the stack's Resources block that allows AppSync to invoke the Lambda function on your behalf.
"EchoLambdaDataSourceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "EchoLambdaDataSourceRole",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "appsync.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "InvokeLambdaFunction",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:invokeFunction"
],
"Resource": [
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
{ "env": { "Ref": "env" } }
]
}
]
}
]
}
}
]
}
}
- Create an AppSync resolver in the stack's Resources block.
"QueryEchoResolver": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": {
"Fn::GetAtt": [
"EchoLambdaDataSource",
"Name"
]
},
"TypeName": "Query",
"FieldName": "echo",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
- Create the resolver templates in the project's resolvers directory.
resolvers/Query.echo.req.vtl
{
"version": "2017-02-28",
"operation": "Invoke",
"payload": {
"type": "Query",
"field": "echo",
"arguments": $utils.toJson($context.arguments),
"identity": $utils.toJson($context.identity),
"source": $utils.toJson($context.source)
}
}
resolvers/Query.echo.res.vtl
$util.toJson($ctx.result)
After running amplify push, open the AppSync console with amplify api console and test your API with this simple query:
query {
echo(msg:"Hello, world!")
}
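For reference, given the request mapping template above, this query invokes echofunction with an event payload shaped roughly like the following. The identity value is shown as null, which is what an API key authorized request produces; other authorization modes populate it with caller details.
{
  "type": "Query",
  "field": "echo",
  "arguments": { "msg": "Hello, world!" },
  "identity": null,
  "source": null
}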
Add a custom geo search resolver that targets an Elasticsearch domain created by @searchable
To add geo search capabilities to an API, add the @searchable directive to an @model type.
type Todo @model @searchable {
id: ID!
name: String!
description: String
comments: [Comment] @connection(name: "TodoComments")
}
The next time you run amplify push, an Amazon Elasticsearch domain will be created and configured so that data automatically streams from DynamoDB into Elasticsearch. The @searchable directive on the Todo type will generate a Query.searchTodos query field and resolver, but it is not uncommon to want more specific search capabilities. You can write a custom search resolver by following these steps:
- Add the relevant location and search fields to the schema.
type Location {
lat: Float
lon: Float
}
input LocationInput {
lat: Float
lon: Float
}
type Todo @model @searchable {
id: ID!
name: String!
description: String
comments: [Comment] @connection(name: "TodoComments")
location: Location
}
type Query {
nearbyTodos(location: LocationInput!, km: Int): TodoConnection
}
- Create the resolver record in the stack's Resources block.
"QueryNearbyTodos": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "ElasticsearchDomain",
"TypeName": "Query",
"FieldName": "nearbyTodos",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
- Write the resolver templates.
## Query.nearbyTodos.req.vtl
## Objects of type Todo will be stored in the /todo index
#set( $indexPath = "/todo/doc/_search" )
#set( $distance = $util.defaultIfNull($ctx.args.km, 200) )
{
"version": "2017-02-28",
"operation": "GET",
"path": "$indexPath.toLowerCase()",
"params": {
"body": {
"query": {
"bool" : {
"must" : {
"match_all" : {}
},
"filter" : {
"geo_distance" : {
"distance" : "${distance}km",
"location" : $util.toJson($ctx.args.location)
}
}
}
}
}
}
}
## Query.nearbyTodos.res.vtl
#set( $items = [] )
#foreach( $entry in $context.result.hits.hits )
#if( !$foreach.hasNext )
#set( $nextToken = "$entry.sort.get(0)" )
#end
$util.qr($items.add($entry.get("_source")))
#end
$util.toJson({
"items": $items,
"total": $ctx.result.hits.total,
"nextToken": $nextToken
})
- Run amplify push. Amazon Elasticsearch domains can take a while to deploy. Take this time to read up on Elasticsearch to see what capabilities you are about to unlock.
Getting Started with Elasticsearch
- After the update completes but before creating any objects, update your Elasticsearch index mapping.
An index mapping tells Elasticsearch how it should treat the data that you are trying to store. By default, if we create an object with the field "location": { "lat": 40, "lon": -40 }, Elasticsearch will treat that data as an object type when in reality we want it to be treated as a geo_point. You use the mapping APIs to tell Elasticsearch how to do this.
Make sure you tell Elasticsearch that your location field is a geo_point before creating objects in the index, because otherwise you will need to delete the index and try again. Go to the Amazon Elasticsearch Console and find the Elasticsearch domain that contains this environment's GraphQL API ID. Click on it and open the Kibana link. For Kibana to load, you need to install a browser extension such as AWS Agent and configure it with your AWS profile's access key and secret so the browser can sign your requests to Kibana for security reasons. Once you have Kibana open, click the "Dev Tools" tab on the left and run the commands below using the in-browser console.
# Create the /todo index if it does not exist
PUT /todo
# Tell Elasticsearch that the location field is a geo_point
PUT /todo/_mapping/doc
{
"properties": {
"location": {
"type": "geo_point"
}
}
}
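If you want to sanity-check the mapping before moving on, you can run this in the same Dev Tools console and confirm that the location field shows up with type geo_point:
# Inspect the current mapping for the /todo index
GET /todo/_mapping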
- Use your API to create objects and immediately search them.
After updating the Elasticsearch index mapping, open the AWS AppSync console with amplify api console and try out these queries.
mutation CreateTodo {
createTodo(input:{
name: "Todo 1",
description: "The first thing to do",
location: {
lat:43.476446,
lon:-110.767786
}
}) {
id
name
location {
lat
lon
}
description
}
}
query NearbyTodos {
nearbyTodos(location: {
lat: 43.476546,
lon: -110.768786
}, km: 200) {
items {
id
name
location {
lat
lon
}
}
}
}
When you run Mutation.createTodo, the data will automatically be streamed via AWS Lambda into Elasticsearch such that it is available via Query.nearbyTodos almost immediately.