Lorenzo Canese
10 May 2021
In 2018, Amazon Web Services (AWS) announced Go as a supported language for Lambda, making it possible to write serverless applications in one of the most popular languages for cloud development. The main benefit of a serverless architecture is the ability to shift operational responsibilities to the cloud provider, in order to focus solely on software development. This comes together with a boost in overall scalability and availability, and with a cost model that naturally leads to "pay as you go" formulas.
Given all the aforementioned advantages, serverless development is gaining increasing popularity with AWS Lambda being a widespread solution in this field.
Designed at Google in 2007, Go is a modern cross-platform, compiled and statically typed programming language that has gained huge popularity in recent years, mainly for its simplicity and performance. You can find plenty of articles about Go's strengths on the web, and lots of examples as well, since it has rapidly been adopted by major companies and is the core language of many tools and systems widely used in cloud architectures (Kubernetes and Docker, for example). On my side, I started developing in Go five years ago and I still use it daily for cloud development, microservices, scripting and CLI tools.
In this context, new patterns have also emerged in infrastructure management, leading to a set of best practices known as infrastructure as code. The underlying idea is to treat infrastructure management like software development, relying on common practices such as versioning, reusability and collaboration to automate releases and updates of infrastructure components. HashiCorp Terraform is one of the tools that allow the codification of infrastructure, supporting multiple cloud providers.
It's time to get our hands dirty! We'll expose an HTTP endpoint on API Gateway and a Lambda function handling the incoming request.
Let's start with the code of our Lambda function: we'll rely on the official AWS SDK for Go, which models the AWS event structures. In our scenario, we'll set up a proxy integration, which means that our Lambda function will receive the whole incoming request as its input: the SDK provides the APIGatewayProxyRequest struct for this case.
We can set up our handler function with the following signature:
func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// Our code here
}
We won't implement any complex logic, keeping our focus on the big picture: releasing our Lambda function and making it work!
Thus, we'll just return the current time in a simple JSON object. We can define our HTTP response with the APIGatewayProxyResponse struct, setting status code, body and headers:
package main

import (
	"context"
	"encoding/json"
	"net/http"
	"time"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

type timeEvent struct {
	Time string `json:"time"`
}

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	t := timeEvent{Time: time.Now().String()}
	b, err := json.Marshal(t)
	if err != nil {
		return events.APIGatewayProxyResponse{}, err
	}
	return events.APIGatewayProxyResponse{
		StatusCode: http.StatusOK,
		Body:       string(b),
	}, nil
}

// main registers the handler with the Lambda runtime.
func main() {
	lambda.Start(handleRequest)
}
And that's it: we can evolve the code later, but this is a good starting point.
We now have to compile our source code, and we can set up a make target to do it inside a Docker image, so that there's no need to install Go on our machine. The target looks like this:
compile:
	docker run -e GOOS=linux -e GOARCH=amd64 -v $$(pwd):/app -w /app golang:1.13 go build -ldflags="-s -w" -o bin/aws-lambda-go
It's now time to dive into the infrastructure definition. As you may know, Lambda code must be uploaded as a zip archive: we can leverage the Terraform archive provider to compress our binary file:
data "archive_file" "zip" {
  type        = "zip"
  source_file = "bin/aws-lambda-go"
  output_path = "aws-lambda-go.zip"
}
We can now define our Lambda and its parameters, giving it a name, runtime configuration, the reference to the archive and an IAM role:
resource "aws_lambda_function" "time" {
  function_name    = "time"
  filename         = "aws-lambda-go.zip"
  handler          = "aws-lambda-go"
  source_code_hash = "${data.archive_file.zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  runtime          = "go1.x"
  memory_size      = 128
  timeout          = 10
}

resource "aws_iam_role" "iam_for_lambda" {
  name               = "iam_for_lambda"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
We want to expose our function over an HTTP endpoint, or better, make our Lambda the backend of an HTTP API. We're going to use AWS API Gateway to set up a simple GET endpoint responding to the /time path:
resource "aws_api_gateway_rest_api" "api" {
  name = "time_api"
}

resource "aws_api_gateway_resource" "resource" {
  path_part   = "time"
  parent_id   = "${aws_api_gateway_rest_api.api.root_resource_id}"
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
}

resource "aws_api_gateway_method" "method" {
  rest_api_id   = "${aws_api_gateway_rest_api.api.id}"
  resource_id   = "${aws_api_gateway_resource.resource.id}"
  http_method   = "GET"
  authorization = "NONE"
}
The last step is to configure the API Gateway integration, binding our Lambda to the defined HTTP endpoint and setting up the deployment of our API under the v1 stage. Last but not least, we want the URL of our endpoint: we'll get it as the output of our release.
resource "aws_api_gateway_integration" "integration" {
  rest_api_id             = "${aws_api_gateway_rest_api.api.id}"
  resource_id             = "${aws_api_gateway_resource.resource.id}"
  http_method             = "${aws_api_gateway_method.method.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "${aws_lambda_function.time.invoke_arn}"
}

resource "aws_lambda_permission" "apigw_lambda" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.time.function_name}"
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.api.execution_arn}/*/*/*"
}

resource "aws_api_gateway_deployment" "time_deploy" {
  depends_on  = [aws_api_gateway_integration.integration]
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  stage_name  = "v1"
}

output "url" {
  value = "${aws_api_gateway_deployment.time_deploy.invoke_url}${aws_api_gateway_resource.resource.path}"
}
To ease the development and deployment of our infrastructure, we've provided a Docker image containing a Terraform installation and the providers needed for our example, so that it's possible to run the example even without installing Terraform.
The image is available on Docker Hub; you can pull it and use it. The only convention is that you need to mount your Terraform files under the /srv folder without overriding the provided /srv/providers.tf file. This avoids re-running the terraform init command, since it has already been executed during the image build.
The three basic Terraform commands plan, apply and destroy are available as the corresponding make targets, which use three environment variables related to the AWS account you want to deploy this example to. So, do not forget to set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION to the proper values for your AWS environment.
Running make plan you'll get a preview of the actions Terraform will perform, and you can review the output of our work. If you're happy with it, running make apply will actually deploy the infrastructure on AWS. The output of this command is the endpoint URL, which you can call with curl or any other client to test our work!
Do not forget to run make destroy once you're done with your tests, so that you delete all the resources on AWS: although this example fits the free tier, we'd better prevent unexpected surprises :)
In this post we've seen how to develop, configure and deploy an AWS Lambda function handling incoming requests for a given HTTP endpoint. Although our example is simple, it's a good starting point to develop your own serverless system, adding more endpoints, more logic on the backend side and, optionally, a fine-grained API Gateway configuration to enable compression or CORS.
The code is available on GitHub.
That's all for now, reach out on my social links for any feedback.
Thank you!