
gRPC to AWS Lambda: Is it Possible?

By Author, March 7, 2019 · 5 min read

At Coinbase, we have been evaluating gRPC for new services and have had a positive experience so far. We’ve also been closely watching the trends of the industry towards “Serverless” architectures. We’ve been experimenting with the AWS Lambda platform as a location to run various types of workloads, including API endpoints. However, we are curious if there’s a way to unify these.

There are two main ways to invoke a Lambda function synchronously: API Gateway or a direct "invoke" API call to AWS. API Gateway accepts incoming requests over HTTP/2, but bundles them and forwards them on to Lambda as HTTP/1.1 requests. Direct invocation of Lambdas requires tight coupling with the AWS SDKs.

I was curious about whether it was even possible to make a gRPC call through API Gateway to a Lambda and have a response return all the way to the client. It turns out it's very close to possible for unary request / response gRPC calls.

Prior to diving in here, it can be helpful to read gRPC On HTTP/2: Engineering A Robust, High Performance Protocol to gain a deeper understanding of gRPC itself.


To get started, I followed the AWS SAM quick start guide to get a Hello World Lambda deployed.

Then I started bootstrapping a very simple service with a single RPC that accepted and sent a very simple message.

```proto
// Location: api/hello.proto
syntax = "proto3";

option go_package = "api";

service Prod {
  rpc Alive(AliveRequest) returns (AliveResponse) {}
}

message AliveRequest {
  string message = 1;
}

message AliveResponse {
  string message = 1;
}
```

Example Protobuf Definition

To build the proto into compiled Golang, I installed the protoc compiler for Golang and compiled the hello protobuf file into a Golang package.

```shell
brew install protobuf
go get -u github.com/golang/protobuf/protoc-gen-go
go get -u google.golang.org/grpc
protoc -I=./api --go_out=plugins=grpc:./api ./api/hello.proto
```

Generate protobuf

I created a very simple gRPC Golang client for the RPC API.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	"api"
)

var (
	tls                = flag.Bool("tls", true, "Connection uses TLS if true, else plain TCP")
	caFile             = flag.String("ca_file", "", "The file containing the CA root cert file")
	serverAddr         = flag.String("server_addr", "<api-gateway-host>:443", "The server address in the format of host:port")
	serverHostOverride = flag.String("server_host_override", "<api-gateway-host>", "The server name used to verify the hostname returned by the TLS handshake")
)

func main() {
	flag.Parse()
	var opts []grpc.DialOption
	if *tls {
		creds, err := credentials.NewClientTLSFromFile(*caFile, *serverHostOverride)
		if err != nil {
			log.Fatalf("Failed to create TLS credentials %v", err)
		}
		opts = append(opts, grpc.WithTransportCredentials(creds))
		opts = append(opts, grpc.WithWaitForHandshake())
	} else {
		opts = append(opts, grpc.WithInsecure())
	}
	conn, err := grpc.Dial(*serverAddr, opts...)
	if err != nil {
		log.Fatalf("fail to dial: %v", err)
	}
	defer conn.Close()

	client := api.NewProdClient(conn)

	resp, err := client.Alive(context.TODO(), &api.AliveRequest{Message: "Hello"})
	if err != nil {
		log.Fatalf("err making request: %v", err)
	}
	fmt.Println(resp.Message)
}
```


Test gRPC Client

First Attempt: Send out a gRPC Request

The first error that comes up is related to the content-type of the response.

err making request: rpc error: code = Internal desc = transport: received the unexpected content-type “application/json”

This makes sense: the default Lambda is sending JSON back to the gRPC client, which won't work because gRPC clients expect "application/grpc+proto" in response. The first fix involves setting the correct content type in the API response from Lambda. This can be done in the Headers field of the APIGatewayProxyResponse struct as below.

```go
func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		Body: "Hello, world.",
		Headers: map[string]string{
			"Content-Type": "application/grpc+proto",
		},
		StatusCode: 200,
	}, nil
}
```

Returning the proper Content-Type header

Second Attempt: Protobuf Response

After returning the correct content type, the next error is absolutely bizarre.

err making request: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (1701604463 vs. 4194304)

gRPC clients enforce a default maximum receive message size of 4 MB (the 4194304 in the error message), and our function was clearly not returning that much data. However, we are simply returning a raw string rather than a serialized protobuf, so it seems now is the time to return one.

The next handler looks like this:

```go
func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	message := &api.AliveResponse{Message: "Hello, world."}

	b, err := proto.Marshal(message)
	if err != nil {
		return events.APIGatewayProxyResponse{
			StatusCode: 500,
		}, err
	}

	return events.APIGatewayProxyResponse{
		Body: base64.StdEncoding.EncodeToString(b),
		Headers: map[string]string{
			"Content-Type": "application/grpc+proto",
			"grpc-status":  "0",
		},
		IsBase64Encoded: true,
		StatusCode:      200,
	}, nil
}
```

Handler with marshaled protobuf

First, we construct a protobuf struct, then serialize it to a byte array, then base64-encode it into the final response body. Base64 encoding is required in order for API Gateway to return a binary response.

There are also two incantations required to actually get API Gateway to convert the response to binary. First, we need to set the integration response's content handling to "CONVERT_TO_BINARY".

This can be done with the CLI command below:

```shell
aws apigateway update-integration-response \
  --rest-api-id XXX \
  --resource-id YYY \
  --http-method GET \
  --status-code 200 \
  --patch-operations '[{"op": "replace", "path": "/contentHandling", "value": "CONVERT_TO_BINARY"}]'
```

In addition, the “Binary Media Types” setting needs to be set to “*/*”.


Note: AWS Console Screenshot

Third Attempt: Length-prefixed Response

However, we still get the same ResourceExhausted error. Let's double-check that API Gateway is properly sending back a binary protobuf response.

To debug more, we can set:

```shell
export GODEBUG=http2debug=2
```

This will give us output about what is going back and forth over the wire for the HTTP/2 requests.

```
http2: Framer 0xc0002c8380: wrote SETTINGS len=0
http2: Framer 0xc0002c8380: read SETTINGS len=18, settings: MAX_CONCURRENT_STREAMS=128, INITIAL_WINDOW_SIZE=65536, MAX_FRAME_SIZE=16777215
http2: Framer 0xc0002c8380: read WINDOW_UPDATE len=4 (conn) incr=2147418112
http2: Framer 0xc0002c8380: wrote SETTINGS flags=ACK len=0
http2: Framer 0xc0002c8380: wrote HEADERS flags=END_HEADERS stream=1 len=84
http2: Framer 0xc0002c8380: wrote DATA flags=END_STREAM stream=1 len=12 data="\x00\x00\x00\x00\a\n\x05Hello"
http2: Framer 0xc0002c8380: read SETTINGS flags=ACK len=0
http2: Framer 0xc0002c8380: read HEADERS flags=END_HEADERS stream=1 len=313
http2: Framer 0xc0002c8380: read DATA stream=1 len=15 data="\n\rHello, world."
http2: Framer 0xc0002c8380: read DATA flags=END_STREAM stream=1 len=0 data=""
err making request: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (222848364 vs. 4194304)
http2: Framer 0xc0002c8380: wrote WINDOW_UPDATE len=4 (conn) incr=15
```

HTTP/2 Response From API Gateway

We see that as our request goes up, it writes a DATA frame with the content “\x00\x00\x00\x00\a\n\x05Hello”. However, what we get back is “\n\rHello, world.”. What are all those \x00 values in the request? This turns out to be a special serialization format that gRPC uses called “Length-Prefixed Messages”. On the wire, each message is framed as a one-byte compressed flag, a four-byte big-endian message length, and then the serialized message itself.

See the gRPC HTTP/2 protocol mapping for more detail.

Here’s a quick and dirty implementation of the prefix construction with an updated handler.

```go
const (
	payloadLen = 1
	sizeLen    = 4
	headerLen  = payloadLen + sizeLen
)

func msgHeader(data []byte) (hdr []byte, payload []byte) {
	hdr = make([]byte, headerLen)
	hdr[0] = byte(uint8(0)) // uncompressed

	// Write length of payload into buf
	binary.BigEndian.PutUint32(hdr[payloadLen:], uint32(len(data)))
	return hdr, data
}

func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	message := &api.AliveResponse{Message: "Hello, world."}

	b, err := proto.Marshal(message)
	if err != nil {
		return events.APIGatewayProxyResponse{
			StatusCode: 500,
		}, err
	}

	hdr, data := msgHeader(b)
	hdr = append(hdr, data...)

	return events.APIGatewayProxyResponse{
		Body: base64.StdEncoding.EncodeToString(hdr),
		Headers: map[string]string{
			"Content-Type": "application/grpc+proto",
			"grpc-status":  "0",
		},
		IsBase64Encoded: true,
		StatusCode:      200,
	}, nil
}
```

Final Attempt: Missing trailing headers

After returning the correct prefix, we run into the final error.

err making request: rpc error: code = Internal desc = server closed the stream without sending trailers

This error is saying that API Gateway closed the stream without returning trailing headers. It turns out that gRPC clients make a fundamental assumption that the response ends with trailing headers sent as the stream is closed. For example, here is what a proper response looks like:

```
HEADERS (flags = END_HEADERS)
:status = 200
grpc-encoding = gzip
content-type = application/grpc+proto

DATA
<Length-Prefixed Message>

HEADERS (flags = END_STREAM, END_HEADERS)
grpc-status = 0 # OK
```
This is helpful for streaming, but may not be needed for unary request / response RPC invocations. In fact, there is an entire effort within the gRPC community to build compatibility with HTTP/1.1 or browsers with gRPC-Web.

Next Steps

To recap, our goal through this exercise was to see how closely we could get to a Lambda successfully responding to a gRPC client’s request, without modifying the client. We were able to make it almost all the way, but ran into a fundamental assumption that gRPC clients make about trailing headers.

There are two possible paths forward: either API Gateway needs to respond with a proper trailing HEADERS frame, or gRPC clients need to relax their constraint of expecting trailing headers for unary request / response calls.

However, is it actually worth communicating with Lambdas over gRPC? Maybe. For us, there would be value in standardizing API interactions across Lambdas and containerized environments. The typed interface of Protobuf behind gRPC ensures a strong contract between the client and server that would be difficult to enforce otherwise.

Unfortunately, gRPC behind Lambda would not support any of the server, client, or bidirectional streaming capabilities that make gRPC shine in a highly stateful environment.

There are other interesting solutions in the community to this problem, such as Twirp (by Twitch) and gRPC-Web.

If you’re interested in helping us build a modern, scalable platform for the future of crypto markets, we’re hiring in San Francisco and Chicago!
