gRPC and Protocol Buffers in Golang
“if bridge building were like programming, halfway through we’d find out that the far bank was now 50 meters farther out, that it was actually mud rather than granite, and that rather than building a footbridge we were instead building a road bridge.”
― Sam Newman, Building Microservices
Why do we need RPCs?
Monolithic architecture vs Micro-service architecture
A monolithic application is a single-tiered software application in which the user interface and data access code are combined into a single program on a single platform. In monolithic architecture, we typically have a single indivisible codebase containing the client-side user interface, the server-side application and the database. This has its advantages: being a single unit makes logging, caching and performance monitoring easier to handle, so the application is simple to test, debug and deploy. A monolith may be a good choice in the short term, but it can come back to haunt you in the long run. As the application grows and you want to use different frameworks, languages or techniques, the whole codebase has to change; adopting a new framework or language effectively requires a full system rewrite. Gradually, the monolith becomes difficult to maintain, scale and even understand.
Micro-service architecture, a variant of the service-oriented architecture (SOA) structural style, arranges an application as a collection of loosely coupled services. In a micro-service architecture, services are fine-grained and the protocols are lightweight. Compared to a monolith, it provides something more orthogonal and more independent: separate functionalities of an application can be developed, deployed and maintained separately. These separate services communicate with each other using well-defined interfaces called APIs.
RESTful APIs with JSON are the most widely used standard for communication between applications, but gRPC is another way to perform this communication. It is built to leverage the features provided by HTTP/2 and to overcome some of the limitations of REST. Let’s not get any deeper into the technicalities and jump straight into gRPC.
RPC - Remote Procedure Call
RPC stands for Remote Procedure Call. It is flexible and can be used to connect services written in different languages. RPC is the idea of invoking a procedure on a remote server. It extends conventional local procedure calling so that the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a network connecting them. The RPC protocol lets you get the result of a procedure in the same format regardless of where it is executed, whether locally or on a remote server with better resources.

The following steps take place during an RPC:
1. The client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the same address space as the client itself.
2. The client stub marshals (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format understood by the remote server, and copying each parameter into the message.
3. The client stub passes the message to the remote server machine.
4. The server stub receives the message, demarshals (unpacks) the parameters and calls the desired routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which marshals the return values into a message.
6. The server stub then sends the result message back to the client stub.
7. The client stub demarshals the return values and execution returns to the caller.

An RPC can be summarised as a call to a remote server that contains the procedure; the procedure parameters are communicated to that server (which can also be the same system as the client). The calling process pauses while the remote procedure executes on the remote server. When the procedure finishes and obtains the results, they are communicated back to the calling environment, where execution resumes as if returning from a normal function/procedure call.
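To make this flow concrete before moving to gRPC, here is a small self-contained sketch using Go’s standard library net/rpc package (not gRPC; the Arith and Args names are made up for this illustration). The client invokes Arith.Add as if it were a local call, and the library plays the stub roles described above: it marshals the arguments, ships them over TCP and demarshals the reply.
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Args is the parameter struct that the client stub marshals.
type Args struct{ A, B int }

// Arith is the remote service; Add is the procedure being exposed.
type Arith struct{}

func (Arith) Add(args *Args, reply *int) error {
	*reply = args.A + args.B
	return nil
}

func main() {
	// Server side: register the procedure and listen on a TCP port.
	rpc.Register(Arith{})
	ln, err := net.Listen("tcp", ":1234")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// Client side: dial the server and invoke the remote procedure
	// through the stub; marshalling happens behind the scenes.
	client, err := rpc.Dial("tcp", "localhost:1234")
	if err != nil {
		log.Fatal(err)
	}
	var sum int
	if err := client.Call("Arith.Add", &Args{A: 2, B: 3}, &sum); err != nil {
		log.Fatal(err)
	}
	fmt.Println("2 + 3 =", sum) // 2 + 3 = 5
}
Both halves live in one process here purely for brevity; in practice the server and client would run on different machines, which is exactly what we will do with gRPC below.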
gRPC
gRPC (gRPC Remote Procedure Calls) is an open source remote procedure call (RPC) system initially developed at Google in 2015. It uses HTTP/2 for transport and Protocol Buffers as the interface description language, and it provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages. The most common usage scenarios include connecting services in a micro-service style architecture and connecting mobile devices and browser clients to backend services.
It enables client and server applications to communicate transparently and makes it easier to build connected systems in a micro-service architecture. The gRPC framework is developed and open-sourced by Google, which has used the underlying technologies and concepts for a long time in many of its products, including several Google Cloud products.
gRPC follows HTTP semantics over HTTP/2. It lets you build services with both synchronous and asynchronous communication models, and it supports the traditional request/response model as well as bidirectional streams. Its full-duplex streaming capability covers advanced scenarios where both the client and the server send streams of data asynchronously. gRPC is also built with mobile clients in mind, which gives you performance advantages and makes APIs easier to consume. Compared to the RESTful approach, gRPC has many advantages, including a performance gain.
By default, gRPC uses Protocol Buffers as the Interface Definition Language (IDL) and as its underlying message interchange format. Unlike JSON and XML, Protocol Buffers are not just a message interchange format; they are also used to describe the service interfaces (service endpoints). Thus Protocol Buffers define both the service interface and the structure of the payload messages, which is what makes gRPC and protobufs so powerful. In gRPC, you define services and their methods along with the payload messages. As in a typical RPC system, a gRPC client application can directly call methods on a remote server as if it were a local object in the client application.
Here is an image that illustrates the communication between a gRPC server and a client application.

The various concepts of RPC hold true for gRPC as well and they have been explained earlier.
Protobuf - Protocol Buffers
Protobuf is an alternative to other data serialisation methods such as JSON and XML. JSON and XML are flexible and human-readable, but they lack language neutrality, i.e. they are not fully optimised for data transmission between various micro-services in a language- and platform-neutral way.
Protobuf supports many languages such as Go, Java, C++, Ruby and Python. The current version of protobuf is proto3. Protobuf reduces the network bandwidth required for transmitting data by making the payload as small as possible, which makes protobuf faster than JSON or XML.
The data to be serialised is defined in .proto files (protobuf configuration files). These files contain definitions known as messages, and they can be used to generate code in any language that protobuf supports.
Protobuf data is stored and transmitted as binary. This boosts performance when compared to raw string as it takes less space and bandwidth. Though this comes with the compromise that it is not human-readable anymore. Protobuf also has the capability to serialize binary data into string.
In JSON and XML, the data and context aren’t separate and have to be combined in every message. But in protobuf they are kept separate which tremendously reduces message sizes. Consider a JSON example.
{
  "first_name": "Swagat",
  "last_name": "Parida",
  "roll_no": "16EE01025",
  "Role": "Student"
}
In XML it becomes:
<?xml version="1.0" encoding="UTF-8"?>
<root>
  <Role>Student</Role>
  <first_name>Swagat</first_name>
  <last_name>Parida</last_name>
  <roll_no>16EE01025</roll_no>
</root>
On the other hand in Protobufs, things are different. We first define a message in a .proto file as follows:
message msg {
  string first_name = 1;
  string last_name = 2;
  string roll_no = 3;
  string Role = 4;
}
As we can see, this contains all the context needed to send the data: the field names live in the .proto file, and the numbers act as keys for the real attributes (field numbers must start at 1; 0 is not a valid tag). The JSON defined earlier can now be sent as something like:
126Swagat226Parida32916EE01025427Student
This can be decoded using the key-values in the .proto file: 1: first_name, 2: last_name, 3: roll_no, 4: Role.
126Swagat translates as:
1: the field first_name,
2: the type string (as defined by protobuf conventions),
6: the length of the value, Swagat.
(This is a simplified picture; the real wire format packs the field number and the wire type into a single tag byte, as shown in the sketch below.)
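For the curious, here is a small optional sketch that prints the actual bytes for a string field with tag 1, using the protowire helper package that ships with google.golang.org/protobuf:
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	var buf []byte
	// Tag byte for field number 1 with the "bytes" wire type (used for strings):
	// (1 << 3) | 2 = 0x0a.
	buf = protowire.AppendTag(buf, 1, protowire.BytesType)
	// Length-prefixed value: 0x06 followed by the UTF-8 bytes of "Swagat".
	buf = protowire.AppendString(buf, "Swagat")
	fmt.Printf("% x\n", buf) // prints: 0a 06 53 77 61 67 61 74
}
The field names never appear on the wire; only the tag, the wire type and the length-prefixed value are sent.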
As we’ve seen, the data is transmitted as protobuf based on a configuration known as messages, which are kept in .proto files. Let's look at a message example:
syntax = "proto3";message msg{
string first_name = 0;
string last_name = 1;
string roll_no = 2;
enum Role{
STUDENT = 0;
TEACHER = 1;
OTHER = 2;
}
proto3 has only the repeated rule: a field is repeated if it represents an array of elements of the same type (for example, repeated string subjects = 5;). If a field isn't repeated, no rule should be added. Take a look at proto2 if you are using that; it has a different set of rules.
Protobuf data types:
The first kind is the scalar data types, like strings and numbers. The second is the enum data type; in our example, this is Role. We can also use embedded messages as a data type, just like in JSON or XML.
The scalar data types available in protobuf are double, float, int32, int64, uint32, uint64, sint32, sint64, fixed32, fixed64, sfixed32, sfixed64, bool, string, and bytes.
Field naming conventions:
- All field names should have lowercase letters only.
- Field names cannot have spaces; different words in a field name must be separated with underscores.
On the wire, each field is represented by its numeric tag as defined in the .proto file, which lets us send data without sending the full field names; the tag alone suffices. For example, the proto above uses field tag 2 to represent last_name. Hence, field tags must be unique inside a message and they must be integers.
The reserved keyword is used to prevent a tag (or a field name) from being redefined, e.g. reserved 2; or reserved "last_name";.
Hands On
Protobuf’s .proto files are compiled using the protobuf compiler — protoc.
Install protoc:
For Mac
brew install protobuf
For Ubuntu
sudo apt install protobuf-compiler
Make sure it is properly installed by:
protoc --version
Alternatively, it can be installed from GitHub. Make sure it is on your PATH before starting.
We need the golang grpc package:
go get -u google.golang.org/grpc
To properly use protobuf in golang we also need another package:
go get -u github.com/golang/protobuf/protoc-gen-go
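One more note before we start: the import paths used in the code below assume the project root is a package (or module) named grpc_tutorial. If you are working with Go modules, initialise the module in the project root first:
go mod init grpc_tutorial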
Now, we are ready and set to proceed. Create a .proto file as follows:
syntax = "proto3";
package proto; message Request {
int64 a = 1;
int64 b = 2;
}message Response {
int64 result = 1; }service AddService {
rpc Add(Request) returns (Response);
rpc Multiply(Request) returns (Response);
}
As you can guess, we are building a gRPC service to Add and Multiply two integers. Given the file is saved as proto/service.proto, we can compile it using the protoc compiler from the project root:
protoc --go_out=plugins=grpc:proto proto/service.proto
This creates a proto/service.pb.go file that has all the functions and tools required to support gRPC. Don't forget the plugins=grpc part! (This flag works with the github.com/golang/protobuf plugin we installed above; newer versions of protoc-gen-go have dropped it in favour of a separate protoc-gen-go-grpc plugin.)
If you investigate the .pb.go file you’ll find:
type Request struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	A int64 `protobuf:"varint,1,opt,name=a,proto3" json:"a,omitempty"`
	B int64 `protobuf:"varint,2,opt,name=b,proto3" json:"b,omitempty"`
}
This is the request struct and it contains A and B, the two variables we had specified in the .proto file. It also has many other helper functions.
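Among those helper functions are getter methods for each field; the server implementation later in this article calls them (request.GetA(), request.GetB()). The generated getter for A looks roughly like this:
func (x *Request) GetA() int64 {
	if x != nil {
		return x.A
	}
	return 0
}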
In the Response struct:
type Response struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Result int64 `protobuf:"varint,1,opt,name=result,proto3" json:"result,omitempty"`
}
This contains the Result variable as specified in the .proto file.
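If you want to see the compact binary encoding discussed earlier in action, you can marshal one of these generated messages directly. The following is a small optional sketch (not part of the tutorial's client or server); it assumes the generated package is importable as grpc_tutorial/proto and that google.golang.org/protobuf is available as a dependency:
package main

import (
	"encoding/json"
	"fmt"

	"google.golang.org/protobuf/proto"

	pb "grpc_tutorial/proto"
)

func main() {
	msg := &pb.Request{A: 42, B: 7}

	// Protobuf binary encoding of the message.
	bin, err := proto.Marshal(msg)
	if err != nil {
		panic(err)
	}

	// JSON encoding of the same struct, for a rough size comparison.
	js, err := json.Marshal(msg)
	if err != nil {
		panic(err)
	}

	fmt.Printf("protobuf: %d bytes (% x)\n", len(bin), bin)
	fmt.Printf("json: %d bytes (%s)\n", len(js), js)
}
The protobuf payload is only a few bytes, while the JSON payload has to carry the field names in every message.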
Lastly, and most importantly, the AddServiceClient interface:
type AddServiceClient interface {
	Add(ctx context.Context, in *Request, opts ...grpc.CallOption) (*Response, error)
	Multiply(ctx context.Context, in *Request, opts ...grpc.CallOption) (*Response, error)
}
This contains the Add and Multiply methods specified in the AddService service of the .proto file. The generated NewAddServiceClient constructor returns an implementation of this interface, which is what the client code below calls.
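The generated file also contains the mirror-image server-side interface, along with a RegisterAddServiceServer helper; this is what our server implementation will have to satisfy. With this version of the generator it looks roughly like:
type AddServiceServer interface {
	Add(context.Context, *Request) (*Response, error)
	Multiply(context.Context, *Request) (*Response, error)
}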
Make a client/main.go as follows
package main

import (
	"fmt"
	"grpc_tutorial/proto"
	"log"
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
	"google.golang.org/grpc"
)

func main() {
	// Connect to the gRPC server started by server/main.go.
	conn, err := grpc.Dial("localhost:4040", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	client := proto.NewAddServiceClient(conn)

	// Expose the gRPC methods over a small REST API using gin.
	g := gin.Default()
	g.GET("/add/:a/:b", func(ctx *gin.Context) {
		a, err := strconv.ParseUint(ctx.Param("a"), 10, 64)
		if err != nil {
			ctx.JSON(http.StatusBadRequest, gin.H{"error": "Invalid Parameter A"})
			return
		}
		b, err := strconv.ParseUint(ctx.Param("b"), 10, 64)
		if err != nil {
			ctx.JSON(http.StatusBadRequest, gin.H{"error": "Invalid Parameter B"})
			return
		}
		req := &proto.Request{A: int64(a), B: int64(b)}
		if response, err := client.Add(ctx, req); err == nil {
			ctx.JSON(http.StatusOK, gin.H{
				"result": fmt.Sprint(response.Result),
			})
		} else {
			ctx.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		}
	})
	g.GET("/mult/:a/:b", func(ctx *gin.Context) {
		a, err := strconv.ParseUint(ctx.Param("a"), 10, 64)
		if err != nil {
			ctx.JSON(http.StatusBadRequest, gin.H{"error": "Invalid Parameter A"})
			return
		}
		b, err := strconv.ParseUint(ctx.Param("b"), 10, 64)
		if err != nil {
			ctx.JSON(http.StatusBadRequest, gin.H{"error": "Invalid Parameter B"})
			return
		}
		req := &proto.Request{A: int64(a), B: int64(b)}
		if response, err := client.Multiply(ctx, req); err == nil {
			ctx.JSON(http.StatusOK, gin.H{
				"result": fmt.Sprint(response.Result),
			})
		} else {
			ctx.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		}
	})
	if err := g.Run(":8080"); err != nil {
		log.Fatalf("Failed to run server: %v", err)
	}
}
Make another server/main.go file
package main

import (
	"context"
	"grpc_tutorial/proto"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

// server implements the generated AddServiceServer interface.
type server struct{}

func main() {
	listener, err := net.Listen("tcp", ":4040")
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	proto.RegisterAddServiceServer(srv, &server{})
	// Registering reflection lets generic gRPC tools inspect the service.
	reflection.Register(srv)
	if e := srv.Serve(listener); e != nil {
		panic(e)
	}
}

func (s *server) Add(ctx context.Context, request *proto.Request) (*proto.Response, error) {
	a, b := request.GetA(), request.GetB()
	result := a + b
	return &proto.Response{Result: result}, nil
}

func (s *server) Multiply(ctx context.Context, request *proto.Request) (*proto.Response, error) {
	a, b := request.GetA(), request.GetB()
	result := a * b
	return &proto.Response{Result: result}, nil
}
Now run the server using:
go run server/main.go
This starts the server at localhost:4040
Run the client using:
go run client/main.go
Now you can call the RPC from the browser at:
localhost:8080/add/32/32
Or you can multiply using
localhost:8080/mult/3/2112