Testing Microservices

Sunday 1 March 2020, 13:11

Testing is an important part of software development, but there are many ways to do it and no one-size-fits-all approach. In this blog post I discuss testing microservices from the outside. This approach works really well for small microservices that have a clearly defined external API.

When testing microservices from the outside, I recommend doing the following:

  • Start any third-party dependencies (e.g. databases, message queues, Redis clusters) as separate Docker containers.
  • Stub out any collaborators (other services, internal and external) in your test process, binding them to localhost.
  • Build your service from source.
  • Run your service as a sub-process of your test process, using its environment variables to point it at your stubs and other containers.
  • Use your service's external API to interact with it in your test scenarios.
  • Clean up ready for the next test run.

I have provided a simple example of this type of testing here, and I have copied extracts from that example into each section below.

Let's look into each step in more detail and explain why I do things this way.

Third-party dependencies

Most services will have a few of these, usually some form of database and other middleware such as Redis clusters or message queues. These tools usually have broad and complex APIs that you, as a software developer, only interact with via client libraries, so stubbing them out is not practical. Instead, using readily available Docker images to create temporary instances of these things is a quick and easy way to satisfy your service's dependencies.

Modern CI tools (Circle CI is one example) have support for Docker that allows you to run one Docker container for your build and additional Docker containers for your tests to use when they run.

# .circleci/config.yml
version: 2
jobs:
  build:
    docker:
      - image: circleci/golang:1.14-buster-node # build
      - image: egymgmbh/datastore-emulator      # third-party
    steps:
      - checkout
      - run: go build -o main ./cmd/main.go
      - run: go test -v ./inttest

Locally, you can use Docker Compose to spin up the additional containers you need so that you can run your tests from your IDE.

# inttest/docker-compose.yaml
version: '3'
services:
  datastore:
    image: egymgmbh/datastore-emulator
    ports:
      - "8282:8282"

Collaborators

Collaborators are services that the service under test talks to, with APIs that are narrow and well understood. Usually these are other services that your team controls, but they can also include third-party API endpoints.

Collaborators can be stubbed out in the test, which gives you superior control and feedback. This is done by providing a minimal stub implementation of their API endpoints and, for each one, running a server in your test bound to a port on localhost.

// setup mock HTTP endpoint
mock := &mockCreditScoreService{
    score: "0",
}
s := &http.Server{
    Addr:    fmt.Sprintf(":%d", port),
    Handler: mock,
}
go func() {
    _ = s.ListenAndServe()
}()
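
The mock itself is just a type that implements http.Handler and returns canned responses. The example's actual implementation is not shown here, so the sketch below is an assumption about what it might look like, including the JSON shape of the response.

// sketch of a minimal stub (assumed, not the example's actual code):
// implement http.Handler and return a canned response
type mockCreditScoreService struct {
    score string
}

func (m *mockCreditScoreService) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    _, _ = fmt.Fprintf(w, `{"score": %q}`, m.score)
}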

Once these are available, you need to configure your service to talk to these stub endpoints rather than the real ones. This is most easily achieved by ensuring that your service takes its configuration from environment variables.
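
Inside the service itself this just means reading the endpoint from the environment at start-up. A minimal sketch, assuming the CREDIT_SCORE_URL variable used later in this post:

// inside the service under test (sketch): read the collaborator's base URL
// from the environment so a test can point it at a stub on localhost
creditScoreURL := os.Getenv("CREDIT_SCORE_URL")
if creditScoreURL == "" {
    log.Fatal("CREDIT_SCORE_URL must be set")
}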

Build your service from source

Doing this inside your test is not strictly necessary, and it could be done before your test runs, but doing it in the test makes the development experience inside your IDE much nicer. Compiler errors show up as test failures, so running the test is enough to surface any problem, and you cannot accidentally run your tests against a previous build.

For languages that aren't compiled it may be prudent to run the packaging step instead.

For this to work, your build process needs to be quick and simple so that it does not complicate or slow down your test.

// build service
command := exec.Command("go", "build", "-o", "main", "cmd/main.go")
command.Stdout = os.Stdout
command.Stderr = os.Stderr
command.Dir = wd
err = command.Run()
require.NoError(t, err)

Run your service as a sub-process

I believe that this is an important part of the approach, as it prevents you from accidentally reaching into your service in any way other than through its external API. It also ensures that your process does not have any undocumented dependencies, such as configuration files that are not in source control.

Being able to do this means that your service needs to be configurable using environment variables, command line arguments or configuration files. I personally prefer using environment variables. If your service is designed to run in Kubernetes then this will likely be the case already.

Starting the process as a sub-process is usually the easy part and can be achieved with a small amount of code in most languages.

// run service
command = exec.Command(filepath.Join(wd, "main"))
command.Stdout = os.Stdout
command.Stderr = os.Stderr
command.Dir = wd
command.Env = append(command.Env, "SERVICE_PORT=9000")
command.Env = append(command.Env, "HEALTH_CHECK_PORT=9001")
command.Env = append(command.Env, "DATASTORE_EMULATOR_HOST=0.0.0.0:8282")
command.Env = append(command.Env, "GCP_PROJECT_ID=example")
command.Env = append(command.Env, "CREDIT_SCORE_URL=http://localhost:8000/api/score")
err = command.Start()
require.NoError(t, err)

What is slightly harder is knowing when your process is ready to accept requests. Long start-up times slow your tests down, and they can also make them flaky, failing intermittently when something times out. I recommend polling a readiness check endpoint that signals when the service is available. Again, if your service is designed to run in Kubernetes, you will already have a readiness check endpoint.

// wait until service is up (simplified)
for {
    request, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:9001/live", nil)
    require.NoError(t, err)
    response, err := http.DefaultClient.Do(request)
    if err == nil {
        _ = response.Body.Close()
        break
    }
    time.Sleep(100 * time.Millisecond)
}
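
In a real test it is worth bounding this loop so that a service which never becomes ready fails fast instead of hanging the suite. One way to do that (a sketch, not taken from the example) is to put a deadline on the context:

// bound the polling loop with a deadline (sketch, not from the example)
waitCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
for {
    request, err := http.NewRequestWithContext(waitCtx, http.MethodGet, "http://localhost:9001/live", nil)
    require.NoError(t, err)
    response, err := http.DefaultClient.Do(request)
    if err == nil {
        _ = response.Body.Close()
        break
    }
    select {
    case <-waitCtx.Done():
        t.Fatal("service did not become ready in time")
    case <-time.After(100 * time.Millisecond):
    }
}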

Use your service's external API

Your service's external API should already have been designed to be easy for clients to consume, so it should also be easy to use in your tests.

createResp, err := client.CreateUser(ctx, &api.CreateUserRequest{
    FirstName:   firstName,
    LastName:    lastName,
    DateOfBirth: dob,
})
require.NoError(t, err)
getResp, err := client.GetUser(ctx, &api.GetUserRequest{
    ID: createResp.ID,
})
require.NoError(t, err)
require.Equal(t, firstName, getResp.FirstName)
require.Equal(t, lastName, getResp.LastName)
require.Equal(t, dob, getResp.DateOfBirth)

Using an RPC framework like gRPC makes it easy to define the external API precisely, with the added benefit that a client can be created in just a few lines of code.

conn, err := grpc.DialContext(ctx, ":9000", grpc.WithInsecure())
require.NoError(t, err)
client := api.NewAPIClient(conn)
return client, func() {
    _ = conn.Close()
}

Clean up

I recommend doing all of the setup above once at the beginning of your test suite and cleaning it all up afterwards, for performance reasons. Clean up matters because you want to be able to run your tests multiple times without restarting the Docker containers used for third-party dependencies in between, as restarting them ruins your local development experience.

Go has an idiomatic way of ensuring that clean up happens at the end of your test: the defer keyword.

creditScoreMock, tearDownMockCreditScoreService := startMockCreditScoreService(t, ctx, 8000)
defer tearDownMockCreditScoreService()

tearDownService := startService(t, ctx)
defer tearDownService()

client, tearDownClient := createClient(t, ctx)
defer tearDownClient()
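
The teardown functions returned by these helpers are not shown in the extracts above. For the service itself, a teardown might simply stop the sub-process and wait for it to exit; this is a sketch of an assumption, not the example's actual code:

// sketch of the teardown returned by startService (assumed): stop the
// sub-process started earlier and wait for it to exit
tearDown := func() {
    _ = command.Process.Signal(syscall.SIGTERM) // ask the service to stop
    _ = command.Wait()                          // wait for it to exit and release its resources
}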

Conclusion

All of this might seem like a tremendous amount of work just to run a few tests. In my experience, however, it is well worth the effort: the resulting tests exercise the whole service as a black box, so they generally give you more confidence than other forms of testing.

I also find that this form of testing makes the most sense for test-driven development. I usually do the test setup before writing the service, and sometimes even write the core business logic in the test before factoring it out into the service's packages.

Using this technique I have found that developing new services is much faster and that I have a lot more confidence in the result.

I hope that you have found this useful, please do not hesitate to contact me if you have any questions.