Testing with Microcks
It is likely you have experienced the painful situation of deploying to production, only to find out that an API service you integrate with has broken its contract. How can we effectively ensure this does not happen?
Microcks offers mocks but can also be used for contract conformance testing of APIs or services under development. You spend a lot of time describing request/response pairs and matching rules: it would be a shame not to reuse these samples as test cases once development is on its way!
You can find on the internet many different representations of how the various testing techniques relate to one another and how they should ideally be combined into a robust testing pipeline. At Microcks, we particularly like the Watirmelon representation below. Microcks clearly allows you to realize automated API tests, focusing more precisely on contract or conformance testing.
The purpose of Microcks tests is precisely to check that the interaction contract - as represented by an OpenAPI or AsyncAPI specification, a Postman collection or any other supported artifact - that consumer and producer agreed upon is actually respected by the API provider. In other words: to check that an implementation of the API conforms to its contract.
In order to help you gain confidence in your implementations, we developed the Conformance index and Conformance score metrics that you can see at the top right of each API | Service details page:
These metrics are available from the `1.6.0` version of Microcks.
The Conformance index is a kind of grade that estimates how well your API contract is actually covered by the samples you’ve attached to it. We compute this index based on the number of samples you’ve got on each operation, the complexity of the dispatching rules of these operations, and so on. It represents the maximum possible conformance score you may achieve if all your tests are successful.
The Conformance score is the current score computed during your last test execution. We also added a trend computation showing whether things are getting better or worse compared to your history of tests on this API.
Once you have activated labels filtering on your repository and have run a few tests, Microcks is also able to give you an aggregated view of your API patrimony in terms of Conformance Risks. The tree map below is displayed on the Dashboard page and represents risks in terms of average score per group of APIs (depending on the concept you chose, it could be per domain, per application, per team, and so on).
This visualization allows you to have a clear understanding of your conformance risks at first glance!
From the page displaying basic information on your API or Service mocks, you have the ability to launch new tests against different endpoints that may represent the different environments in your development process. Hitting the NEW TEST… button leads you to the following form, where you will be able to specify a target URL for the test, as well as a Runner, which is the testing strategy for your new launch:
While it is convenient to launch tests manually on demand, it may be interesting to launch new tests automatically, for example when a new deployment of the application occurs. Microcks allows such automation by offering an API for ease of integration (see here for more details).
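As an illustration, here is a minimal Python sketch that triggers a new test through Microcks’ REST API. The `/api/tests` path and the payload field names (`serviceId`, `testEndpoint`, `runnerType`, `timeout`) are assumptions based on common Microcks usage, and the hosts are placeholders; verify them against the API reference of your Microcks version.

```python
import json
from urllib import request


def build_test_spec(service_id, test_endpoint, runner_type, timeout_ms=5000):
    """Assemble the payload for Microcks' test launch API.

    Field names are assumptions - check your Microcks API reference.
    """
    return {
        "serviceId": service_id,        # e.g. "Pastry API:1.0.0" (name:version)
        "testEndpoint": test_endpoint,  # URL of the System Under Test
        "runnerType": runner_type,      # e.g. "OPEN_API_SCHEMA"
        "timeout": timeout_ms,          # timeout in milliseconds
    }


def launch_test(microcks_url, spec, token):
    """POST the spec to the (assumed) /api/tests endpoint."""
    req = request.Request(
        f"{microcks_url}/api/tests",
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # returns the created test resource
        return json.load(resp)


spec = build_test_spec("Pastry API:1.0.0",
                       "http://staging.example.com/api/pastry",
                       "OPEN_API_SCHEMA")
print(spec["serviceId"])  # Pastry API:1.0.0
```

Such a script can be plugged into a CI/CD pipeline right after a deployment step, so every new rollout of the application gets a fresh conformance check.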
Service under test
Service under test is simply the reference of the API/Service specification we’d like to test. This is a couple made of the Service Name and the Service Version.
The Test Endpoint is simply a URI where a deployed component is providing the API or Service. In the testing literature, this is usually defined as the URI of the System Under Test.
HTTP based APIs
For HTTP based APIs (REST, SOAP, GraphQL or gRPC), this is a simple URL that should respect the following pattern:

`http(s)://api.endpoint.url:port/api/path`
Event based APIs
For Event based APIs tested through AsyncAPI, the pattern depends on the protocol binding you’d like to test.
Kafka Test Endpoints have the following form, with optional parameters placed just after a `?` and separated using `&`:

`kafka://kafka.broker.url:port/kafka.topic.name[?option1=value1&option2=value2]`

| Parameter | Description |
| --- | --- |
| `registryUrl` | The URL of the schema registry that is associated to the tested topic. This parameter is required when using and testing Avro encoded messages. |
| `registryUsername` | The username used if access to the registry is secured. |
| `registryAuthCredSource` | The source for authentication credentials, if any. Valid values are just `USER_INFO`. |

As an example, you may have this kind of Test Endpoint value:

`kafka://mybroker.example.com:443/test-topic?registryUrl=https://schema-registry.example.com&registryAuthCredSource=USER_INFO`
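To avoid hand-assembling such URLs, the optional-parameter syntax (a `?` followed by `&`-separated pairs) can be generated programmatically. A minimal sketch, where the broker and registry hosts are made-up placeholders:

```python
def kafka_test_endpoint(broker, topic, **options):
    """Build a Kafka Test Endpoint string: optional parameters are
    appended after a '?' and separated with '&', as described above."""
    endpoint = f"kafka://{broker}/{topic}"
    if options:
        endpoint += "?" + "&".join(f"{k}={v}" for k, v in options.items())
    return endpoint


# Broker and registry hosts below are placeholders, for illustration only.
print(kafka_test_endpoint("mybroker.example.com:443", "test-topic",
                          registryUrl="https://schema-registry.example.com"))
# kafka://mybroker.example.com:443/test-topic?registryUrl=https://schema-registry.example.com
```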
MQTT Test Endpoints have the following form, with no optional parameters:

`mqtt://mqtt.broker.url:port/mqtt.topic.name`
AMQP 0.9.1 Test Endpoints have the following form, with optional parameters placed just after a `?` and separated using `&`:

`amqp://amqp.broker.url:port/amqp.destination.type/amqp.destination.name[?option1=value1&option2=value2]`

`amqp.destination.type` is used to specify whether we should connect to a queue (use the `q` value) or to an exchange, specifying its type: `d` for direct, `f` for fanout, `t` for topic or `h` for headers. Then you have to specify either the queue or exchange name in `amqp.destination.name`.

Depending on the type of destination, you will need additional optional parameters as specified below:

| Parameter | Description |
| --- | --- |
| `routingKey` | Used to specify a routing key for direct or topic exchanges. If not specified, the `*` wildcard is used. |
| `durable` | Flag telling if the exchange to connect to is durable or not. Default is `false`. |
| `h.` prefixed parameters | A bunch of headers, whose names start with the `h.` prefix, used when connecting to a headers exchange. |

As an example, you may have these kinds of Test Endpoint values:

`amqp://rabbitmq.example.com:5672/q/my-queue`
`amqp://rabbitmq.example.com:5672/t/my-topic?routingKey=samples&durable=true`
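Because the AMQP form packs the destination type and name into the URL path, a quick parser makes the convention explicit. A sketch assuming endpoints shaped like the examples above (no virtual-host segment):

```python
# Single-letter destination type codes, as described above.
AMQP_TYPES = {"q": "queue", "d": "direct", "f": "fanout",
              "t": "topic", "h": "headers"}


def parse_amqp_endpoint(endpoint):
    """Split an amqp:// Test Endpoint into broker, destination type/name
    and optional parameters (after '?', separated by '&')."""
    assert endpoint.startswith("amqp://")
    rest = endpoint[len("amqp://"):]
    rest, _, query = rest.partition("?")
    broker, dtype, dname = rest.split("/", 2)
    options = dict(p.split("=", 1) for p in query.split("&")) if query else {}
    return {"broker": broker,
            "destinationType": AMQP_TYPES[dtype],
            "destinationName": dname,
            "options": options}


info = parse_amqp_endpoint(
    "amqp://rabbitmq.example.com:5672/t/my-topic?routingKey=samples&durable=true")
print(info["destinationType"], info["options"]["durable"])
# topic true
```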
WebSocket Test Endpoints have the following form, with no optional parameters:

`ws://ws.endpoint.url:port/channel.name`
NATS Test Endpoints have the following form, with no optional parameters:

`nats://nats.broker.url:port/nats.topic.name`
Google PubSub Test Endpoints have the following form, with no optional parameters:

`googlepubsub://google-cloud-project-id/pubsub.topic.name`
Amazon Simple Queue Service Test Endpoints have the following form, with optional parameters placed just after a `?` and separated using `&`:

`sqs://aws.region/sqs.queue.name[?option1=value1]`

| Parameter | Description |
| --- | --- |
| `overridenEndpoint` | The AWS endpoint override URI used for API calls. Handy for using SQS via LocalStack. |
Amazon Simple Notification Service Test Endpoints have the following form, with optional parameters placed just after a `?` and separated using `&`:

`sns://aws.region/sns.topic.name[?option1=value1]`

| Parameter | Description |
| --- | --- |
| `overridenEndpoint` | The AWS endpoint override URI used for API calls. Handy for using SNS via LocalStack. |
As stated above, Microcks offers different strategies for running tests on the endpoints where the microservices under development are deployed. Such strategies are implemented as Test Runners. Here are the default Test Runners available within Microcks:
| Test Runner | API/Service Types | Description |
| --- | --- | --- |
| `HTTP` | REST and SOAP | Simplest test runner that only checks that valid target endpoints are deployed and available: the test succeeds when the endpoint returns a valid HTTP response. |
| `SOAP_HTTP` | SOAP | Extension of the HTTP runner that also checks that the response is syntactically valid regarding the SOAP WebService contract. It realizes a validation of the response payload using the XSD schemas associated to the service. |
| `SOAP_UI` | REST and SOAP | When the API artifact is defined using SoapUI: ensures that the assertions put into test cases hold. Reports failures. |
| `POSTMAN` | REST | When the API artifact is defined using Postman: executes the test scripts as specified within the Postman collection. Reports failures. |
| `OPEN_API_SCHEMA` | REST | When the API artifact is defined using OpenAPI: executes example requests and checks that results have the expected HTTP status and that the payload is compliant with the JSON / OpenAPI schema specified in the OpenAPI specification. |
| `ASYNC_API_SCHEMA` | EVENT | When the API artifact is defined using AsyncAPI: connects to the specified broker endpoint, consumes messages and checks that the payload is compliant with the JSON / Avro / AsyncAPI schema specified in the AsyncAPI specification. |
| `GRPC_PROTOBUF` | GRPC | When the API artifact is defined using gRPC: executes example requests and checks that the result payload is compliant with the Protocol Buffers schema specified in the gRPC proto file. |
| `GRAPHQL_SCHEMA` | GRAPHQL | When the API is of type GraphQL: executes example requests and checks that the result payload is compliant with the GraphQL schema of the API. |
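The choice of runner usually follows from the artifact the API was imported from. The lookup below condenses that rule of thumb; the runner enum names are assumptions borrowed from the Microcks API, so verify them against your installation:

```python
# Maps the artifact type an API was imported from to the matching
# Test Runner value (names assumed from the Microcks API).
RUNNER_FOR_ARTIFACT = {
    "openapi": "OPEN_API_SCHEMA",
    "asyncapi": "ASYNC_API_SCHEMA",
    "postman": "POSTMAN",
    "soapui": "SOAP_UI",
    "grpc": "GRPC_PROTOBUF",
    "graphql": "GRAPHQL_SCHEMA",
}


def pick_runner(artifact_type):
    """Return the schema-aware runner matching the artifact an API was
    imported from, falling back to the basic HTTP runner."""
    return RUNNER_FOR_ARTIFACT.get(artifact_type.lower(), "HTTP")


print(pick_runner("OpenAPI"))  # OPEN_API_SCHEMA
```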
Depending on the type of Service or Test you are running, the specification of a Timeout may be mandatory. This is a numerical value expressed in milliseconds.
Depending on the Test Endpoint you are connecting to, you may need additional authentication information, like credentials or custom X509 certificates. You may reuse External Secrets that have been made available in the Microcks installation by the administrator.
Getting tests history and details
The tests history for an API/Service is easily accessible from the API | Service summary page. Microcks keeps a history of all the tests launched on an API/Service version. Successes and failures are kept in the database with a unique identifier and a test number, to allow you to compare cases of success and failure.
Specific test details can be visualized: Microcks also records the request and response pairs exchanged with the tested endpoint, so that you’ll be able to access payload content as well as headers. Failures are tracked, and violated assertion messages are displayed as shown in the screenshot below: