MTE API Relay
Introduction:
The MTE API Relay Container is a NodeJS application that encodes and proxies HTTP payloads to another MTE API Relay Container, which decodes and proxies the request to another API Service. The Eclypses MTE is a compiled software library combining quantum-resistant algorithms with proprietary, patented techniques to encode data. A common use case for the MTE API Relay is to add an MTE translation layer of security to HTTP Requests being made across the open internet (Example: A server-side application calls a third-party API service).
Customer Deployment
The MTE API Relay Container is typically used to encode HTTP Requests between two server applications. In Azure, the MTE API Relay Container can be orchestrated as a single- or multi-container deployment in an AKS cluster. Orchestration and setup of the container service typically takes a few hours.
Typical Customer Deployment
In an ideal situation, a customer will already have (or plan to create):
- An application that sends HTTP Requests to another RESTful API Service.
- A second RESTful API Service. This could be in another region, Azure Account, or third-party environment (assuming the third party is able to stand up an MTE API Relay Container of their own).
Typical Use Case:
- To encode/decode HTTP Requests between services, an API Relay Container must exist on each side of the transmission.
- At least one API Relay Container will run in the sending environment.
- At least one API Relay Container will run in the receiving environment.
Prerequisites and Requirements
Technical
The following elements are required before deployment:
- An application (or planned application) that communicates with a Web API using HTTP Requests
Skills or Specialized Knowledge
- Familiarity with Azure AKS
Required Environment Variables:
UPSTREAM
- Required
- The upstream application IP address, ingress, or URL that inbound requests will be proxied to.
CLIENT_ID_SECRET
- Required
- A secret that will be used to sign the x-mte-client-id header. A 32+ character string is recommended.
- Note: This will allow you to personalize your client/server relationship. It is required to validate the sender.
OUTBOUND_TOKEN
- Required
- The value appended to the x-mte-outbound-token header to denote the intended outbound recipient.
REDIS_URL
- Strongly Recommended in Production Environments
- The entry point to your Redis instance. If null, the container will use internal memory. In load-balanced workflows, a common cache location is required.
Optional Environment Variables:
PORT
- The port that the server will listen on.
- Default: 8080
- Note: If this value is changed, make sure it is also changed in your application load balancer.
DEBUG
- A flag that enables debug logging.
- Default: false
HEADERS
- A JSON object of headers that will be added to all requests and responses.
Minimal Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
REDIS_URL='redis://10.0.1.230:6379'
OUTBOUND_TOKEN='abcdefg1234567'
Full Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
REDIS_URL='redis://10.0.1.230:6379'
OUTBOUND_TOKEN='abcdefg1234567'
PORT=3000
DEBUG=true
HEADERS='{"x-service-name":"mte-relay"}'
Architecture Diagrams
Default Deployment
AKS Deployment
AKS Setup
Deploying MTE Relay on AKS is straightforward, assuming you have already configured kubectl to connect to your AKS cluster.
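If kubectl is not yet connected to your cluster, the Azure CLI can merge the cluster credentials into your local kubeconfig. The resource group and cluster names below are placeholders for your own values.
# Fetch AKS credentials and add them to your kubeconfig
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>
# Confirm kubectl is pointed at the right cluster
kubectl config current-context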
Deployment File:
Copy this deployment file to your machine, and update the values for image
and the environment variables.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-mte-api-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-mte-api-relay
  template:
    metadata:
      labels:
        app: azure-mte-api-relay
    spec:
      containers:
        - name: azure-mte-api-relay
          image: <CONTAINER_IMG>
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>
To release the deployment, run the command: kubectl apply -f deployment.yaml
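The deployment above sets only the required environment variables. The recommended and optional variables can be added to the same env list; the values below are illustrative placeholders, not defaults.
# Additional entries for the env list in deployment.yaml (illustrative values)
            - name: REDIS_URL
              value: redis://<REDIS_HOST>:6379
            - name: DEBUG
              value: "false"
            - name: HEADERS
              value: '{"x-service-name":"mte-relay"}'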
Application Load Balancer
Expose a service in your AKS cluster to receive incoming traffic and direct it to your Relay server.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-mte-api-relay-service
spec:
  type: LoadBalancer
  selector:
    app: azure-mte-api-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Run the command: kubectl apply -f service.yaml
You may then run the command kubectl get services to get information about the newly created load balancer.
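Once the service reports an address in the EXTERNAL-IP column, you can confirm the relay is reachable through the load balancer by calling the echo route described in the Testing section below; <EXTERNAL_IP> is a placeholder for that address.
kubectl get services azure-mte-api-relay-service
curl 'http://<EXTERNAL_IP>/api/mte-echo'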
Remove deployment and service
If you need to remove the deployment and service you may run the command: kubectl delete -f deployment.yaml && kubectl delete -f service.yaml
Usage Guide
How does MTE API Relay protect network traffic?
The first Relay Server takes in the initial request and validates that the x-mte-outbound-token header exists and is valid. It then attempts to establish an MTE Relay connection with the Relay Server specified in the x-mte-upstream header. If that is successful, the request data is encrypted and sent to the upstream Relay Server. The upstream Relay Server decrypts the request and proxies the data to its API server.
The reverse happens for the response, which allows the request and response to be encrypted while traversing public networks. Finally, the first Relay Server catches the encrypted response, decrypts the data, and responds to its API with the decrypted data.
Minimum Requirements
To use the MTE API Relay to protect HTTP traffic between two servers, you need to:
- Send the initial request to the outbound Relay Server
- Include the required headers:
  - x-mte-outbound-token: This header should contain the value of the OUTBOUND_TOKEN environment variable. Only requests that include this token will be allowed to make outbound requests.
  - x-mte-upstream: The destination of this request once it has been encoded. It must be an MTE API Relay, so that the request can be decoded.
An example curl will look like this:
curl --location 'http://<RELAY_SERVER_1>/api/login' \
--header 'x-mte-outbound-token: 12098312098123098120398' \
--header 'x-mte-upstream: http://<RELAY_SERVER_2>' \
--header 'Content-Type: application/json' \
--header 'Cookie: Cookie_1=value' \
--data-raw '{
"email": "jim.halpert@example.com",
"password": "P@ssw0rd!"
}'
Additional Request Options
These additional options may be included as headers in order to change how the request is encoded.
- x-mte-encode-type: Value can be MTE or MKE. Default is MKE.
- x-mte-encode-headers: Can be "true" or "false". Default is "true".
- x-mte-encode-url: Can be "true" or "false". Default is "true".
Example Curl:
curl --location 'http://<RELAY_SERVER_1>/api/login' \
--header 'x-mte-outbound-token: 12098312098123098120398' \
--header 'x-mte-upstream: http://<RELAY_SERVER_2>' \
--header 'x-mte-encode-type: MTE' \
--header 'x-mte-encode-headers: false' \
--header 'x-mte-encode-url: false' \
--header 'Content-Type: application/json' \
--data-raw '{
"email": "jim.qweqwe@example.com",
"password": "P@ssw0rd!"
}'
Testing
Once the API Relay Server is configured:
- To test that the API Service is active and running, submit an HTTP GET request to the echo route:
- curl 'https://[your_domain]/api/mte-echo'
- Successful response:
{
"echo": "test",
"time": [time_stamp]
}
Troubleshooting
Most problems can be determined by consulting the logs. Some common problems that might occur are:
- Invalid Configuration
- Network misconfiguration
Some specific error examples include:
- I cannot reach my API Relay Container.
- Double check your network settings.
- Check Logs
- Server exits with a ZodError
- This is a config validation error. Look at the "path" property to determine which of the required Environment Variables you did not set. For example, if the path property shows "upstream," then you forgot to set the environment variable "UPSTREAM."
- Server cannot reach Redis.
- Check that Redis is started in same VPC.
- If using credentials, check that credentials are correct.
MTE API Relay Server includes a Debug flag, which you can enable by setting the environment variable "DEBUG" to true. This enables additional logging that you can review in your container logs to determine the source of any issues.
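On AKS, those logs are the container logs, which can be retrieved with kubectl; the deployment name below matches the example deployment.yaml above.
# Stream logs from the relay pods
kubectl logs deployment/azure-mte-api-relay-deployment --follow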
Health Check
For short- and long-term health, monitor the logs.
Echo Route
The Echo route can be called by an automated system to determine if the service is still running. To test that the API Service is active and running, submit an HTTP Get request to the echo route:
curl 'https://[your_domain]/api/mte-echo'
Successful response:
{
"echo": true,
"time": [time_stamp]
}
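If you would like Kubernetes to poll this route automatically, one option is to add probes to the container spec in the example deployment.yaml; this is a sketch, and the timing values are illustrative.
# Optional probes for the container spec in deployment.yaml
          livenessProbe:
            httpGet:
              path: /api/mte-echo
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/mte-echo
              port: 8080
            periodSeconds: 10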
Performance Metrics
100 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 28050 | 94 | 52 | 59 | 87 |
| MTE Relay | 27763 | 92.6 | 62 | 71 | 87 |
| Result | 98.9% | 98.5% | +10ms | +12ms | +0ms |
200 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 54938 | 183 | 55 | 66 | 110 |
| MTE Relay | 53724 | 179 | 72 | 110 | 190 |
| Result | 97.8% | 97.8% | +17ms | +44ms | +80ms |
250 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 67966 | 226 | 55 | 72 | 130 |
| MTE Relay | 64465 | 215 | 100 | 170 | 280 |
| Result | 94.8% | 95.1% | +45ms | +98ms | +150ms |
Note: We recommend load-balancing requests between multiple instances of MTE Relay once you reach this volume of traffic. Please monitor your application carefully.
300 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 80249 | 267.57 | 58 | 92 | 190 |
| MTE Relay | 67696 | 225.74 | 260 | 350 | 450 |
| Result | 84.3% | 84.3% | +202ms | +258ms | +260ms |
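To run multiple instances as recommended above, you can scale the example deployment to additional replicas; remember that load-balanced instances need a shared cache, so set REDIS_URL when running more than one replica.
# Scale the example deployment to three replicas (illustrative count)
kubectl scale deployment azure-mte-api-relay-deployment --replicas=3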
Routine Maintenance
Patches/Updates
Updated images are distributed through the marketplace.
Emergency Maintenance
Handling Fault Conditions
Tips for solving error states:
- Review all tips from the Troubleshooting section above.
- Check Logs for more information
- Configuration mismatch
- Double-check environment variables for errors
How to recover the software
The MTE API Relay Container AKS deployment can be relaunched. While current sessions may be affected, the container will seamlessly manage the re-pairing process with the upstream MTE API Relay Server, and the end user should not be affected.
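With the example AKS deployment, one way to relaunch the containers is a rolling restart; the deployment name below matches the example deployment.yaml.
# Restart all relay pods
kubectl rollout restart deployment/azure-mte-api-relay-deployment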
Support
The Eclypses support center is available to assist with inquiries about our products from 8:00 am to 8:00 pm MST, Monday through Friday, excluding Eclypses holidays. Our committed team of expert developer support representatives handles all incoming questions directed to the following email address: customer_support@eclypses.com