MTE Relay Server on AWS
Introduction:
MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. MTE Relay Server acts as a proxy-server that sits in front of your normal backend, and communicates with an MTE Relay Client, encoding and decoding all network traffic. MTE Relay Server is highly customizable and can be configured to integrate with a number of other services through the use of custom adapters.
Below is a visual example of a client application using MTE Relay Client communicating with an MTE Relay Server, which then proxies decoded traffic to your backend services.
Prerequisites and Requirements
Technical
- An existing project using REST HTTP calls.
Skills or Specialized Knowledge
- Familiarity with ECS and EKS.
- Experience with the AWS CLI
Deployment Guides
Below are deployment guides for ECS, EKS, and manual deployments.
Elastic Container Service (ECS)
We offer two AWS CloudFormation templates:
- Production-ready template with load balancing; requires an existing SSL certificate for HTTPS.
- Demo template that is a lightweight deployment for testing MTE Relay in a development capacity.
Requirements:
- Git
- AWS CLI
- AWS Permissions to launch resources.
CloudFormation Guide
- Clone the Git repository with the templates.
- Choose between the demo template or the production template.
  - The production template includes multiple instances, load balancing, and requires an existing SSL certificate for HTTPS.
  - The demo template is good for deploying MTE Relay Server in a development capacity.
- Modify the parameters.json file with your specific configuration.
  - See the configuration section for details on each option.
- In a terminal or command line, cd into the directory you're modifying (demo or production).
- Copy/paste the Create command from the deploy.sh file into your terminal (a rough sketch is shown after this list).
  - The Delete command can be used to delete the stack and all resources it created.
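The exact Create and Delete commands are defined in each directory's deploy.sh. As a hedged sketch only (the stack name, template file name, and capability flag below are assumptions, not the file's exact contents), they resemble standard CloudFormation CLI calls:
# Create the stack (names and flags are placeholders; use the command from deploy.sh)
aws cloudformation create-stack \
  --stack-name mte-relay-demo \
  --template-body file://template.yaml \
  --parameters file://parameters.json \
  --capabilities CAPABILITY_NAMED_IAM

# Watch deployment progress
aws cloudformation describe-stacks --stack-name mte-relay-demo

# Delete the stack and all resources it created
aws cloudformation delete-stack --stack-name mte-relay-demo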
Elastic Kubernetes Service (EKS)
Deploying MTE Relay on EKS is straightforward, assuming you have already configured kubectl to connect to your EKS cluster.
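If kubectl is not yet pointed at your cluster, the AWS CLI can generate the kubeconfig entry for you. A minimal sketch, assuming a cluster name and region that you should replace with your own:
# Point kubectl at your EKS cluster (region and cluster name are placeholders)
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

# Verify connectivity
kubectl get nodes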
Deployment File:
Copy this deployment.yaml file to your machine, and update the values for image and the environment variables.
# File: deployment.yaml
# MTE Relay Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-mte-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-mte-relay
  template:
    metadata:
      labels:
        app: aws-mte-relay
    spec:
      imagePullSecrets:
        - name: awscrcreds
      containers:
        - name: aws-mte-relay
          image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.1
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: CORS_ORIGINS
              value: <YOUR CORS ORIGINS HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>
            - name: AWS_REGION
              value: <AWS REGION HERE>
---
# MTE Relay Service
apiVersion: v1
kind: Service
metadata:
  name: aws-mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: aws-mte-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Commands
# Deploy the MTE Relay Deployment and Service
kubectl apply -f deployment.yaml

# Verify the pod, deployment, and service are running
kubectl get all

# Remove the deployment and service when finished
kubectl delete -f deployment.yaml
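The deployment above references an image pull secret named awscrcreds. If your cluster does not already have one, a minimal sketch of creating it from an ECR login token is shown below; the default namespace is assumed, and ECR tokens expire after roughly 12 hours, so many teams refresh this secret on a schedule.
# Create the awscrcreds image pull secret from an ECR login token
kubectl create secret docker-registry awscrcreds \
  --docker-server=709825985650.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"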
Docker Image
MTE Relay Server is distributed as a Docker image, so you can pull it onto your machine and run it however you like. You can use Docker Swarm, K8s, K3s, Podman, or another container runtime to run the container.
To pull the Docker image, authenticate with ECR and pull the image:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.1
Consult the configuration section to understand what environment variables need to be set when running this image.
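As a quick, hedged sketch of running the image directly with Docker (the environment variable values below are placeholders to replace with your own configuration):
# Run MTE Relay Server locally (all values are placeholders)
docker run -d -p 8080:8080 \
  -e UPSTREAM='https://api.my-company.com' \
  -e CLIENT_ID_SECRET='<32+ character secret>' \
  -e CORS_ORIGINS='https://www.my-company.com' \
  -e AWS_REGION='us-east-1' \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.1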
MTE Relay Client Setup
Eclypses provides several packages and techniques for client applications to interface with MTE Relay Server.
Server Configuration
The MTE Relay Server is configurable using the following environment variables:
Required Configuration Variables:
UPSTREAM
- Required
- The upstream application IP address, ingress, or URL that inbound requests will be proxied to.
CORS_ORIGINS
- Required
- A comma-separated list of URLs that will be allowed to make cross-origin requests to the server.
CLIENT_ID_SECRET
- Required
- A secret that will be used to sign the x-mte-client-id header. A 32+ character string is recommended.
- Note: This will allow you to personalize your client/server relationship. It is required to validate the sender.
AWS_REGION
- Required
- The region you are running your Relay server in. Example: us-east-1
REDIS_URL
- Strongly Recommended in Production Environments
- The entry point to your Redis ElastiCache cluster. If null, the container will use internal memory. In load-balanced workflows, a common cache location is required to maintain a paired relationship with the upstream API Relay.
Optional Configuration Variables:
The following configuration variables have default values. If you choose to override them, it is recommended that the values be stored in AWS Secrets Manager.
PORT
- The port that the server will listen on.
- Default: 8080
- Note: If this value is changed, make sure it is also changed in your application load balancer.
DEBUG
- A flag that enables debug logging.
- Default: false
PASS_THROUGH_ROUTES
- A list of routes that will be passed through to the upstream application without being MTE encoded/decoded.
- Example: "/some_route_that_is_not_secret"
MTE_ROUTES
- A list of routes that will be MTE encoded/decoded. If this optional property is included, only the routes listed will be MTE encoded/decoded, and any routes not listed here or in PASS_THROUGH_ROUTES will 404. If this optional property is not included, all routes not listed in PASS_THROUGH_ROUTES will be MTE encoded/decoded.
CORS_METHODS
- A list of HTTP methods that will be allowed to make cross-origin requests to the server.
- Default: GET, POST, PUT, DELETE
- Note: OPTIONS and HEAD are always allowed.
HEADERS
- An object of headers that will be added to all request/responses.
MAX_POOL_SIZE
- The number of encoder objects and decoder objects held in a pool. A larger pool will consume more memory, but it will also handle more traffic more quickly. This number is applied to all four pools; the MTE Encoder, MTE Decoder, MKE Encoder, and MKE Decoder pools.
- Default: 25
Minimal Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
Full Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
DEBUG=true
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
MAX_POOL_SIZE=10
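Because overridden values, and especially CLIENT_ID_SECRET, are recommended to live in AWS Secrets Manager, here is a minimal sketch of storing and retrieving one with the AWS CLI; the secret name is a placeholder, and the secret value shown is the sample from the examples above.
# Store the client ID secret (secret name is a placeholder)
aws secretsmanager create-secret \
  --name mte-relay/CLIENT_ID_SECRET \
  --secret-string '2DkV4DDabehO8cifDktdF9elKJL0CKrk'

# Retrieve it at deploy time and export it for the container
export CLIENT_ID_SECRET="$(aws secretsmanager get-secret-value \
  --secret-id mte-relay/CLIENT_ID_SECRET \
  --query SecretString --output text)"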
Demo Configuration Video
Architecture Diagrams
ECS Deployment
EKS Deployment
Security
The MTE Relay does not require AWS account root privilege for deployment or operation. The container only relays data as it moves between client and server and does not store any sensitive data.
Costs
The MTE Relay container is billed on a usage-based model (cost per instance per hour). See the marketplace listing for current pricing.
Billable Services
| Service | Required | Purpose |
|---|---|---|
| AWS ECS | True | Container Orchestration |
| AWS ElastiCache | True | MTE State Management |
| AWS CloudWatch | True | Logging |
| AWS VPC | True | Recommended |
| Elastic Load Balancer | True | Required if orchestrating multiple containers |
Testing
Once the Relay Server is configured:
- Monitor logs in CloudWatch and check for initialization errors.
- On successful startup, you should see two logs:
  - MTE instantiated successfully.
  - Server listening at http://[0.0.0.0]:8080
- To test that the API service is active and running, submit an HTTP GET request to the echo route:
curl https://<DOMAIN>/api/mte-echo
- A successful response will return a JSON payload with a timestamp from the server and a status code of 200.
Troubleshooting
Most problems can be determined by consulting the logs in AWS CloudWatch. Some common problems that might occur are:
- Invalid Configuration
- Network misconfiguration
Some specific error examples include:
- I cannot reach my relay server.
  - Double-check that your Security Group allows traffic from your load balancer.
  - Check CloudWatch logs.
- Server exits with a ZodError.
  - This is a configuration validation error. Look at the "path" property to determine which of the required environment variables you did not set. For example, if the path property shows "upstream," then you forgot to set the environment variable "UPSTREAM."
- Server cannot reach ElastiCache.
- Check that ElastiCache is running in the same VPC.
- Check that the ElastiCache security group allows traffic from the MTE Relay ECS instance.
- If using credentials, check that the credentials are correct.
MTE Relay Server includes a Debug flag, which you can enable by setting the environment variable "DEBUG" to true. This will enable additional logging that you can review in CloudWatch to determine the source of any issues.
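For the EKS deployment shown earlier, the flag can be toggled on a running deployment without editing the manifest. A hedged sketch, assuming the deployment name from the sample deployment.yaml:
# Enable debug logging (triggers a rolling restart of the pods)
kubectl set env deployment/aws-mte-relay-deployment DEBUG=true

# Remove the variable again once the issue is resolved
kubectl set env deployment/aws-mte-relay-deployment DEBUG-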
Health Check
For short and long-term health – monitor logs in the AWS CloudWatch service.
Echo Route
The Echo route can be called by an automated system to determine if the service is still running. To test that the API service is active and running, submit an HTTP GET request to the echo route:
curl 'https://[your_domain]/api/mte-echo/test'
Successful response:
{
  "echo": "test",
  "time": "[time_stamp]"
}
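As a hedged sketch, the echo route can be wired into an automated check; the domain is a placeholder, and the script simply fails if the route does not answer with HTTP 200.
#!/usr/bin/env bash
# Minimal liveness check against the echo route (domain is a placeholder)
DOMAIN="relay.my-company.com"
STATUS="$(curl -s -o /dev/null -w '%{http_code}' "https://${DOMAIN}/api/mte-echo")"
if [ "$STATUS" -eq 200 ]; then
  echo "MTE Relay is healthy"
else
  echo "MTE Relay health check failed with HTTP ${STATUS}" >&2
  exit 1
fi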
Performance Metrics
Performance metrics were collected on an AWS t2.micro virtual machine with 1 vCPU and 1 GB of memory.
100 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 28050 | 94 | 52 | 59 | 87 |
| MTE Relay | 27763 | 92.6 | 62 | 71 | 87 |
| Result | 98.9% | 98.5% | +10ms | +12ms | +0ms |
200 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 54938 | 183 | 55 | 66 | 110 |
| MTE Relay | 53724 | 179 | 72 | 110 | 190 |
| Result | 97.8% | 97.8% | +17ms | +44ms | +80ms |
250 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 67966 | 226 | 55 | 72 | 130 |
| MTE Relay | 64465 | 215 | 100 | 170 | 280 |
| Result | 94.8% | 95.1% | +45ms | +98ms | +150ms |
Note: We recommend load-balancing requests between multiple instances of MTE Relay once you reach this volume of traffic. Please monitor your application carefully.
300 concurrent connections, 5 minute duration, 1kb request and response
| | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
|---|---|---|---|---|---|
| Control API | 80249 | 267.57 | 58 | 92 | 190 |
| MTE Relay | 67696 | 225.74 | 260 | 350 | 450 |
| Result | 84.3% | 84.3% | +202ms | +258ms | +260ms |
Routine Maintenance
Patches/Updates
Updated images are distributed through the marketplace.
Service Limits
- ECS
  - The MTE Relay Server container workload is subject to ECS Service Limits.
- CloudWatch
  - The MTE Relay Server container workload is subject to CloudWatch Service Limits.
- ELB (Elastic Load Balancer)
  - The MTE Relay Server container workload is subject to ELB Service Limits.
Emergency Maintenance
Handling Fault Conditions
Tips for solving error states:
- Review all tips from the Troubleshooting section above.
- Check ECS CloudWatch logs for more information.
- Configuration mismatch
  - Double-check environment variables for errors.
How to recover the software
The MTE Relay Container ECS task can be relaunched. While current client sessions may be interrupted, the client-side package should seamlessly manage re-pairing with the MTE Relay Server, so end users should not be affected.
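For ECS deployments, a hedged sketch of relaunching the task is to force a new deployment of the service; the cluster and service names below are placeholders for your own.
# Relaunch the MTE Relay task by forcing a new deployment (names are placeholders)
aws ecs update-service \
  --cluster mte-relay-cluster \
  --service mte-relay-service \
  --force-new-deployment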
Supported Regions:
MTE Relay is supported in most AWS regions. Below is a short list of the regions we don't support:
- GovCloud
- Middle East (Bahrain)
- Middle East (UAE)
- China
Support
The Eclypses support center is available to assist with inquiries about our products from 8:00 am to 8:00 pm MST, Monday through Friday, excluding Eclypses holidays. Our committed team of expert developer support representatives handles all incoming questions directed to the following email address: customer_support@eclypses.com