MTE Relay Server on AWS
Introduction
MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server in front of your backend, communicating with an MTE Relay Client to encode and decode all network traffic. The server is highly customizable and supports integration with other services through custom adapters.
In a typical architecture, a client application communicates with the MTE Relay Server, which then proxies decoded traffic to your backend services.
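A simplified sketch of that traffic flow (component names are generic):
Client App (MTE Relay Client)  --MTE-encoded HTTP-->  MTE Relay Server  --decoded HTTP-->  Backend API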
Prerequisites
Technical Requirements
- A web or mobile application project that communicates with a backend API over HTTP.
Skills and Knowledge
- Familiarity with ECS and/or EKS.
- Experience with the AWS CLI.
Deployment Options
MTE Relay Server is provided as a Docker image and can be deployed on AWS ECS, AWS EKS, or manually using another container runtime.
1. Elastic Container Service (ECS)
Deployment Templates
Two CloudFormation templates are available:
- Production template: Multiple instances, load balancing, and SSL configuration.
- Demo template: Lightweight, development-oriented deployment.
Requirements
- Git
- AWS CLI
- AWS permissions to launch resources
Deployment Steps
- Clone the AWS CloudFormation Templates GitHub repository and choose between the demo or production template.
- Modify the parameters.json file with your configuration.
- Navigate to the respective template directory (demo or production).
- Run the deploy.sh create command to deploy, or the delete command to remove the resources. (A command-line sketch of this flow follows.)
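A sketch of that flow from the command line. The repository URL and directory names are placeholders (use the ones from the Marketplace listing), and the exact form of the delete command may differ:
git clone <CLOUDFORMATION TEMPLATES REPO URL>
cd <REPO>/production        # or cd <REPO>/demo
# edit parameters.json with your UPSTREAM, CORS_ORIGINS, CLIENT_ID_SECRET, etc.
./deploy.sh create          # deploy the stack
./deploy.sh delete          # assumed form of the delete command; removes the resources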
2. Elastic Kubernetes Service (EKS)
If kubectl is already configured for your EKS cluster:
Example Deployment File (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-mte-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-mte-relay
  template:
    metadata:
      labels:
        app: aws-mte-relay
    spec:
      imagePullSecrets:
        - name: awscrcreds
      containers:
        - name: aws-mte-relay
          image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.8
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: CORS_ORIGINS
              value: <YOUR CORS ORIGINS HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>
            - name: AWS_REGION
              value: <AWS REGION HERE>
---
apiVersion: v1
kind: Service
metadata:
  name: aws-mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: aws-mte-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
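The manifest references an image pull secret named awscrcreds. If your cluster does not already have one, a sketch of creating it for the Marketplace ECR registry (the secret name and region must match your manifest):
kubectl create secret docker-registry awscrcreds \
  --docker-server=709825985650.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"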
Commands
kubectl apply -f deployment.yaml
kubectl get all
kubectl delete -f deployment.yaml
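Once the Service reports an external hostname, you can verify the relay the same way as in Testing & Health Checks below (sketch; the jsonpath output depends on your load balancer type):
kubectl get service aws-mte-relay-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
curl "http://<LOAD BALANCER HOSTNAME>/api/mte-echo?msg=test"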
3. Docker Image
You can also run the image using Docker, Podman, K3s, or Docker Swarm.
Commands
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.8
Refer to the Server Configuration section for required environment variables.
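For example, a minimal docker run sketch (all values are placeholders; see Server Configuration below for the full variable list):
docker run -d -p 8080:8080 \
  -e UPSTREAM='https://api.my-company.com' \
  -e CORS_ORIGINS='https://www.my-company.com' \
  -e CLIENT_ID_SECRET='<32+ CHARACTER SECRET>' \
  -e AWS_REGION='us-east-1' \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-server:4.4.8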
Server Configuration
MTE Relay Server is configured using environment variables.
Required Variables
- UPSTREAM – Upstream API or service URL.
- CORS_ORIGINS – Comma-separated list of allowed origins.
- CLIENT_ID_SECRET – Secret used to sign client IDs (minimum 32 characters).
- AWS_REGION – AWS region where the server is deployed.
- REDIS_URL (recommended for production) – Redis cluster used to maintain session pairs across load-balanced containers.
Optional Variables
- PORT – Default: 8080.
- LOG_LEVEL – One of trace, debug, info, warning, error, panic, off. Default: info.
- PASS_THROUGH_ROUTES – Routes proxied without MTE encoding.
- MTE_ROUTES – If set, only the listed routes use MTE encoding; all other routes return 404.
- CORS_METHODS – Default: GET, POST, PUT, DELETE.
- HEADERS – Object of custom headers.
Minimal Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
Full Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
LOG_LEVEL=info
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
Client-Side Setup
Eclypses provides client-side SDKs for integrating with MTE Relay Server.
Client-side integration typically requires less than one day of effort.
Testing & Health Checks
- Monitor startup logs in CloudWatch. A successful startup includes:
MTE instantiated successfully.
Server listening at http://[0.0.0.0]:8080
- Test the echo route:
curl 'https://<DOMAIN>/api/mte-echo?msg=test'
- Expected response:
{
  "echo": "test",
  "time": "<timestamp>"
}
Monitoring
Amazon Managed Grafana
- Create a Grafana workspace in AWS.
- Add CloudWatch as a data source (IAM role auth recommended).
- Import the provided dashboard, which includes the metrics below.
Dashboard Metrics
- Requests Processed [req/sec]
- Request Time [ms]
- Upstream Proxy Time [ms]
- Average Decode Time [ms]
- Average Encode Time [ms]
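To confirm that log data is reaching CloudWatch before wiring up Grafana, a quick sketch (the log group name and region are assumptions; use the ones created by your deployment):
aws logs tail <MTE RELAY LOG GROUP NAME> --follow --region <AWS REGION>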
Performance Metrics
Performance was measured with ~1 KB request/response payloads on a t2.micro instance (1 vCPU, 1 GB RAM):
Concurrency | Req/Sec (Relay) | Req/Sec (Direct API) | Relay Throughput (% of API) | Extra Latency (Median) |
---|---|---|---|---|
400 | 92.6 | 94 | 98.5% | +10 ms |
500 | 179 | 183 | 97.8% | +17 ms |
550 | 215 | 226 | 95.1% | +45 ms |
600 | 225.7 | 267.6 | 84.3% | +202 ms |
Note: At higher volumes (≥550 concurrent), scaling across multiple Relay instances is recommended.
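On EKS, for example, scaling out is a one-line change (sketch; the deployment name matches the manifest above, and the replica count is illustrative):
kubectl scale deployment aws-mte-relay-deployment --replicas=3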
Troubleshooting
- Invalid Configuration – Check logs for missing or invalid environment variables.
- Relay unreachable – Verify Security Groups and load balancer settings.
- Redis connection issues – Ensure Redis is in the same VPC and credentials are correct.
Enable debug logs with DEBUG=true.
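A couple of quick checks from a host inside the VPC (sketch; assumes curl and redis-cli are available, and that DOMAIN and REDIS_URL are your own values):
curl -v 'https://<DOMAIN>/api/mte-echo?msg=test'   # relay reachable and responding
redis-cli -u "$REDIS_URL" ping                     # expects PONG if Redis is reachable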
Security
- No sensitive data is stored in the container.
- No root privileges required.
Costs
The service is billed per instance, per hour of usage.
Associated AWS services include:
AWS Service | Purpose |
---|---|
ECS | Container orchestration |
ElastiCache (Redis) | State/session management |
CloudWatch | Logging and monitoring |
VPC | Networking isolation |
Elastic Load Balancer | Scaling across Relay containers |
Maintenance
Routine Updates
- Updated container images are distributed through the AWS Marketplace.
Fault Recovery
- Relaunch the Relay container task; clients will automatically re-pair.
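On ECS, for example, forcing a fresh task is a single CLI call (sketch; the cluster and service names are placeholders):
aws ecs update-service --cluster <CLUSTER NAME> --service <SERVICE NAME> --force-new-deployment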
Service Limits
- ECS: ECS Service Quotas
- CloudWatch: CloudWatch Limits
- ELB: Load Balancer Limits
Supported Regions
MTE Relay Server is supported in most AWS regions, except:
- GovCloud
- Middle East (Bahrain, UAE)
- China
Support
For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)