MTE API Relay
Introduction
MTE API Relay is an end-to-end encryption system that protects HTTP traffic between server applications. It acts as a proxy server that encodes and decodes payloads using Eclypses MTE software, an encryption library combining quantum-resistant algorithms with proprietary, patented techniques to secure data.
The architecture below shows a server application communicating through an MTE API Relay container, which transmits the proxied, encoded traffic to a second API Relay that decodes the request and delivers it to the upstream API service.
Prerequisites
Technical Requirements
- Two services that use HTTP to communicate with each other.
Skills and Knowledge
- Familiarity with ECS and/or EKS.
- General familiarity with AWS Services.
- Experience with the AWS CLI.
Deployment Options
MTE API Relay is provided as a Docker image and can be deployed on AWS ECS, AWS EKS, or manually using another container runtime.
1. Elastic Container Service (ECS)
Deployment Templates
Two CloudFormation templates are available:
- Production template: Multiple instances, load balancing, and SSL configuration.
- Demo template: Lightweight, development-oriented deployment.
Requirements
- Git
- AWS CLI
- AWS permissions to launch resources
Deployment Steps
- Clone the GitHub repository: AWS CloudFormation Templates.
- Choose between the demo or production template.
- Modify the parameters.json file with your configuration.
- Navigate to the respective template directory (demo or production).
- Run the deploy.sh create command to deploy, or the delete command to remove resources (see the sketch below).
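As an illustration, the steps above might look like the following shell session. The repository URL and directory names are placeholders, not the product's actual values:
# Hypothetical repository URL and paths, shown for illustration only
git clone <CLOUDFORMATION_TEMPLATES_REPO_URL>
cd <REPO_NAME>/demo                 # or cd <REPO_NAME>/production
# Edit parameters.json with your UPSTREAM, CLIENT_ID_SECRET, and other values
./deploy.sh create                  # launch the CloudFormation stack
./deploy.sh delete                  # remove the resources when finished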
2. Elastic Kubernetes Service (EKS)
If kubectl is already configured for your EKS cluster:
Example Deployment File (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-mte-api-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-mte-api-relay
  template:
    metadata:
      labels:
        app: aws-mte-api-relay
    spec:
      containers:
        - name: aws-mte-api-relay
          image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-forward-server:4.4.8
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: CORS_ORIGINS
              value: <YOUR CORS ORIGINS HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>
            - name: OUTBOUND_TOKEN
              value: <OUTBOUND TOKEN HERE>
            - name: AWS_REGION
              value: <AWS REGION HERE>
---
apiVersion: v1
kind: Service
metadata:
  name: aws-mte-api-relay-service
spec:
  type: LoadBalancer
  selector:
    app: aws-mte-api-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Commands
kubectl apply -f deployment.yaml     # create the Deployment and Service
kubectl get all                      # verify the pod and LoadBalancer service are running
kubectl delete -f deployment.yaml    # remove the resources
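Rather than embedding CLIENT_ID_SECRET as a literal value in the manifest, you may prefer to store it in a Kubernetes Secret and reference it from the container's env via secretKeyRef. A minimal sketch, assuming an illustrative secret name of mte-relay-secrets (not part of the product):
# Illustrative: keep the client ID secret out of the manifest
kubectl create secret generic mte-relay-secrets \
  --from-literal=CLIENT_ID_SECRET='<YOUR CLIENT ID SECRET HERE>'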
3. Docker Image
You can also run the image using Docker, Podman, K3s, or Docker Swarm.
Commands
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS \
--password-stdin <ECR_REGISTRY_URL>
docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-forward-server:4.4.8
Refer to the Server Configuration section for required environment variables.
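As a minimal sketch (values are placeholders; adjust to your environment), the container could then be started with:
docker run -d -p 8080:8080 \
  -e UPSTREAM='https://api.my-company.com' \
  -e CLIENT_ID_SECRET='<32+ character secret>' \
  -e OUTBOUND_TOKEN='<token>' \
  -e AWS_REGION='us-east-1' \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-forward-server:4.4.8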
Server Configuration
MTE API Relay is configured using environment variables.
Required Variables
- UPSTREAM – Upstream API or service URL.
- CLIENT_ID_SECRET – Secret for signing client IDs (minimum 32 characters).
- OUTBOUND_TOKEN – Token appended to requests to denote the intended outbound recipient.
- AWS_REGION – AWS region where the server is deployed.
- REDIS_URL (recommended for production) – Redis cluster for maintaining session pairs across load-balanced containers.
Optional Variables
- PORT – Default: 8080.
- LOG_LEVEL – One of trace, debug, info, warning, error, panic, off. Default: info.
- PASS_THROUGH_ROUTES – Routes proxied without MTE encoding.
- MTE_ROUTES – If set, only listed routes use encoding; others return 404.
- CORS_ORIGINS – Comma-separated list of allowed origins.
- CORS_METHODS – Default: GET, POST, PUT, DELETE.
- HEADERS – Object of custom headers.
Minimal Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
OUTBOUND_TOKEN='abcdefg1234567'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
Full Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
OUTBOUND_TOKEN='abcdefg1234567'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
LOG_LEVEL=info
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-api-relay"}'
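One way to apply a configuration like the full example above when running the Docker image directly is to keep it in an env file and pass it to the container at startup; a sketch, with an illustrative file name:
# Illustrative: put the variables above in relay.env (without the surrounding quotes,
# since docker --env-file takes values literally) and start the container with it.
# The published port matches PORT=3000 from the full example.
docker run -d -p 3000:3000 --env-file relay.env \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/eclypses/mte-relay-forward-server:4.4.8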
Testing & Health Checks
- Monitor startup logs in CloudWatch. Successful logs include:
  MTE instantiated successfully.
  Server listening at http://[0.0.0.0]:8080
- Test the Echo route (a small smoke-test script is sketched below):
  curl 'http://<DOMAIN>/api/mte-echo?msg=test'
- Expected response:
  {
    "echo": "test",
    "time": "<timestamp>"
  }
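For instance, the echo check above could be wrapped in a small shell script for automated verification (the domain is a placeholder):
#!/bin/sh
# Illustrative smoke test: call the echo route and confirm the message is echoed back
RELAY_URL="http://<DOMAIN>/api/mte-echo?msg=test"
if curl -fsS "$RELAY_URL" | grep -q '"echo"'; then
  echo "MTE API Relay echo check passed"
else
  echo "MTE API Relay echo check failed" >&2
  exit 1
fi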
Monitoring
Amazon Managed Grafana
- Create a Grafana workspace in AWS.
- Add CloudWatch as a data source (IAM role auth recommended).
- Import the provided dashboard.
Dashboard Metrics
- Requests Processed [req/sec]
- Request Time [ms]
- Outbound Proxy Time [ms]
- Upstream Proxy Time [ms]
- Average Request Encode Time [ms]
- Average Request Decode Time [ms]
- Average Response Encode Time [ms]
- Average Response Decode Time [ms]
Performance Metrics
Performance was measured with ~1 KB request/response payloads on a t2.micro instance (1 vCPU, 1 GB RAM):
Concurrency | Req/Sec (Relay) | Req/Sec (API only) | Relay Throughput as % of API | Extra Latency (Median) |
---|---|---|---|---|
400 | 92.6 | 94 | 98.5% | +10 ms |
500 | 179 | 183 | 97.8% | +17 ms |
550 | 215 | 226 | 95.1% | +45 ms |
600 | 225.7 | 267.6 | 84.3% | +202 ms |
Note: At higher volumes (≥550 concurrent), scaling across multiple Relay instances is recommended.
Troubleshooting
- Invalid configuration – Check logs for missing/invalid environment variables.
- Relay unreachable – Verify Security Groups and load balancer settings.
- Redis connection issues – Ensure Redis is in the same VPC and credentials are correct (a quick connectivity check is sketched below).
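As a quick way to confirm Redis reachability, you could run a PING from a host or container in the same VPC; the host and port below are taken from the full configuration example and are placeholders:
# Illustrative connectivity check; requires the redis-cli tool
redis-cli -h 10.0.1.230 -p 6379 ping    # expect: PONG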
Security
- No sensitive data is stored in the container.
- No root privileges required.
- Should be deployed in the same VPC as the upstream service to ensure proxied traffic remains internal.
Costs
The service is billed on a usage basis, per instance per hour.
Associated AWS services include:
AWS Service | Purpose |
---|---|
ECS | Container orchestration |
ElastiCache (Redis) | State/session management |
CloudWatch | Logging and monitoring |
VPC | Networking isolation |
Elastic Load Balancer | Scaling across Relay containers |
AWS Secrets Manager | Recommended for secrets/env vars |
Maintenance
Routine Updates
- Updated container images are distributed through the AWS Marketplace.
Fault Recovery
- Relaunch the Relay container task; API Relays will automatically re-pair.
Service Limits
- ECS: ECS Service Quotas
- CloudWatch: CloudWatch Limits
- ELB: Load Balancer Limits
Key/Variable Rotation Recommendations
- Rotate the
CLIENT_ID_SECRET
andOUTBOUND_TOKEN
every 90 days as per security best practices.
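For example, a new secret could be generated and stored in AWS Secrets Manager (which the Costs section recommends for secrets and environment variables), then rolled out by restarting the Relay containers; the secret name below is illustrative:
# Illustrative rotation of CLIENT_ID_SECRET using AWS Secrets Manager
NEW_SECRET=$(openssl rand -base64 24)        # 32-character random string
aws secretsmanager put-secret-value \
  --secret-id mte-relay/client-id-secret \
  --secret-string "$NEW_SECRET"
# Redeploy or restart the Relay containers so they pick up the new value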
Supported Regions
MTE API Relay is supported in most AWS regions, except:
- GovCloud
- Middle East (Bahrain, UAE)
- China
Support
For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)