MTE Relay Server On-Premise Deployment
Introduction
MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server in front of your backend, communicating with an MTE Relay Client to encode and decode all network traffic. The server is highly customizable and supports integration with other services through custom adapters.
In a typical architecture, a client application communicates with an MTE Relay Server, which then proxies the decoded traffic to your backend services.
Prerequisites
Technical Requirements
- A web or mobile application project that communicates with a backend API over HTTP.
- Docker installed and running on your system.
- AWS CLI installed.
Skills and Knowledge
- Familiarity with Docker and/or Kubernetes.
- Basic knowledge of configuring containerized services.
Credentials
- An AWS Access Key ID and AWS Secret Access Key provided by Eclypses to access the private container repository.
Deployment Options
MTE Relay Server is provided as a Docker image and can be deployed on-premise using:
- Docker Run
- Docker Compose
- Kubernetes
- Other container runtimes (e.g., Podman, Docker Swarm, K3s)
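The steps below use Docker commands, but the image behaves the same under other OCI runtimes. For example, after authenticating Podman against the registry the same way Docker is authenticated in step 2, a minimal run looks like this (a sketch mirroring Option A below):
podman run --rm -it -p 8080:8080 \
-e UPSTREAM="<YOUR_BACKEND_URL>" \
-e CLIENT_ID_SECRET="<YOUR_CLIENT_ID_SECRET>" \
-e CORS_ORIGINS="<YOUR_CORS_ORIGINS>" \
321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8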
1. Configure AWS CLI Access
You must configure a new AWS CLI profile using the credentials provided by Eclypses.
aws configure --profile eclypses-customer-on-prem
When prompted:
- AWS Access Key ID: Enter the ID provided by Eclypses.
- AWS Secret Access Key: Enter the secret key provided.
- Default region name: us-east-1
- Default output format: json
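If you prefer to script this step rather than answer prompts, the same profile can be created non-interactively with standard AWS CLI commands (substitute the credentials provided by Eclypses):
aws configure set aws_access_key_id "<ECLYPSES_ACCESS_KEY_ID>" --profile eclypses-customer-on-prem
aws configure set aws_secret_access_key "<ECLYPSES_SECRET_ACCESS_KEY>" --profile eclypses-customer-on-prem
aws configure set region us-east-1 --profile eclypses-customer-on-prem
aws configure set output json --profile eclypses-customer-on-prem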
2. Pull the Docker Image
Authenticate Docker with the Eclypses ECR repository:
aws ecr get-login-password --region us-east-1 --profile eclypses-customer-on-prem \
| docker login --username AWS \
--password-stdin 321186633847.dkr.ecr.us-east-1.amazonaws.com
Then pull the image:
docker pull 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8
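To confirm the pull succeeded, list the image locally; the output should show the 4.4.8 tag:
docker image ls 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server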
Server Configuration
MTE Relay Server is configured using environment variables.
Required Variables
- UPSTREAM – Upstream API or service URL.
- CORS_ORIGINS – Comma-separated list of allowed origins.
- CLIENT_ID_SECRET – Secret for signing client IDs (minimum 32 characters).
- REDIS_URL (recommended for production) – Redis cluster for maintaining session pairs across load-balanced containers.
Optional Variables
- PORT – Default: 8080.
- LOG_LEVEL – One of trace, debug, info, warning, error, panic, off. Default: info.
- PASS_THROUGH_ROUTES – Routes proxied without MTE encoding.
- MTE_ROUTES – If set, only listed routes use MTE encoding; others return 404.
- CORS_METHODS – Default: GET, POST, PUT, DELETE.
- HEADERS – Object of custom headers.
Minimal Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
Full Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
LOG_LEVEL=info
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
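The CLIENT_ID_SECRET values shown above are placeholders. One way to generate your own random secret of at least 32 characters (assuming OpenSSL is available on your workstation) is:
openssl rand -base64 32
Treat the resulting value like any other credential and keep it out of source control.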
Deployment Steps
Option A: Docker Run
docker run --rm -it \
--name mte-relay \
-p 8080:8080 \
-e UPSTREAM="<YOUR_BACKEND_URL>" \
-e CLIENT_ID_SECRET="<YOUR_CLIENT_ID_SECRET>" \
-e CORS_ORIGINS="<YOUR_CORS_ORIGINS>" \
-e REDIS_URL="<YOUR_REDIS_URL>" \
321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8
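To keep secrets out of your shell history, the same variables can be loaded from a file with Docker's --env-file flag. A sketch, where mte-relay.env is a hypothetical file containing KEY=value lines like the Full Example above (env-file values are taken literally, so omit surrounding quotes):
docker run --rm -it \
--name mte-relay \
-p 8080:8080 \
--env-file ./mte-relay.env \
321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8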
Option B: Docker Compose
version: "3.8"
services:
  mte-relay-server:
    image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8
    ports:
      - "8080:8080"
    environment:
      - UPSTREAM=https://api.my-company.com
      - CLIENT_ID_SECRET=YOUR_32_PLUS_CHARACTER_CLIENT_ID_SECRET
      - CORS_ORIGINS=https://example.com,http://localhost:3000
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
Option C: Kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mte-relay-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mte-relay
  template:
    metadata:
      labels:
        app: mte-relay
    spec:
      containers:
        - name: mte-relay-server
          image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.8
          ports:
            - containerPort: 8080
          env:
            - name: UPSTREAM
              value: "<YOUR_BACKEND_URL>"
            - name: CLIENT_ID_SECRET
              value: "<YOUR_CLIENT_ID_SECRET>"
            - name: CORS_ORIGINS
              value: "<YOUR_CORS_ORIGINS>"
            - name: REDIS_URL
              value: "redis://my-redis-service:6379"
---
apiVersion: v1
kind: Service
metadata:
  name: mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: mte-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Save the manifest as mte-relay-deployment.yaml, then apply it, check the resources, and remove them when no longer needed:
kubectl apply -f mte-relay-deployment.yaml
kubectl get all
kubectl delete -f mte-relay-deployment.yaml
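Because the image is hosted in a private ECR repository, the cluster also needs pull credentials. One common approach (a sketch; the secret name ecr-pull-secret is illustrative) is to create a docker-registry secret from the same ECR login used earlier and reference it in the pod spec:
kubectl create secret docker-registry ecr-pull-secret \
--docker-server=321186633847.dkr.ecr.us-east-1.amazonaws.com \
--docker-username=AWS \
--docker-password="$(aws ecr get-login-password --region us-east-1 --profile eclypses-customer-on-prem)"
Then add the following under the pod template's spec, alongside containers:
imagePullSecrets:
  - name: ecr-pull-secret
Note that ECR login tokens expire after roughly 12 hours, so the secret must be refreshed periodically or managed by a credential helper appropriate to your cluster.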
Testing & Health Checks
- Monitor container logs for startup messages:
  MTE instantiated successfully.
  Server listening at http://[0.0.0.0]:8080
- Test the echo route:
  curl "http://<MTE_RELAY_HOST_OR_IP>:<PORT>/api/mte-echo?msg=test"
  Expected response:
  {
    "echo": "test",
    "time": "<timestamp>"
  }
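If you deploy on Kubernetes, the echo route can also back a readiness probe. A sketch to place under the container definition in the Deployment above (timings are illustrative):
readinessProbe:
  httpGet:
    path: /api/mte-echo?msg=ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10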
Troubleshooting
- Invalid configuration: Check the logs for missing or invalid environment variables.
- Relay unreachable: Verify firewall, networking, or Kubernetes service configuration.
- Redis connection issues: Ensure REDIS_URL is reachable from your environment (see the quick check below).
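One quick way to check Redis reachability from the Docker host is a throwaway redis-cli container (a sketch; adjust the URL, and add network flags if Redis runs on a container network). A healthy connection returns PONG:
docker run --rm redis:alpine redis-cli -u redis://10.0.1.230:6379 ping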
Security
- No sensitive data stored in the container.
- No root privileges required.
Costs
Private infrastructure costs (VMs, storage, networking, Redis clusters) are customer-managed.
Maintenance
Routine Updates
- Updated container images are distributed through Eclypses.
Fault Recovery
- Relaunch the Relay container; clients will automatically re-pair.
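Under the Kubernetes deployment above, for example, the containers can be relaunched without editing manifests:
kubectl rollout restart deployment/mte-relay-deployment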
Support
For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)