MTE API Relay On-Premise Deployment
Introduction
MTE API Relay is an end-to-end encryption system that protects HTTP traffic between server applications. It acts as a proxy server in front of your backend services, communicating with another API Relay instance to encode and decode all proxied traffic. This enables secure application-to-application communications using the Eclypses MTE encryption engine.
In a typical architecture, one server application communicates through an MTE API Relay container, which transmits proxied traffic to another API Relay instance; that instance decodes and forwards the request to its target backend service.

MTE API Relay instances are only compatible with each other. Neither an MTE Relay Server nor an MTE Client SDK can communicate with an MTE API Relay. MTE API Relays are strictly for server-to-server communications.
Prerequisites
Technical Requirements
- An existing backend service that accepts HTTP calls.
- We provide a demo using a Postman Collection and https://jsonplaceholder.typicode.com.
- Docker installed and running on your system.
- AWS CLI installed.
Skills and Knowledge
- Familiarity with Docker and/or Kubernetes.
- General knowledge of container deployment models.
Credentials
- An AWS Access Key ID and AWS Secret Access Key provided by Eclypses to access the private container repository.
Deployment Options
MTE API Relay is provided as a Docker image and can be deployed on-premise using:
- Docker Run
- Docker Compose
- Kubernetes
- Other container platforms (e.g., Docker Swarm, Podman, K3s)
1. Configure AWS CLI Access
Configure a new AWS CLI profile with the Eclypses-issued credentials:
aws configure --profile eclypses-customer-on-prem
When prompted:
- AWS Access Key ID: Enter the ID provided by Eclypses.
- AWS Secret Access Key: Enter the secret key provided.
- Default region name: us-east-1
- Default output format: json
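Equivalently, the profile can be created non-interactively by writing the AWS shared credentials and config files yourself (a sketch with placeholder values; the file locations shown are the AWS CLI defaults):

```ini
# ~/.aws/credentials
[eclypses-customer-on-prem]
aws_access_key_id = __YOUR_ACCESS_KEY_ID__
aws_secret_access_key = __YOUR_SECRET_ACCESS_KEY__

# ~/.aws/config
[profile eclypses-customer-on-prem]
region = us-east-1
output = json
```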
2. Pull the Docker Image
Video Guide: https://www.youtube.com/watch?v=Ft5mhREa64A
Authenticate Docker with the Eclypses ECR registry:
bash:
aws ecr get-login-password \
  --region us-east-1 \
  --profile eclypses-customer-on-prem \
  | docker login --username AWS \
  --password-stdin 321186633847.dkr.ecr.us-east-1.amazonaws.com
PowerShell:
aws ecr get-login-password `
--region us-east-1 `
--profile eclypses-customer-on-prem `
| docker login --username AWS `
--password-stdin 321186633847.dkr.ecr.us-east-1.amazonaws.com
Then pull the image:
docker pull 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-api-relay:4.4.9
Server Configuration
MTE API Relay is configured using environment variables.
Required Variables
- UPSTREAM – Upstream API or service URL.
- CLIENT_ID_SECRET – Secret for signing client IDs (minimum 32 characters).
- OUTBOUND_TOKEN – Token appended to requests to denote the intended outbound recipient.
- REDIS_URL (recommended for production) – Redis cluster for maintaining session pairs across load-balanced containers.
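CLIENT_ID_SECRET must be at least 32 characters and should be random. One way to generate a suitable value is with OpenSSL (a sketch; any cryptographically secure generator works):

```shell
# 24 random bytes encode to exactly 32 base64 characters (no padding).
# Note: base64 output may include '+' and '/', so quote the value when exporting.
CLIENT_ID_SECRET=$(openssl rand -base64 24)
echo "CLIENT_ID_SECRET=$CLIENT_ID_SECRET"
echo "length: ${#CLIENT_ID_SECRET}"   # prints "length: 32"
```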
Optional Variables
- PORT – Default: 8080.
- DEBUG – Set to true to enable verbose logs (default: false).
- HEADERS – JSON object of custom headers.
- CORS_ORIGINS – Comma-separated list of allowed origins.
- CORS_METHODS – Default: GET, POST, PUT, DELETE.
Minimal Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
OUTBOUND_TOKEN='abcdefg1234567'
Full Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
OUTBOUND_TOKEN='abcdefg1234567'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
DEBUG='true'
HEADERS='{"x-service-name":"mte-api-relay"}'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
CORS_METHODS='GET,POST,DELETE'
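With Docker, these settings can also be kept in an env file and passed with --env-file instead of individual -e flags. A minimal sketch (the file name mte-api-relay.env is arbitrary; the values are the placeholders from the examples above):

```shell
# Write the relay settings to an env file (env-file syntax takes raw
# KEY=value lines, no quotes).
cat > mte-api-relay.env <<'EOF'
UPSTREAM=https://api.my-company.com
CLIENT_ID_SECRET=2DkV4DDabehO8cifDktdF9elKJL0CKrk
OUTBOUND_TOKEN=abcdefg1234567
EOF

# Sanity-check the secret length before starting the container
secret=$(grep '^CLIENT_ID_SECRET=' mte-api-relay.env | cut -d= -f2-)
if [ "${#secret}" -ge 32 ]; then
  echo "CLIENT_ID_SECRET length OK"
else
  echo "CLIENT_ID_SECRET is too short (minimum 32 characters)" >&2
fi
```

The file can then be supplied at startup, e.g. docker run --env-file mte-api-relay.env <image>.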
Deployment Steps
Option A: Docker Run
Using the docker run command, you can launch a single MTE API Relay container locally for testing or development purposes.
Copy the command below, modify the environment variable values, and run it in your terminal.
bash:
docker run --rm -it \
  --name mte-api-relay \
  -p 8080:8080 \
  -e UPSTREAM=__YOUR_UPSTREAM_URL__ \
  -e CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__ \
  -e OUTBOUND_TOKEN=__YOUR_SECURE_OUTBOUND_ACCESS_TOKEN__ \
  321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-api-relay:4.4.9
PowerShell:
docker run --rm -it `
  --name mte-api-relay `
  -p 8080:8080 `
  -e UPSTREAM=__YOUR_UPSTREAM_URL__ `
  -e CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__ `
  -e OUTBOUND_TOKEN=__YOUR_SECURE_OUTBOUND_ACCESS_TOKEN__ `
  321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-api-relay:4.4.9
Command Explanation:
- docker run
 Runs a container from the specified image.
- --rm
 Automatically removes the container when it exits.
- -it
 Allocates an interactive terminal (-i for interactive input, -t for a pseudo-TTY).
- --name mte-api-relay
 Assigns a custom name (mte-api-relay) to the container.
- -p 8080:8080
 Maps host port 8080 to container port 8080. You may change the host port if needed. Do not change the container port.
- -e UPSTREAM=__YOUR_UPSTREAM_URL__
 Sets the environment variable UPSTREAM to the provided URL.
- -e CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__
 Sets the environment variable CLIENT_ID_SECRET.
- -e OUTBOUND_TOKEN=__YOUR_SECURE_OUTBOUND_ACCESS_TOKEN__
 Sets the environment variable OUTBOUND_TOKEN.
- The last line is the image to run.
Option B: Docker Compose
Docker Compose provides a convenient way to define and manage multi-container applications by allowing you to describe all of your services in a single YAML file. Once defined, Docker Compose can automatically create and start the containers with a single command, ensuring consistency across environments.
Create a new file named docker-compose.yaml with the following content, and update the environment variable values as needed:
version: "3.8"
services:
  mte-api-relay:
    image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-api-relay:4.4.9
    ports:
      - "8080:8080"
    environment:
      - UPSTREAM=__YOUR_UPSTREAM_URL__                  # Update this value!
      - CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__      # Update this value!
      - OUTBOUND_TOKEN=__YOUR_OUTBOUND_ACCESS_TOKEN__   # Update this value!
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
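Optionally, a health check can be added under the mte-api-relay service, reusing the echo route described later under Testing & Health Checks (a sketch; it assumes curl is available inside the image, which has not been verified):

```yaml
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:8080/api/mte-echo || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```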
Docker Compose Commands
To start the services defined in the docker-compose.yaml file, run:
docker compose up
Helpful CLI Flags:
- -f – Specify an alternate compose file. Example: docker compose -f custom.yml up
- -d – Run containers in the background. Example: docker compose up -d
Check running containers with:
docker compose ps
Stop the services with:
docker compose down
In production environments, it may be better to use a Docker Swarm cluster for improved scalability, isolation of services, rolling updates, and management.
Sources:
Docker Compose Documentation
Docker Compose CLI
Option C: Kubernetes
Kubernetes provides a powerful system for automating the deployment, scaling, and management of containerized applications. By defining resources in YAML files, it ensures reliability, scalability, and consistency across environments.
Create a file named mte-api-relay-deployment.yaml with the following content:
# MTE API Relay container deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mte-api-relay-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mte-api-relay
  template:
    metadata:
      labels:
        app: mte-api-relay
    spec:
      containers:
        - name: mte-api-relay
          image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-api-relay:4.4.9
          ports:
            - containerPort: 8080
          env:
            - name: UPSTREAM
              value: __YOUR_UPSTREAM_URL__              # Update this value!
            - name: CLIENT_ID_SECRET
              value: __YOUR_CLIENT_ID_SECRET__          # Update this value!
            - name: OUTBOUND_TOKEN
              value: __YOUR_OUTBOUND_ACCESS_TOKEN__     # Update this value!
            - name: REDIS_URL
              value: "redis://my-redis-service:6379"
---
# MTE API Relay Service to expose the deployment
apiVersion: v1
kind: Service
metadata:
  name: mte-api-relay-service
spec:
  type: LoadBalancer
  selector:
    app: mte-api-relay
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
# Redis container deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-redis
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
        - name: redis
          image: redis:7.2
          ports:
            - containerPort: 6379
---
# Redis Service to expose the deployment
apiVersion: v1
kind: Service
metadata:
  name: my-redis-service
spec:
  type: ClusterIP
  selector:
    app: my-redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
The above configuration provides these resources:
- mte-api-relay-deployment (Deployment): Runs 2 replicas of the mte-api-relay container, exposing port 8080 with environment variables for the upstream service, client ID secret, outbound token, and Redis connection.
- mte-api-relay-service (Service – LoadBalancer): Exposes the mte-api-relay pods externally on port 8080, forwarding traffic to container port 8080.
- my-redis-deployment (Deployment): Runs a single Redis instance on port 6379.
- my-redis-service (Service – ClusterIP): Provides internal cluster access to the Redis pod over port 6379.
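For production clusters, you may prefer to keep CLIENT_ID_SECRET and OUTBOUND_TOKEN out of the Deployment manifest and load them from a Kubernetes Secret instead. A minimal sketch (the Secret name mte-api-relay-secrets is an assumption):

```yaml
# Hypothetical Secret holding the relay's sensitive settings
apiVersion: v1
kind: Secret
metadata:
  name: mte-api-relay-secrets
type: Opaque
stringData:
  CLIENT_ID_SECRET: __YOUR_CLIENT_ID_SECRET__
  OUTBOUND_TOKEN: __YOUR_OUTBOUND_ACCESS_TOKEN__
---
# In the Deployment's container spec, reference the Secret instead of
# inline values:
#   env:
#     - name: CLIENT_ID_SECRET
#       valueFrom:
#         secretKeyRef:
#           name: mte-api-relay-secrets
#           key: CLIENT_ID_SECRET
```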
Kubernetes Commands
To deploy the application to your Kubernetes cluster, run:
kubectl apply -f mte-api-relay-deployment.yaml
You can check the status of your deployment with:
kubectl get deployments
kubectl get pods
kubectl get services
To check the logs of a running pod, use the command
kubectl logs <POD_NAME>
To scale up the number of replicas, use the command
kubectl scale deployment mte-api-relay-deployment --replicas=<NUMBER_OF_REPLICAS>
To delete the deployment and service, use the command
kubectl delete -f mte-api-relay-deployment.yaml
Source: Kubernetes Deployments
Testing & Health Checks
- Monitor container logs for startup messages
- Use the default or custom echo routes to test container responsiveness:
- Default: /api/mte-echo
- Custom message: /api/mte-echo?msg=test
Expected response:
{
  "message": "test",
  "timestamp": "<timestamp>"
}
Troubleshooting
- Invalid configuration: Check logs for missing or invalid environment variables.
- Relay unreachable: Verify firewall, networking, or Kubernetes service configuration.
- Redis connection issues: Ensure REDIS_URL is reachable in your environment.
- Cannot reach upstream service: Verify the container can resolve and connect to the target host.
Security
- No sensitive data is stored in the container.
- No root privileges required.
- Recommended to deploy close to the upstream service to minimize exposure of unencrypted traffic.
Costs
Private infrastructure costs (VMs, storage, networking, Redis clusters) are customer-managed. No AWS charges are incurred for on-premise usage.
Maintenance
Routine Updates
- Updated container images are distributed through Eclypses.
Fault Recovery
- Relaunch the Relay container; paired API Relays will automatically re-establish secure communication.
Key/Variable Rotation Recommendations
- Rotate the CLIENT_ID_SECRET and OUTBOUND_TOKEN every 90 days, per security best practices.
Support
For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)