MTE Relay Server On-Premise Deployment

Introduction

MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server that sits in front of your existing backend and communicates with an MTE Relay Client, encoding and decoding all network traffic.

This guide provides instructions for customers to pull the MTE Relay Server Docker image from our private Amazon ECR repository and deploy it in an on-premise environment using Docker, Docker Swarm, Kubernetes, or other container runtimes.

Prerequisites

Technical Requirements

  • An existing backend service that accepts HTTP calls.
  • Docker installed and running on your system.
  • The AWS CLI installed on your system.

Credentials

  • You must have an AWS Access Key ID and AWS Secret Access Key provided by Eclypses to access the private container repository.

Step 1: Configure AWS CLI Access

You will need to configure a new AWS CLI profile using the credentials we provided. This keeps your access separate from any other AWS accounts you might have.

  1. Open your terminal or command prompt.

  2. Run the following command to start the configuration process:

    aws configure --profile eclypses-customer-on-prem
  3. The CLI will prompt you for the following information. Use the values provided to you by Eclypses.

    • AWS Access Key ID: Paste the Access Key ID.
    • AWS Secret Access Key: Paste the Secret Access Key.
    • Default region name: Enter us-east-1.
    • Default output format: Enter json.
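
If you prefer to configure the profile non-interactively (for example, from a provisioning script), the same values can be set with individual commands. Replace the placeholders with the credentials provided by Eclypses:

aws configure set aws_access_key_id <YOUR_ACCESS_KEY_ID> --profile eclypses-customer-on-prem
aws configure set aws_secret_access_key <YOUR_SECRET_ACCESS_KEY> --profile eclypses-customer-on-prem
aws configure set region us-east-1 --profile eclypses-customer-on-prem
aws configure set output json --profile eclypses-customer-on-prem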

Step 2: Pull the Docker Image

Next, use your configured AWS profile to authenticate Docker with our Amazon ECR repository and pull the image.

  1. Run the following command to retrieve an authentication token and log Docker in.

    aws ecr get-login-password --region us-east-1 --profile eclypses-customer-on-prem | docker login --username AWS --password-stdin 321186633847.dkr.ecr.us-east-1.amazonaws.com

    A Login Succeeded message confirms successful authentication.

  2. Pull the MTE Relay Server image to your local machine.

    docker pull 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.7

Note: Once pulled, you may want to re-tag this image and push it to your own internal container registry for easier distribution within your infrastructure.
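
For example, assuming a hypothetical internal registry at registry.example.com (substitute your own registry host and repository path):

docker tag 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.7 registry.example.com/mte-relay-server:4.4.7
docker push registry.example.com/mte-relay-server:4.4.7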

Step 3: Server Configuration

Configure the MTE Relay Server using the following environment variables. A sample configuration file is shown after the lists below.

Required Configuration

  • UPSTREAM
    • The upstream application IP address or URL that inbound requests will be proxied to.
  • CORS_ORIGINS
    • A comma-separated list of URLs allowed to make cross-origin requests.
  • CLIENT_ID_SECRET
    • A secret (32+ characters recommended) used to sign the x-mte-client-id header for sender validation. See the example after this list for one way to generate a suitable value.
  • REDIS_URL
    • The connection URL for your Redis instance (e.g., redis://<host>:<port>). In load-balanced environments, a shared Redis cache is required for MTE state management between container instances.
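
One way to generate a sufficiently long random value for CLIENT_ID_SECRET (assuming OpenSSL is available on your system) is:

openssl rand -base64 32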

Optional Configuration

  • PORT
    • The port the server listens on. Default: 8080.
  • DEBUG
    • Enables verbose debug logging. Set to true to enable. Default: false.
  • PASS_THROUGH_ROUTES
    • A comma-separated list of routes to proxy without MTE protection (e.g., "/health,/version").
  • MTE_ROUTES
    • A comma-separated list of routes to protect with MTE. If set, only these routes are processed and all other routes return a 404 (unless listed in PASS_THROUGH_ROUTES). If omitted, all routes not in PASS_THROUGH_ROUTES are protected.
  • CORS_METHODS
    • Allowed HTTP methods for CORS. Default: GET,POST,PUT,DELETE.
  • HEADERS
    • A JSON string of headers to add to all requests/responses (e.g., '{"x-service-name":"mte-relay"}').
  • MAX_POOL_SIZE
    • The number of MTE encoders/decoders held in memory. A larger pool uses more memory but can handle more concurrent traffic. Default: 25.
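
For reference, a complete set of these variables might look like the following environment file. The values are illustrative placeholders, not defaults shipped with the image; adjust them to your environment:

# mte-relay.env (illustrative values)
UPSTREAM=https://api.my-company.com
CORS_ORIGINS=https://example.com,http://localhost:3000
CLIENT_ID_SECRET=YOUR_32_PLUS_CHARACTER_CLIENT_ID_SECRET
REDIS_URL=redis://redis:6379

# Optional settings (shown with defaults or example values)
PORT=8080
DEBUG=false
PASS_THROUGH_ROUTES=/health,/version
MTE_ROUTES=/api/login,/api/data
CORS_METHODS=GET,POST,PUT,DELETE
HEADERS={"x-service-name":"mte-relay"}
MAX_POOL_SIZE=25

A file like this can be passed to docker run with --env-file mte-relay.env, or translated into the environment entries shown in the deployment examples below.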

Step 4: Deploy MTE Relay Server

MTE Relay Server is configured via environment variables. Below are examples for running it with Docker and Kubernetes.

Docker Run Example

docker run --rm -it \
  --name mte-relay \
  -p 8080:8080 \
  -e UPSTREAM="<YOUR_BACKEND_URL>" \
  -e CLIENT_ID_SECRET="<YOUR_CLIENT_ID_SECRET>" \
  -e CORS_ORIGINS="<YOUR_CORS_ORIGINS>" \
  -e REDIS_URL="<YOUR_REDIS_URL>" \
  321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.7

Docker Compose Example

# docker-compose.yml
version: "3.8"

services:
  mte-relay-server:
    image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.7
    ports:
      - "8080:8080"
    environment:
      - UPSTREAM=https://api.my-company.com
      - CLIENT_ID_SECRET=YOUR_32_PLUS_CHARACTER_CLIENT_ID_SECRET
      - CORS_ORIGINS=https://example.com,http://localhost:3000
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis

  redis:
    image: "redis:alpine"
Kubernetes Deployment Example

# mte-relay-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mte-relay-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mte-relay
  template:
    metadata:
      labels:
        app: mte-relay
    spec:
      containers:
        - name: mte-relay-server
          image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.7
          ports:
            - containerPort: 8080
          env:
            - name: UPSTREAM
              value: "<YOUR_BACKEND_URL>"
            - name: CLIENT_ID_SECRET
              value: "<YOUR_CLIENT_ID_SECRET>"
            - name: CORS_ORIGINS
              value: "<YOUR_CORS_ORIGINS>"
            - name: REDIS_URL
              value: "redis://my-redis-service:6379"
---
apiVersion: v1
kind: Service
metadata:
  name: mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: mte-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
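
Note: Because the image lives in a private registry, an on-premise Kubernetes cluster typically needs an image pull secret before it can pull directly from ECR. A minimal sketch, using the AWS profile from Step 1 (ECR tokens expire after roughly 12 hours, so this is normally refreshed by automation), might look like:

kubectl create secret docker-registry ecr-pull-secret \
  --docker-server=321186633847.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1 --profile eclypses-customer-on-prem)"

The secret is then referenced in the Deployment's pod spec, at the same level as containers:

      imagePullSecrets:
        - name: ecr-pull-secret

If you instead re-tagged and pushed the image to your own internal registry (see the note in Step 2), reference that image and its credentials here.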

Step 5: Testing and Health Checks

Health Check Route

Use the /api/mte-echo route to verify the service is running.

curl http://<MTE_RELAY_HOST_OR_IP>:<PORT>/api/mte-echo

A successful response will be a JSON payload with a timestamp:

{
  "echo": "test",
  "time": "2025-08-14T20:00:00.000Z"
}
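
The same route can back automated health checks. As one example, a readiness probe could be added to the container in the Kubernetes Deployment from Step 4 (a sketch; tune the timing values for your environment):

          readinessProbe:
            httpGet:
              path: /api/mte-echo
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10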

Troubleshooting

  • Review Container Logs: Most issues can be diagnosed by checking the container's logs.
    • docker logs mte-relay
    • kubectl logs -l app=mte-relay
  • ZodError on Startup: This is a configuration validation error. The log will indicate which required environment variable is missing or invalid.
  • Cannot Reach Upstream Service: Ensure the MTE Relay container can resolve and connect to the UPSTREAM URL. Check firewalls and container network policies.
  • Cannot Reach Redis: Verify the REDIS_URL is correct and the container has network access to your Redis instance.
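
For the Redis check, one quick test is to run redis-cli from a throwaway container on the same Docker network as the relay (the network name and URL below are examples; substitute your own):

docker run --rm --network my-app-network redis:alpine redis-cli -u "redis://redis:6379" ping

A PONG response confirms that Redis is reachable at that URL.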

Support

The Eclypses support center is available from 8:00 am to 5:00 pm MST, Monday through Friday. Please direct all inquiries to: customer_support@eclypses.com