
MTE Relay Server On-Premise Deployment

Introduction

MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server in front of your backend, communicating with an MTE Relay Client to encode and decode all network traffic. The server is highly customizable and supports integration with other services through custom adapters.

Below is a typical architecture where a client application communicates with an MTE Relay Server, which then proxies decoded traffic to backend services:

[Diagram: Typical Use Case]

info

MTE Relay Servers can only communicate with MTE Relay Clients. An MTE API Relay cannot communicate with an MTE Relay Server or an MTE Relay Client SDK. MTE Relay Servers are strictly for client-to-server communications.


Prerequisites

Technical Requirements

  • A web or mobile application project that communicates with a backend API over HTTP.
  • Docker installed and running on your system.
  • AWS CLI installed.

Skills and Knowledge

  • Familiarity with Docker and/or Kubernetes.
  • Basic knowledge of configuring containerized services.

Credentials

  • An AWS Access Key ID and AWS Secret Access Key provided by Eclypses to access the private container repository.

Deployment Options

MTE Relay Server is provided as a Docker image and can be deployed on-premise using:

  • Docker Run
  • Docker Compose
  • Kubernetes
  • Other container runtimes (e.g., Podman, Docker Swarm, K3s)

1. Configure AWS CLI Access

You must configure a new AWS CLI profile using the credentials provided by Eclypses.

aws configure --profile eclypses-customer-on-prem

When prompted:

  • AWS Access Key ID: Enter the ID provided by Eclypses.
  • AWS Secret Access Key: Enter the secret key provided.
  • Default region name: us-east-1
  • Default output format: json
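Before pulling images, you can confirm the new profile authenticates correctly by querying the caller identity (this requires network access to AWS and the credentials provided by Eclypses):

```shell
# Verify the configured profile can authenticate against AWS
aws sts get-caller-identity --profile eclypses-customer-on-prem
```

A successful call prints the account and ARN associated with the credentials; an error here means the access key or secret was entered incorrectly.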

2. Pull the Docker Image

Video Guide: https://www.youtube.com/watch?v=Ft5mhREa64A

Authenticate Docker with the Eclypses ECR repository:

aws ecr get-login-password \
  --region us-east-1 \
  --profile eclypses-customer-on-prem \
  | docker login --username AWS \
  --password-stdin 321186633847.dkr.ecr.us-east-1.amazonaws.com

Then pull the image:

docker pull 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.9

Server Configuration

MTE Relay Server is configured using environment variables.

Required Variables

  • UPSTREAM – Upstream API or service URL.
  • CORS_ORIGINS – Comma-separated list of allowed origins.
  • CLIENT_ID_SECRET – Secret for signing client IDs (minimum 32 characters).
  • REDIS_URL (recommended for production) – Redis cluster for maintaining session pairs across load-balanced containers.
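CLIENT_ID_SECRET should be a random value of at least 32 characters. One way to generate a suitable secret (a sketch assuming OpenSSL is installed):

```shell
# Emit a 44-character base64 string (32 random bytes),
# which satisfies the 32-character minimum for CLIENT_ID_SECRET
openssl rand -base64 32
```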

Optional Variables

  • PORT – Default: 8080.
  • LOG_LEVEL – One of trace, debug, info, warning, error, panic, off. Default: info.
  • PASS_THROUGH_ROUTES – Routes proxied without MTE encoding.
  • MTE_ROUTES – If set, only listed routes use encoding; others return 404.
  • CORS_METHODS – Default: GET, POST, PUT, DELETE.
  • HEADERS – Object of custom headers.

Minimal Example

UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'

Full Example

UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
LOG_LEVEL=info
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
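Rather than exporting each variable individually, you can keep the configuration in an env file and hand that file to your container runtime. The sketch below uses the placeholder values from the minimal example; the filename relay.env is illustrative:

```shell
# Write the required configuration to an env file.
# Docker accepts this file via: docker run --env-file relay.env <IMAGE>
cat > relay.env <<'EOF'
UPSTREAM=https://api.my-company.com
CLIENT_ID_SECRET=2DkV4DDabehO8cifDktdF9elKJL0CKrk
CORS_ORIGINS=https://www.my-company.com,https://dashboard.my-company.com
EOF
```

This keeps secrets out of your shell history and makes the same configuration reusable across Docker Run and Docker Compose (via `env_file`).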

Deployment Steps

Option A: Docker Run

Video Guide: https://www.youtube.com/watch?v=7AVqdJoGmcE

The docker run command launches a single MTE Relay container, which is well suited to testing or local development.

Copy the command below, modify the environment variable values, and run it in your terminal.

docker run --rm -it \
  --name mte-relay \
  -p 8080:8080 \
  -e UPSTREAM=__YOUR_UPSTREAM_URL__ \
  -e CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__ \
  -e CORS_ORIGINS=__YOUR_CORS_ORIGINS__ \
  321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.9

Command Explanation:

  • docker run
    Runs a container from the specified image.
  • --rm
    Automatically removes the container when it exits.
  • -it
    Allocates an interactive terminal (-i for interactive input, -t for pseudo-TTY).
  • --name mte-relay
    Assigns a custom name (mte-relay) to the container.
  • -p 8080:8080
    Maps host port 8080 to container port 8080. You may change the host port if needed. Do not change the container port.
  • -e UPSTREAM=__YOUR_UPSTREAM_URL__
    Sets the environment variable UPSTREAM to the provided URL.
  • -e CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__
    Sets the environment variable CLIENT_ID_SECRET.
  • -e CORS_ORIGINS=__YOUR_CORS_ORIGINS__
    Sets the environment variable CORS_ORIGINS.
  • The last line is the image to run.

Option B: Docker Compose

Video Guide: https://www.youtube.com/watch?v=Ej_3PmyUEtA

Docker Compose provides a convenient way to define and manage multi-container applications by allowing you to describe all of your services in a single YAML file. Once defined, Docker Compose can automatically create and start the containers with a single command, ensuring consistency across environments.

Create a new file named docker-compose.yaml with the following content, and update the environment variable values as needed:

docker-compose.yaml
services:
  mte-relay-server:
    image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.9
    ports:
      - "8080:8080"
    environment:
      - UPSTREAM=__YOUR_UPSTREAM_URL__ # Update this value!
      - CLIENT_ID_SECRET=__YOUR_CLIENT_ID_SECRET__ # Update this value!
      - CORS_ORIGINS=__YOUR_CORS_ORIGINS__ # Update this value!
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis

  redis:
    image: "redis:alpine"

Docker Compose Commands

To start the services defined in the docker-compose.yaml file, run:

docker compose up

Helpful CLI Flags:

  • -f - Specify an alternate compose file.
    • Example: docker compose -f custom.yml up
  • -d - Run containers in the background.
    • Example: docker compose up -d

Check running containers with:

docker compose ps

Stop the services with:

docker compose down

info

In production environments, it may be better to use a Docker Swarm cluster for improved scalability, isolation of services, rolling updates, and management.

Sources:
Docker Compose Documentation
Docker Compose CLI


Option C: Kubernetes

Video Guide: https://www.youtube.com/watch?v=4HqYdn_5-9c

Kubernetes provides a powerful system for automating the deployment, scaling, and management of containerized applications. By defining resources in YAML files, it ensures reliability, scalability, and consistency across environments.

Create a file named mte-relay-deployment.yaml with the following content:

mte-relay-deployment.yaml
# MTE Relay container deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mte-relay-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mte-relay
  template:
    metadata:
      labels:
        app: mte-relay
    spec:
      containers:
        - name: mte-relay-server
          image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.9
          ports:
            - containerPort: 8080
          env:
            - name: UPSTREAM
              value: __YOUR_UPSTREAM_URL__ # Update this value!
            - name: CLIENT_ID_SECRET
              value: __YOUR_CLIENT_ID_SECRET__ # Update this value!
            - name: CORS_ORIGINS
              value: __YOUR_CORS_ORIGINS__ # Update this value!
            - name: REDIS_URL
              value: "redis://my-redis-service:6379"

---

# MTE Relay Service to expose the deployment
apiVersion: v1
kind: Service
metadata:
  name: mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: mte-relay
  ports:
    - protocol: TCP
      port: 8080 # External port, you may change this if needed
      targetPort: 8080

---

# Redis container deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-redis
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
        - name: redis
          image: redis:7.2
          ports:
            - containerPort: 6379

---

# Redis Service to expose the deployment
apiVersion: v1
kind: Service
metadata:
  name: my-redis-service
spec:
  type: ClusterIP
  selector:
    app: my-redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379

The above configuration provides these resources:

  • mte-relay-deployment (Deployment): Runs 2 replicas of the mte-relay-server container, exposing port 8080 with environment variables for upstream service, client secret, CORS, and Redis connection.
  • mte-relay-service (Service - LoadBalancer): Exposes the mte-relay pods externally on port 8080, forwarding traffic to container port 8080.
  • my-redis-deployment (Deployment): Runs a single Redis instance on port 6379.
  • my-redis-service (Service - ClusterIP): Provides internal cluster access to the Redis pod over port 6379.
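Because the image is hosted in a private ECR repository, your cluster nodes need pull credentials unless they already have IAM-based ECR access (as on EKS). One common approach, sketched below with the illustrative secret name ecr-pull-secret, is to create a docker-registry secret with `kubectl create secret docker-registry` (using AWS as the username and the output of `aws ecr get-login-password` as the password) and reference it from the pod template. Note that ECR login tokens expire after 12 hours, so the secret must be refreshed periodically.

```yaml
# Fragment of the Deployment's pod template: add imagePullSecrets
# alongside the existing containers list.
spec:
  imagePullSecrets:
    - name: ecr-pull-secret # created with: kubectl create secret docker-registry
  containers:
    - name: mte-relay-server
      image: 321186633847.dkr.ecr.us-east-1.amazonaws.com/customer/on-prem/mte-relay-server:4.4.9
```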

Kubernetes Commands

To deploy the application to your Kubernetes cluster, run:

kubectl apply -f mte-relay-deployment.yaml

You can check the status of your deployment with:

kubectl get deployments
kubectl get pods
kubectl get services

To check the logs of a running pod, run:

kubectl logs <POD_NAME>

To scale the number of replicas, run:

kubectl scale deployment mte-relay-deployment --replicas=<NUMBER_OF_REPLICAS>

To delete the deployments and services, run:

kubectl delete -f mte-relay-deployment.yaml

Source: Kubernetes Deployments


Testing & Health Checks

  • Monitor container logs for startup messages
  • Use the default or custom echo routes to test container responsiveness:
    • Default: /api/mte-echo
    • Custom Message: /api/mte-echo?msg=test

Expected response:

{
  "message": "test",
  "timestamp": "<timestamp>"
}
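For example, once the container is running you can exercise the echo route with curl (this assumes the relay is reachable on localhost port 8080; adjust the host and port to match your deployment):

```shell
# Send a custom message to the echo route; the relay should reflect it back
curl -s "http://localhost:8080/api/mte-echo?msg=test"
```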

Troubleshooting

  1. Invalid Configuration
    • Check logs for missing/invalid environment variables.
  2. Relay unreachable
    • Verify firewall, networking, or Kubernetes service configuration.
  3. Redis connection issues
    • Ensure REDIS_URL is reachable in your environment.

Security

  • No sensitive data stored in the container.
  • No root privileges required.

Costs

Private infrastructure costs (VMs, storage, networking, Redis clusters) are customer-managed.


Maintenance

Routine Updates

  • Updated container images are distributed through Eclypses.

Fault Recovery

  • Relaunch the Relay container; clients will automatically re-pair.

Support

For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)