MTE Relay Server on Azure

Introduction

MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server in front of your backend, communicating with an MTE Relay Client to encode and decode all network traffic. The server is highly customizable and supports integration with other services through custom adapters.

Below is a typical architecture where a client application communicates with an MTE Relay Server, which then proxies decoded traffic to backend services:

[Diagram: Typical Use Case]


Prerequisites

Technical Requirements

  • A web or mobile application project that communicates with a backend API over HTTP.

Skills and Knowledge

  • Familiarity with AKS and Kubernetes.
  • Ability to manage deployments with kubectl.

Deployment Options

MTE Relay Server is provided as a Docker image and is typically run on Azure Kubernetes Service (AKS).

Azure Kubernetes Service (AKS)

The following examples assume kubectl is configured to target your AKS cluster.
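
If kubectl is not yet connected to the cluster, credentials can be fetched with the Azure CLI; the resource group and cluster names below are placeholders:

az aks get-credentials --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>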

Example Deployment (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-mte-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-mte-relay
  template:
    metadata:
      labels:
        app: azure-mte-relay
    spec:
      containers:
        - name: azure-mte-relay
          image: <CONTAINER_IMAGE>
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: CORS_ORIGINS
              value: <YOUR CORS ORIGINS HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>

Deploy:

kubectl apply -f deployment.yaml
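
For production, you may prefer not to keep CLIENT_ID_SECRET in the manifest as plain text. A minimal sketch using a standard Kubernetes Secret (the name mte-relay-secrets is an arbitrary example):

apiVersion: v1
kind: Secret
metadata:
  name: mte-relay-secrets
type: Opaque
stringData:
  CLIENT_ID_SECRET: <YOUR CLIENT ID SECRET HERE>

The container spec can then reference the secret instead of an inline value:

          env:
            - name: CLIENT_ID_SECRET
              valueFrom:
                secretKeyRef:
                  name: mte-relay-secrets
                  key: CLIENT_ID_SECRET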

Example Service (service.yaml)

Expose the deployment via a load balancer:

apiVersion: v1
kind: Service
metadata:
  name: azure-mte-relay-service
spec:
  type: LoadBalancer
  selector:
    app: azure-mte-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Deploy:

kubectl apply -f service.yaml
kubectl get services
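
Once Azure has provisioned the load balancer, the service's EXTERNAL-IP column shows the public address that clients (and the echo test below) should use. You can wait for it to appear with:

kubectl get service azure-mte-relay-service --watch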

Remove Deployment and Service

kubectl delete -f deployment.yaml
kubectl delete -f service.yaml

Server Configuration

MTE Relay Server is configured using environment variables.

Required Variables

  • UPSTREAM – Upstream API or service URL.
  • CORS_ORIGINS – Comma-separated list of allowed origins.
  • CLIENT_ID_SECRET – Secret for signing client IDs (minimum 32 characters).
  • REDIS_URL (recommended for production) – URL of a Redis instance used to maintain MTE session pairs across load-balanced containers.

Optional Variables

  • PORT – Default: 8080.
  • LOG_LEVEL – One of trace, debug, info, warning, error, panic, off. Default: info.
  • PASS_THROUGH_ROUTES – Routes proxied without MTE encoding.
  • MTE_ROUTES – If set, only listed routes use encoding; others return 404.
  • CORS_METHODS – Default: GET, POST, PUT, DELETE.
  • HEADERS – JSON-encoded object of custom headers.

Minimal Example

UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'

Full Example

UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
LOG_LEVEL=info
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
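
On AKS, these variables can also be kept out of the Deployment manifest itself. A sketch using a ConfigMap for the non-sensitive values (the name mte-relay-config is an arbitrary example; CLIENT_ID_SECRET would stay in a Secret such as the one shown earlier):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mte-relay-config
data:
  UPSTREAM: "https://api.my-company.com"
  CORS_ORIGINS: "https://www.my-company.com,https://dashboard.my-company.com"
  LOG_LEVEL: "info"

The container spec then loads everything with envFrom:

          envFrom:
            - configMapRef:
                name: mte-relay-config
            - secretRef:
                name: mte-relay-secrets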

Client-Side Setup

Eclypses provides client-side SDKs for integrating web and mobile applications with MTE Relay Server.

Client-side integration typically requires less than one day of effort.


Testing & Health Checks

  • Monitor logs in AKS or Azure Monitor.

  • Test Echo route:

    curl 'http://<DOMAIN>/api/mte-echo?msg=test'
  • Expected response:

    {
      "echo": "test",
      "time": "<timestamp>"
    }
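
If you want Kubernetes itself to check the relay, the echo route can back standard HTTP probes. A sketch, assuming the route is acceptable as a health endpoint, to be added to the container spec in deployment.yaml:

          readinessProbe:
            httpGet:
              path: /api/mte-echo
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/mte-echo
              port: 8080
            periodSeconds: 30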

Monitoring

Azure Monitor and Grafana

Grafana Dashboard Configuration

Import the provided Grafana dashboard:
Download Grafana Dashboard

Dashboard Metrics:

  • Requests Processed [req/sec]
  • Request Time [ms]
  • Upstream Proxy Time [ms]
  • Average Decode Time [ms]
  • Average Encode Time [ms]

Performance Metrics

Performance was measured with ~1 KB request/response payloads:

Concurrency   Req/Sec (Relay)   Req/Sec (API)   Relay %   Extra Latency (Median)
400           92.6              94              98.5%     +10 ms
500           179               183             97.8%     +17 ms
550           215               226             95.1%     +45 ms
600           225.7             267.6           84.3%     +202 ms

Note: At higher volumes (≥550 concurrent requests), scale out with multiple Relay pods behind Azure load balancing, as sketched below.
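
A sketch of scaling out the example deployment with kubectl (remember to set REDIS_URL so paired sessions are shared across pods):

kubectl scale deployment azure-mte-relay-deployment --replicas=3

or, to scale automatically based on CPU usage:

kubectl autoscale deployment azure-mte-relay-deployment --min=2 --max=6 --cpu-percent=70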


Troubleshooting

  1. Invalid Configuration
    • Check the logs for missing or invalid environment variables (see the log command after this list).
  2. Relay unreachable
    • Verify AKS service and NSG rules.
  3. Redis connection issues
    • Ensure Redis is reachable with valid credentials.
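
Logs can be read directly from the deployment with kubectl (deployment name from the example above):

kubectl logs deployment/azure-mte-relay-deployment --tail=100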

Security

  • No sensitive data is stored in the container.
  • No root privileges required.

Maintenance

Routine Updates

  • Updated container images are distributed through the Azure Marketplace.

Fault Recovery

  • Restart the Relay container pod; clients will re-pair automatically.
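
With the example deployment, one way to do this is a rolling restart:

kubectl rollout restart deployment/azure-mte-relay-deployment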

Support

For assistance, contact Eclypses Support:
📧 customer_support@eclypses.com
🕒 Monday–Friday, 8:00 AM–5:00 PM MST (excluding holidays)