MTE Relay Server on AWS
Introduction:
MTE Relay Server is an end-to-end encryption system that protects all network requests with next-generation application data security. It acts as a proxy server that sits in front of your existing backend and communicates with an MTE Relay Client, encoding and decoding all network traffic. MTE Relay Server is highly customizable and can be configured to integrate with a number of other services through custom adapters.
Introductory Video
Customer Deployment
The MTE Relay Server is typically used to decode HTTP requests from a browser-based web application (or mobile WebView) and proxy the decoded requests to the intended API. In AWS, the MTE Relay Server can be orchestrated as a single- or multi-container ECS (Elastic Container Service) Task. Orchestration and setup of the container service can take 1-2 days. While it is possible to deploy this workload in an EKS (Elastic Kubernetes Service) cluster, this document focuses on a typical deployment in AWS ECS.
Typical Customer Deployment
In an ideal situation, a customer will already have (or plan to create):
- A web application with a JavaScript front-end or a mobile application using WebView.
- A RESTful Web API back-end.
When completed, the following services and resources will be set up:
| Service | Required | Purpose |
|---|---|---|
| AWS ECS | True | Container Orchestration |
| AWS ElastiCache | True | MTE State Management |
| AWS CloudWatch | True | Logging |
| AWS VPC | True | Virtual Private Cloud |
| Elastic Load Balancer | True | Required if orchestrating multiple containers |
| AWS Secrets Manager | False | Recommended for Environment Variables |
| Resource | Required | Purpose |
|---|---|---|
| Eclypses MTE Relay Server container(s) | True | Purchased from the AWS Marketplace |
| ElastiCache for Redis | True | Shared MTE state storage |
| Application Load Balancer | True | Recommended even for a single-container workflow |
| ECS Task | True | Orchestration |
Deployment Options:
- The default deployment: Multiple AZ, Single Region
- The MTE Relay Container is a stateful service: it creates unique, one-to-one MTE states with individual client-side browser/application sessions. For Multi-AZ or Multi-Region deployments, a single ElastiCache service must therefore be accessible from every region that might process HTTP requests from a given client session.
Supported Regions:
Not currently supported in:
- GovCloud
- Middle East (Bahrain)
- Middle East (UAE)
- China
Port Mappings
- Container runs on port 8080 for HTTP traffic
Client-Side Implementation:
MTE Relay Server is intended for use with your client-side application configured to use MTE Relay Client. As a result, a client can pair with the server and send MTE-encoded payloads. Without a compatible client-side application, this product has extremely limited utility. The MTE Relay server provides access to the client JavaScript module required to secure/unsecure data on the client. Once the server is set up, the client JS code is available at the route /public/mte-relay-browser.js.
Reference the setup and usage instructions here.
Client-Side implementation is typically less than one day of work but may take longer in more complex DevOps pipelines.
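As a quick sanity check after the server is deployed, you can confirm the client module is being served from the route mentioned above. The hostname below is a placeholder; substitute your relay's address.

```shell
# Fetch the client JS module from the relay and print only the HTTP status.
# relay.example.com is a hypothetical hostname for your MTE Relay deployment.
curl -fsS -o /dev/null -w "%{http_code}\n" \
  https://relay.example.com/public/mte-relay-browser.js
```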
Prerequisites and Requirements
Technical
The following elements are required before deployment:
- An application (or planned application) that communicates with a Web API using HTTP requests.
  - Ideally, this is an application the purchaser owns, or one to which they can import packages and add/change custom code.
Skills or Specialized Knowledge
- Familiarity with AWS ECS (or EKS orchestration)
- General familiarity with AWS Services like ElastiCache
- Ability to write/edit front-end client application code. JavaScript knowledge is ideal.
Configuration
The customer is required to create the following keys. It is recommended that these keys be stored in AWS Secrets Manager.
Configuration Video
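Following the recommendation above, secret values can be created in AWS Secrets Manager with the AWS CLI. This is a sketch under assumptions: the secret name is a placeholder, and `openssl rand -hex 16` is one convenient way to produce the recommended 32+ character secret.

```shell
# Hypothetical sketch: store the relay's signing secret in AWS Secrets Manager
# so the ECS task definition can reference it instead of a plaintext env var.
# "mte-relay/client-id-secret" is a placeholder name of our choosing.
aws secretsmanager create-secret \
  --name mte-relay/client-id-secret \
  --description "Signing secret for the x-mte-client-id header" \
  --secret-string "$(openssl rand -hex 16)"   # 32-character hex string
```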
The MTE Relay is configurable using the following environment variables:
Required Configuration Variables:
UPSTREAM
- Required
- The upstream application IP address, ingress, or URL that inbound requests will be proxied to.
CORS_ORIGINS
- Required
- A comma-separated list of URLs that will be allowed to make cross-origin requests to the server.
CLIENT_ID_SECRET
- Required
- A secret that will be used to sign the x-mte-client-id header. A 32+ character string is recommended.
- Note: This will allow you to personalize your client/server relationship. It is required to validate the sender.
AWS_REGION
- Required
- The region you are running your Relay server in. Example: `us-east-1`
REDIS_URL
- Strongly recommended in production environments
- The entry point to your Redis ElastiCache cluster. If not set, the container uses internal memory. In load-balanced workflows, a common cache location is required so that any container can maintain the paired MTE state for a given client.
Optional Configuration Variables:
The following configuration variables have default values. If the customer does choose to create the following keys, it is recommended that these keys be stored in AWS Secrets Manager.
PORT
- The port that the server will listen on.
- Default: `8080`
- Note: If this value is changed, make sure it is also changed in your application load balancer.
DEBUG
- A flag that enables debug logging.
- Default: `false`
PASS_THROUGH_ROUTES
- A list of routes that will be passed through to the upstream application without being MTE encoded/decoded.
- Example: `/some_route_that_is_not_secret`
MTE_ROUTES
- A list of routes that will be MTE encoded/decoded. If this optional property is included, only the routes listed will be MTE encoded/decoded, and any routes not listed here or in `PASS_THROUGH_ROUTES` will 404. If this optional property is not included, all routes not listed in `PASS_THROUGH_ROUTES` will be MTE encoded/decoded.
CORS_METHODS
- A list of HTTP methods that will be allowed to make cross-origin requests to the server.
- Default: `GET, POST, PUT, DELETE`
- Note: `OPTIONS` and `HEAD` are always allowed.
HEADERS
- An object of headers that will be added to all requests and responses.
MAX_POOL_SIZE
- The number of encoder and decoder objects held in a pool. A larger pool consumes more memory, but also handles more traffic more quickly. This number is applied to each of the four pools: the MTE Encoder, MTE Decoder, MKE Encoder, and MKE Decoder pools.
- Default: `25`
YAML Configuration Examples
AWS Task Definition Parameters
Minimal Configuration Example
upstream: https://api.my-company.com
clientIdSecret: 2DkV4DDabehO8cifDktdF9elKJL0CKrk
corsOrigins:
- https://www.my-company.com
- https://dashboard.my-company.com
redisURL: redis://10.0.1.230:6379
Full Configuration Example
upstream: https://api.my-company.com
clientIdSecret: 2DkV4DDabehO8cifDktdF9elKJL0CKrk
corsOrigins:
- https://www.my-company.com
- https://dashboard.my-company.com
redisURL: redis://10.0.1.230:6379
port: 3000
debug: true
passThroughRoutes:
- /health
- /version
mteRoutes:
- /api/v1/*
- /api/v2/*
corsMethods:
- GET
- POST
- DELETE
headers:
  x-service-name: mte-relay
maxPoolSize: 10
ENV Configuration Examples
Minimal Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
Full Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
CORS_ORIGINS='https://www.my-company.com,https://dashboard.my-company.com'
REDIS_URL='redis://10.0.1.230:6379'
PORT=3000
DEBUG=true
PASS_THROUGH_ROUTES='/health,/version'
MTE_ROUTES='/api/v1/*,/api/v2/*'
CORS_METHODS='GET,POST,DELETE'
HEADERS='{"x-service-name":"mte-relay"}'
MAX_POOL_SIZE=10
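Before wiring the container into ECS, an env file like the one above can be exercised locally with Docker. This is a hypothetical smoke test: the env file path is assumed to hold the variables shown, and `MTE_RELAY_IMAGE` is a placeholder for the image you subscribed to on the AWS Marketplace.

```shell
# Hypothetical local smoke test of the relay container.
# mte-relay.env is assumed to contain the ENV configuration shown above;
# MTE_RELAY_IMAGE is a placeholder for your AWS Marketplace image URI.
MTE_RELAY_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/mte-relay:latest"

docker run --rm \
  --env-file ./mte-relay.env \
  -p 8080:8080 \
  "$MTE_RELAY_IMAGE"
```

If you set `PORT` to something other than `8080` in the env file, map that port instead.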
Database Credentials:
None
Key/Variable Rotation Recommendations:
It is not necessary to rotate any of the keys in the Environment Variables section, as they do not hold cryptographic value. However, it is good practice to rotate the "CLIENT_ID_SECRET" every 90 days (about 3 months), as recommended by many modern best practices.
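Rotation can be scripted against Secrets Manager. This is a sketch under assumptions: the secret name is a placeholder from our earlier examples, and `openssl rand -base64 24` is simply one way to generate a fresh 32-character value.

```shell
# Hypothetical rotation sketch: write a new random CLIENT_ID_SECRET value.
# "mte-relay/client-id-secret" is a placeholder secret name.
aws secretsmanager put-secret-value \
  --secret-id mte-relay/client-id-secret \
  --secret-string "$(openssl rand -base64 24)"   # 32-character value
```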
Architecture Diagrams
Default Deployment
ECS Deployment
Alternative Deployment
EKS Deployment – Multiple Load-Balanced Containers
*This process is not described in this document
Security
The MTE Relay does not require AWS account root privilege for deployment or operation. The container only handles data in transit from client to server and does not store any sensitive data.
AWS Identity and Access Management (IAM) Roles:
In Elastic Container Services (ECS), create a new Task Definition.
Give your task definition appropriate roles:
- ecsTaskExecutionRole
- Note: The task must have access to the AWSMarketplaceMeteringRegisterUsage policy.
For dependencies:
- AWSServiceRoleForElastiCache: Permission to read/write from ElastiCache instance
- AWSServiceRoleForCloudWatch: Permission to write to AWS CloudWatch log service
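The marketplace metering permission noted above can be attached from the CLI. This is a hedged sketch: the role name is a placeholder, and it assumes you are using the AWS-managed AWSMarketplaceMeteringRegisterUsage policy rather than an inline policy.

```shell
# Hypothetical sketch: attach the AWS-managed marketplace metering policy to
# the ECS task role so the container can report usage.
# "mteRelayTaskRole" is a placeholder role name.
aws iam attach-role-policy \
  --role-name mteRelayTaskRole \
  --policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage
```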
VPC (Virtual Private Cloud)
While traffic is protected between the client application and MTE Relay Server, the traffic is not encoded while it is proxied to the upstream service. The MTE Relay container should be deployed in the same VPC as the upstream servers so that proxied traffic can remain internal only. See the architecture diagrams for a visual representation.
Costs
The MTE Relay container is a usage-based model based on Cost/Unit/Hr, where a unit = AWS ECS Task. See the marketplace listing for costs.
Billable Services
| Service | Required | Purpose |
|---|---|---|
| AWS ECS | True | Container Orchestration |
| AWS ElastiCache | True | MTE State Management |
| AWS CloudWatch | True | Logging |
| AWS VPC | True | Virtual Private Cloud |
| Elastic Load Balancer | True | Required if orchestrating multiple containers |
| AWS Secrets Manager | False | Recommended for Environment Variables |
Sizing
ECS
The Relay Server can be load-balanced as needed and configured for autoscaling. There are no size requirements.
- The MTE Relay Server Container workload is subject to ECS Service Limits.
Deployment Assets
VPC Setup
It is not necessary to create a dedicated VPC for this workflow. Ideally, your MTE Relay Server is run in the same VPC as your upstream API.
AWS ElastiCache Redis Setup
ElastiCache Security Group Settings
- Create a new security group.
- Give your security group a name and description, e.g. MTERelayElastiCacheSG.
- Allow traffic from the MTE Relay ECS service:
  - Add a new inbound rule.
  - Type: All TCP
  - Source: Custom
  - Click in the search box, then scroll down and select your security group MteRelayServiceSG.
  - Description: Allow traffic from MTE Relay ECS Service.
- If an outbound rule does not already exist to allow all outbound traffic, create one.
  - Type: All Traffic
  - Destination: Anywhere IPv4
- Click Create Security Group.
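The same console steps can be sketched with the AWS CLI. The VPC ID and the MteRelayServiceSG group ID below are placeholders for your own resources.

```shell
# Hypothetical CLI equivalent of the console steps above.
# vpc-... and sg-... IDs are placeholders; substitute your own.
ELASTICACHE_SG=$(aws ec2 create-security-group \
  --group-name MTERelayElastiCacheSG \
  --description "Allow traffic from MTE Relay ECS Service" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Inbound rule: all TCP from the MTE Relay service's security group.
aws ec2 authorize-security-group-ingress \
  --group-id "$ELASTICACHE_SG" \
  --protocol tcp --port 0-65535 \
  --source-group sg-0123456789abcdef0   # MteRelayServiceSG (placeholder ID)
```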
ElastiCache Setup
- Create an ElastiCache cluster.
- Select the Redis interface.
  - Subject to quota limits.
  - The minimum/maximum defaults should be sufficient.
- Assign your security group that allows traffic on ports 6379 and 6380 from your MTE Relay Server ECS instance:
  - Select your newly created cache, then click Modify from the top right.
  - Scroll down to Security, then under Security Groups click Manage.
  - Remove the default security group and add the security group MTERelayElastiCacheSG.
  - Save these changes.
- Configure ElastiCache for Redis in the private subnet.
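For teams provisioning from the CLI instead of the console, the steps above roughly correspond to the sketch below. The subnet group name, node type, and security group ID are assumptions; size the cache to your own workload.

```shell
# Hypothetical sketch: a single-node Redis cache in a private subnet group,
# guarded by the MTERelayElastiCacheSG security group created earlier.
# Subnet group name and sg-... ID are placeholders.
aws elasticache create-cache-cluster \
  --cache-cluster-id mte-relay-cache \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1 \
  --cache-subnet-group-name mte-relay-private-subnets \
  --security-group-ids sg-0123456789abcdef0
```

The resulting endpoint becomes the value of `REDIS_URL`, e.g. `redis://<endpoint>:6379`.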
Setting up Elastic Container Service (ECS)
ECS will run the MTE Relay container and dynamically scale it when required. To do this, create a task definition to define what a single MTE Relay container looks like, create a Service to manage running the containers, and finally create an ECS Cluster for your service to run in.
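The three ECS pieces named above can be sketched with the AWS CLI. This is a hedged outline, not a complete recipe: `taskdef.json` is a hypothetical task-definition file you author separately, and the subnet and security group IDs are placeholders.

```shell
# Hypothetical sketch of the ECS setup described above.
# 1) A cluster for the workload.
aws ecs create-cluster --cluster-name mte-relay-cluster

# 2) A task definition describing a single MTE Relay container
#    (taskdef.json is a file you write; not shown here).
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 3) A service that keeps the desired number of containers running.
#    subnet-... and sg-... are placeholder IDs in your private subnets.
aws ecs create-service \
  --cluster mte-relay-cluster \
  --service-name mte-relay-service \
  --task-definition mte-relay \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"
```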