MTE API Relay - AWS Usage Guide
Introduction:
The MTE API Relay Container is a NodeJS application that encodes and proxies HTTP payloads to another MTE API Relay Container, which decodes and proxies the request to another API Service. The Eclypses MTE is a compiled software library combining quantum-resistant algorithms with proprietary, patented techniques to encode data. A common use case for the MTE API Relay is to add an MTE translation layer of security to HTTP Requests being made across the open internet (Example: A server-side application calls a third-party API service).
Introductory Video
Customer Deployment
The MTE API Relay Container is typically used to encode HTTP Requests from a server-side application and proxy them to another MTE API Relay Container, which decodes the request and proxies it to the intended API. In AWS, the MTE API Relay Container can be orchestrated as a single container or as multiple containers in an ECS (Elastic Container Service) Task. Orchestration and setup of the container service could take a few hours. While it is possible to deploy this workload in an EKS (Elastic Kubernetes Service) Cluster, this document will focus on a typical deployment in AWS ECS.
Typical Customer Deployment
In an ideal situation, a customer will already have (or plan to create):
- An application that sends HTTP Requests to another RESTful API Service.
- A second RESTful API Service. This could be in another region, AWS Account, or third-party environment (assuming the third party has the ability to stand up an MTE API Relay Container of their own).
Typical Use Case:
- To encode/decode HTTP Requests between services, an API Relay Container must exist on each side of the transmission.
- At least one API Relay Container will run in the sending environment.
- At least one API Relay Container will run in the receiving environment.
When completed, the following services and resources will be set up:
Service | Required | Purpose |
---|---|---|
AWS ECS | True | Container Orchestration |
AWS ElastiCache | True | MTE State Management |
AWS CloudWatch | True | Logging |
AWS VPC | True | Virtual Private Cloud |
Elastic Load Balancer | False | Required if orchestrating multiple containers and/or acting as a receiver of encoded data |
AWS Secrets Manager | False | Recommended for Environment Variables |
Resource | Required | Purpose |
---|---|---|
Eclypses MTE API Relay container(s) | True | Purchased from the AWS Marketplace |
ElastiCache for Redis | True | Shared MTE state cache |
Application Load Balancer | False | Recommended - even for a single container workflow |
ECS Task | True | Orchestration |
Deployment Options:
- The default deployment: Multi-AZ, Single Region
- For Multi-AZ or Multi-Region deployments, the MTE API Relay Container is a stateful service and will create unique, one-to-one MTE States with individual application sessions. As such, it is important to understand that to deploy multi-region configurations, a single ElastiCache service must be accessible to all regions that might be processing HTTP requests from a single application proxy session.
Supported Regions:
Not currently supported in:
- GovCloud
- Middle East (Bahrain)
- Middle East (UAE)
- China
Port Mappings
- Container runs on port 8080 for HTTP traffic
Prerequisites and Requirements
Technical
The following elements are required prior to a successful deployment:
- An application (or planned application) that communicates with a Web API using HTTP Requests
Skills or Specialized Knowledge
- Familiarity with AWS ECS (or EKS orchestration)
- General familiarity with AWS Services like ElastiCache
Configuration
The customer is required to create the following keys. It is recommended that these keys be stored in AWS Secrets Manager.
Configuration Video
The MTE API Relay is configurable using the following environment variables:
Required Configuration Variables:
UPSTREAM
- Required
- The upstream application IP address, ingress, or URL that inbound requests will be proxied to.
CLIENT_ID_SECRET
- Required
- A secret that will be used to sign the x-mte-client-id header. A 32+ character string is recommended.
- Note: This will allow you to personalize your client/server relationship. It is required to validate the sender.
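A suitable `CLIENT_ID_SECRET` can be generated with OpenSSL. A minimal sketch (the variable name is only illustrative; store the resulting value in AWS Secrets Manager):

```shell
# Generate 32 random bytes and base64-encode them, producing a
# 44-character string suitable for use as CLIENT_ID_SECRET.
CLIENT_ID_SECRET=$(openssl rand -base64 32)
echo "Generated secret of length ${#CLIENT_ID_SECRET}"
```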
OUTBOUND_TOKEN
- Required
- The value appended to the x-mte-outbound-token header to denote the intended outbound recipient.
REDIS_URL
- Strongly Recommended in Production Environments
- The entry point to your Redis ElastiCache cluster. If null, the container will use internal memory. In load-balanced workflows, a common cache location is essential to maintain a paired relationship with the upstream API Relay.
Optional Configuration Variables:
The following configuration variables have default values. If the customer does choose to create the following keys, it is recommended that these keys be stored in AWS Secrets Manager.
PORT
- The port that the server will listen on.
- Default: `8080`
- Note: If this value is changed, make sure it is also changed in your application load balancer.
DEBUG
- A flag that enables debug logging.
- Default: `false`
HEADERS
- An object of headers that will be added to all requests and responses.
YAML Configuration Examples
AWS Task Definition Parameters
Minimal Configuration Example
upstream: https://api.my-company.com
clientIdSecret: 2DkV4DDabehO8cifDktdF9elKJL0CKr
redisURL: redis://10.0.1.230:6379
outboundToken: abcdefg1234567
Full Configuration Example
upstream: https://api.my-company.com
clientIdSecret: 2DkV4DDabehO8cifDktdF9elKJL0CKrk
redisURL: redis://10.0.1.230:6379
outboundToken: abcdefg1234567
port: 3000
debug: true
headers:
x-service-name: mte-relay
ENV Configuration Examples
Minimal Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
REDIS_URL='redis://10.0.1.230:6379'
OUTBOUND_TOKEN='abcdefg1234567'
Full Configuration Example
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
REDIS_URL='redis://10.0.1.230:6379'
OUTBOUND_TOKEN='abcdefg1234567'
PORT=3000
DEBUG=true
HEADERS='{"x-service-name":"mte-relay"}'
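A missing required variable will cause the container to exit with a `ZodError` at startup (see Troubleshooting), so a small preflight script can verify the configuration before deploying. This is an illustrative sketch using the sample values above; it is not part of the product:

```shell
# Sample values from the configuration example above.
UPSTREAM='https://api.my-company.com'
CLIENT_ID_SECRET='2DkV4DDabehO8cifDktdF9elKJL0CKrk'
REDIS_URL='redis://10.0.1.230:6379'
OUTBOUND_TOKEN='abcdefg1234567'

# Check each required variable and collect any that are empty or unset.
missing=""
for var in UPSTREAM CLIENT_ID_SECRET REDIS_URL OUTBOUND_TOKEN; do
  val=$(eval "printf '%s' \"\$$var\"")   # POSIX-style indirect lookup
  if [ -z "$val" ]; then
    missing="$missing $var"
  fi
done

if [ -n "$missing" ]; then
  echo "Missing required variables:$missing"
  exit 1
fi
echo "All required MTE API Relay variables are set."
```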
Database Credentials:
None
Key/Variable Rotation Recommendations:
It is not necessary to rotate any keys in the Environment Variables section as they do not have any cryptographic value. However, it is good practice to rotate the "CLIENT_ID_SECRET" every 90 days (about 3 months), as recommended by many modern best practices.
Architecture Diagrams
Default Deployment
ECS Deployment
Alternative Deployment
EKS Deployment – Multiple Load-Balanced Containers
*This process is not described in this document
Security
The MTE API Relay does not require AWS account root privilege for deployment or operation. The container only relays data as it moves from client to server and does not store any sensitive data.
AWS Identity and Access Management (IAM) Roles:
In Elastic Container Services (ECS), create a new Task Definition.
Give your task definition appropriate roles:
- `ecsTaskExecutionRole`
- Note: The task must have access to the `AWSMarketplaceMeteringRegisterUsage` role.
For dependencies:
- AWSServiceRoleForElastiCache: Permission to read/write from ElastiCache instance
- AWSServiceRoleForCloudWatch: Permission to write to AWS CloudWatch log service
VPC (Virtual Private Cloud)
While traffic is protected between the client application and MTE API Relay Server, the traffic is not encoded while it is proxied to the upstream service. The MTE API Relay Container should be deployed in the same VPC as the upstream servers so that proxied traffic can remain internal only. See the architecture diagrams for a visual representation.
Costs
The MTE API Relay Container is a usage-based model based on Cost/Unit/Hr, where a unit = AWS ECS Task. See the marketplace listing for costs.
Billable Services
Service | Required | Purpose |
---|---|---|
AWS ECS | True | Container Orchestration |
AWS ElastiCache | True | MTE State Management |
AWS CloudWatch | True | Logging |
AWS VPC | True | Virtual Private Cloud |
Elastic Load Balancer | False | Required if orchestrating multiple containers and/or acting as a receiver of encoded data |
AWS Secrets Manager | False | Recommended for Environment Variables |
Sizing
ECS
The API Relay can be load-balanced as needed and configured for autoscaling. There are no size requirements.
- The MTE API Relay Container workload is subject to ECS Service Limits.
Deployment Assets
VPC Setup
It is not necessary to create a dedicated VPC for this workflow. Ideally, your MTE API Relay Server is run in the same VPC as your upstream API.
AWS ElastiCache Redis Setup
ElastiCache Security Group Settings
- Create a new security group.
- Give your security group a name and description: `MTERelayElastiCacheSG`
- Allow traffic from MTE API Relay ECS service:
  - Add a new inbound rule.
  - Type: All TCP
  - Source: Custom
  - Click in the search box, and scroll down to select your security group `MteRelayServiceSG`.
  - Description: Allow traffic from MTE API Relay ECS Service.
- If an outbound rule does not already exist to allow all outbound traffic, create one.
- Type: All Traffic
- Destination: Anywhere IPv4
- Click Create Security Group
ElastiCache Setup
- Create ElastiCache cluster
- Select Redis Interface
- Subject to quota limits
- The minimum/maximum defaults should be sufficient
- Assign your Security Group that allows traffic on ports `6379` and `6380` from your MTE API Relay Server ECS instance.
- Select your newly created cache, then click Modify from the top right.
- Scroll down to Security, then under Security Groups click Manage
- Remove the default security group and add the security group `MTERelayElastiCacheSG`
- Save these changes.
- Configure ElastiCache for Redis to use the private subnet
Setting up Elastic Container Service (ECS)
ECS will run the MTE API Relay Container and dynamically scale it when required. To do this, create a task definition to define what a single MTE API Relay Container looks like, create a Service to manage running the containers, and finally create an ECS Cluster for your service to run in.
Server Setup Video
Create an ECS Cluster
- Navigate to ECS, then click "Clusters" from the left side menu.
- Click "Create cluster"
- Enter a cluster name.
- Select `AWS Fargate (serverless)` as the infrastructure.
- Click Create.
AWS ECS Setup – Task Definition
Infrastructure Requirements
- From the left side menu, select Task Definitions
- Click to Create new task definition.
- Create a Task Definition family name.
- Select Launch Type of `AWS Fargate`
- Operating System: `Linux x86_64`
- Select `1 vCPU` and `2 GB Memory`
- Task Roles should be `ecsTaskExecutionRole`
Task Definition Directions
- Subscribe to the Container Product from the marketplace
- Create new Task Definition called MTE API Relay Server
- Choose the ECS launch type: AWS Fargate
- Choose CPU and Memory. There is no minimum requirement, but `1 CPU` and `2 GB memory` is recommended.
- Give your task definition appropriate roles: `ecsTaskExecutionRole`
  - Note: The task must have access to the `AWSMarketplaceMeteringRegisterUsage` role.
- Provide the MTE API Relay Docker image URI and give your container a name.
- It is an essential container
- Port Mappings
- Container runs on port 8080 for HTTP traffic
- Provide Required Environment Variables
- Additional environment variables can be set. Please see Optional Configuration Variables for more info.
- Select to export logs to AWS CloudWatch
- Create a new CloudWatch log group, and select the same region your MTE API Relay Server is running in.
- Subject to CloudWatch limits
- Save Task Definition
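For reference, the settings above map onto the `containerDefinitions` entry of the task definition JSON roughly as follows. This is a hedged sketch: the image URI, ARN, and log group name are placeholders, and in production the `secrets` field (backed by AWS Secrets Manager) is preferable to a plain `environment` value for `CLIENT_ID_SECRET`:

```json
{
  "name": "mte-api-relay",
  "image": "<MARKETPLACE_IMAGE_URI>",
  "essential": true,
  "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
  "environment": [
    { "name": "UPSTREAM", "value": "https://api.my-company.com" },
    { "name": "REDIS_URL", "value": "redis://10.0.1.230:6379" },
    { "name": "OUTBOUND_TOKEN", "value": "abcdefg1234567" }
  ],
  "secrets": [
    { "name": "CLIENT_ID_SECRET", "valueFrom": "<SECRETS_MANAGER_ARN>" }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/mte-api-relay",
      "awslogs-region": "<AWS_REGION>",
      "awslogs-stream-prefix": "mte-relay"
    }
  }
}
```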
ECS Service Setup
After creating the MTE API Relay Task Definition, you can create a new ECS service that utilizes that task definition.
- Select the MTE API Relay Task Definition and click Deploy > Create Service
- Leave "Compute Configuration" defaults.
- Under Deployment Configuration, name your service MTE API Relay Service
- For the number of desired tasks, select how many instances of MTE API Relay you would like.
- Minimum is 1, and recommended is 2 or more.
- Under the networking sections, ensure you are launching MTE API Relay in the same VPC as your upstream service.
- Select, or create, a security group that will allow traffic from your load balancer.
- Create a new load balancer for this service.
- Set any additional configuration regarding service scaling or tags. These are optional.
- Save and create the new service.
ECS Cluster Security Group
- Create a new security group.
- Give your security group a name and description: `MteRelayServiceSG`
- Allow Traffic to MTE API Relay Service
- Add a new inbound rule.
- Type: All TCP
- Source: Custom
- Click in the search box, and scroll down to select your security group for web traffic to your load balancer.
- Description: Allow traffic from load balancer.
- If an outbound rule does not already exist to allow all outbound traffic, create one.
- Type: All Traffic
- Destination: Anywhere IPv4
- Click Create Security Group
Load Balancer Settings
- Navigate to the Elastic Compute Cloud (EC2) service.
- From the menu on the left, scroll down to Network & Security > Security Groups
- Click Create Security Group to create a new security group.
- Give your security group a name and description: `PublicWebTrafficSG`
- Allow public web traffic on ports 80 and 443
- Add a new Inbound rule.
- Custom TCP
- Port 80
- Source from Anywhere-IPv4
- Description: Allow incoming traffic on port 80.
- Create a second inbound rule for traffic on port 443.
- If an outbound rule does not already exist to allow all outbound traffic, create one.
- Type: All Traffic
- Destination: Anywhere IPv4
- Click Create Security Group
Service Configuration
- From the left side menu, click Clusters, then click the name of your newly created cluster
- Halfway down the page, ensure the Services tab is selected, then click Create
Environment
All default options are fine.
- Capacity Provider is Fargate
- Weight is 1
- Version is latest.
Deployment Configuration
- Use the Family dropdown to select the Family Name you created in your task definition.
- Select the latest revision of that task definition.
- Name your service.
- Desired tasks: 1 is fine for testing. If you expect more traffic, you may choose to run 2 or more instances of MTE API Relay Server.
Networking
- Select your desired VPC, if you have more than one.
- Choose your subnets. Select all if you're not sure.
- Under security groups, remove any default groups, and add the security group MteRelayServiceSG
EKS Setup
Deploying MTE Relay on EKS is straightforward, assuming you have already configured `kubectl` to connect to your EKS cluster.
Deployment File:
Copy this deployment file to your machine, and update the values for `image` and the environment variables.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-mte-api-relay-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-mte-api-relay
  template:
    metadata:
      labels:
        app: aws-mte-api-relay
    spec:
      containers:
        - name: aws-mte-api-relay
          image: <CONTAINER_IMG>
          ports:
            - containerPort: 8080
          env:
            - name: CLIENT_ID_SECRET
              value: <YOUR CLIENT ID SECRET HERE>
            - name: UPSTREAM
              value: <UPSTREAM VALUE HERE>
            - name: AWS_REGION
              value: <AWS REGION HERE>
To release the deployment, run the command: `kubectl apply -f deployment.yaml`.
Application Load Balancer
Expose a service in your EKS cluster to receive incoming traffic and direct it to your Relay server.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aws-mte-api-relay-service
spec:
  type: LoadBalancer
  selector:
    app: aws-mte-api-relay
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Run the command: `kubectl apply -f service.yaml`
You may then run the command `kubectl get services` to get information about the newly created load balancer.
Remove deployment and service
If you need to remove the deployment and service, you may run the command: `kubectl delete -f deployment.yaml && kubectl delete -f service.yaml`
Usage Guide
How does MTE API Relay protect network traffic?
The first Relay server will take in the outbound request and validate that the `x-mte-outbound-token` header exists and is valid. It will then attempt to establish an MTE Relay connection with the Relay Server specified in the `x-mte-upstream` header. If that is successful, the request data will be encrypted and sent to the upstream server. The upstream server will decrypt the request and proxy the data to its API server.
The reverse happens for the response, which allows the request and response to be encrypted while traversing public networks. Finally, the first Relay Server will catch the encrypted response, decrypt the data, and respond to the calling application with the decrypted data.
Minimum Requirements
To use the MTE API Relay to protect HTTP traffic between two servers, you need to:
- Send the initial request to the outbound Relay Server
- Include the required headers:
  - `x-mte-outbound-token`: This header should contain the value of the `OUTBOUND_TOKEN` environment variable. Only requests that include this token will be allowed to make outbound requests.
  - `x-mte-upstream`: The destination of this request once it has been encoded. It must be an MTE API Relay, so that the request can be decoded.
An example curl will look like this:
curl --location 'http://<RELAY_SERVER_1>/api/login' \
--header 'x-mte-outbound-token: 12098312098123098120398' \
--header 'x-mte-upstream: http://<RELAY_SERVER_2>' \
--header 'Content-Type: application/json' \
--header 'Cookie: Cookie_1=value' \
--data-raw '{
"email": "jim.halpert@example.com",
"password": "P@ssw0rd!"
}'
Additional Request Options
These additional options may be included as headers in order to change how the request is encrypted.
- `x-mte-encode-type`: Value can be `MTE` or `MKE`. Default is `MKE`.
- `x-mte-encode-headers`: Can be `"true"` or `"false"`; default is `"true"`.
- `x-mte-encode-url`: Can be `"true"` or `"false"`; default is `"true"`.
Example Curl:
curl --location 'http://<RELAY_SERVER_1>/api/login' \
--header 'x-mte-outbound-token: 12098312098123098120398' \
--header 'x-mte-upstream: http://<RELAY_SERVER_2>' \
--header 'x-mte-encode-type: MTE' \
--header 'x-mte-encode-headers: false' \
--header 'x-mte-encode-url: false' \
--header 'Content-Type: application/json' \
--data-raw '{
"email": "jim.qweqwe@example.com",
"password": "P@ssw0rd!"
}'
Testing
Once the API Relay Server is configured:
- Monitor logs in CloudWatch – check for initialization errors.
- On successful startup, you should see two logs:
  - MTE instantiated successfully.
  - Server listening at http://[0.0.0.0]:8080
- To test that the API Service is active and running, submit an HTTP GET request to the echo route:
- curl 'https://[your_domain]/api/mte-echo/test'
- Successful response:
{
  "echo": "test",
  "time": "[UTC datetime]"
}
Once both sides of the transmission have at least one API Relay Container:
- Import The Postman Collection following these directions
- Once imported, click the ellipses (...) next to the created collection and select "Edit".
- Navigate to the "Variables" tab.
- Change the "Current value" of the "DOMAIN" variable to the URL for API Relay #1.
- Change the "Current value" of the "UPSTREAM" variable to the URL for API Relay #2.
- Test the /Echo and /MTE-Relay Routes for general tests.
- Success Criteria
- Failure Criteria
- Test the HTTP Requests in the Demo Directory for full continuity.
- Success Criteria
- Failure Criteria
Troubleshooting
Most problems can be determined by consulting the logs in AWS CloudWatch. Some common problems that might occur are:
- Invalid Configuration
- Network misconfiguration
Some specific error examples include:
- I cannot reach my API Relay Container.
- Double check your Security Group allows traffic from your load balancer.
- Check CloudWatch
- Server exits with a `ZodError`
  - This is a config validation error. Look at the "path" property to determine which of the required Environment Variables you did not set. For example, if the path property shows "upstream," then you forgot to set the environment variable "UPSTREAM."
- Server cannot reach ElastiCache.
- Check that ElastiCache is started in the same VPC.
- Check that ElastiCache security group allows traffic from MTE API Relay ECS instance.
- If using credentials, check that credentials are correct.
MTE API Relay Server includes a Debug flag, which you can enable by setting the environment variable "DEBUG" to true. This will enable additional logging that you can review in CloudWatch to determine the source of any issues.
Health Check
For short and long-term health – monitor logs in the AWS CloudWatch service.
Echo Route
The Echo route can be called by an automated system to determine if the service is still running. To test that the API Service is active and running, submit an HTTP GET request to the echo route:
curl 'https://[your_domain]/api/mte-echo/test'
Successful response:
{
  "echo": "test",
  "time": "[UTC datetime]"
}
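An automated monitor can treat any response containing the echoed value as healthy. A minimal sketch (the domain is a placeholder; here the check function is exercised against a sample body matching the documented response, rather than a live call):

```shell
# Returns success (0) if the echo response body looks healthy.
check_echo_body() {
  # A healthy body echoes back the path segment that was requested.
  case "$1" in
    *'"echo":"test"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# In production, the body would come from:
#   body=$(curl -fsS "https://[your_domain]/api/mte-echo/test")
# Here we use a sample body in the documented response shape.
body='{"echo":"test","time":"[UTC datetime]"}'

if check_echo_body "$body"; then
  echo "healthy"
else
  echo "unhealthy"
fi
```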
Performance Metrics
Performance metrics were completed on an AWS t2.micro virtual machine, with 1 vCPU and 1 GB memory.
100 concurrent connections, 5 minute duration, 1kb request and response
 | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
---|---|---|---|---|---|
Control API | 28050 | 94 | 52 | 59 | 87 |
MTE Relay | 27763 | 92.6 | 62 | 71 | 87 |
Result | 98.9% | 98.5% | +10ms | +12ms | +0ms |
200 concurrent connections, 5 minute duration, 1kb request and response
 | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
---|---|---|---|---|---|
Control API | 54938 | 183 | 55 | 66 | 110 |
MTE Relay | 53724 | 179 | 72 | 110 | 190 |
Result | 97.8% | 97.8% | +17ms | +44ms | +80ms |
250 concurrent connections, 5 minute duration, 1kb request and response
 | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
---|---|---|---|---|---|
Control API | 67966 | 226 | 55 | 72 | 130 |
MTE Relay | 64465 | 215 | 100 | 170 | 280 |
Result | 94.8% | 95.1% | +45ms | +98ms | +150ms |
Note: We recommend load-balancing requests between multiple instances of MTE Relay once you reach this volume of traffic. Please monitor your application carefully.
300 concurrent connections, 5 minute duration, 1kb request and response
 | # Requests | Req/second | Median Response Time (ms) | p90 (ms) | p99 (ms) |
---|---|---|---|---|---|
Control API | 80249 | 267.57 | 58 | 92 | 190 |
MTE Relay | 67696 | 225.74 | 260 | 350 | 450 |
Result | 84.3% | 84.3% | +202ms | +258ms | +260ms |
Routine Maintenance
Patches/Updates
Updated images are distributed through the marketplace.
Service Limits
- ECS
- The MTE API Relay Container workload is subject to ECS Service Limits.
- ElastiCache
- CloudWatch
- ELB (Elastic Load Balancer)
- The MTE API Relay Container workload is subject to ELB Service Limits.
Rotation of Secrets
Rotate the `CLIENT_ID_SECRET` every 90 days (about 3 months), as recommended by many modern best practices.
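Rotation can be scripted: generate a fresh value, then push it to AWS Secrets Manager. A sketch under stated assumptions — the secret id is a placeholder, and the `aws` call (AWS CLI `secretsmanager put-secret-value`) is shown commented out:

```shell
# Generate a replacement CLIENT_ID_SECRET (32 random bytes, base64-encoded).
NEW_SECRET=$(openssl rand -base64 32)

# Push the new value to Secrets Manager, then redeploy the ECS service so
# running tasks pick it up. Uncomment and adjust the secret id for your account:
# aws secretsmanager put-secret-value \
#   --secret-id mte-relay/CLIENT_ID_SECRET \
#   --secret-string "$NEW_SECRET"

echo "Generated replacement secret of length ${#NEW_SECRET}"
```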
Emergency Maintenance
Handling Fault Conditions
Tips for solving error states:
- Review all tips from the Troubleshooting section above.
- Check ECS CloudWatch Logs for more information
- Configuration mismatch
- Double-check environment variables for errors
How to recover the software
The MTE API Relay Container ECS task can be relaunched. While current sessions may be affected, the container will seamlessly manage the re-pairing process with the MTE API Relay Upstream Server and the end-user should not be affected.
Support
The Eclypses support center is available to assist with inquiries about our products from 8:00 am to 8:00 pm MST, Monday through Friday, excluding Eclypses holidays. Our committed team of expert developer support representatives handles all incoming questions directed to the following email address: customer_support@eclypses.com