Deploy Satori CH on AWS

The following section describes the main components of the Satori Customer Hosted (CH) platform and how to deploy them on AWS.

Introduction to Satori CH for AWS

The Satori CH platform consists of two main components:

  • The Satori Customer Hosted Application
  • The Satori CH Data Access Controller (DAC) - a Kubernetes application that is either consumed as a service or deployed on an AWS Elastic Kubernetes Service (EKS) inside the customer VPC.

Deploying Satori Customer Hosted (CH)

Deploy the Satori CH DAC in the same public cloud region as the data stores that the Satori CH DAC is meant to protect.

For example, customers using Redshift on AWS us-east-1 should deploy the Satori DAC on a VPC in the same region (AWS us-east-1).

Multi-Region Deployments

For customers who operate data stores in multiple regions, a Satori CH DAC should be deployed for each separate region.

Satori VPC Deployment Architecture on AWS

The following two diagrams illustrate the Satori architecture when deployed in a customer's virtual private cloud (VPC) on AWS.

Illustration 1 - High Level Satori Deployment on AWS Architecture

Illustration 2 - Kubernetes Cluster Architecture

Satori CH DAC Network Configuration

The Satori CH DAC requires the following network path configurations:

  1. User Connection to the Satori DAC - Users connect to data stores via the Satori DAC, so a network path from users to the Satori DAC is required.
  2. Satori DAC Connection to the Data Store - The Satori DAC receives queries from users and forwards them to the data stores it protects, so a network path from the Satori DAC to the data stores is required. Typically, this is established by deploying the Satori DAC in the same VPC as the data stores it protects and ensuring that the AWS security groups allow access from the Satori DAC to the data stores (an example of such rules follows this list).
  3. Satori DAC Connection to the Internet - The Satori DAC requires connectivity to the internet for the following purposes:

    • Allow outbound HTTPS (port 443) to google.com, googleapis.com and gcr.io - Satori uses several services from the Google Cloud Platform (GCP) as well as a Git repository that contains the Satori DAC's configuration files, a secret manager for secure storage of secrets and a messaging service to publish data access metadata that is shared with the management console. The full list of fields that are sent is available here: Metadata Shared by Data Access Controllers with the Management Console.
    • Allow outbound HTTPS (port 443) to cortex.satoricyber.net - product telemetry metrics are uploaded here.
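
For example, these network paths could be opened with the AWS CLI. The security group IDs and the data store port below are placeholders for illustration only; your group IDs, ports and rules will differ, and the egress rule is only needed if the default allow-all outbound rule has been removed:

# Allow the data store (e.g. Redshift on port 5439) to accept traffic from the Satori DAC nodes - placeholder IDs
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5439 \
    --source-group sg-0fedcba9876543210

# Allow the Satori DAC node security group to reach the internet over HTTPS (port 443)
aws ec2 authorize-security-group-egress \
    --group-id sg-0fedcba9876543210 \
    --protocol tcp --port 443 \
    --cidr 0.0.0.0/0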

Private or Public Facing Data Access Controller

You can choose to deploy a private, VPC-only facing Satori CH DAC, or a public, internet-facing Satori CH DAC.

The Satori CH Deployment Process

The following section describes how to deploy Satori CH on your VPC.

Satori CH Deployment Prerequisites

To deploy Satori CH in your VPC, ensure that you have the following access privileges and that the following third-party tools are installed and available:

  1. Administrator Level Access to the Following AWS Services - VPC, NAT Gateway, Internet Gateway, Network Load Balancer, CloudWatch, Route53, KMS, EKS, EFS (only for Fargate-based deployments).
  2. Python3 is Installed on the Command Line - To verify that python3 is installed, run the following command: python3 --version. To download Python, go to Python Downloads.
  3. Helm 3 is Installed on the Command Line - To verify that helm is installed, run the following command: helm version. To download helm, go to Helm.
  4. kubectl is Installed on the Command Line - To verify that kubectl is installed, run the following command: kubectl version. To download kubectl, go to Kubernetes Tool Installations.
  5. AWS Command Line Tools are Installed - To verify that the AWS CLI is installed, run the following command: aws --version. To download the AWS CLI, go to the AWS Amazon CLI Installation page.
  6. Kubernetes Cluster Requirements:

    • Kubernetes version: 1.21
    • 3 m5.large VM nodes. We highly recommend creating 3 node groups, one in each availability zone.
    • 20GB of ephemeral storage on every node. By default, AWS creates nodes with 20GB of total gross disk space, which leaves only 18GB of net disk space for ephemeral storage (the volatile temporary storage attached to your instances). Satori recommends that you create nodes with 50GB of disk space.
    • AWS Load Balancer Controller
    • 3 public subnets, one in each availability zone, with a NAT gateway deployed.
    • 3 private subnets, one in each availability zone. Each EC2 node will require around 60 IP addresses. To ensure the cluster can scale to multiple EC2 nodes per availability zone, and to properly handle failover across availability zones, choose subnets with at least 400-500 IP addresses.
    • For more information on the Kubernetes cluster, see the Additional Information section below.
    • Satori provides a tool for verifying Kubernetes prerequisites; for more information, see the test_kubernetes tool.
    • Satori provides a tool for verifying AWS cluster prerequisites; for more information, see the test_aws_settings tool. For a quick manual check of the tool versions and node layout, see the sketch after this list.
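
As a quick manual complement to the Satori verification tools, the following standard commands (illustrative only, assuming kubectl is already configured against the target cluster) confirm the tool versions and the node layout:

# Verify the command line tools are installed
python3 --version
helm version --short
kubectl version --client
aws --version

# List the cluster nodes together with their instance type and availability zone
kubectl get nodes -L node.kubernetes.io/instance-type -L topology.kubernetes.io/zone

# Inspect the ephemeral storage capacity of a specific node
kubectl describe node <NODE_NAME> | grep ephemeral-storage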

First Time Satori CH Deployment

The first-time deployment process for Satori CH consists of the following three steps:

  1. Setting up a Kubernetes cluster on EKS.
  2. Deploying the Satori helm chart on the cluster.
  3. Creating a DNS zone for the cluster.

Once the Satori CH DAC is deployed, subsequent upgrades are performed in the Satori CH management console.

A - Setting Up a Kubernetes Cluster on EKS

Satori recommends using the Satori eksctl tool to create the EKS cluster.

  • Clone the eksctl-bootstrap-cluster GitHub repository: git clone git@github.com:SatoriCyber/eksctl-bootstrap-cluster.git.
  • Go into the eksctl-bootstrap-cluster directory, cd eksctl-bootstrap-cluster.
  • To deploy the cluster in an existing VPC, edit the properties in the create-cluster.sh script for your AWS region, VPC and subnets. To deploy the cluster to a new VPC, change the EXISTING_VPC property to false. For more information go here.
  • Before running the create-cluster.sh script, ensure that the AWS CLI on the terminal is authenticated to the correct AWS account by running the following command and validating the output: aws sts get-caller-identity.
  • Run the create-cluster.sh script.
  • The script will create an AWS CloudFormation YAML file that describes the resources that will be created. Verify the YAML file is correct and press yes to continue. It may take up to one hour to create the cluster and VM nodes.
  • After the cluster has been created, you need to generate credentials to access it from the command line. Run the following command: aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>. For example:
aws eks update-kubeconfig --region us-east-1 --name satori-dac-1
  • To test that you have access to the cluster run the following command: kubectl get pods -A. You should get an output similar to the following:
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   aws-load-balancer-controller-bf9bdccb6-fhzpx                 1/1     Running   0          1m
kube-system   aws-load-balancer-controller-bf9bdccb6-sbs7k                 1/1     Running   0          1m
...
  • Once the Satori helm chart is deployed on the cluster, Kubernetes will create a network load balancer to distribute traffic across the Satori proxy pods. To ensure Kubernetes chooses the right subnets for the load balancer, set the following tags on each subnet (an example of setting them with the AWS CLI is shown after the tag listings below).

For private-facing DACs, set the following tag on each private subnet the cluster is deployed on:

Key: kubernetes.io/cluster/<CLUSTER_NAME>
Value: shared
Key: kubernetes.io/role/internal-elb
Value: 1

For public-facing DACs, set the following tag on each public subnet the cluster is deployed on:

Key: kubernetes.io/cluster/<CLUSTER_NAME>
Value: shared
Key: kubernetes.io/role/elb
Value: 1
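
For example, the tags could be applied with the AWS CLI. The subnet IDs below are placeholders and the internal-elb role shown is for a private-facing DAC; for a public-facing DAC, tag the public subnets with kubernetes.io/role/elb instead:

# Tag the private subnets so Kubernetes places the internal load balancer on them - placeholder subnet IDs
aws ec2 create-tags \
    --resources subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
    --tags Key=kubernetes.io/cluster/<CLUSTER_NAME>,Value=shared \
           Key=kubernetes.io/role/internal-elb,Value=1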

Important: by default, AWS makes the API server endpoint of the Kubernetes cluster accessible from any IP address, so that you can interact with the cluster using tools such as kubectl. It is highly recommended to limit access to the API server endpoint. For more information, see the cluster endpoint documentation.
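
For example, public access to the API server endpoint could be restricted to a trusted CIDR range with the AWS CLI; the region, cluster name and CIDR below are placeholders:

# Limit the public API server endpoint to a trusted address range - placeholder values
aws eks update-cluster-config \
    --region us-east-1 \
    --name <CLUSTER_NAME> \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true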

B - Deploying the Satori Helm Chart on the Cluster

The Satori helm chart is available in a deployment package that you download from the Satori management console for the first-time installation of the DAC. Follow these steps to download and deploy the deployment package:

  • Log in to the Satori management console at Satori Management Console.

  • Go to Settings, Data Access Controllers and select the DAC to deploy to. Please contact Support if a new DAC needs to be created.

  • Select the Upgrade Settings tab and download the recommended deployment package.
  • Extract the deployment package and open a terminal window to the directory where the deployment package was extracted. For example:
tar -xf satori-dac-1.2143.2.tar
cd satori-2143
  • The package contains a helm chart that is installed using a helm command. For example:
helm upgrade --install --create-namespace -n satori-runtime  --wait --values version-values.yaml --values customer-values.yaml --values customer-override.yaml runtime satori-runtime
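
Once the helm command completes, a quick sanity check (illustrative, not part of the deployment package) is to confirm that the release is installed and that the pods in the satori-runtime namespace are running:

# Confirm the helm release was installed
helm list -n satori-runtime

# Confirm the Satori pods are up and running
kubectl get pods -n satori-runtime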

C - Creating a DNS Zone for the Cluster

Satori generates a unique hostname for each data store that it protects in a DNS zone that is unique for each DAC.

Create a private DNS zone in AWS Route53; this step is only required for private-facing DACs. For public-facing customer-hosted DACs, Satori hosts the DNS zone.

To create the DNS zone perform the following steps:

  • Log in to the Satori management console at Satori Management Console.
  • Go to Settings, Data Access Controllers and select the DAC.
  • Copy the value in the DAC Domain field; this will be the root of the DNS zone.
  • The DNS zone should point to the AWS load balancer that is deployed in front of the Satori proxy service. To obtain the address of the load balancer run the following command:
kubectl get service satori-runtime-proxy-service -n satori-runtime
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                                                                                                                                                                                                                                                  AGE
satori-runtime-proxy-service   LoadBalancer   172.20.135.199   k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com   5432:30936/TCP,5439:30864/TCP,1433:32722/TCP,3306:31923/TCP,443:30833/TCP,80:31347/TCP,12340:31123/TCP,12341:30752/TCP,12342:31377/TCP,12343:30646/TCP,12344:32062/TCP,12345:31472/TCP,12346:30770/TCP,12347:32501/TCP,12348:31104/TCP,12349:32005/TCP   24h
  • Create a DNS zone on Route53 with a wildcard subdomain that has a CNAME record pointing to the load balancer address. For example:
*.dac1.us-east-1.a.p1.satoricyber.net.   CNAME   k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com.
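
Alternatively, the zone and record can be created with the AWS CLI. The VPC ID and hosted zone ID below are placeholders, and the DAC domain and load balancer address are taken from the examples above:

# Create a private hosted zone for the DAC domain, associated with the cluster VPC - placeholder VPC ID
aws route53 create-hosted-zone \
    --name dac1.us-east-1.a.p1.satoricyber.net \
    --caller-reference satori-dac-zone-$(date +%s) \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0

# Add a wildcard CNAME record pointing at the load balancer address - placeholder hosted zone ID
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "*.dac1.us-east-1.a.p1.satoricyber.net",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com"}]
        }
      }]
    }'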

The DAC is now set up and you can proceed to adding your first data store.

Additional Information about Kubernetes Cluster

The following section provides additional tips and recommendations for deploying, running and configuring your Kubernetes cluster.

EC2 Based Compute

When running the cluster on EC2 (node group) based compute resources, a minimum of 3 m5.large EC2 VMs are required by the Satori DAC, with each VM processing up to 20MB/s of data traffic.

For most deployments this is sufficient; however, additional VMs should be added if higher traffic loads are expected. The DAC is horizontally scalable and will automatically request more resources as required.

It is recommended to deploy the cluster on more than one availability zone. For example, when deploying a cluster with two VMs on two availability zones, one VM will be created in each zone.
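
For example, if the cluster was created with eksctl as recommended above, a node group could be scaled out along these lines; the cluster and node group names and the node counts are placeholders:

# Add nodes to an existing node group to handle higher traffic loads - placeholder names and counts
eksctl scale nodegroup \
    --cluster <CLUSTER_NAME> \
    --name <NODEGROUP_NAME> \
    --nodes 4 --nodes-min 3 --nodes-max 6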

Fargate Based Compute

Fargate based clusters are supported by Satori.

The provisioning of persistent volumes in Fargate clusters is limited to EFS and requires the pre-installation of the EFS CSI driver.

To install EFS CSI driver on your Kubernetes cluster, perform the steps detailed in the AWS Amazon EFS CSI User Guide.

AWS currently does not support dynamic provisioning of EFS volumes for Fargate pods. EFS file systems and access points must be created prior to the Satori deployment. The EFS IDs and access point IDs must be defined in the Satori helm chart values.yaml.
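
For illustration, the file system and access point could be created ahead of the deployment with the AWS CLI; the IDs returned by these commands (fs-... and fsap-...) are the values to define in the helm chart values.yaml, and the tag values and file system ID below are placeholders:

# Create an encrypted EFS file system for the Fargate-based DAC - placeholder tag value
aws efs create-file-system \
    --encrypted \
    --tags Key=Name,Value=satori-dac-efs

# Create an access point on the new file system - placeholder file system ID
aws efs create-access-point \
    --file-system-id fs-0123456789abcdef0 \
    --tags Key=Name,Value=satori-dac-access-point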

For more information on EFS dynamic provisioning go to the Amazon Containers - Dynamic EFS CSI Provisioning Blog

Network Load Balancer

The following section details the configuration requirements for setting up the network load balancer and how to configure the network cluster.

Load Balancer Provisioning - Satori requires the provisioning of an AWS Network Load Balancer, which requires the AWS Load Balancer Controller to be installed.

To learn more about the AWS Load Balancer Controller and to install it on your Kubernetes cluster, perform the steps provided by AWS at the following link: AWS Load Balancer Controller

Network Cluster Minimum Requirements - To install the AWS Load Balancer Controller, your network cluster must adhere to the minimum version requirements. Verify in the AWS portal for your EKS cluster that the vpc-cni version is 1.8.0 or later.
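
One way to check the version (an illustrative command rather than the only method) is to inspect the image tag of the aws-node daemonset:

# Print the version of the VPC CNI plugin running on the cluster
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3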

Upgrading Satori CH DAC

To upgrade your Satori CH DAC and enjoy the latest features, fixes and security improvements go to the Upgrading Satori section.