
Satori AWS Deployment Guide

The following section describes the main components that comprise the Satori DataSecOps platform and how to deploy it on AWS.

Introduction to Satori for AWS

The Satori secure data access platform consists of two main components:

  • The Satori Management Console - a SaaS application hosted by Satori.
  • The Satori Data Access Controller (DAC) - a Kubernetes application that is either consumed as a service or deployed on Amazon Elastic Kubernetes Service (EKS) inside the customer's VPC.

The DAC should be deployed in the same public cloud region as the data stores that the DAC protects. For example, customers using Redshift on AWS us-east-1 should deploy a DAC in a VPC on AWS us-east-1. For customers who operate data stores in multiple regions, a DAC should be deployed in each region.

Satori VPC Deployment Architecture on AWS

The following two diagrams illustrate the Satori architecture when deployed in a customer's virtual private cloud (VPC) on AWS.


Illustration 1 - High Level Satori Deployment on AWS Architecture


Illustration 2 - Kubernetes Cluster Architecture

Satori DAC Network Configuration

Satori requires the following network path configurations:

  1. User to DAC - Users connect to data stores via the Satori DAC, therefore a network path from users to the DAC is required.
  2. DAC to Data Store - the DAC receives queries from users and then sends them to the data stores it protects, so a network path from the DAC to the data stores is required. Typically, this is established by deploying the DAC in the same VPC as the data stores it protects and ensuring AWS security groups allow access from the DAC to the data stores (see the example security group rules after this list).
  3. DAC to the Internet - the DAC requires connectivity to the internet for the following purposes:

    • HTTPS on port 443 to google.com, googleapis.com and gcr.io - Satori uses several services from the Google Cloud Platform (GCP): a Git repository that holds the DAC's configuration files, a secret manager for secure storage of secrets, and a messaging service to publish data access metadata which is shared with the management console. The full list of fields that are sent is available in Metadata Shared by Data Access Controllers with the Management Console.
    • HTTPS on port 443 to cortex.satoricyber.net - this is where product telemetry (metrics) is uploaded.
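
For illustration, the following AWS CLI commands sketch how these network paths might be opened with security group rules. The security group IDs are placeholders, and port 5439 is used as an example for a Redshift data store; adapt them to your environment:

# Allow the DAC nodes to reach a Redshift data store on port 5439 (placeholder security group IDs)
aws ec2 authorize-security-group-ingress --group-id sg-datastore-placeholder --protocol tcp --port 5439 --source-group sg-dac-placeholder
# Allow the DAC nodes outbound HTTPS (port 443) for the destinations listed above
aws ec2 authorize-security-group-egress --group-id sg-dac-placeholder --protocol tcp --port 443 --cidr 0.0.0.0/0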

Satori Deployment Options

You can choose to deploy a private, VPC-only facing DAC, or a public, internet-facing DAC.

The Satori Deployment Process

The following section describes how to deploy Satori in your VPC.

Satori Deployment Prerequisites

To deploy Satori in your VPC, ensure the following access privileges and third-party products are installed and available:

  1. Administrator Level Access to the Following AWS Services - VPC, NAT Gateway, Internet Gateway, Network Load Balancer, CloudWatch, Route53, KMS, EKS, EFS (only for Fargate-based deployments).
  2. Python3 is Installed on the Command Line - To verify python3 is installed, run the following command: python3 --version. To download Python, go to Python Downloads.
  3. Helm 3 is Installed on the Command Line - To verify helm is installed, run the following command: helm version. To download Helm, go to Helm.
  4. kubectl is Installed on the Command Line - To verify kubectl is installed, run the following command: kubectl version. To download kubectl, go to Kubernetes Tool Installations.
  5. AWS Command Line Tools are Installed - To verify that the AWS CLI is installed, run the following command: aws --version. To download the AWS CLI, go to AWS CLI Installation. A consolidated verification snippet follows this list.
  6. Kubernetes Cluster Requirements:

    • Kubernetes version: 1.21
    • 3 m5.large VM nodes. We highly recommend creating 3 node groups, one in each availability zone.
    • 20 GB ephemeral storage in every node. By default, AWS creates nodes with 20 GB total gross disk space, which leaves only 18 GB of net disk space for ephemeral storage (the volatile temporary storage attached to your instances). Satori recommends that you create nodes with 50 GB disk space.
    • AWS Load Balancer Controller
    • 3 public subnets, one in each availability zone, with a NAT gateway deployed.
    • 3 private subnets, one in each availability zone. Each EC2 node will require around 60 IP addresses. To ensure the cluster can scale to multiple EC2 nodes per availability zone, and to properly handle failover across availability zones, choose subnets with at least 400-500 IP addresses.
    • For more information on the Kubernetes cluster, see the Additional Information section below.
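
The following snippet consolidates the verification commands from the list above into one pass; it is a convenience sketch only, and each command should print a version number if the tool is installed:

# Verify the command line prerequisites listed above
python3 --version
helm version
kubectl version --client   # --client checks the local binary without contacting a cluster
aws --version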

First Time Satori Deployment

The first time deployment process consists of the following steps:

  1. Setting up a Kubernetes cluster on EKS
  2. Deploying the Satori helm chart on the cluster
  3. Creating a DNS zone for the cluster

Once the DAC is deployed, subsequent upgrades are orchestrated via the management console.

1 - Setting Up a Kubernetes Cluster on EKS

Satori recommends using the eksctl-bootstrap-cluster tool to create the EKS cluster.

  • Clone the eksctl-bootstrap-cluster GitHub repository. For example: git clone git@github.com:SatoriCyber/eksctl-bootstrap-cluster.git.
  • Go into the eksctl-bootstrap-cluster directory, for example: cd eksctl-bootstrap-cluster.
  • To deploy the cluster in an existing VPC, edit the properties in the create-cluster.sh script for your AWS region, VPC and subnets. To deploy the cluster to a new VPC, change the EXISTING_VPC property to false. For more information see here.
  • Before running the create-cluster.sh script, ensure that the AWS CLI on the terminal is authenticated to the correct AWS account by running the following command and validating the output: aws sts get-caller-identity.
  • Run the create-cluster.sh script.
  • The script will create an AWS CloudFormation YAML file that describes the resources that will be created. Verify that the YAML file is correct and enter yes to continue. It may take up to one hour to create the cluster and VM nodes.
  • After the cluster has been created, you need to generate credentials to access it from the command line. Run the following command: aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>. For example:
aws eks update-kubeconfig --region us-east-1 --name satori-dac-1
  • To test that you have access to the cluster run the following command: kubectl get pods -A. You should get an output similar to the following:
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   aws-load-balancer-controller-bf9bdccb6-fhzpx                 1/1     Running   0          1m
kube-system   aws-load-balancer-controller-bf9bdccb6-sbs7k                 1/1     Running   0          1m
...
  • Once the Satori helm chart is deployed on the cluster, Kubernetes will create a network load balancer to distribute traffic across the Satori proxy pods. To ensure Kubernetes chooses the right subnets for the load balancer, set the following tags on each subnet (a CLI example follows the tag listings).

For private-facing DACs, set the following tag on each private subnet the cluster is deployed on:

Key: kubernetes.io/cluster/<CLUSTER_NAME>
Value: shared
Key: kubernetes.io/role/internal-elb
Value: 1

For public-facing DACs, set the following tag on each public subnet the cluster is deployed on:

Key: kubernetes.io/cluster/<CLUSTER_NAME>
Value: shared
Key: kubernetes.io/role/elb
Value: 1
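
For example, the tags can be applied with the AWS CLI. The subnet ID below is a placeholder, and the sketch shows the public-facing case; for private subnets, use kubernetes.io/role/internal-elb instead:

# Tag a subnet so Kubernetes selects it for the network load balancer
aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags "Key=kubernetes.io/cluster/<CLUSTER_NAME>,Value=shared" "Key=kubernetes.io/role/elb,Value=1"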

For more information see here.

Important: by default, AWS makes the Kubernetes cluster's API server endpoint accessible from any IP address, so that you can interact with the cluster using tools such as kubectl. It is highly recommended to limit access to the API server endpoint. For more information see the cluster endpoint documentation.
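
As a hedged example, public access to the endpoint can be narrowed to specific CIDR blocks with the AWS CLI; the region, cluster name and CIDR below are placeholders:

# Limit Kubernetes API server access to a specific office/VPN CIDR
aws eks update-cluster-config --region us-east-1 --name satori-dac-1 --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24"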

2 - Deploying the Satori Helm Chart on the Cluster

The Satori helm chart is available in a deployment package that you download from the Satori management console for the first-time installation of the DAC. Follow these steps to download and deploy the package:

  • Log in to the Satori management console at https://app.satoricyber.com.
  • Go to Settings, Data Access Controllers and select the DAC to deploy to. Please contact Support if a new DAC needs to be created.
  • Select the Upgrade Settings tab and download the recommended deployment package.
  • Extract the deployment package and open a terminal window to the directory where the deployment package was extracted. For example:
tar -xf satori-dac-1.2143.2.tar
cd satori-2143
  • The package contains a satori.py python script that you need to run. The script requires a few libraries to be installed before it can run. To install the requirements run the following commands:
python3 -m venv ./venv
. ./venv/bin/activate
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
  • Run python3 satori.py deploy to start the deployment process. The script will prompt for the name of the Kubernetes cluster it will deploy on. Confirm that the cluster is the correct one by entering Y.

3 - Creating a DNS Zone for the Cluster

Satori generates a unique hostname for each data store that it protects, in a DNS zone that is unique to each DAC. This step is only required for private-facing DACs, for which customers should create a private DNS zone on AWS Route53. For public-facing customer-hosted DACs, Satori hosts the DNS zone.

To create the DNS zone follow these steps:

  • Log in to the Satori management console at https://app.satoricyber.com.
  • Go to Settings, Data Access Controllers and select the DAC.
  • Copy the value in the DAC Domain field; this will be the root of the DNS zone.
  • The DNS zone should point to the AWS load balancer that is deployed in front of the Satori proxy service. To obtain the address of the load balancer run the following command:

kubectl get service satori-runtime-proxy-service -n satori-runtime
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                                                                                                                                                                                                                                                  AGE
satori-runtime-proxy-service   LoadBalancer   172.20.135.199   k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com   5432:30936/TCP,5439:30864/TCP,1433:32722/TCP,3306:31923/TCP,443:30833/TCP,80:31347/TCP,12340:31123/TCP,12341:30752/TCP,12342:31377/TCP,12343:30646/TCP,12344:32062/TCP,12345:31472/TCP,12346:30770/TCP,12347:32501/TCP,12348:31104/TCP,12349:32005/TCP   24h
  • Create a DNS zone on Route53 with a wildcard subdomain that has a CNAME record pointing to the load balancer address. For example:
*.dac1.us-east-1.a.p1.satoricyber.net.   CNAME   k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com.
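
If you prefer the AWS CLI over the Route53 console, a record along these lines can be created with change-resource-record-sets; the hosted zone ID below is a placeholder, and the names are taken from the example above:

aws route53 change-resource-record-sets --hosted-zone-id Z0PLACEHOLDER --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.dac1.us-east-1.a.p1.satoricyber.net.",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "k8s-satoriru-satoriru-b8321e97a-3790e137d5d9ef3c.elb.us-east-1.amazonaws.com."}]
    }
  }]
}'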

The DAC is now set up and you can proceed to adding your first data store.

Additional Information about Kubernetes Cluster

The following section provides additional considerations, tips and recommendations when deploying, running and configuring your Kubernetes cluster.

EC2 Based Compute

When running the cluster on EC2 (node group) based compute resources, the DAC requires a minimum of 3 m5.large EC2 VMs, with each VM processing up to 20 MB/s of data traffic. For most deployments this is sufficient; however, additional VMs should be added if higher traffic loads are expected. The DAC is horizontally scalable and will automatically request more resources as required.

It is recommended to deploy the cluster on more than one availability zone. For example, when deploying a cluster with two VMs on two availability zones, one VM will be created in each zone.
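
As a minimal sketch, the recommended layout can be expressed declaratively in the configuration format of eksctl (the tool underlying the recommended bootstrap script); the cluster name, region, zone names and capacities below are illustrative assumptions, not values from the bootstrap repository:

cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: satori-dac-1          # hypothetical cluster name
  region: us-east-1
managedNodeGroups:
  # One node group per availability zone, as recommended above
  - name: ng-us-east-1a
    instanceType: m5.large
    desiredCapacity: 1
    volumeSize: 50            # 50 GB disk, per the ephemeral storage recommendation
    availabilityZones: ["us-east-1a"]
  - name: ng-us-east-1b
    instanceType: m5.large
    desiredCapacity: 1
    volumeSize: 50
    availabilityZones: ["us-east-1b"]
  - name: ng-us-east-1c
    instanceType: m5.large
    desiredCapacity: 1
    volumeSize: 50
    availabilityZones: ["us-east-1c"]
EOF
eksctl create cluster -f cluster.yaml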

Fargate Based Compute

Fargate based clusters are supported by Satori.

Provisioning of persistent volumes in Fargate clusters is limited to EFS and requires pre-installation of the EFS CSI driver.

To install it on your Kubernetes cluster, perform the steps provided in the AWS Amazon EFS CSI User Guide.

AWS currently does not support dynamic provisioning of EFS volumes for Fargate pods. EFS file systems and access points must be created prior to the Satori deployment, and the EFS IDs and access point IDs must be defined in the Satori helm chart values.yaml. More information on EFS dynamic provisioning can be found in the Amazon Containers - Dynamic EFS CSI Provisioning blog post.
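
For reference, static provisioning with the EFS CSI driver typically looks like the sketch below; the PersistentVolume name, file system ID (fs-...) and access point ID (fsap-...) are placeholders for resources created before the Satori deployment:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: satori-efs-pv          # hypothetical name, for illustration only
spec:
  capacity:
    storage: 5Gi               # EFS is elastic; the value is required but not enforced
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""         # empty string = static provisioning
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0::fsap-0123456789abcdef0   # <file system ID>::<access point ID>
EOF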

Network Load Balancer

  • Satori requires provisioning of an AWS network load balancer, which has a prerequisite of installing the AWS Load Balancer Controller. To learn more about the AWS Load Balancer Controller and install it on your Kubernetes cluster, perform the steps provided by AWS at the following link: https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
  • To successfully install the AWS controller, your cluster must adhere to the minimum version requirements. Verify in the EKS cluster view of the AWS portal that the vpc-cni version is 1.8.0 or later (see the commands below).
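
Two commands that can help here (hedged sketches; the cluster name is a placeholder, and the controller's IAM role and service account setup are covered in the AWS guide linked above):

# Check the installed Amazon VPC CNI plugin version; it should be 1.8.0 or later
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni:
# Typical Helm install of the AWS Load Balancer Controller (assumes the IAM service account already exists)
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=satori-dac-1 --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller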
