
How to Implement Zero Trust Egress on AWS with Zscaler Cloud Connector

Deploy Zscaler Cloud Connector with AWS Gateway Load Balancer for scalable zero trust egress security without backhauling

Advanced ⏱ 45 min

Prerequisites

  • AWS account with Transit Gateway and multi-VPC networking
  • Zscaler Cloud Connector subscription and provisioning key
  • Zscaler Internet Access (ZIA) and/or Zscaler Private Access (ZPA) license
  • Familiarity with GWLB and Geneve protocol
  • Terraform 1.5+ installed

I helped a financial services org rearchitect their AWS egress security. They had 200+ workload VPCs and were backhauling all internet traffic through on-premises proxy appliances — adding 150ms latency to every request. Compliance was a mess because different teams used different security controls. They needed consistent zero trust egress without the backhauling penalty. We deployed Zscaler Cloud Connector with AWS GWLB, and average egress latency dropped from 180ms to 25ms.

Understand the Architecture

Cloud Connector is a lightweight EC2 instance that forwards traffic to Zscaler's Zero Trust Exchange over encrypted tunnels (GRE or IPsec). Deployed behind a GWLB in a security VPC, it steers every egress packet into the exchange, where your security policies (SSL/TLS inspection, DLP, threat prevention) are enforced at the cloud edge, without backhauling.

Figure: Centralized deployment, with Cloud Connector in a security VPC behind GWLB and Transit Gateway

Component | Purpose | Where
Cloud Connector | Tunnels traffic to the Zscaler Zero Trust Exchange | Security VPC
Gateway Load Balancer | Distributes traffic across Cloud Connectors via Geneve | Security VPC
GWLB Endpoint | Entry point in each workload VPC | Workload VPCs
Transit Gateway | Connects spoke VPCs to the security VPC | Network Account
Zscaler ZEN | Zero Trust Exchange Node; inspects and routes traffic | Zscaler Cloud

Choose Centralized or Distributed

Centralized: Cloud Connectors in one security VPC, all traffic via TGW. Fewer instances, easier management, lower cost. Best for most orgs.

Distributed: Cloud Connectors in each workload VPC. Better latency, no cross-VPC hops. Best for high-throughput workloads.

We used a hybrid — centralized for shared services, distributed for trading systems where every millisecond mattered.

Deploy Cloud Connector with Terraform

resource "aws_instance" "cloud_connector" {
  count                = 2
  ami                  = data.aws_ami.cloud_connector.id
  instance_type        = "m6i.xlarge"
  subnet_id            = aws_subnet.security[count.index % 2].id
  iam_instance_profile = aws_iam_instance_profile.cc.name

  # Traffic-forwarding appliances should not enforce source/destination checks
  source_dest_check = false

  user_data = base64encode(templatefile("${path.module}/cc_bootstrap.sh", {
    api_key          = var.zscaler_api_key
    provisioning_key = var.zscaler_provisioning_key
    cc_group         = "aws-cc-${var.environment}"
  }))

  vpc_security_group_ids = [aws_security_group.cc.id]
  tags = { Name = "zscaler-cc-${count.index + 1}" }
}
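The instance above references a `data.aws_ami.cloud_connector` lookup that isn't shown, and the takeaway below recommends keeping bootstrap keys in Secrets Manager rather than plain variables. Here is a minimal sketch of both supporting pieces; the AMI name filter and the secret name are assumptions, so verify them against the Zscaler Marketplace listing and your own naming:

```hcl
# Look up the Cloud Connector AMI from AWS Marketplace.
# The "zs-cc-*" name pattern is a hypothetical placeholder; check the
# actual AMI naming in the Zscaler Marketplace listing.
data "aws_ami" "cloud_connector" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["zs-cc-*"]
  }
}

# Read the provisioning key from Secrets Manager instead of a variable.
# The secret name is a hypothetical example.
data "aws_secretsmanager_secret_version" "cc_provisioning_key" {
  secret_id = "zscaler/cc/provisioning-key"
}
```

Note that values read through Terraform data sources still land in state, so encrypt your state backend if you adopt this pattern.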

Set Up GWLB and Endpoints

resource "aws_lb" "gwlb" {
  name               = "zscaler-gwlb"
  # Gateway load balancers have no scheme, so `internal` is omitted
  load_balancer_type = "gateway"
  subnets            = aws_subnet.security[*].id
}

resource "aws_lb_target_group" "cc" {
  name     = "zscaler-cc-targets"
  port     = 6081
  protocol = "GENEVE"
  vpc_id   = aws_vpc.security.id

  health_check {
    interval = 10
    port     = "443"
    protocol = "TCP"
  }
}

resource "aws_vpc_endpoint_service" "gwlb" {
  gateway_load_balancer_arns = [aws_lb.gwlb.arn]
  acceptance_required        = false
}
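Two pieces of wiring are still missing from the config above: a listener that forwards GWLB traffic to the target group (GWLB listeners take no port or protocol), and registration of the Cloud Connector instances as targets. A sketch, assuming the `aws_instance.cloud_connector` resources from the earlier section:

```hcl
# Forward all Geneve-encapsulated traffic to the Cloud Connector targets.
resource "aws_lb_listener" "gwlb" {
  load_balancer_arn = aws_lb.gwlb.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.cc.arn
  }
}

# Register each Cloud Connector instance with the target group.
resource "aws_lb_target_group_attachment" "cc" {
  count            = 2
  target_group_arn = aws_lb_target_group.cc.arn
  target_id        = aws_instance.cloud_connector[count.index].id
}
```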

# Create endpoint in each workload VPC
resource "aws_vpc_endpoint" "gwlbe" {
  for_each          = var.workload_vpcs
  vpc_id            = each.value.vpc_id
  vpc_endpoint_type = "GatewayLoadBalancer"
  service_name      = aws_vpc_endpoint_service.gwlb.service_name
  subnet_ids        = [each.value.subnet_id]
}

Redirect Egress Traffic

Update workload VPC route tables to point 0.0.0.0/0 at the GWLB endpoint:

resource "aws_route" "egress_via_gwlbe" {
  for_each               = var.workload_vpcs
  route_table_id         = each.value.route_table_id
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = aws_vpc_endpoint.gwlbe[each.key].id
}
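The workload VPCs now send everything to the GWLB endpoint, but the Cloud Connector subnets in the security VPC still need their own path to the internet so the tunnels to Zscaler can come up. A common pattern is a NAT gateway in the security VPC; the `aws_nat_gateway.security` and `aws_route_table.security` resources here are assumed to exist elsewhere in your config:

```hcl
# Default route for the Cloud Connector subnets: tunnel traffic to
# Zscaler egresses through a NAT gateway in the security VPC.
resource "aws_route" "cc_egress" {
  route_table_id         = aws_route_table.security.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.security.id
}
```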

Trace the Packet Flow

  1. Workload sends HTTPS to api.example.com
  2. Route table sends to GWLB endpoint → GWLB encapsulates in Geneve
  3. GWLB distributes to Cloud Connector (5-tuple hash for flow affinity)
  4. Cloud Connector decapsulates, checks forwarding policy
  5. Internet traffic → GRE/IPsec tunnel to Zscaler ZEN
  6. ZEN applies security policies (URL filtering, DLP, threat prevention)
  7. If allowed, forwards to destination. If denied, drops and logs.

Pro Tip: Cloud Connector routes internet traffic to ZIA (Zscaler Internet Access) and private app traffic to ZPA (Zscaler Private Access) based on destination IP ranges. Document your routing policies carefully — we spent hours debugging cases where private app traffic was being sent to ZIA instead of ZPA.

Key Takeaway

Zscaler Cloud Connector with GWLB eliminates backhauling while enforcing consistent egress security across every VPC. Our customer went from 180ms latency and inconsistent policies to 25ms direct-to-cloud egress with 100% policy coverage. Deploy at least 2 Cloud Connectors per AZ for redundancy, store bootstrap keys in Secrets Manager, and monitor GWLB flow metrics in CloudWatch.
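As a starting point for the CloudWatch monitoring mentioned above, this sketch alarms when any Cloud Connector behind the GWLB fails its health check. `AWS/GatewayELB` and `UnHealthyHostCount` are the documented GWLB CloudWatch namespace and metric; the `var.alerts_sns_topic_arn` variable is a hypothetical placeholder:

```hcl
# Alarm when the GWLB reports any unhealthy Cloud Connector target.
resource "aws_cloudwatch_metric_alarm" "cc_unhealthy" {
  alarm_name          = "zscaler-cc-unhealthy-hosts"
  namespace           = "AWS/GatewayELB"
  metric_name         = "UnHealthyHostCount"
  statistic           = "Maximum"
  period              = 60
  evaluation_periods  = 3
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    LoadBalancer = aws_lb.gwlb.arn_suffix
    TargetGroup  = aws_lb_target_group.cc.arn_suffix
  }

  alarm_actions = [var.alerts_sns_topic_arn]
}
```

Pair this with `ActiveFlowCount` and `ProcessedBytes` dashboards to spot capacity trends before they become incidents.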

Questions? Connect with me on LinkedIn.