A Recipe for Handling an AWS Network Request from Start to Finish at Global Scale

by Lance Pollard   Last Updated July 31, 2020 22:05

I have been reading the AWS documentation for a solid 2 weeks and configuring a Terraform project to deploy a multi-region network. I have it mostly wired up, but I am not sure I have accounted for all the pieces in the system. I would like to ask whether the system, or "recipe", I am about to outline is wired up correctly from a high-level architecture standpoint, and whether I'm missing anything important or doing anything wrong in these pieces.

It starts with the Global Accelerator, then goes through the VPC to a Load Balancer, to a Subnet, to an Instance, basically. But some of it feels a bit redundant and I'm not sure I'm using everything correctly yet. Here is the recipe:

  • 1 Global Accelerator for the world.
  • 1 Global Accelerator Listener covering the ports your application serves (HTTP 80 and HTTPS 443 by convention).
  • 1 Global Accelerator Endpoint Group per region, per Global Accelerator Listener.
    • Connect it to the Load Balancer in that region's VPC (see the first Terraform sketch after this list).
  • 1 VPN per user. Use this to connect to the network.
  • 1 VPC per region.
  • Public and Private subnets for each Availability Zone (can subdivide however you like).
  • 1 Application Load Balancer per VPC. Or should it be 1 per Subnet?
    • Connect the 1 Load Balancer to the Public Subnet in each Availability Zone, so 1 Load Balancer spans all Availability Zones.
    • Attach 1 Security Group to the Load Balancer.
  • 1 Security Group per Subnet. This applies to instances in a subnet.
  • 1 Network ACL for each Subnet. This applies to the subnet as a whole (all traffic entering or leaving it), not to individual instances.
  • 1 NAT Gateway per Availability Zone for private-subnet egress to the internet. The NAT Gateway itself sits in that AZ's public subnet, and the private subnets route through it.
  • 1 Internet Gateway per VPC. Public subnets reach the internet by routing through it; it attaches to the VPC as a whole, not to individual subnets.
  • 1 Route Table for each Subnet.
  • 1 Route in each Route Table for each destination it needs to reach, e.g. 0.0.0.0/0 via the Internet Gateway for public subnets, or via the NAT Gateway for private subnets (see the second sketch after this list).
  • 1 Network Interface per Instance (can be attached to a different instance if one fails).
  • IAM roles and users play a role on the instance, but I haven't gotten there yet.
  • Autoscaling groups play a role as well, but haven't gotten there yet.
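
To make this concrete, here is a trimmed sketch of how I wire the edge pieces in Terraform. The resource names, the us-east-1 region, and the weights are placeholders rather than my real configuration, and the aws_subnet.public and aws_security_group.alb resources referenced here are defined in the next sketch:

    # Global edge: one accelerator, one listener covering 80 and 443,
    # and one endpoint group per region pointing at that region's ALB.
    resource "aws_globalaccelerator_accelerator" "global" {
      name            = "app-accelerator"
      ip_address_type = "IPV4"
      enabled         = true
    }

    resource "aws_globalaccelerator_listener" "web" {
      accelerator_arn = aws_globalaccelerator_accelerator.global.id
      protocol        = "TCP"

      port_range {
        from_port = 80
        to_port   = 80
      }

      port_range {
        from_port = 443
        to_port   = 443
      }
    }

    # One endpoint group per region (this one for us-east-1).
    resource "aws_globalaccelerator_endpoint_group" "us_east_1" {
      listener_arn          = aws_globalaccelerator_listener.web.id
      endpoint_group_region = "us-east-1"

      endpoint_configuration {
        endpoint_id = aws_lb.web.arn
        weight      = 100
      }
    }

    # One ALB per VPC, spanning the public subnet in every AZ of the region.
    resource "aws_lb" "web" {
      name               = "web-alb"
      load_balancer_type = "application"
      internal           = false
      security_groups    = [aws_security_group.alb.id]
      subnets            = aws_subnet.public[*].id
    }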

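And here is a second sketch for the per-region networking: the VPC, the public and private subnets per Availability Zone, the single Internet Gateway for the VPC, the public route table, and the load balancer's security group. Again, the CIDRs, counts, and names are placeholders:

    # One VPC per region, with a public and private subnet per AZ.
    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"
    }

    data "aws_availability_zones" "available" {
      state = "available"
    }

    resource "aws_subnet" "public" {
      count             = 3
      vpc_id            = aws_vpc.main.id
      cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
      availability_zone = data.aws_availability_zones.available.names[count.index]
    }

    resource "aws_subnet" "private" {
      count             = 3
      vpc_id            = aws_vpc.main.id
      cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 100)
      availability_zone = data.aws_availability_zones.available.names[count.index]
    }

    # One Internet Gateway per VPC (not per subnet).
    resource "aws_internet_gateway" "main" {
      vpc_id = aws_vpc.main.id
    }

    # Public route table: 0.0.0.0/0 goes out the Internet Gateway.
    resource "aws_route_table" "public" {
      vpc_id = aws_vpc.main.id
    }

    resource "aws_route" "public_internet" {
      route_table_id         = aws_route_table.public.id
      destination_cidr_block = "0.0.0.0/0"
      gateway_id             = aws_internet_gateway.main.id
    }

    resource "aws_route_table_association" "public" {
      count          = 3
      subnet_id      = aws_subnet.public[count.index].id
      route_table_id = aws_route_table.public.id
    }

    # Security group for the ALB: allow HTTP/HTTPS in from anywhere.
    resource "aws_security_group" "alb" {
      vpc_id = aws_vpc.main.id

      ingress {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }
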
The questions I have are:

  • What to do with Elastic IP addresses? They are attached to a Network Interface (and there is 1 Network Interface per Instance). Does that mean each Instance essentially has 1 Elastic IP Address? They are also connected to the NAT Gateway (via allocation_id). What does this mean? (My current Elastic IP wiring is sketched after these questions.)
  • What is the order of steps a request goes through? It starts as HTTPS in the browser, then goes to DNS, which somehow resolves to the Global Accelerator anycast system. This routes to the Endpoint Group (load balancer) closest to the request origin. HTTPS is terminated at the load balancer and becomes HTTP internally. The Load Balancer is connected to a Public Subnet, which has a security group, NACL, and route table that tell it how to direct traffic to the next layer in the system. Where does the Internet Gateway fit in, specifically with the Load Balancer already handling internet traffic? The HTTP request then goes to the next subnet, and the next subnet, until it reaches the proper subnet based on the target IP. Then it goes to the Network Interface, which does something? Then it goes to the instance.
  • Is there anything I am missing in this configuration? Anything that can be improved or in which there are clear best practices?
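
For reference on the first question, here is roughly how the Elastic IPs are wired today: one EIP per NAT Gateway, handed over via allocation_id, with each private subnet's route table pointing its default route at the NAT Gateway in its Availability Zone. The names refer back to the sketches above and are still placeholders:

    # One Elastic IP per NAT Gateway; the EIP is given to the NAT Gateway
    # via allocation_id, so it is not attached to any instance's ENI.
    resource "aws_eip" "nat" {
      count = 3
      vpc   = true
    }

    # The NAT Gateway itself lives in the public subnet of each AZ.
    resource "aws_nat_gateway" "main" {
      count         = 3
      allocation_id = aws_eip.nat[count.index].id
      subnet_id     = aws_subnet.public[count.index].id
    }

    # Private subnets reach the internet (egress only) through the NAT Gateway.
    resource "aws_route_table" "private" {
      count  = 3
      vpc_id = aws_vpc.main.id
    }

    resource "aws_route" "private_egress" {
      count                  = 3
      route_table_id         = aws_route_table.private[count.index].id
      destination_cidr_block = "0.0.0.0/0"
      nat_gateway_id         = aws_nat_gateway.main[count.index].id
    }

    resource "aws_route_table_association" "private" {
      count          = 3
      subnet_id      = aws_subnet.private[count.index].id
      route_table_id = aws_route_table.private[count.index].id
    }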

This is just for making a request. I haven't done Route53 yet, but I think I can figure that part out.


