This lab was focused on one core security idea: segmentation. The objective wasn’t just to deploy resources on AWS, but to deliberately design network boundaries, enforce least‑privilege communication, and prove—through testing and logs—that unauthorized traffic is blocked.
I treated this lab as a realistic cloud security architecture exercise rather than a checkbox task. I approached it the same way I would if I were securing a small production workload: define trust zones, reduce blast radius, and make sure every allowed connection has a reason to exist.
The target architecture was a classic 3‑tier application:
- Web tier (internet‑facing)
- Application tier (private)
- Database tier (highly restricted)
My success criteria were simple:
- Only the web tier should be reachable from the internet
- Each tier should only talk to the tier directly below it
- Lateral movement should be difficult, even if one tier is compromised
- All important traffic decisions should be observable via logs
Network Design Decisions
I started by creating a dedicated VPC using the 10.0.0.0/16 address space. This gave me enough room to segment cleanly while keeping the design easy to reason about.
Within the VPC, I carved out three subnets, each representing a security boundary rather than just a networking convenience:
- Public Web Subnet (10.0.1.0/24)
  This subnet represents the untrusted edge. It contains only the web server and has a route to the Internet Gateway.
- Private App Subnet (10.0.2.0/24)
  This subnet hosts the application logic. It has no direct internet exposure and relies on controlled paths for any outbound access.
- Private DB Subnet (10.0.3.0/24)
  This subnet is treated as the most sensitive zone. No internet access, no broad outbound rules, and only a single trusted inbound path.
An Internet Gateway was attached to the VPC, but only the web subnet’s route table referenced it. For the application tier, I used a NAT Gateway strictly for outbound updates, reinforcing the idea that outbound access should be intentional, not default.
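The difference between the three route tables can be captured in a toy model. The sketch below is not the AWS API, just plain dictionaries with longest-prefix matching, the way a real router chooses routes; the `igw`/`nat`/`local` target names are illustrative.

```python
import ipaddress

# Toy per-subnet route tables mirroring the lab's layout (illustrative only).
route_tables = {
    "web": {"10.0.0.0/16": "local", "0.0.0.0/0": "igw"},  # internet-facing
    "app": {"10.0.0.0/16": "local", "0.0.0.0/0": "nat"},  # outbound via NAT only
    "db":  {"10.0.0.0/16": "local"},                      # no internet route at all
}

def next_hop(subnet: str, dest: str):
    """Return the route target for a destination, or None (traffic dropped)."""
    matches = [
        (ipaddress.ip_network(cidr), target)
        for cidr, target in route_tables[subnet].items()
        if ipaddress.ip_address(dest) in ipaddress.ip_network(cidr)
    ]
    if not matches:
        return None
    # The most specific (longest) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("web", "8.8.8.8"))   # igw: web tier can reach the internet
print(next_hop("db", "8.8.8.8"))    # None: DB tier has no internet route
print(next_hop("db", "10.0.2.10"))  # local: intra-VPC traffic still works
```

The DB table's missing default route is the whole point: without a `0.0.0.0/0` entry, internet-bound traffic from that subnet simply has nowhere to go.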
At this stage, the network layout already enforced a basic security principle: routing alone prevents the internet from ever seeing the app or database tiers.

Deploying the Application Tiers
I launched three compute resources, each aligned to its subnet and security role:
- A web server EC2 instance in the public subnet, assigned a public IP
- An application server EC2 instance in the app subnet, with no public IP
- A database instance in the DB subnet, fully isolated
Before touching security groups, I validated basic routing assumptions:
- The web instance could reach the app instance over the private network
- The app instance could reach the database
- Direct access from the internet to the app or DB failed
This confirmed that the network segmentation itself was working before adding enforcement layers.

Security Groups: Primary Enforcement Layer
I treated security groups as my main policy engine. Each tier had its own security group, and I avoided IP‑based rules wherever possible, preferring security group references to reduce misconfiguration risk.
The web tier security group allowed inbound HTTP/HTTPS traffic from the internet but restricted outbound traffic so it could only talk to the application tier. This ensured the web server couldn’t freely scan or reach other internal resources.
The application tier security group was much stricter. It only accepted inbound traffic from the web tier’s security group and only allowed outbound connections to the database tier. Even if someone gained shell access on the app server, the network policy would still limit where they could go.
The database tier security group was the most locked down. It only allowed inbound traffic from the application tier on the database port and had extremely restricted outbound rules. This effectively made the database a dead end from a network perspective.
At this point, the trust chain was explicit and enforced:
Internet → Web → App → Database
Anything outside that chain was denied by default.
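The trust chain can be expressed as a small default-deny policy table. This is a minimal sketch, assuming illustrative group names (`sg-web`, `sg-app`, `sg-db`) and ports (8080 for the app, 3306 for the database); the real rules live in AWS, and security group references, not IPs, are the sources, as described above.

```python
# Inbound rules: destination group -> {port: allowed source groups}.
# Group names and ports are assumptions for illustration.
inbound = {
    "sg-web": {80: {"internet"}, 443: {"internet"}},
    "sg-app": {8080: {"sg-web"}},
    "sg-db":  {3306: {"sg-app"}},
}

def allowed(src: str, dst: str, port: int) -> bool:
    """Security groups default-deny: a flow passes only if a rule matches."""
    return src in inbound.get(dst, {}).get(port, set())

assert allowed("internet", "sg-web", 443)    # the only public entry point
assert allowed("sg-web", "sg-app", 8080)     # web -> app hop
assert allowed("sg-app", "sg-db", 3306)      # app -> db hop
assert not allowed("sg-web", "sg-db", 3306)  # web cannot skip a tier
assert not allowed("internet", "sg-app", 8080)
print("trust chain holds")
```

Writing the policy this way makes the "anything outside the chain is denied" property something you can assert, not just assume.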

Network ACLs: Guardrails and Defense in Depth
While security groups handled most enforcement, I used Network ACLs as an additional safety net at the subnet level.
The NACLs were configured to:
- Allow only necessary inbound and outbound ports per subnet
- Explicitly deny unnecessary traffic between tiers
- Block unexpected traffic patterns that might slip through misconfigured security groups
This gave me defense in depth. Even if a security group was accidentally loosened, the subnet‑level controls would still provide resistance.
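Unlike security groups, NACLs are stateless and evaluated as an ordered list: the lowest-numbered matching rule wins, and an implicit deny catches everything else. The sketch below models that evaluation order for the DB subnet; the rule numbers and ports are illustrative, not copied from the lab.

```python
import ipaddress

# Toy NACL for the DB subnet: (rule number, source CIDR, port, action).
# First match wins; anything unmatched hits the implicit deny.
db_subnet_inbound = [
    (100, "10.0.2.0/24", 3306, "allow"),  # app tier to the DB port
    (200, "10.0.1.0/24", 3306, "deny"),   # explicitly block web -> db
]

def nacl_decision(src_ip: str, port: int, rules) -> str:
    for _num, cidr, rule_port, action in sorted(rules):
        if port == rule_port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit deny at the end of every NACL

print(nacl_decision("10.0.2.15", 3306, db_subnet_inbound))  # allow
print(nacl_decision("10.0.1.15", 3306, db_subnet_inbound))  # deny
```

The explicit deny at rule 200 is redundant with the implicit deny, but it documents intent and still blocks web-to-DB traffic even if someone later adds a broader allow rule below it.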
Micro‑Segmentation and Lateral Movement Control
To push the lab further, I applied micro‑segmentation concepts within the application tier.
Instead of treating the app subnet as a flat trust zone, I:
- Assigned distinct security groups to different application roles
- Restricted management or admin ports so they were not reachable from peer instances
This meant that instances in the same subnet could not automatically trust each other. Any lateral movement attempt had to cross an explicit policy boundary.
This design aligns closely with zero‑trust thinking: network location alone does not imply trust.
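Extending the earlier policy model to per-role groups shows what this buys. In this sketch, two app-tier roles share a subnet, but the admin port (22) is reachable only from a hypothetical management group (`sg-mgmt`), never from a peer; all names and ports are illustrative.

```python
# Per-role inbound rules inside the app subnet (names are assumptions).
role_inbound = {
    "sg-app-api":    {8080: {"sg-web"}, 22: {"sg-mgmt"}},
    "sg-app-worker": {8080: {"sg-app-api"}, 22: {"sg-mgmt"}},
}

def role_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny between roles, even within the same subnet."""
    return src in role_inbound.get(dst, {}).get(port, set())

# Same subnet, but no implicit peer trust on the admin port:
assert not role_allowed("sg-app-api", "sg-app-worker", 22)
assert not role_allowed("sg-app-worker", "sg-app-api", 22)
# Only the management path can reach SSH:
assert role_allowed("sg-mgmt", "sg-app-worker", 22)
print("lateral admin access blocked")
```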
Validation and Testing
I validated the architecture using both positive and negative tests:
- Internet access to the web server succeeded
- Attempts to reach the app or database directly from the internet failed
- The web server could communicate with the app tier
- The web server could not communicate directly with the database
- The app tier could communicate with the database
Each failure was intentional and expected.
To make these decisions observable, I enabled VPC Flow Logs. Reviewing the logs allowed me to see:
- Accepted connections that matched intended traffic flows
- Rejected connections that confirmed segmentation was actively enforced
Seeing denied traffic in logs was just as valuable as seeing allowed traffic—it proved the controls were actually doing their job.
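Flow log records are easy to mine for exactly this evidence. The sketch below parses a record in the default (version 2) space-separated format and flags a rejected web-to-DB attempt; the sample line is fabricated for illustration, though the field order matches the documented default format.

```python
# Default VPC Flow Log (version 2) field order.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line: str) -> dict:
    """Split a space-separated flow log line into named fields."""
    return dict(zip(FIELDS, line.split()))

# Fabricated sample: a web-subnet host probing the DB port and getting REJECTed.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.15 10.0.3.20 "
          "49152 3306 6 4 216 1620000000 1620000060 REJECT OK")
rec = parse_flow_record(sample)

# A REJECT aimed straight at the DB port is the denied traffic that proves
# segmentation is being enforced, not just configured.
if rec["action"] == "REJECT" and rec["dstport"] == "3306":
    print(f"blocked attempt: {rec['srcaddr']} -> {rec['dstaddr']}:{rec['dstport']}")
```

A filter like this, run over the full log stream, turns "segmentation works" from an assumption into a queryable fact.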

What This Architecture Achieves
This design limits blast radius in a very practical way. If the web tier is compromised:
- The attacker cannot directly reach the database
- Lateral movement inside the VPC is restricted
- Every step toward more sensitive tiers is gated by explicit rules
Security groups provided the most flexibility and clarity, while NACLs added structural safety. Flow Logs closed the loop by making security decisions visible.
Reflection
This lab reinforced that cloud security is mostly about design, not tools. The strongest control wasn’t a firewall rule; it was deciding where trust should stop.
Micro‑segmentation stood out as the biggest improvement over traditional designs. By removing implicit trust even within a tier, the architecture becomes far more resilient to real‑world attack scenarios.
In a production environment, I would extend this design with continuous monitoring and drift detection using services like AWS Config, GuardDuty, and centralized log analysis to ensure the architecture stays secure over time.
Overall, this lab closely mirrors how I think about securing cloud environments: start with clear boundaries, enforce least privilege everywhere, and always verify through logs and testing.