such as containerized applications.
AWS operates dedicated cloud regions for US public sector organizations: AWS GovCloud (US). Over 5,000 public sector organizations use these services, and AWS offers a supporting guide on getting started with buying GovCloud services.
In this presentation they provide a detailed overview of the GovCloud services, highlighting that these are fundamentally the same services as the standard AWS products, but with distinguishing features such as:
- Physically and logically isolated from other regions, operated on US soil by US citizens.
- Separate authentication methods.
- A dedicated GovCloud management console.
AWS has established two GovCloud regions: AWS GovCloud (US-West), launched in August 2011, and AWS GovCloud (US-East), launched in November 2018.
The principal feature of the GovCloud services is the engineering required to achieve compliance with a broad set of regulatory requirements.
AWS reviewed a broad and deep spectrum of regulatory programs, including general technology requirements like the NIST and FIPS standards, US Government-specific regimes like ITAR, individual agency requirements such as the DoD's SRG and the Department of Justice's CJIS, and other industries' regulations such as HIPAA.
Key standards include FIPS 140-2 and FISMA, the requirements that provide the foundation for FedRAMP compliance, with some department-specific programs using this as a baseline and adding controls unique to their needs. Many of these programs are underpinned by NIST standards, in particular SP 800-53 and SP 800-171.
AWS has engineered their platform to be compliant wholesale with these requirements, supporting FedRAMP Moderate and High, and additionally supporting DoD SRG Impact Levels 2, 4 and 5.
A framework key to getting started with adopting this compliant infrastructure is the AWS Shared Responsibility Model.
The NIST standards define the security controls for operational procedures across this stack, and the Shared Responsibility Model lays out who implements each control. This is then married with the appropriate selection and usage of AWS services.
For example, they provide guidance on where and where not to store sensitive ITAR information across AWS products, advocate encrypting all information via AWS KMS, using CloudTrail to log all actions and changes across the environment, and using AWS IAM to enforce access control, such as permitting only US citizens to access ITAR data.
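To make the IAM piece concrete, here is a hedged sketch of how such a restriction might be expressed as a resource policy. The bucket name, the `citizenship` tag key, and its value are illustrative assumptions, not AWS guidance; `aws:PrincipalTag` is a real IAM condition key, but the tag itself is something the organization would have to provision and govern.

```python
import json

# Hypothetical bucket ARN -- illustrative only.
ITAR_BUCKET = "arn:aws:s3:::example-itar-data"

# Deny all S3 access to the bucket unless the calling principal carries a
# citizenship=US tag (an assumed organizational tagging convention).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonUSCitizenAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [ITAR_BUCKET, ITAR_BUCKET + "/*"],
            "Condition": {
                "StringNotEquals": {"aws:PrincipalTag/citizenship": "US"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

An explicit Deny is used rather than a scoped Allow because Deny statements in IAM always win, so the restriction holds even if some other policy grants broader access.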
AWS also furnishes customers with a package of documentation that acts as a body of evidence, which can be provided to auditors to validate their compliance.
As well as the federal government, a number of other organization types make use of this compliance infrastructure, such as educational providers, research institutions and nonprofits, and private sector vendors serving industries such as aerospace, healthcare and energy.
From 31:00m they describe their ATO on AWS program. The scene is set for this by highlighting just how long it can take to achieve compliance: 12-24 months on average, with one example taking five years for FedRAMP.
By building process capacity atop their compliant infrastructure and a highly skilled partner network they have established a capability that can repeatedly achieve accreditation in a greatly reduced timescale.
The key goal is not only to achieve compliance but to maintain it on an ongoing basis, achieved through ‘DevSecOps’ best practices, the fusion of Cloud Security and DevOps.
AWS Security Hub can run automated, continuous account-level configuration and compliance checks. Customers can deploy a standardized architecture based on the CIS AWS Foundations Benchmark, applying a set of security configuration best practices for hardening AWS accounts, and Security Hub provides continuous monitoring of these security configurations.
AWS Config, CloudTrail and CloudWatch work together to continuously track, audit and assess the compliance of your AWS resource configurations with your organization's policies and guidelines. Data from AWS Config enables you to continuously monitor the configurations of your resources and evaluate them against the controls required by common security frameworks around the globe.
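A hedged sketch of the evaluation logic at the heart of a custom AWS Config rule: real rules run as Lambda functions and report results back via the Config API, which is omitted here, and the EBS-encryption policy shown is an illustrative example rather than anything mandated by the talk.

```python
# Pared-down evaluation logic for a custom AWS Config rule. The configuration
# item shape is a simplified assumption; real items carry many more fields.
def evaluate_compliance(configuration_item: dict) -> str:
    """Flag EBS volumes that are not encrypted at rest."""
    if configuration_item.get("resourceType") != "AWS::EC2::Volume":
        return "NOT_APPLICABLE"
    encrypted = configuration_item.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

# An unencrypted volume is reported as non-compliant.
item = {"resourceType": "AWS::EC2::Volume",
        "configuration": {"encrypted": False}}
print(evaluate_compliance(item))  # NON_COMPLIANT
```

Config invokes such a rule whenever a tracked resource changes, which is what turns a one-time audit control into the continuous compliance posture described above.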
This AWS case study for HSIN (Homeland Security Information Network) provides a walk through of the migration process and challenges.
HSIN is the ‘front door’ to information sharing for Homeland Security, a web-based Sensitive But Unclassified information sharing platform that connects with multiple other agencies, providing them a suite of applications such as secure messaging among many others.
Services are mission critical, with agencies using them on a daily basis, a key motivation for moving to the Cloud.
Security is naturally of paramount importance, and thus the need for a FedRAMP High level of service, essentially narrowing the supplier choice down to one as only AWS had achieved this at the time of decision.
This didn't mean entirely smooth sailing: the challenges the agency faced included the lack of a DHS-approved network connection and limited DHS contracting models.
Prior to the move, HSIN conducted an extensive analysis of their current systems, including code reviews, software licensing and staffing requirements. The migration revealed that a huge factor was the increased operational responsibility: where they had previously had a full service from the federal data centre provider, the move to AWS required them to take on more infrastructure management than before.
In essence this forced them to take on more of a DevOps team approach. They initially set out to embrace this philosophy fully, automating as much as possible; however, this soon proved to be a 'boil the ocean' ambition, and they scaled back to an MVP of getting production live.
Other challenges included finding that security assessors had little experience of Cloud implementations, and a situation of having both legacy and Cloud systems, each needing production-level management.
The backbone of the challenge was migrating data from on-premises systems to the Cloud; their systems held terabytes of data. This was achieved via an initial bulk transfer using AWS Snowball, followed by synchronization tools to catch up the changes.
Despite the scale of this exercise they found that it went surprisingly smoothly and quickly, taking only seven hours when they had been prepared for it to take up to a week. Similarly they were very pleased to find that performance was also very good.
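Some back-of-envelope arithmetic shows why a bulk device transfer plus delta sync beats a pure network copy at this scale. The dataset size and link speed below are assumptions for illustration; the case study only says "terabytes" and does not give exact figures.

```python
# Assumed figures -- the case study does not state them.
data_tb = 10          # dataset size in terabytes (assumption)
link_mbps = 100       # sustained uplink in megabits/sec (assumption)

# Terabytes -> bits, then divide by link speed in bits/sec.
data_bits = data_tb * 1e12 * 8
seconds = data_bits / (link_mbps * 1e6)
days = seconds / 86400

print(f"~{days:.1f} days to push {data_tb} TB over a {link_mbps} Mb/s link")
```

Under these assumptions the pure network copy takes over a week of saturated uplink, while a Snowball-style bulk transfer sidesteps that entirely, leaving only the deltas accumulated during shipping to synchronize over the network.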