Thursday, April 22, 2021

S3 cross-Region replication to another account

Preparation

Create the following buckets in two AWS accounts:

  • source: leo-functions
  • destination: leo-functions-rep

Create IAM role in source account, role name leo-s3-rep:

Ref AWS Doc

  • a. Trust policy, where you identify Amazon S3 as the service principal who can assume the role.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "Service":"s3.amazonaws.com"
         },
         "Action":"sts:AssumeRole"
      }
   ]
}

  • b. Access policy, where you grant the role permissions to perform replication tasks on your behalf. When Amazon S3 assumes the role, it has the permissions that you specify in this policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetReplicationConfiguration",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::leo-functions"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging"
            ],
            "Resource": [
                "arn:aws:s3:::leo-functions/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags"
            ],
            "Resource": "arn:aws:s3:::leo-functions-rep/*"
        }
    ]
}
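
The same role can be created from the CLI or an SDK. Below is a minimal boto3 sketch of this step, run against the source account; the local file names and the inline policy name are assumptions for illustration.

import boto3

iam = boto3.client("iam")  # credentials/profile for the source account

# The trust and access policy documents shown above, saved locally.
with open("trust.json") as f:
    trust_policy = f.read()
with open("access.json") as f:
    access_policy = f.read()

# Create the replication role with Amazon S3 as the trusted service principal.
iam.create_role(RoleName="leo-s3-rep", AssumeRolePolicyDocument=trust_policy)

# Attach the replication permissions as an inline policy on the role.
iam.put_role_policy(
    RoleName="leo-s3-rep",
    PolicyName="leo-s3-rep-access",  # inline policy name is an assumption
    PolicyDocument=access_policy,
)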

Bucket policy on the destination bucket

This allows the replication role (leo-s3-rep) in the source account to replicate objects into the destination bucket.

AWS DOC

{
    "Version": "2012-10-17",
    "Id": "PolicyForDestinationBucket",
    "Statement": [
        {
            "Sid": "Permissions on objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SourceBucket-AcctID:role/leo-s3-rep"
            },
            "Action": [
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": "arn:aws:s3:::leo-functions-rep/*"
        },
        {
            "Sid": "Permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SourceBucket-AcctID:role/leo-s3-rep"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::leo-functions-rep"
        }
    ]
}
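
Applying this policy can also be scripted. A minimal boto3 sketch, run with credentials for the destination account; the local file name is an assumption.

import boto3

s3 = boto3.client("s3")  # credentials/profile for the destination account

# The destination bucket policy shown above, saved locally.
with open("destination-bucket-policy.json") as f:
    policy = f.read()

s3.put_bucket_policy(Bucket="leo-functions-rep", Policy=policy)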

Create replication rule

name: rep-to-leo-function-rep

Destination:

  • Account ID: DestinationBucket-AcctID
  • Bucket name: leo-functions-rep
  • Change object ownership to destination bucket owner

IAM role:
leo-s3-rep

NOTE:

  • Source and destination buckets must have versioning enabled.
  • Existing objects in the source bucket will not be replicated.
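
For reference, the same rule can be set up through the API instead of the console. A minimal boto3 sketch run from the source account; SOURCE_ACCOUNT_ID and DESTINATION_ACCOUNT_ID are placeholders, and the exact rule fields are assumptions meant to mirror the console settings above.

import boto3

s3 = boto3.client("s3")  # credentials/profile for the source account

# Versioning must be enabled on both buckets before replication will work
# (enable it on leo-functions-rep from the destination account as well).
s3.put_bucket_versioning(
    Bucket="leo-functions",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replication rule equivalent to the console setup described above.
s3.put_bucket_replication(
    Bucket="leo-functions",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::SOURCE_ACCOUNT_ID:role/leo-s3-rep",
        "Rules": [
            {
                "ID": "rep-to-leo-function-rep",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::leo-functions-rep",
                    "Account": "DESTINATION_ACCOUNT_ID",
                    # "Change object ownership to destination bucket owner"
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)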

Monday, March 29, 2021

AWS DevOps Engineer Pro - 2

 1. A company is using AWS for an application. The Development team must automate its deployments. The team has set up an AWS CodePipeline to deploy the application to Amazon EC2 instances by using AWS CodeDeploy after it has been built using the AWS CodeBuild service.


The team would like to add automated testing to the pipeline to confirm that the application is healthy before deploying it to the next stage of the pipeline using the same code. The team requires a manual approval action before the application is deployed, even if the test is successful. The testing and approval must be accomplished at the lowest costs, using the simplest management solution.

Which solution will meet these requirements?

A. Add a manual approval action after the last deploy action of the pipeline. Use Amazon SNS to inform the team of the stage being triggered. Next, add a test action using CodeBuild to do the required tests. At the end of the pipeline, add a deploy action to deploy the application to the next stage.

B. Add a test action after the last deploy action of the pipeline. Configure the action to use CodeBuild to perform the required tests. If these tests are successful, mark the action as successful. Add a manual approval action that uses Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.

C. Create a new pipeline that uses a source action that gets the code from the same repository as the first pipeline. Add a deploy action to deploy the code to a test environment. Use a test action using AWS Lambda to test the deployment. Add a manual approval action by using Amazon SNS to notify the team, and add a deploy action to deploy the application to the next stage.

D. Add a test action after the last deployment action. Use a Jenkins server on Amazon EC2 to do the required tests and mark the action as successful if the tests pass. Create a manual approval action that uses Amazon SQS to notify the team and add a deploy action to deploy the application to the next stage.

B

https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html


2. A company is building a solution for storing files containing Personally Identifiable Information (PII) on AWS.

Requirements state:

✑ All data must be encrypted at rest and in transit.

✑ All data must be replicated in at least two locations that are at least 500 miles apart.

Which solution meets these requirements?


A. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3 SSE-C on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.

B. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.

C. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use an IAM role to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross- region replication between the two buckets.

D. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce AWS KMS encryption on all objects uploaded to the bucket. Configure cross-region replication between the two buckets. Create a KMS Customer Master Key (CMK) in the primary region for encrypting objects.


B


both B and C are valid options. like cross-account access is with a bucket policy.

Check the "When to use IAM policies vs. S3 policies" section in this article:

https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/

We can control the access from the source or from the destination. From destination is the better way. That is to use policy to restrict every access from any sources.


3. A mobile application running on eight Amazon EC2 instances is relying on a third-party API endpoint. The third-party service has a high failure rate because of limited capacity, which is expected to be resolved in a few weeks.

In the meantime, the mobile application developers have added a retry mechanism and are logging failed API requests. A DevOps Engineer must automate the monitoring of application logs and count the specific error messages; if there are more than 10 errors within a 1-minute window, the system must issue an alert.

How can the requirements be met with MINIMAL management overhead?


A. Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use metric filters to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.

B. Install the Amazon CloudWatch Logs agent on all instances to push the access logs to CloudWatch Logs. Create a CloudWatch Events rule to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.

C. Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use a metric filter to generate a custom CloudWatch metric that records the number of failures and triggers a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.

D. Deploy a custom script on all instances to check application logs regularly in a cron job. Count the number of error messages every minute, and push a data point to a custom CloudWatch metric. Trigger a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.

C

Threshold and period for the CloudWatch alarm: 10 errors in a 1-minute period.

Example for creating Alarm:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
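
As a rough illustration of answer C, here is a boto3 sketch that creates a metric filter and the matching alarm. The log group name, filter pattern, metric names, and SNS topic ARN are all assumptions for illustration.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count occurrences of the error message in the application log group.
logs.put_metric_filter(
    logGroupName="/app/mobile-api",              # assumed log group
    filterName="third-party-api-errors",
    filterPattern='"ThirdPartyAPIError"',        # assumed error text
    metricTransformations=[{
        "metricName": "ThirdPartyApiErrorCount",
        "metricNamespace": "MobileApp",
        "metricValue": "1",
    }],
)

# Alarm when more than 10 errors occur within a 1-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="third-party-api-error-rate",
    Namespace="MobileApp",
    MetricName="ThirdPartyApiErrorCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:ops-alerts"],  # assumed topic
)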


4. A company mandates capturing logs for everything running in its AWS account. The account has multiple VPCs with Amazon EC2 instances, Application Load Balancers, Amazon RDS MySQL databases, and AWS WAF rules configured. The logs must be protected from deletion. A daily visual analysis of log anomalies from the previous day is required.

Which combination of actions should a DevOps Engineer take to accomplish this? (Choose three.)

A. Configure an AWS Lambda function to send all CloudWatch logs to an Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.

B. Configure AWS CloudTrail to send all logs to Amazon Inspector. Create a dashboard report in Amazon QuickSight.

C. Configure Amazon S3 MFA Delete on the logging Amazon S3 bucket.

D. Configure an Amazon S3 object lock legal hold on the logging Amazon S3 bucket.

E. Configure AWS Artifact to send all logs to the logging Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.

F. Deploy an Amazon CloudWatch agent to all Amazon EC2 instances.


ADF


5. A DevOps Engineer wants to prevent Developers from pushing updates directly to the company's master branch in AWS CodeCommit. These updates should be approved before they are merged.

Which solution will meet these requirements?


A. Configure an IAM role for the Developers with access to CodeCommit and an explicit deny for write actions when the reference is the master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.

B. Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and dynamically modify the IAM role to allow merging the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.

C. Configure an IAM role for the Developers to use feature branches and create a pull request when a feature is complete. Allow CodeCommit to test all code in the feature branches, and issue a new AWS Security Token Service (STS) token allowing a one-time API call to merge the feature branches into the master. Allow an approver to use CodeCommit to view the changes and approve the pull requests.

D. Configure an IAM role for the Developers with access to CodeCommit and attach an access policy to the CodeCommit repository that denies the Developers role access when the reference is master. Allow Developers to use feature branches and create a pull request when a feature is complete. Allow an approver to use CodeCommit to view the changes and approve the pull requests.

A


6.  A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic.

Which logging solution will support these requirements?


A. Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.

B. Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda.

C. Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.

D. Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.


A

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html


7. A web application with multiple services runs on Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon RDS Multi-AZ DB instance. The instance health check used by the load balancer returns PASS if at least one service is running on the instance.

The company uses AWS CodePipeline with AWS CodeBuild and AWS CodeDeploy steps to deploy code to test and production environments. Recently, a new version was unable to connect to the database server in the test environment. One process was running, so the health checks reported healthy and the application was promoted to production, causing a production outage. The company wants to ensure that test builds are fully functional before a promotion to production.

Which changes should a DevOps Engineer make to the test and deployment process? (Choose two.)


A. Add an automated functional test to the pipeline that ensures solid test cases are performed.

B. Add a manual approval action to the CodeDeploy deployment pipeline that requires a Testing Engineer to validate the testing environment.

C. Refactor the health check endpoint the Elastic Load Balancer is checking to better validate actual application functionality.

D. Refactor the health check endpoint the Elastic Load Balancer is checking to return a text-based status result and configure the load balancer to check for a valid response.

E. Add a dependency checking step to the existing testing framework to ensure compatibility.

BC

8. A company's application is currently deployed to a single AWS Region. Recently, the company opened a new office on a different continent. The users in the new office are experiencing high latency. The company's application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and uses Amazon DynamoDB as the database layer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. A DevOps Engineer is tasked with minimizing application response times and improving availability for users in both Regions.

Which combination of actions should be taken to address the latency issues? (Choose three.)


A. Create a new DynamoDB table in the new Region with cross-Region replication enabled.

B. Create new ALB and Auto Scaling group global resources and configure the new ALB to direct traffic to the new Auto Scaling group.

C. Create new ALB and Auto Scaling group resources in the new Region and configure the new ALB to direct traffic to the new Auto Scaling group.

D. Create Amazon Route 53 records, health checks, and latency-based routing policies to route to the ALB.

E. Create Amazon Route 53 aliases, health checks, and failover routing policies to route to the ALB.

F. Convert the DynamoDB table to a global table.

CDF 



9. An application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps Engineer is using AWS CodeDeploy to release a new version. The deployment fails during the AllowTraffic lifecycle event, but a cause for the failure is not indicated in the deployment logs.

What would cause this?


A. The appspec.yml file contains an invalid script to execute in the AllowTraffic lifecycle hook.

B. The user who initiated the deployment does not have the necessary permissions to interact with the ALB.

C. The health checks specified for the ALB target group are misconfigured.

D. The CodeDeploy agent was not installed in the EC2 instances that are part of the ALB target group.


C


10.  A DevOps Engineer must ensure all IAM entity configurations across multiple AWS accounts in AWS Organizations are compliant with corporate IAM policies.

Which combination of steps will accomplish this? (Choose two.)

A. Enable AWS Trusted Advisor in Organizations for all accounts to report on noncompliant IAM entities.

B. Configure an AWS Config aggregator in the Organizations master account for all accounts.

C. Deploy AWS Config rules to the master account in Organizations that match corporate IAM policies.

D. Apply an SCP in Organizations to ensure compliance of IAM entities.

E. Deploy AWS Config rules to all accounts in Organizations that match the corporate IAM policies.

B E


11.   A company has a web application that uses an Amazon DynamoDB table in a single AWS Region to store user information. To support an increasingly global user base, the application must run in a secondary Region and allow users to connect to their closest Region and fail over to the secondary Region.

Which approach should be used to ensure the deployment meets these requirements?

A. Configure DynamoDB streams to copy data between Regions, deploy the web stack in both Regions, and configure Amazon Route 53 to use a geoproximity routing policy with health checks.

B. Convert the DynamoDB table to a global table, deploy the web stack in both Regions, and configure Amazon Route 53 to use a geoproximity routing policy with health checks.

C. Define DynamoDB cross-region backups to copy data to the secondary Region, deploy the web stack in both Regions, and configure Amazon Route 53 to use a latency-based routing policy with health checks.

D. Use DynamoDB Accelerator to copy data to the secondary Region, deploy the web stack in both Regions, and configure Amazon Route 53 to use a failover routing policy.


B


12. A DevOps Engineer is setting up a container-based architecture. The Engineer has decided to use AWS CloudFormation to automatically provision an Amazon ECS cluster and an Amazon EC2 Auto Scaling group to launch the EC2 container instances. After successfully creating the CloudFormation stack, the Engineer noticed that, even though the ECS cluster and the EC2 instances were created successfully and the stack finished creating, the EC2 instances were associating with a different cluster.

How should the DevOps Engineer update the CloudFormation template to resolve this issue?

A. Reference the EC2 instances in the AWS::ECS::Cluster resource and reference the ECS cluster in the AWS::ECS::Service resource.

B. Reference the ECS cluster in the AWS::AutoScaling::LaunchConfiguration resource of the UserData property.

C. Reference the ECS cluster in the AWS::EC2::Instance resource of the UserData property.

D. Reference the ECS cluster in the AWS::CloudFormation::CustomResource resource to trigger an AWS Lambda function that registers the EC2 instances with the appropriate ECS cluster.

B

13.  A company is running an application on Amazon EC2 instances in an Auto Scaling group. Recently, an issue occurred that prevented EC2 instances from launching successfully, and it took several hours for the Support team to discover the issue. The Support team wants to be notified by email whenever an EC2 instance does not start successfully.

Which action will accomplish this?

  • A. Add a health check to the Auto Scaling group to invoke an AWS Lambda function whenever an instance status is impaired.
  • B. Configure the Auto Scaling group to send a notification to an Amazon SNS topic whenever a failed instance launch occurs.
  • C. Create an Amazon CloudWatch alarm that invokes an AWS Lambda function when a failed AttachInstances Auto Scaling API call is made.
  • D. Create a status check alarm on Amazon EC2 to send a notification to an Amazon SNS topic whenever a status check fail occurs.

14. A company's popular global web application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB) using an Auto Scaling group. The company is launching a new feature and expects unpredictable spikes in web traffic. The site currently includes a large amount of media content, and the new feature adds the ability to submit ratings and comments that will be stored in a new Amazon DynamoDB table. A DevOps Engineer is tasked with ensuring the web application can scale with the increased traffic and workload

Which combination of steps will accomplish this? (Choose two.)

  • A. Configure an Amazon CloudFront distribution to cache the web application's static and dynamic content.
  • B. Configure the web application's ALB to cache content in Amazon ElastiCache, honoring the HTTP cache headers.
  • C. Process the new ratings and comments asynchronously using Amazon SQS.
  • D. Replace the DynamoDB table with DynamoDB Accelerator to store the ratings and comments to reduce latency.
  • E. Set up AWS Global Accelerator to cache static content and pass dynamic requests to the web application's ALB endpoint.

15. An application is deployed on Amazon EC2 instances running in an Auto Scaling group. During the bootstrapping process, the instances register their private IP addresses with a monitoring system. The monitoring system performs health checks frequently by sending ping requests to those IP addresses and sending alerts if an instance becomes non-responsive.

The existing deployment strategy replaces the current EC2 instances with new ones. A DevOps Engineer has noticed that the monitoring system is sending false alarms during a deployment, and is tasked with stopping these false alarms.
Which solution will meet these requirements without affecting the current deployment method?

  • A. Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke Amazon SNS to send a message to the Systems Administrator group for remediation.
  • B. Define an AWS Lambda function and a lifecycle hook attached to the Auto Scaling group. Configure the lifecycle hook to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
  • C. Define an Amazon CloudWatch Events target, an AWS Lambda function, and a lifecycle hook attached to the Auto Scaling group. Configure CloudWatch Events to invoke the Lambda function, which removes the entry of the private IP from the monitoring system upon instance termination.
  • D. Define an AWS Lambda function that will run a script when instance termination occurs in an Auto Scaling group. The script will remove the entry of the private IP from the monitoring system.

16. A company plans to stop using Amazon EC2 key pairs for SSH access, and instead plans to use AWS Systems Manager Session Manager. To further enhance security, access to Session Manager must take place over a private network only.

Which combinations of actions will accomplish this? (Choose two.)

  • A. Allow inbound access to TCP port 22 in all associated EC2 security groups from the VPC CIDR range.
  • B. Attach an IAM policy with the necessary Systems Manager permissions to the existing IAM instance profile.
  • C. Create a VPC endpoint for Systems Manager in the desired Region.
  • D. Deploy a new EC2 instance that will act as a bastion host to the rest of the EC2 instance fleet.
  • E. Remove any default routes in the associated route tables.

17.  A company has built a web service that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company has deployed the application in us-east-1. Amazon Route 53 provides an external DNS that routes traffic from example.com to the application, created with appropriate health checks.
The company has deployed a second environment for the application in eu-west-1. The company wants traffic to be routed to whichever environment results in the best response time for each user. If there is an outage in one Region, traffic should be directed to the other environment.
Which configuration will achieve these requirements?
A.
✑ A subdomain us.example.com with weighted routing: the US ALB with weight 2 and the EU ALB with weight 1.
✑ Another subdomain eu.example.com with weighted routing: the EU ALB with weight 2 and the US ALB with weight 1.
✑ Geolocation routing records for example.com: North America aliased to us.example.com and Europe aliased to eu.example.com.
B.
✑ A subdomain us.example.com with latency-based routing: the US ALB as the first target and the EU ALB as the second target.
✑ Another subdomain eu.example.com with latency-based routing: the EU ALB as the first target and the US ALB as the second target.
✑ Failover routing records for example.com aliased to us.example.com as the first target and eu.example.com as the second target.
C.
✑ A subdomain us.example.com with failover routing: the US ALB as primary and the EU ALB as secondary.
✑ Another subdomain eu.example.com with failover routing: the EU ALB as primary and the US ALB as secondary.
✑ Latency-based routing records for example.com that are aliased to us.example.com and eu.example.com.
D.
✑ A subdomain us.example.com with multivalue answer routing: the US ALB first and the EU ALB second.
✑ Another subdomain eu.example.com with multivalue answer routing: the EU ALB first and the US ALB second.
✑ Failover routing records for example.com that are aliased to us.example.com and eu.example.com.
C

Saturday, March 20, 2021

AWS Storage Gateway

What is AWS Storage Gateway?


AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Storage Gateway provides a standard set of storage protocols such as iSCSI, SMB, and NFS, which allow you to use AWS storage without rewriting your existing applications. It provides low-latency performance by caching frequently accessed data on premises, while storing data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing it. Storage Gateway also integrates natively with Amazon S3 cloud storage, which makes your data available for in-cloud processing; AWS Identity and Access Management (AWS IAM) for secure access management to services and resources; AWS Key Management Service (AWS KMS) for encrypting data at rest in the cloud; Amazon CloudWatch for monitoring; and AWS CloudTrail for logging account activity.

  • The File Gateway enables you to store and retrieve objects in Amazon S3 using file protocols such as Network File System (NFS) and Server Message Block (SMB). Objects written through File Gateway can be directly accessed in S3.
  • The Volume Gateway provides block storage to your on-premises applications using iSCSI connectivity. Data on the volumes is stored in Amazon S3 and you can take point in time copies of volumes which are stored in AWS as Amazon EBS snapshots. You can also take copies of volumes and manage their retention using AWS Backup. You can restore EBS snapshots to a Volume Gateway volume or an EBS volume.
  • The Tape Gateway provides your backup application with an iSCSI virtual tape library (VTL) interface, consisting of a virtual media changer, virtual tape drives, and virtual tapes. Virtual tapes are stored in Amazon S3 and can be archived to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.



Amazon DynamoDB

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html


Tables, Items, and Attributes

Tables, items, and attributes are the core components.

Tables: a collection of items.

Items: a collection of attributes. Each item in the table has a unique identifier, or primary key, that distinguishes the item from all of the others in the table. Other than the primary key, each item can have its own distinct attributes, which means that neither the attributes nor their data types need to be defined beforehand.

Attributes: each item is composed of one or more attributes; an attribute is the fundamental data element. Most attributes are scalar, which means that they can have only one value.


Primary Key

DynamoDB supports two different kinds of primary keys:

  • Partition Key - A simple primary key, composed of one attribute known as the partition key.
  • Partition key and sort key - Referred to as a composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.

Secondary Indexes

You can create one or more secondary indexes on a table. A secondary index lets you query the data in the table using an alternate key, in addition to queries against the primary key.

Two kinds of indexes:
  • Global secondary index - An index with a partition key and sort key that can be different from those on the table.
  • Local secondary index - An index that has the same partition key as the table, but a different sort key.

Each table has a quota of 20 global secondary indexes and 5 local secondary indexes.
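
A small boto3 sketch tying these pieces together; the table, attribute, and index names are made up for illustration.

import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key (Artist + SongTitle) plus one global secondary index.
dynamodb.create_table(
    TableName="Music",
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},     # partition key
        {"AttributeName": "SongTitle", "AttributeType": "S"},  # sort key
        {"AttributeName": "Genre", "AttributeType": "S"},      # GSI partition key
    ],
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "GenreIndex",
        "KeySchema": [
            {"AttributeName": "Genre", "KeyType": "HASH"},
            {"AttributeName": "SongTitle", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)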

DynamoDB Streams


DynamoDB Streams is an optional feature that captures data modification events in DynamoDB tables. The data about these events appear in the stream in near-real time, and in the order that the events occurred.

Each event is represented by a stream record. If you enable a stream on a table, DynamoDB Streams writes a stream record whenever one of the following events occurs:

  • A new item is added to the table: The stream captures an image of the entire item, including all of its attributes.
  • An item is updated: The stream captures the "before" and "after" image of any attributes that were modified in the item.
  • An item is deleted from the table: The stream captures an image of the entire item before it was deleted.

Each stream record also contains the name of the table, the event timestamp, and other metadata. Stream records have a lifetime of 24 hours; after that, they are automatically removed from the stream.
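
Enabling a stream is a single table update. A short boto3 sketch, reusing the hypothetical Music table from the sketch above.

import boto3

dynamodb = boto3.client("dynamodb")

# Capture both the "before" and "after" images of modified items.
dynamodb.update_table(
    TableName="Music",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# The stream ARN is what a Lambda trigger or a DynamoDB Streams consumer reads from.
stream_arn = dynamodb.describe_table(TableName="Music")["Table"]["LatestStreamArn"]
print(stream_arn)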



Saturday, June 6, 2020

AWS Certified Solutions Architect - sample questions

  1. A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website's DNS records are hosted in Amazon Route 53 where their domain is pointing to an Application Load Balancer (ALB). Which configuration should the solutions architect use to meet the company's needs while minimizing changes and infrastructure overhead?
A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.

B

2. A solutions architect is designing a high performance computing (HPC) workload on Amazon EC2. The EC2 instances need to communicate to each other frequently and require network performance with low latency and high throughput.Which EC2 configuration meets these requirements?

A. Launch the EC2 instances in a cluster placement group in one Availability Zone.
B. Launch the EC2 instances in a spread placement group in one Availability Zone.
C. Launch the EC2 instances in an Auto Scaling group in two Regions and peer the VPCs.
D. Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones.
A

3. A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with CacheControl headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application. 
A

4. A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company's applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
A. Amazon EFS
B. Amazon FSx
C. Amazon S3
D. AWS Storage Gateway

A

5. A company has a legacy application that processes data in two parts. The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
A. Implement code in microservice 1 to send data to an Amazon S3 bucket. Use S3 event notifications to invoke microservice 2.
B. Implement code in microservice 1 to publish data to an Amazon SNS topic. Implement code in microservice 2 to subscribe to this topic.
C. Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose. Implement code in microservice 2 to read from Kinesis Data Firehose.
D. Implement code in microservice 1 to send data to an Amazon SQS queue. Implement code in microservice 2 to process messages from the queue

A is not correct: S3 event notifications only support SQS, SNS, and Lambda as destinations, so they cannot invoke the ECS-based microservice directly.
D

6. A company captures clickstream data from multiple websites and analyzes it using batch processing. The data is loaded nightly into Amazon Redshift and is consumed by business analysts. The company wants to move towards near-real-time data processing for timely insights. The solution should process the streaming data with minimal effort and operational overhead.
Which combination of AWS services are MOST cost-effective for this solution? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon Kinesis Data Streams
D. Amazon Kinesis Data Firehose
E. Amazon Kinesis Data Analytics

I would go with C and D, but I am not sure of the answer.

7. A company's application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the month-end financial calculation batch executes. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
A. Configure an Amazon CloudFront distribution in front of the ALB.
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.

C

8. A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. A solutions architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the solutions architect implement? (Choose two.)
A. Add AWS Shield.
B. Add Aurora Replica.
C. Add AWS Direct Connect.
D. Add AWS Global Accelerator.
E. Add an Amazon CloudFront distribution in front of the Application Load Balancer.

DE

9. An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.
What should the solutions architect do to separate the read requests from the write requests?
A. Enable read-through caching on the Amazon Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create a read replica and modify the application to use the appropriate endpoint.
D. Create a second Amazon Aurora database and link it to the primary database as a read replica.

C

10. A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will both require secure network connectivity with consistent throughput from their data centers to the applications. A solution architect must ensure one-time data migration and ongoing network connectivity.
Which solution will meet these requirements?
A. AWS Direct Connect for both the initial transfer and ongoing connectivity.
B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.

C, not D: the Site-to-Site VPN in D would add latency and cannot guarantee consistent throughput.

11. A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.
Which action will meet these requirements?
A. Modify the ALB security group to deny incoming traffic from blocked countries.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.

C

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html


12. A product team is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances. The application team believes the amount of space needed will continue to grow for the next 6 months.
Which set of actions should a solutions architect take to support these needs?
A. Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances.
B. Store the data in an Amazon EFS file system. Mount the file system on the application instances.
C. Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances.
D. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Update the bucket policy to allow access to the application instances.

B

13. A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.

C.

14.A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The database stores all data on multiple instances so it can withstand the loss of an instance. The database requires block storage with latency and throughput to support several million transactions per second per server.
Which storage solution should the solutions architect use?
A. Amazon EBS
B. Amazon EC2 instance store
C. Amazon EFS
D. Amazon S3

 
15.Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
A. Generate presigned URLs for the files.
B. Use cross-Region replication to all Regions.
C. Use the geoproximity feature of Amazon Route 53.
D. Use Amazon CloudFront with the S3 bucket as its origin.

D

16. A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data that needs to be persisted in a database is currently less than 1 GB with unpredictable future growth. Data can be queried using simple key-value requests.
Which combination of AWS services would meet these requirements? (Choose two.)
A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. MySQL-compatible Amazon Aurora


CD or BC.
Lambda scales automatically, can handle 500 requests per second, and integrates with API Gateway and DynamoDB.


17. A start-up company has a web application based in the us-east-1 Region with multiple Amazon EC2 instances running behind an Application Load Balancer across multiple Availability Zones. As the company's user base grows in the us-west-1 Region, it needs a solution with low latency and high availability.
What should a solutions architect do to accomplish this?
A. Provision EC2 instances in us-west-1. Switch the Application Load Balancer to a Network Load Balancer to achieve cross-Region load balancing.
B. Provision EC2 instances and an Application Load Balancer in us-west-1. Make the load balancer distribute the traffic based on the location of the request.
C. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Create an accelerator in AWS Global Accelerator that uses an endpoint group that includes the load balancer endpoints in both Regions.
D. Provision EC2 instances and configure an Application Load Balancer in us-west-1. Configure Amazon Route 53 with a weighted routing policy. Create alias records in Route 53 that point to the Application Load Balancer.

C

18. A solutions architect is designing a solution to access a catalog of images and provide users with the ability to submit requests to customize images. Image customization parameters will be in any request sent to an AWS API Gateway API. The customized image will be generated on demand, and users will receive a link they can click to view or download their customized image. The solution must be highly available for viewing and customizing images.
What is the MOST cost-effective solution to meet these requirements?
A. Use Amazon EC2 instances to manipulate the original image into the requested customization Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances.
B. Use AWS Lambda to manipulate the original image to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
 
B

19. A company is planning to migrate a business-critical dataset to Amazon S3. The current solution design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The company's disaster recovery policy states that all data must exist in multiple AWS Regions.
How should a solutions architect design the S3 solution?
A. Create an additional S3 bucket in another Region and configure cross-Region replication.
B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS).
C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication.
D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource (CORS).

C

20. A company has applications running on Amazon EC2 instances in a VPC. One of the applications needs to call an Amazon S3 API to store and read objects. The company's security policies restrict any internet-bound traffic from the applications.
Which action will fulfill these requirements and maintain security?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.

22. A company wants to migrate a high performance computing (HPC) application and data from on-premises to the AWS Cloud. The company uses tiered storage on premises with hot high-performance parallel storage to support the application during periodic runs of the application, and more economical cold storage to hold the data when the application is not actively running.
Which combination of solutions should a solutions architect recommend to support the storage needs of the application? (Choose two.)
A. Amazon S3 for cold data storage
B. Amazon EFS for cold data storage
C. Amazon S3 for high-performance parallel storage
D. Amazon FSx for Lustre for high-performance parallel storage
E. Amazon FSx for Windows for high-performance parallel storage

D

23. A company's application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Detach a volume on an EC2 instance and copy it to Amazon S3.
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume.
 
B, D

24. A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not traverse the internet.
What should the solutions architect do to accomplish this? (Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create a new DynamoDB table that uses the endpoint.
D. Create an ENI for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the default security group to provide access.

AB
 
25. A company's legacy application is currently relying on a single-instance Amazon RDS MySQL database without encryption. Due to new compliance requirements, all existing and new data in this database must be encrypted.
How should this be accomplished?
A. Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance.
B. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance.
C. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch over to the new master. Delete the old RDS instance.

C

26. A manufacturing company wants to implement predictive maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solutions architect is tasked with implementing a solution that will receive events in an ordered manner for each machinery asset and ensure that data is saved for further processing at a later time.
Which solution would be MOST efficient?
A. Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
D. Use an Amazon SQS standard queue for real-time events with one queue for each equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3.

A

27. A company's website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow.
Which set of actions will improve website performance for users worldwide?
A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.
B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB.
C. Launch new EC2 instances hosting the same web application in different Regions closer to the users. Then register instances with the same ALB using cross- Region VPC peering.
D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2 instances. Then update an Amazon Route 53 record to point to the S3 buckets.

A

28. A company has been storing analytics data in an Amazon RDS instance for the past few years. The company asked a solutions architect to find a solution that allows users to access this data using an API. The expectation is that the application will experience periods of inactivity but could receive bursts of traffic within seconds.
Which solution should the solutions architect suggest?
A. Set up an Amazon API Gateway and use Amazon ECS.
B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk.
C. Set up an Amazon API Gateway and use AWS Lambda functions.
D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.

C or D 
From the Lambda documentation on how functions scale:
The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. As more events come in, Lambda routes them to available instances and creates new instances as needed. When the number of requests decreases, Lambda stops unused instances to free up scaling capacity for other functions.


29. A company must generate sales reports at the beginning of every month. The reporting process launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot be interrupted. The company wants to minimize costs.
Which pricing model should the company choose?

A. Reserved Instances
B. Spot Block Instances
C. On-Demand Instances
D. Scheduled Reserved Instances
"Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. "

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html

30. A gaming company has multiple Amazon EC2 instances in a single Availability Zone for its multiplayer game that communicates with users on Layer 4. The chief technology officer (CTO) wants to make the architecture highly available and cost-effective.
What should a solutions architect do to meet these requirements? (Choose two.)
A. Increase the number of EC2 instances.
B. Decrease the number of EC2 instances.
C. Configure a Network Load Balancer in front of the EC2 instances.
D. Configure an Application Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

DE, or CE

C and E. Layer 4 traffic is handled using a Network Load Balancer.


31. A company currently operates a web application backed by an Amazon RDS MySQL database. It has automated backups that are run daily and are not encrypted. A security audit requires future backups to be encrypted and the unencrypted backups to be destroyed. The company will make at least one encrypted backup before destroying the old backups.
What should be done to enable encryption for future backups?
A. Enable default encryption for the Amazon S3 bucket where backups are stored.
B. Modify the backup section of the database configuration to toggle the Enable encryption check box.
C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot.
D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.
 
32. A company is hosting a website behind multiple Application Load Balancers. The company has different distribution rights for its content around the world. A solutions architect needs to ensure that users are served the correct content without violating distribution rights.
Which configuration should the solutions architect choose to meet these requirements?
A. Configure Amazon CloudFront with AWS WAF.
B. Configure Application Load Balancers with AWS WAF.
C. Configure Amazon Route 53 with a geolocation policy.
D. Configure Amazon Route 53 with a geoproximity routing policy.


33. A solution architect has created a new AWS account and must secure AWS account root user access.
Which combination of actions will accomplish this? (Choose two.)
A. Ensure the root user uses a strong password.
B. Enable multi-factor authentication to the root user.
C. Store root user access keys in an encrypted Amazon S3 bucket.
D. Add the root user to a group containing administrative permissions.
E. Apply the required permissions to the root user with an inline policy document.

34. A solutions architect at an ecommerce company wants to back up application log data to Amazon S3. The solutions architect is unsure how frequently the logs will be accessed or which logs will be accessed the most. The company wants to keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
A. S3 Glacier
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

35.  A company's website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website.
What should a solutions architect do to protect the application?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address. 
B
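As a rough sketch of what answer B involves (WAFv2 syntax; the rule name, IP set name, account ID, and ARN below are placeholders, not from the question), the malicious address goes into an IP set and a blocking rule in the web ACL references it:

{
    "Name": "block-malicious-ip",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {
            "ARN": "arn:aws:wafv2:us-east-1:111122223333:global/ipset/blocked-ips/EXAMPLE-ID"
        }
    },
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "block-malicious-ip"
    }
}

The IP set itself would hold entries such as 203.0.113.10/32, and because the web ACL is associated with the CloudFront distribution, the request is blocked before it ever reaches the ALB. Network ACLs and security groups sit at the VPC layer and are not the right place to filter traffic that arrives via CloudFront.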

36. A solutions architect is designing an application for a two-step order process. The first step is synchronous and must return to the user with little latency. The second step takes longer, so it will be implemented in a separate component. Orders must be processed exactly once and in the order in which they are received.
How should the solutions architect integrate these components?
A. Use Amazon SQS FIFO queues.
B. Use an AWS Lambda function along with Amazon SQS standard queues.
C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic. 
 
 
37. A web application is deployed in the AWS Cloud. It consists of a two-tier architecture that includes a web layer and a database layer. The web server is vulnerable to cross-site scripting (XSS) attacks.
What should a solutions architect do to remediate the vulnerability?
A. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
B. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
C. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
D. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard. 

C
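For reference, a web ACL attached to the ALB in answer C could carry an XSS rule along these lines (again WAFv2 syntax; the rule name and the choice of matching on the request body are illustrative assumptions):

{
    "Name": "block-xss",
    "Priority": 0,
    "Statement": {
        "XssMatchStatement": {
            "FieldToMatch": { "Body": {} },
            "TextTransformations": [
                { "Priority": 0, "Type": "HTML_ENTITY_DECODE" }
            ]
        }
    },
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "block-xss"
    }
}

AWS WAF can be associated with an ALB but not with a Network Load Balancer or Classic Load Balancer, and AWS Shield Standard addresses DDoS rather than XSS, which rules out the other options.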

38. A company's website is using an Amazon RDS MySQL Multi-AZ DB instance for its transactional data storage. There are other internal systems that query this DB instance to fetch data for internal batch processing. The RDS DB instance slows down significantly when the internal systems fetch data. This impacts the website's read and write performance, and users experience slow response times.
Which solution will improve the website's performance?
A. Use an RDS PostgreSQL DB instance instead of a MySQL database.
B. Use Amazon ElastiCache to cache the query responses for the website.
C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
D. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.

39. An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.

B
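The target-tracking configuration behind answer B is small; a sketch of the JSON that would be passed to put-scaling-policy (the 40.0 target comes from the question, the rest is boilerplate):

{
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 40.0
}

Auto Scaling then adds or removes instances on its own to keep average CPU near 40%, which is simpler than hand-tuning thresholds and cooldowns with simple or step scaling.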

40. A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
 
41. A financial services company has a web application that serves users in the United States and Europe. The application consists of a database tier and a web server tier. The database tier consists of a MySQL database hosted in us-east-1. Amazon Route 53 geoproximity routing is used to direct traffic to instances in the closest Region. A performance review of the system reveals that European users are not receiving the same level of query performance as those in the United States.
Which changes should be made to the database tier to improve performance?
A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
B. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions.
C. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance.
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in one of the European Regions.

D

42. A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution.
What should a solutions architect do to accomplish this?
A. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Replicate the S3 bucket to multiple AWS Regions.
B. Copy the website content to an Amazon S3 bucket. Configure the bucket to serve static webpage content. Configure Amazon CloudFront with the S3 bucket as the origin.
C. Copy the website content to an Amazon EBS-backed Amazon EC2 instance running Apache HTTP Server. Configure Amazon Route 53 geolocation routing policies to select the closest origin.
D. Copy the website content to multiple Amazon EBS-backed Amazon EC2 instances running Apache HTTP Server in multiple AWS Regions. Configure Amazon CloudFront geolocation routing policies to select the closest origin.

B
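For answer B, the bucket only needs a static website configuration with CloudFront in front of it; a minimal sketch (the document names are assumptions):

{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "error.html" }
}

CloudFront then uses the bucket as its single origin and caches the content at edge locations worldwide, which is cheaper than replicating the bucket to multiple Regions or running EC2 web servers.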

43. A solutions architect is designing storage for a high performance computing (HPC) environment based on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing.
Which storage option would be the optimal solution?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Lustre
C. Amazon EC2 instance store
D. Amazon EBS Provisioned IOPS SSD (io1)
B

44. A company is performing an AWS Well-Architected Framework review of an existing workload deployed on AWS. The review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services. A solutions architect needs to recommend a new design that would improve the security of the architecture and minimize the administrative demand on IT staff.
What should the solutions architect recommend?
A. Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance.
B. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory.
C. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance.
D. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance's security group to deny public access to Active Directory.

A

45. A company hosts a static website within an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion.
Which action will accomplish this?
A. Enable Amazon S3 versioning.
B. Enable Amazon S3 Intelligent-Tiering.
C. Enable an Amazon S3 lifecycle policy.
D. Enable Amazon S3 cross-Region replication.

A

46. A company's production application runs online transaction processing (OLTP) transactions on an Amazon RDS MySQL DB instance. The company is launching a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application.
How can this be achieved?
A. Create hourly snapshots of the production RDS DB instance.
B. Create a Multi-AZ RDS Read Replica of the production RDS DB instance.
C. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group.
D. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.

B

47. A company runs an application in a branch office within a small data closet with no virtualized compute resources. The application data is stored on an NFS volume. Compliance standards require a daily offsite backup of the NFS volume.
Which solution meets these requirements?
A. Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3.
B. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3.
C. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3.
D. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.

 
B

49. A security team wants to limit access to specific services or actions in all of the team's AWS accounts. All accounts belong to a large organization in AWS Organizations.
The solution must be scalable, and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?
A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or actions.

D
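
A service control policy attached at the organization root (or an OU) looks just like an IAM policy; a sketch that denies two example services (the services listed are placeholders, pick whatever must be blocked):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "dynamodb:*",
                "rds:*"
            ],
            "Resource": "*"
        }
    ]
}

Because the SCP is maintained in one place in AWS Organizations, it applies to every account under the root and continues to apply as new accounts are added.
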
50. A data science team requires storage for nightly log processing. The size and number of logs are unknown, and the logs will persist for 24 hours only.
What is the MOST cost-effective solution?
A. Amazon S3 Glacier
B. Amazon S3 Standard
C. Amazon S3 Intelligent-Tiering
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

51. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

D

52.  A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys.
What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

D
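A sketch of the default-encryption configuration that answer D implies on the bucket (the key ARN is a placeholder). Who can use the key is controlled through that key's KMS key policy, and automatic key rotation can be turned on so the company does not manage rotation itself:

{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
            }
        }
    ]
}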

53. A company is running an ecommerce application on Amazon EC2. The application consists of a stateless web tier that requires a minimum of 10 instances, and a peak of 250 instances to support the application's usage. The application requires 50 instances 80% of the time.
Which solution should be used to minimize costs?
A. Purchase Reserved Instances to cover 250 instances.
B. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances.
C. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances.
D. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.
D

54. A company has deployed an API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a second account in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
A. Configure a VPC peering connection between the two VPCs. Access the API using the private address.
B. Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address.
C. Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address.
D. Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address.
E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address.

AC or AD?

55. A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office's low-bandwidth internet connection.
What is the MOST cost-effective solution?
A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket VPC endpoint.
B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce VPC endpoint.
C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
 
D

B. Create a bucket policy to enforce VPC endpoint.
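
The lifecycle part of answer D is a one-rule configuration on the destination bucket; a sketch (the rule ID is arbitrary, and Days 0 transitions objects as soon as possible):

{
    "Rules": [
        {
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": { "Prefix": "" },
            "Transitions": [
                {
                    "Days": 0,
                    "StorageClass": "GLACIER"
                }
            ]
        }
    ]
}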

56. A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.

BD
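
Both parts of the answer are set through the bucket's versioning configuration; a sketch (note that MFA Delete can only be enabled by the root user via the API or CLI with an MFA code, not from the console):

{
    "Status": "Enabled",
    "MFADelete": "Enabled"
}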

