400+ Practice Questions for the AWS Certified Solutions Architect Associate (SAA-C03) Certification Exam - Part 1
aws
aws certification
cloud computing
solutions architect
saa c03
aws exam
practice test
Question 1
A large engineering company plans to deploy a distributed application with Amazon Aurora as its
database. The database should be restorable with a Recovery Time Objective (RTO) of one minute when
there is service degradation in the primary region, and the restoration should require the least
administrative work.
Which approach should be used to design the Aurora database to meet this cross-region disaster
recovery requirement?
A.
Use Amazon Aurora Global Database and use the secondary region as a failover for service degradation
in the primary region
B.
Use Multi-AZ deployments with Aurora Replicas which will go into failover to one of the Replicas for
service degradation in the primary region
C.
Create DB Snapshots from the existing Amazon Aurora database and save them in the Amazon S3 bucket.
Create a new database instance in a new region using these snapshots when service degradation occurs
in the primary region
D.
Use Amazon Aurora point-in-time recovery to automatically store backups in the Amazon S3 bucket.
Restore a new database instance in a new region when service degradation occurs in the primary
region using these backups
Correct Answer: A
For distributed applications, an Aurora Global Database can be used. With it, Amazon Aurora spans a
single database across multiple regions, enabling fast local reads from each region and quick
cross-region disaster recovery. With a Global Database, failover to the secondary region can be
completed with an RTO of one minute after degradation or a complete failure in the primary region.
- Option B is incorrect as Multi-AZ deployments with Aurora Replicas only protect against a service
impact in one of the Availability Zones. They won't help during a regional outage.
- Option C is incorrect as creating a new DB instance from snapshots involves manual provisioning
work, which would delay service restoration.
- Option D is incorrect as, although backups are automated with Amazon Aurora point-in-time
recovery, restoring to another region would still delay service restoration.
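As an illustration of Option A, the following is a minimal boto3 sketch of promoting the secondary Region of an Aurora Global Database; the cluster identifiers, account ID, and Region are hypothetical placeholders.

```python
# Minimal sketch: cross-Region failover for an Aurora Global Database.
# All identifiers below are hypothetical placeholders.
FAILOVER_PARAMS = {
    "GlobalClusterIdentifier": "orders-global-db",
    "TargetDbClusterIdentifier": (
        "arn:aws:rds:ap-southeast-1:123456789012:cluster:orders-secondary"
    ),
}

def promote_secondary():
    # boto3 is imported lazily so the parameter sketch can be read offline.
    import boto3
    rds = boto3.client("rds", region_name="ap-southeast-1")
    # Promotes the secondary cluster to primary (managed failover).
    return rds.failover_global_cluster(**FAILOVER_PARAMS)
```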
Question 2
You have hosted an application on an EC2 instance in a public subnet of a VPC. For the
application's database layer, you are using an RDS DB instance placed in a private subnet of the
same VPC; it is not publicly accessible. As a best practice, you store the DB credentials in AWS
Secrets Manager instead of hardcoding them in the application code.
The Security team has reviewed the architecture and is concerned that internet connectivity to
AWS Secrets Manager is a security risk. How can you resolve this security concern?
A.
Create an Interface VPC endpoint to establish a private connection between your VPC and Secrets
Manager
B.
Access the credentials from Secrets Manager through a Site-to-Site VPN Connection
C.
Create a Gateway VPC endpoint to establish a private connection between your VPC and Secrets Manager
D.
Access the credentials from Secrets Manager by using a NAT Gateway
Correct Answer: A
- Option A is CORRECT because, as per the documentation, “You can establish a private connection
between your VPC and Secrets Manager by creating an interface VPC endpoint.” That’s how your EC2
instance can fetch the DB Credentials from the Secrets Manager privately. Once it gets the
credentials, it can securely establish the connection with the RDS DB instance.
- Option B is incorrect because a Site-to-Site VPN connection would be more appropriate in case of
hybrid environments where you want to connect AWS and on-premises networks. Here, we just have two
AWS services that need to communicate without the internet.
- Option C is incorrect. Although a VPC endpoint is the right idea, the endpoint type is wrong: a
Gateway endpoint is not available for AWS Secrets Manager (Gateway endpoints support only Amazon S3
and DynamoDB).
- Option D is incorrect because a NAT Gateway gives resources in a private subnet one-way
(outbound-only) access to the internet, so traffic to Secrets Manager would still traverse the
public internet, and the gateway adds cost. Here, the EC2 instance only needs to communicate
securely, without the internet, with AWS Secrets Manager, which resides outside the VPC. That's why
using a VPC endpoint satisfies the requirement.
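To make Option A concrete, here is a hedged boto3 sketch of creating an interface VPC endpoint for Secrets Manager; the VPC, subnet, and security group IDs are hypothetical, and the service name assumes the us-east-1 Region.

```python
# Request parameters for an interface VPC endpoint to Secrets Manager.
# VPC/subnet/security-group IDs are hypothetical placeholders.
ENDPOINT_PARAMS = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc1234",
    "ServiceName": "com.amazonaws.us-east-1.secretsmanager",
    "SubnetIds": ["subnet-0priv1234"],
    "SecurityGroupIds": ["sg-0endpoint1"],
    "PrivateDnsEnabled": True,  # default Secrets Manager DNS name resolves privately
}

def create_endpoint():
    import boto3  # deferred so the sketch can be inspected without AWS access
    ec2 = boto3.client("ec2", region_name="us-east-1")
    return ec2.create_vpc_endpoint(**ENDPOINT_PARAMS)
```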
Question 3
You are the Solutions Architect of an organization that runs 100 EC2 instances in a production
environment. To avoid non-compliance, you must immediately update the packages on all the
production EC2 instances. The DevSecOps team, which is in charge of the security group policies
used by those instances, has disabled SSH access in the security group policy and has refused your
request to enable it.
Which of the following options will help you roll out the package to all the EC2 instances despite
this restriction from the DevSecOps team?
A.
Use AWS Config to roll out the package all at once and install it in EC2 instances
B.
Get the Systems Manager role added to your IAM roles and use Systems Manager Run Command to roll
out the package installation
C.
Get the Systems Manager role added to your IAM roles and use Systems Manager Session Manager to
connect to the EC2 instances from the browser to install the package
D.
Get the user credentials of one of the Security members to SSH into the EC2 instance and proceed
with package installation
Correct Answer: B
- Option A is incorrect because AWS Config is used to monitor configuration changes to your AWS
resources, for example, someone disabling CloudTrail logging or deleting an account from an OU.
AWS Config also simplifies compliance auditing, security analysis, change management, and
operational troubleshooting. It is not a tool for installing packages on EC2 instances.
- Option B is CORRECT because once the user has a suitable IAM role for SSM, AWS Systems Manager
Run Command can be used to remotely run commands, such as package updates, on all the EC2
instances. Run Command can be used from the AWS Management Console, the AWS Command Line Interface
(AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs at no extra cost.
- Option C is incorrect because this method requires you to connect to each EC2 instance
individually, which would take a lot of time and effort to patch all 100 instances. Hence, this
option can be eliminated.
- Option D is incorrect because AWS never encourages sharing credentials; it would be treated as a
security violation.
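A minimal boto3 sketch of Option B follows, targeting instances by tag rather than by ID; the tag key/value, document parameters, and rollout limits are hypothetical.

```python
# Run a package update across many instances with SSM Run Command.
# Tag key/value and rollout limits are hypothetical placeholders.
RUN_COMMAND_PARAMS = {
    "DocumentName": "AWS-RunShellScript",
    "Targets": [{"Key": "tag:Environment", "Values": ["production"]}],
    "Parameters": {"commands": ["sudo yum update -y"]},
    "MaxConcurrency": "10%",  # roll out gradually across the fleet
    "MaxErrors": "1",         # stop early if instances start failing
}

def patch_fleet():
    import boto3  # deferred so the sketch can be read without AWS access
    ssm = boto3.client("ssm")
    return ssm.send_command(**RUN_COMMAND_PARAMS)
```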
Question 4
A finance company is using Amazon S3 to store data for all its customers. During an annual audit,
it was observed that some customers store sensitive data. The Operations Head is looking for an
automated tool to scan all data in the Amazon S3 buckets and create a report of the findings from
every bucket containing sensitive data.
Which solution can be designed to get the required details?
A.
Enable Amazon GuardDuty on the Amazon S3 buckets
B.
Enable Amazon Detective on the Amazon S3 buckets
C.
Enable Amazon Macie on the Amazon S3 buckets
D.
Enable Amazon Inspector on the Amazon S3 buckets
Correct Answer: C
Amazon Macie uses machine learning and pattern matching to discover, monitor, and protect sensitive
data stored in Amazon S3 buckets.
Once Amazon Macie is enabled on Amazon S3, it first generates an inventory of the S3 buckets. It
then automatically discovers and reports any sensitive data stored in a bucket by creating and
running sensitive data discovery jobs, and it provides a detailed report of any findings related to
the sensitive data. These jobs can run at periodic intervals or once, on demand.
- Option A is incorrect as Amazon GuardDuty protects against the use of compromised credentials or
unusual access to Amazon S3. It does not scan data in Amazon S3 buckets to find any sensitive data
stored in it.
- Option B is incorrect as Amazon Detective analyzes and visualizes security data from AWS security
services such as Amazon GuardDuty, Amazon Macie, and AWS Security Hub to find the root cause of
potential security issues. It does not scan data in Amazon S3 buckets to find any sensitive data
stored in it.
- Option D is incorrect as Amazon Inspector evaluates Amazon EC2 instances for software
vulnerabilities and unintended network exposure. It does not scan data in Amazon S3 buckets to find
any sensitive data stored in it.
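As a sketch of how a one-time Macie discovery job could be started with boto3 (the job name, account ID, and bucket name below are hypothetical placeholders):

```python
# Parameters for a one-time Macie sensitive data discovery job.
# Name, account ID, and bucket are hypothetical placeholders.
JOB_PARAMS = {
    "jobType": "ONE_TIME",
    "name": "audit-sensitive-data-scan",
    "s3JobDefinition": {
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["customer-data-bucket"]}
        ]
    },
}

def start_discovery_job():
    import boto3  # deferred so the sketch can be read without AWS access
    macie = boto3.client("macie2")
    return macie.create_classification_job(**JOB_PARAMS)
```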
Question 5
A company wants to build a chatbot to answer customer queries about their products. The chatbot
should be able to understand natural language queries, provide relevant information, and initiate
conversations.
Which of the following AWS services can be used to build this chatbot?
A.
Amazon Rekognition
B.
Amazon Comprehend
C.
Amazon Polly
D.
Amazon Lex
Correct Answer: D
Amazon Lex is specifically designed for building conversational interfaces using voice and text. It
can understand natural language input, process it, and generate appropriate responses. It's the
perfect choice for building a chatbot.
- Option A is incorrect. Amazon Rekognition is for image and video analysis, not suitable for
natural language processing tasks like building a chatbot.
- Option B is incorrect. Amazon Comprehend is for natural language processing tasks like sentiment
analysis and topic modeling. While it can be used for some chatbot functionalities, it's not
designed for building complete conversational interfaces.
- Option C is incorrect. Amazon Polly is for text-to-speech conversion, which can be used for
generating audio responses from the chatbot. However, it's not a complete solution for building a
chatbot.
Question 6
A gaming company stores large volumes (terabytes to petabytes) of clickstream event data in its
central S3 bucket. The company wants to analyze this clickstream data to generate business
insights. Amazon Redshift, hosted securely in a private subnet of a VPC, is used for all data
warehouse and analytics workloads. Using Amazon Redshift, the company wants to explore solutions to
securely run complex analytical queries on the clickstream data stored in S3 without transforming,
copying, or loading the data into Redshift.
As a Solutions Architect, which of the following AWS services would you recommend for this
requirement, knowing that security and cost are two major priorities for the company?
A.
Create a VPC endpoint to establish a secure connection between Amazon Redshift and the S3 central
bucket and use Amazon Athena to run the query
B.
Use NAT Gateway to connect Amazon Redshift to the internet and access the S3 static website. Use
Amazon Redshift Spectrum to run the query
C.
Create a VPC endpoint to establish a secure connection between Amazon Redshift and the S3 central
bucket and use Amazon Redshift Spectrum to run the query
D.
Create Site-to-Site VPN to set up a secure connection between Amazon Redshift and the S3 central
bucket and use Amazon Redshift Spectrum to run the query
Correct Answer: C
- Option A is incorrect because Amazon Athena queries data in S3 directly, which would bypass
Redshift entirely. That is not what the customer wants; they insisted on using Amazon Redshift for
querying.
- Option B is incorrect. Even though it is possible, a NAT Gateway would connect Redshift to the
internet and make the solution less secure. It is also not cost-effective. Remember that both
security and cost are important to the company.
- Option C is CORRECT because VPC Endpoint is a secure and cost-effective way to connect a VPC with
Amazon S3 privately, and the traffic does not pass through the internet. Using Amazon Redshift
Spectrum, one can run queries against the data stored in the S3 bucket without needing the data to
be copied to Amazon Redshift. This meets both the requirements of building a secure yet
cost-effective solution.
- Option D is incorrect because Site-to-Site VPN is used to connect an on-premises data center to
AWS Cloud securely over the internet and is suitable for use cases like Migration, Hybrid Cloud,
etc.
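To illustrate Option C, here is a hedged sketch of defining an external schema for Redshift Spectrum and running a query through the Redshift Data API; the schema, catalog database, IAM role ARN, cluster, and table names are all hypothetical placeholders.

```python
# Define an external schema over the S3 data via the AWS Glue Data Catalog,
# then query it with Redshift Spectrum. Names and ARNs are hypothetical.
CREATE_EXTERNAL_SCHEMA = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS clickstream
FROM DATA CATALOG DATABASE 'clickstream_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';
"""

SPECTRUM_QUERY = """
SELECT page, COUNT(*) AS views
FROM clickstream.events
GROUP BY page ORDER BY views DESC LIMIT 10;
"""

def run(sql):
    import boto3  # deferred so the sketch can be read without AWS access
    rsd = boto3.client("redshift-data")
    return rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )
```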
Question 7
A drug research team in a Medical Company has decided to use Amazon Elastic File System (EFS) as
shared file system storage for their Linux workloads. All these files are related to new drug
discoveries in the field of cancer treatment and are critically important for the next six months.
The customer is looking for a solution to protect the data by backing up the EFS file system and
simplifying the creation, migration, restoration, and deletion of backups while providing improved
reporting and auditing.
As a Solution Architect, what would be your suggestions for a centralized and easy-to-develop backup
strategy for the above requirement?
A.
Use Amazon S3 File Gateway to back up the Amazon EFS file system
B.
Use AWS Backup to back up the Amazon EFS file systems
C.
Use Amazon FSx File Gateway to back up the Amazon EFS file systems
D.
Use Amazon S3 Transfer Acceleration to copy the files from EFS into a centralized S3 bucket and then
configure Cross-Region Replication of the bucket
Correct Answer: B
- Option A is incorrect because Amazon S3 File Gateway presents a file interface that enables you to
store files as objects in Amazon S3 using the industry-standard NFS and SMB file protocols. It is
mainly used to back up on-premises files to AWS. This cannot be used to back up the EFS File
system.
- Option B is CORRECT because “AWS Backup is a simple and cost-effective way to protect your data by
backing up your Amazon EFS file systems. Amazon EFS is natively integrated with AWS Backup. You
can
use the EFS console, API, and AWS Command Line Interface (AWS CLI) to enable automatic backups for
your file system.”
- Option C is incorrect because Amazon FSx File Gateway provides fast, low-latency on-premises
access
to fully managed, highly reliable, and scalable file shares in the cloud using the
industry-standard
SMB protocol. It is not suitable for EFS File System backup.
- Option D is incorrect because Amazon S3 Transfer Acceleration speeds up content transfers to and
from Amazon S3 over the internet for web and mobile application users. It is not a suitable
solution because the team is looking for a backup solution for the EFS file system; this option is
just a distractor.
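A minimal boto3 sketch of Option B: create an AWS Backup plan with a daily rule and assign the EFS file system to it. The plan name, schedule, and six-month retention are hypothetical choices matched to the scenario.

```python
# AWS Backup plan for the EFS file system. Names and schedule are hypothetical.
BACKUP_PLAN = {
    "BackupPlanName": "efs-drug-research-plan",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 180},      # keep for the six-month window
    }],
}

def create_plan_and_assign(efs_file_system_arn, iam_role_arn):
    import boto3  # deferred so the sketch can be read without AWS access
    backup = boto3.client("backup")
    plan = backup.create_backup_plan(BackupPlan=BACKUP_PLAN)
    # Assign the EFS file system to the plan.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "efs-volumes",
            "IamRoleArn": iam_role_arn,
            "Resources": [efs_file_system_arn],
        },
    )
    return plan
```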
Question 8
You own a microservices application that exhibits poor latency when running in an ECS cluster.
Which AWS service can help you analyze the root cause by tracing the different calls made within
the application?
A.
Amazon CloudWatch
B.
AWS X-Ray
C.
Amazon EventBridge
D.
AWS CloudTrail
Correct Answer: B
AWS X-Ray is a service that collects data about requests that your application serves and provides
tools that you can use to view, filter, and gain insights into that data to identify issues and
opportunities for optimization.
- Option A is incorrect because Amazon CloudWatch can show your application's logs and monitoring
dashboards, but it cannot trace individual request calls.
- Option B is CORRECT because you can analyze different request calls happening in your application
with AWS X-Ray.
- Option C is incorrect because Amazon EventBridge is for routing service events, not for tracing
or monitoring applications.
- Option D is incorrect because AWS CloudTrail is an auditing solution. You can check the API calls
made in your account, but it cannot provide traces from within your application.
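A brief sketch of how a service in the ECS cluster might be instrumented with the X-Ray SDK for Python (`aws_xray_sdk`); the service name is a hypothetical placeholder, and the SDK import is deferred so the sketch can be read without the library installed.

```python
# Instrumenting a Python service with the AWS X-Ray SDK.
# The service name is a hypothetical placeholder.
XRAY_SERVICE_NAME = "orders-service"

def instrument():
    # Imported lazily so this sketch can be read without the SDK installed.
    from aws_xray_sdk.core import xray_recorder, patch_all
    xray_recorder.configure(service=XRAY_SERVICE_NAME)
    # Auto-patches supported libraries (boto3, requests, ...) so their calls
    # are recorded as subsegments in the service's traces.
    patch_all()
    return xray_recorder
```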
Question 9
An IT company is using EBS volumes for storing project-related work. Some of these projects are now
closed. Per regulatory guidelines, the data for these projects must be stored long-term and will be
rarely accessed. The operations team is looking for options to store the snapshots created from the
EBS volumes. The solution should be cost-effective and require the least admin work.
What solution can be designed for storing data from EBS volumes?
A.
Create EBS Snapshots from the volumes and store them in the EBS Snapshots Archive
B.
Use Lambda functions to store incremental EBS snapshots to Amazon S3 Glacier
C.
Create EBS Snapshots from the volumes and store them in a third-party low-cost, long-term storage
D.
Create EBS Snapshots from the volumes and store them in the EBS standard tier
Correct Answer: A
Amazon EBS has a new storage tier named Amazon EBS Snapshots Archive for storing snapshots that are
accessed rarely and stored for long periods.
By default, snapshots created from Amazon EBS volumes are stored in Amazon EBS Snapshot standard
tier. These are incremental snapshots. When EBS snapshots are archived, incremental snapshots are
converted to full snapshots.
These snapshots are stored in the EBS Snapshots Archive instead of the standard tier. Storing
snapshots in the EBS Snapshots archive costs much less than storing snapshots in the standard tier.
EBS snapshot archive helps store snapshots for long durations for governance or compliance
requirements, which will be rarely accessed.
- Option B is incorrect as it will require additional work for creating an AWS Lambda function. EBS
Snapshots archive is a more efficient way of storing snapshots for the long term.
- Option C is incorrect as using third-party storage will incur additional costs.
- Option D is incorrect as all EBS snapshots are stored in a standard tier by default. Storing
snapshots that will be rarely accessed in the standard tier will be costlier than storing in the
EBS
snapshots archive.
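Moving an existing snapshot to the archive tier (Option A) is a single API call; a hedged boto3 sketch follows, with a placeholder snapshot ID.

```python
# Move an EBS snapshot from the standard tier to the archive tier.
# The snapshot ID is a hypothetical placeholder.
SNAPSHOT_ID = "snap-0abc1234def567890"

def archive_snapshot():
    import boto3  # deferred so the sketch can be read without AWS access
    ec2 = boto3.client("ec2")
    # Archiving converts the incremental snapshot to a full snapshot
    # and stores it in the lower-cost archive tier.
    return ec2.modify_snapshot_tier(SnapshotId=SNAPSHOT_ID, StorageTier="archive")
```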
Question 10
You are working as a solutions architect at an e-commerce company with users from around the globe.
Users from various countries asked to have the website content in their local languages, so the
company has translated the website into multiple languages and is rolling out the feature soon.
Now you need to route traffic based on the location of the user. For example, a request from Japan
should be routed to the server in the ap-northeast-1 (Tokyo) region, where the application is in
Japanese. You can do so by specifying the IP address of that particular server while configuring
the records in Route 53. Which one of the following routing policies should you use in Amazon
Route 53 to fulfill this requirement?
A.
Weighted Routing Policy
B.
Geoproximity Routing Policy
C.
Geolocation Routing Policy
D.
Multivalue Answer Routing Policy
Correct Answer: C
- Option A is incorrect. A Weighted Routing Policy is used when you have a requirement to specify
the
percentages of traffic to be routed to the underlying servers. For example, 10% traffic to Test
Server A, 10% Traffic to Test Server B, and 80% Traffic to the Production server.
- Option B is incorrect. The Geoproximity routing policy would apply if the requirement were to
route requests based on the locations of both the users and the servers, for example, routing each
user's request to the resource at the least distance from them. This policy also lets you
optionally shift more or less traffic to a given resource by specifying a value known as a bias,
which shrinks or expands the geographic region from which traffic is routed to that resource. For
example, you could route traffic from four or five cities to resources in one region and traffic
from a few other cities to resources in a second region.
- Option C is CORRECT. Geolocation routing routes requests based on the location of the users, even
if the chosen resource is farther away than another one. That is exactly what we want here:
requests from a given location should be routed only to the specific server or set of servers
hosting the correctly translated website.
- Option D is incorrect. Multivalue answer routing routes DNS queries randomly across the
underlying servers, considering up to eight healthy records. In this question, the requirement is
to route DNS queries to a specific server, which is why this option can be eliminated.
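A boto3 sketch of a geolocation record routing Japanese users to the Tokyo server (Option C); the domain, set identifier, and IP address are hypothetical placeholders.

```python
# Geolocation record: route queries from Japan to the Tokyo server.
# Domain, identifier, and IP are hypothetical placeholders.
GEO_RECORD_CHANGE = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "japan-users",       # required for geolocation records
            "GeoLocation": {"CountryCode": "JP"},
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],  # Tokyo server IP
        },
    }]
}

def route_japan_traffic(hosted_zone_id):
    import boto3  # deferred so the sketch can be read without AWS access
    r53 = boto3.client("route53")
    return r53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id, ChangeBatch=GEO_RECORD_CHANGE
    )
```

A default record (no `GeoLocation` match) is usually added as well so users outside the listed countries still get an answer.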
Question 11
You have built a serverless architecture composed of Lambda Functions exposed through API Gateway
for one of your client’s applications. For the database layer, you have used DynamoDB. Your team
lead has reviewed the architecture and is concerned about the cost of numerous API Calls being made
to the backend (Lambda Functions) for so many similar requests. Also, the client is concerned about
providing as low latency as possible for the application users’ requests. You have to look for a
solution where the latency and overall cost can be reduced for the current architecture without much
effort.
A.
Cache the computed request’s responses using the CloudFront CDN caching
B.
Use the API Gateway QuickResponse feature to reduce the latency and number of calls to the backend
C.
Enable API Gateway Caching to cache the computed request’s responses
D.
Adjust API Gateway Throttling settings to reduce the latency and number of calls to the backend
Correct Answer: C
- Option A is incorrect. CloudFront CDN caching is mainly used for caching the responses to a
limited set of HTTP request types, while API Gateway can cache the responses to requests of any
type. Also, configuring CloudFront would be an added effort, and you have to select a solution with
minimal effort. Therefore, this option can be eliminated.
- Option B is incorrect because there is no feature like API Gateway QuickResponse. This option is
just a distractor and thus, can be eliminated.
- Option C is CORRECT. When you enable caching on API Gateway, it caches the responses of the
requests
processed by the backend. When a similar request comes, it will serve it quickly from the API
Gateway Cache itself instead of passing the call to the backend. This will reduce the number of
calls made to the backend, eventually reducing the overall cost. Also, this will help in reducing
the latency of the responses sent to the user.
- Option D is incorrect. Throttling limits the number of requests that can pass through API Gateway
at a time; it is configured to protect your APIs from being overwhelmed by too many requests. It
won't satisfy the requirement of reducing latency, so this option can be eliminated.
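Caching is enabled per stage via patch operations on the API Gateway stage (Option C); a hedged boto3 sketch follows. The cache size, resource path (`/products`), and TTL are hypothetical choices.

```python
# Enable stage-level caching on a REST API. Cache size, path, and TTL
# are hypothetical placeholders.
CACHE_PATCH_OPS = [
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # GB
    # TTL for cached responses on a specific method; '/' in the resource
    # path is URL-encoded as '~1'.
    {"op": "replace", "path": "/~1products/GET/caching/ttlInSeconds", "value": "300"},
]

def enable_stage_cache(rest_api_id, stage_name):
    import boto3  # deferred so the sketch can be read without AWS access
    apigw = boto3.client("apigateway")
    return apigw.update_stage(
        restApiId=rest_api_id, stageName=stage_name, patchOperations=CACHE_PATCH_OPS
    )
```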
Question 12
A customer has an instance hosted in the public subnet of the default VPC. The subnet has the
default settings for the Network Access Control List. An IT Administrator needs to be provided SSH
access to the underlying instance. How could this be accomplished?
A.
Ensure the Network Access Control Lists allow Inbound SSH traffic from the IT Administrator’s
Workstation.
B.
Ensure the Network Access Control Lists allow Outbound SSH traffic from the IT Administrator’s
Workstation.
C.
Ensure that the Security group allows Inbound SSH traffic from the IT Administrator’s Workstation.
D.
Ensure that the Security group allows Outbound SSH traffic from the IT Administrator’s Workstation.
Correct Answer: C
Since the IT administrator needs SSH access to the instance, the traffic is inbound to the
instance.
Since security groups are stateful, we do not have to configure an outbound rule: return traffic
for an allowed inbound connection is automatically allowed out.
Options A and B are incorrect because the default network ACL is configured to allow all traffic in
and out of the subnets it is associated with.
- Option D is incorrect because security groups are stateful; we do not have to configure outbound
traffic.
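Option C amounts to a single inbound rule on the security group; a minimal boto3 sketch follows, with the administrator's workstation IP as a hypothetical placeholder.

```python
# Inbound SSH rule for the instance's security group.
# The workstation IP is a hypothetical placeholder; /32 limits it to one host.
SSH_RULE = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "IT admin workstation"}],
}

def allow_admin_ssh(security_group_id):
    import boto3  # deferred so the sketch can be read without AWS access
    ec2 = boto3.client("ec2")
    return ec2.authorize_security_group_ingress(
        GroupId=security_group_id, IpPermissions=[SSH_RULE]
    )
```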
Question 13
A new VPC with CIDR range 10.10.0.0/16 has been set up with a public and a private subnet. An
Internet Gateway and a custom route table have been created, and a route has been added with the
‘Destination’ as ‘0.0.0.0/0’ and the ‘Target’ as the Internet Gateway (igw-id). A new Linux EC2
instance has been launched in the public subnet with the auto-assign public IP option enabled, but
the connection fails when trying to SSH into the machine. What could be the reason?
A.
Elastic IP is not assigned.
B.
The NACL of the public subnet disallows the SSH traffic.
C.
A public IP address is not assigned.
D.
The Security group of the instance disallows the egress traffic on port 80.
Correct Answer: B
- Option A is incorrect. An Elastic IP address is a public IPv4 address with which you can mask the
failure of an instance or software by rapidly remapping the address to another instance in your
account.
If your instance does not have a public IPv4 address, you can associate an Elastic IP address with
your instance to enable communication with the internet; for example, to connect to your instance
from your local computer.
From our problem statement, EC2 is launched with Auto-assign public IP enabled. So, since public IP
is available, Elastic IP is not necessary to connect from the internet.
- Option C is incorrect because the problem statement clearly states that the EC2 instance was
launched with Auto-assign Public IP enabled, so this option cannot be true.
- Option B is CORRECT: the NACL may not allow ingress SSH traffic, which would cause the connection
to fail.
- Option D is incorrect because SSH uses port 22, not port 80. Also, security groups are stateful;
in this scenario, the issue would be disallowed ingress SSH traffic rather than egress.
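Because NACLs are stateless, fixing Option B requires both an inbound SSH rule and an outbound rule for the ephemeral return ports. A hedged boto3 sketch (rule numbers and the admin CIDR are hypothetical):

```python
def allow_ssh_through_nacl(network_acl_id, admin_cidr="203.0.113.25/32"):
    """Add stateless NACL rules for SSH. IDs/CIDR are hypothetical placeholders."""
    import boto3  # deferred so the sketch can be read without AWS access
    ec2 = boto3.client("ec2")
    # Inbound: SSH from the administrator's address.
    ec2.create_network_acl_entry(
        NetworkAclId=network_acl_id, RuleNumber=100, Protocol="6",  # 6 = TCP
        RuleAction="allow", Egress=False, CidrBlock=admin_cidr,
        PortRange={"From": 22, "To": 22},
    )
    # NACLs are stateless, so return traffic on ephemeral ports needs its own rule.
    ec2.create_network_acl_entry(
        NetworkAclId=network_acl_id, RuleNumber=100, Protocol="6",
        RuleAction="allow", Egress=True, CidrBlock=admin_cidr,
        PortRange={"From": 1024, "To": 65535},
    )
```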
Question 14
You are designing a website for a company that streams anime videos. You serve this content through
CloudFront. The company has implemented a section for premium subscribers. This section contains
more videos than the free section. You want to ensure that only premium subscribers can access this
premium section. How can you achieve this easily?
A.
Using bucket policies.
B.
Requiring HTTPS for communication between users and CloudFront.
Question 15
You currently have EC2 instances running in multiple Availability Zones in an AWS region. You need
to create NAT gateways so that your private instances can access the internet. How would you set up
the NAT gateways so that they are highly available?
A.
Create two NAT Gateways and place them behind an ELB.
B.
Create a NAT Gateway in each Availability Zone.
C.
Create a NAT Gateway in another region.
D.
Use Auto Scaling groups to scale the NAT Gateways.
Correct Answer: B
- Option A is incorrect because an ELB cannot distribute traffic to NAT Gateways; such a configuration is not supported.
- Option B is CORRECT because this is the configuration recommended by AWS. With it, if one NAT
gateway's Availability Zone goes down, resources in the other Availability Zones can still access
the internet.
- Option C is incorrect because the EC2 instances are in one AWS region so there is no need to
create
a NAT Gateway in another region.
- Option D is incorrect because you cannot create an Auto Scaling group for NAT Gateways.
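A sketch of Option B with boto3: one NAT Gateway per Availability Zone, each with its own Elastic IP. The AZ-to-subnet mapping is a hypothetical placeholder, and each private subnet's route table would then point at the gateway in its own AZ.

```python
# One NAT Gateway per AZ for high availability.
# Public subnet IDs are hypothetical placeholders.
PUBLIC_SUBNETS_BY_AZ = {
    "us-east-1a": "subnet-0pub1111",
    "us-east-1b": "subnet-0pub2222",
}

def create_nat_per_az():
    import boto3  # deferred so the sketch can be read without AWS access
    ec2 = boto3.client("ec2")
    gateways = {}
    for az, subnet_id in PUBLIC_SUBNETS_BY_AZ.items():
        # Each NAT gateway needs its own Elastic IP.
        eip = ec2.allocate_address(Domain="vpc")
        resp = ec2.create_nat_gateway(
            SubnetId=subnet_id, AllocationId=eip["AllocationId"]
        )
        gateways[az] = resp["NatGateway"]["NatGatewayId"]
    return gateways
```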
Question 16
You are a Solutions Architect in a startup company that is releasing the first iteration of its app.
Your company doesn’t have a directory service for its intended users but wants the users to sign in
and use the app. Which of the following solutions is the most cost-efficient?
A.
Create an IAM role for each end user; the user will assume the IAM role when they sign in to the app.
B.
Create an AWS user account for each customer.
C.
Invest heavily in Microsoft Active Directory as it’s the industry standard.
D.
Use Cognito Identity along with a User Pool to securely save users’ profile attributes.
Correct Answer: D
- Option A is incorrect. It is improper to assign an IAM role to each end user; IAM roles are not a
directory service.
- Option B is incorrect. AWS account cannot be configured as a directory service.
- Option C is incorrect. This isn't the most cost-efficient way to authenticate users and save
their information.
- Option D is correct. Cognito is a managed service that can be used for this app and scale quickly
as
usage grows.
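A minimal boto3 sketch of Option D: create a Cognito user pool and an app client for sign-in. The pool and client names are hypothetical placeholders.

```python
def create_sign_in_backend():
    """Create a Cognito user pool + app client. Names are hypothetical."""
    import boto3  # deferred so the sketch can be read without AWS access
    idp = boto3.client("cognito-idp")
    pool = idp.create_user_pool(
        PoolName="app-users",
        AutoVerifiedAttributes=["email"],  # verify users by email
    )
    client = idp.create_user_pool_client(
        UserPoolId=pool["UserPool"]["Id"],
        ClientName="mobile-app",
        GenerateSecret=False,  # public (mobile/web) clients don't use a secret
    )
    return pool, client
```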
Question 17
A financial institution wants to improve its customer service by automating certain tasks and
providing a more personalized experience. They are considering using AWS services to achieve this
goal.
Which of the following AWS services can be used to enhance customer service in a financial
institution?
(Select all that apply)
A.
Amazon Polly
B.
Amazon Fraud Detector
C.
Amazon Kendra
D.
Amazon Lex
E.
Amazon Textract
Correct Answers: A and D
Amazon Polly can be used to generate natural-sounding speech, which can be used for automated
customer service calls, IVR systems, or to read out text-based information to visually impaired
customers.
Amazon Lex can be used to build conversational interfaces, such as chatbots and voice assistants.
These can be used to answer customer queries, provide product information, or help with
account-related tasks.
- Option B is incorrect. Amazon Fraud Detector is used to detect fraudulent activities, such as
credit card fraud or identity theft. While it can help protect the financial institution, it is not
directly related to enhancing customer service.
- Option C is incorrect. Amazon Kendra is used to find information within a company's internal
knowledge base. While it can improve customer service by surfacing accurate and up-to-date
information, it is not directly related to automating customer interactions or providing a
personalized experience.
- Option E is incorrect. Amazon Textract is used to extract text and data from scanned documents.
It is not designed for automating customer conversations or building chatbots.
Question 18
A website is hosted on two EC2 instances that sit behind an Elastic Load Balancer. The website's
response time has slowed drastically, and fewer orders are being placed by customers due to the
wait time. Troubleshooting showed that one of the EC2 instances had failed and only one instance is
now running. What is the best course of action to prevent this from happening in the future?
A.
Change the instance size to the maximum available to compensate for the failure.
B.
Use CloudWatch to monitor the VPC Flow Logs for the VPC the instances are deployed in.
C.
Configure the ELB to perform health checks on the EC2 instances and implement auto-scaling.
D.
Replicate the existing configuration in several regions for failover.
Correct Answer: C
- Option C is correct. Elastic Load Balancer health checks determine whether to remove a
non-performing or underperforming instance, and the Auto Scaling group then launches a replacement
instance.
- Option A is incorrect. Increasing the instance size doesn’t prevent the failure of one or both
instances. Therefore the website can still become slow or unavailable.
- Option B is incorrect. Monitoring the VPC flow logs for the VPC will capture the VPC traffic, not
the traffic for the EC2 instance. You would need to create a flow log for a network
interface.
- Option D is incorrect. Replicating the same two instance deployment may not prevent instances of
failure and could still result in the website becoming slow or unavailable.
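Option C can be sketched with boto3 as an Auto Scaling group wired to the load balancer's target group and using ELB health checks; the group name, launch template, target group ARN, and subnets are hypothetical placeholders.

```python
# Self-healing Auto Scaling group behind an ELB. All names/ARNs are hypothetical.
ASG_PARAMS = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2, "MaxSize": 4, "DesiredCapacity": 2,
    "LaunchTemplate": {"LaunchTemplateName": "web-lt", "Version": "$Latest"},
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-tg/abcdef1234567890"
    ],
    "HealthCheckType": "ELB",       # replace instances the load balancer marks unhealthy
    "HealthCheckGracePeriod": 300,  # seconds to let a new instance boot first
    "VPCZoneIdentifier": "subnet-0a1,subnet-0b2",
}

def create_self_healing_group():
    import boto3  # deferred so the sketch can be read without AWS access
    asg = boto3.client("autoscaling")
    return asg.create_auto_scaling_group(**ASG_PARAMS)
```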
Question 19
You work in the media industry and have deployed a web application on a large EC2 instance where
users can upload photos to your website. This web application must be able to call the S3 API to
function properly. Where would you store your API credentials while maintaining the maximum level of
security?
A.
Save the API credentials to your PHP files.
B.
Don’t save your API credentials. Instead, create an IAM role and assign that role to an EC2
instance.
C.
Save your API credentials in a public Github repository.
D.
Pass API credentials to the instance using instance user data
Correct Answer: B
IAM roles are designed so that your applications can securely make API requests from your instances,
without requiring you to manage the security credentials that the applications use. Instead of
creating and distributing your AWS credentials, you can delegate permission to make API requests
using IAM roles as follows:
1. Create an IAM role.
2. Define which accounts or AWS services can assume the role.
3. Define which API actions and resources the application can use after assuming the role.
4. Specify the role when you launch your instance, or attach the role to an existing instance.
5. Have the application retrieve a set of temporary credentials and use them.
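A minimal sketch of the second step: the trust policy that allows EC2 to assume the role. The policy document below is standard IAM JSON; the role name in the comments is hypothetical.

```python
import json

# Trust policy allowing the EC2 service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# With boto3 this would be (role name is hypothetical):
#   iam.create_role(RoleName="photo-upload-role",
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
# On the instance, boto3.client("s3") then finds the role's temporary
# credentials via the instance metadata service -- no keys in code.
print(json.dumps(trust_policy, indent=2))
```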
Question 20
A company has a media processing application deployed in a local data center. Its file storage is
built on a Microsoft Windows file server. The application and file server need to be migrated to
AWS. You want to set up the file server in AWS quickly. The application code should continue working
to access the file systems. Which method should you choose to create the file server?
A.
Create a Windows File Server from Amazon WorkSpaces.
B.
Configure a high performance Windows File System in Amazon EFS.
C.
Create FSx for Windows File Server.
D.
Configure a secure enterprise storage through Amazon WorkDocs.
Correct Answer– C
In this question, a Windows file server is required in AWS, and the application should continue to
work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully
native Windows file system.
- Option A is incorrect because Amazon WorkSpaces provisions virtual desktops, which are not
required in this question. Only a Windows file server is needed.
- Option B is incorrect because EFS cannot be used to configure a Windows file server.
- Option C is CORRECT because Amazon FSx provides fully managed Microsoft Windows file servers.
Check the reference
in https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html.
- Option D is incorrect because Amazon WorkDocs is a file-sharing service in AWS. It cannot
provide a native Windows file system.
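As a rough illustration of Option C, an FSx for Windows File Server file system might be created with parameters like these (a sketch only; the capacity, throughput, subnet ID, and directory ID are hypothetical placeholders for boto3's `create_file_system` call):

```python
# Sketch of boto3-style parameters for fsx.create_file_system().
# All identifiers and sizes below are hypothetical assumptions.
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 300,                # GiB (hypothetical size)
    "StorageType": "SSD",
    "SubnetIds": ["subnet-0example00000"], # hypothetical subnet ID
    "WindowsConfiguration": {
        "ThroughputCapacity": 32,          # MB/s (hypothetical)
        "ActiveDirectoryId": "d-9example0", # hypothetical AWS Managed AD
    },
}
```

Because FSx exposes a native SMB share, the migrated application can keep using its existing Windows file paths unchanged.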
Question 21
You have an application hosted in an Auto Scaling group, and an application load balancer
distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate
CPU utilization of the Auto Scaling group to be 60 percent. The capacity of the Auto Scaling group
should increase or decrease based on this target value. Which type of scaling policy is this?
A.
Target tracking scaling policy.
B.
Step scaling policy.
C.
Simple scaling policy.
D.
Scheduled scaling policy.
Correct Answer – A
In an ASG, you can add a target tracking scaling policy that adjusts capacity to keep a specified
metric at a target value. For the different scaling policy types, please
check https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
- Option A is CORRECT: Because a target tracking scaling policy can be applied to check the
ASGAverageCPUUtilization metric according
to https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html.
- Option B is incorrect: Because Step scaling adjusts the capacity based on step adjustments instead
of a target.
- Option C is incorrect: Because Simple scaling changes the capacity based on a single
adjustment.
- Option D is incorrect: With Scheduled scaling, the capacity is adjusted based on a schedule rather
than a target.
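As a concrete illustration of Option A, the policy from the question could be expressed with boto3-style parameters roughly as follows (the ASG and policy names are hypothetical; the dict would be passed to `put_scaling_policy`):

```python
# Sketch of a target tracking policy keeping average aggregate CPU at
# 60 percent. The ASG and policy names are hypothetical assumptions.
scaling_policy_params = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "cpu-60-target",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            # Predefined metric for the group's average CPU utilization
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # the 60 percent target from the question
    },
}
```

With this in place, the ASG adds capacity when average CPU rises above 60 percent and removes capacity when it falls below, with no step adjustments or schedules to maintain.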
Question 22
A multinational logistics company is looking to modernize its tracking and auditing system for
packages and shipments. They require a solution that provides immutable transaction history,
real-time visibility into data changes, and seamless integration with their existing AWS
infrastructure. Which AWS service would be most suitable for their use case?
A.
Amazon Neptune
B.
Amazon Quantum Ledger Database (Amazon QLDB)
C.
Amazon ElastiCache
D.
Amazon DynamoDB
Correct Answer: B
Amazon Quantum Ledger Database (Amazon QLDB) is the most suitable option for the logistics company's
requirements:
Immutable Transaction History: QLDB provides a fully managed, serverless, and scalable ledger
database service designed specifically to maintain a complete and verifiable history of all changes
to application data.
Real-time Visibility into Data Changes: QLDB maintains a journal of every committed transaction, so
applications gain real-time visibility into data changes alongside the immutable history.
- Option A is incorrect because Amazon Neptune is a fully managed graph database service designed
for storing and querying highly connected data, such as social networks or recommendation engines.
It does not provide the built-in features necessary for maintaining an immutable transaction
history.
- Option C is incorrect because Amazon ElastiCache is a fully managed in-memory caching service that
provides high-performance, scalable caching solutions. However, it is not designed to maintain
immutable transaction history or provide real-time visibility into data changes.
- Option D is incorrect because Amazon DynamoDB is a fully managed NoSQL database service designed
for
high-performance, low-latency applications. It does not provide built-in features for maintaining
immutable transaction history or real-time visibility into data changes. While DynamoDB Streams
can
capture changes to data, it does not offer the same level of immutability and verifiability as
Amazon QLDB.
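As a sketch of how QLDB exposes that immutable history, the statements below use PartiQL, QLDB's query language. The table and field names are hypothetical, while `history()` is QLDB's built-in function for reading every revision of a document:

```python
# Hypothetical PartiQL statements for a shipment-tracking ledger.
# Table and field names are made-up examples.
insert_stmt = "INSERT INTO Shipments VALUE {'ShipmentId': 'S-1001', 'Status': 'IN_TRANSIT'}"
update_stmt = "UPDATE Shipments SET Status = 'DELIVERED' WHERE ShipmentId = 'S-1001'"

# After the update, the prior revision is not lost: every revision
# remains queryable and cryptographically verifiable via history().
history_query = "SELECT * FROM history(Shipments) WHERE metadata.id = ?"
```

Auditors can thus replay the full lifecycle of any package without the application having to build its own audit trail.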
Question 23
The customer data of an application is stored in an S3 bucket. Your team would like to use Amazon
Athena to analyze the data using standard SQL. However, the data in the S3 bucket is encrypted via
SSE-KMS. How would you create the table in Athena for the encrypted data in S3?
A.
You need to provide the private KMS key to Athena.
B.
Athena decrypts the data automatically, and you do not need to provide key information.
C.
You need to convert SSE-KMS to SSE-S3 before creating the table in Athena.
D.
You need to disable the server-side encryption in S3 before creating the Athena table
Correct Answer – B
- Option A is incorrect because, for SSE-KMS, Athena can determine the proper materials to
decrypt the dataset when creating the table. You do not need to provide the key information to
Athena.
- Option B is CORRECT because Athena can create the table for the S3 data encrypted by SSE-KMS.
- Options C and D are incorrect because these steps are not required. Athena can create tables for
datasets encrypted by SSE-KMS.
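To make Option B concrete, here is a sketch of the DDL you might run in Athena over the encrypted bucket (the bucket name and columns are hypothetical). Note that nothing in the statement references a KMS key:

```python
# Hypothetical Athena DDL over an SSE-KMS-encrypted S3 location.
# Bucket name and columns are assumptions; no KMS key information
# appears anywhere -- Athena resolves decryption itself.
create_table_ddl = """
CREATE EXTERNAL TABLE customers (
    customer_id string,
    signup_date string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-customer-bucket/data/'
"""

# With boto3 this would be submitted via:
#   athena.start_query_execution(QueryString=create_table_ddl, ...)
```

The calling identity does need `kms:Decrypt` permission on the key in its IAM policy, but the table definition itself stays key-agnostic.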
Question 24
You create several SQS queues to store different types of customer requests. Each SQS queue has a
backend node that pulls messages for processing. Now you need a service to collect messages from the
frontend and push them to the related queues using the publish/subscribe model. Which service would
you choose?
A.
Amazon MQ
B.
Amazon Simple Notification Service (SNS)
C.
Amazon Simple Queue Service (SQS)
D.
AWS Step Functions
Correct Answer – B
AWS SNS can push notifications to the related SQS endpoints. SNS uses a publish/subscribe model that
provides instant event notifications for applications.
- Option A is incorrect: Amazon MQ is a managed message broker service, which is not suitable for
this
scenario.
- Option B is CORRECT: Because SNS uses Pub/Sub messaging to provide asynchronous event
notifications.
Please check https://aws.amazon.com/pub-sub-messaging/.
- Option C is incorrect: Because SQS does not use the publish/subscribe model.
- Option D is incorrect: AWS Step Functions coordinate application components using visual
workflows.
The service should not be used in this scenario.
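The fan-out wiring in Option B can be sketched as follows. The topic name, account ID, and queue names are hypothetical; with boto3, each dict would be passed to SNS `subscribe`:

```python
# Sketch of SNS fan-out to multiple SQS queues. The ARNs below are
# hypothetical placeholders.
topic_arn = "arn:aws:sns:us-east-1:123456789012:customer-requests"

# One subscription per request type; messages published to the topic
# are pushed to every subscribed queue.
subscriptions = [
    {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": f"arn:aws:sqs:us-east-1:123456789012:{queue}",
    }
    for queue in ("orders-queue", "returns-queue", "support-queue")
]
```

Each queue also needs an access policy allowing the topic to send messages to it; the backend nodes then keep polling their queues exactly as before.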
Question 25
You have a requirement to get a snapshot of the current configuration of resources in your AWS
Account. Which service can be used for this purpose?
A.
AWS CodeDeploy
B.
AWS Trusted Advisor
C.
AWS Config
D.
AWS IAM
Correct Answer - C
The AWS documentation notes that with AWS Config, you can do the following:
Evaluate your AWS resource configurations for desired settings.
Get a snapshot of the current configurations of the supported resources that are associated with
your AWS account.
Retrieve configurations of one or more resources that exist in your account.
Retrieve historical configurations of one or more resources.
Receive a notification whenever a resource is created, modified, or deleted.
View relationships between resources. For example, you might want to find all resources that use a
particular security group.
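The snapshot and history capabilities above map to two AWS Config API calls. A minimal sketch of the request parameters, assuming a delivery channel named `default` and a hypothetical security group ID:

```python
# Request parameters for two AWS Config operations (sketch only; the
# delivery channel name and resource ID below are assumptions).
# With boto3: config = boto3.client("config")

# config.deliver_config_snapshot(**snapshot_request) writes a snapshot
# of the current resource configurations to the channel's S3 bucket.
snapshot_request = {"deliveryChannelName": "default"}

# config.get_resource_config_history(**history_request) retrieves the
# historical configurations of a single resource.
history_request = {
    "resourceType": "AWS::EC2::SecurityGroup",
    "resourceId": "sg-0123456789abcdef0",  # hypothetical ID
}
```

Both calls require that a configuration recorder and delivery channel are already set up in the account.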