400+ Câu hỏi luyện thi chứng chỉ AWS Certified Solutions Architect Associate (SAA-C03) - Phần 6

aws
aws certification
cloud computing
solutions architect
saa c03
aws exam
practice test

Question 1

A manufacturing firm has a large number of smart devices installed in various locations worldwide. Hourly logs from these devices are stored in an Amazon S3 bucket. Management wants comprehensive dashboards that show device usage and forecast usage trends.

Which tool is best suited to build the required dashboards?

A. Use S3 as a source for Amazon QuickSight and create dashboards for usage and forecast trends

B. Use S3 as a source for Amazon Redshift and create dashboards for usage and forecast trends

C. Copy data from Amazon S3 to Amazon DynamoDB. Use Amazon DynamoDB as a source for Amazon QuickSight and create dashboards for usage and forecast trends

D. Copy data from Amazon S3 to Amazon RDS. Use Amazon RDS as a source for Amazon QuickSight and create dashboards for usage and forecast trends

Question 2

A company has launched Amazon EC2 instances in an Auto Scaling group to deploy a web application. The Operations Team wants to capture custom metrics for this application from all the instances and view them as aggregated metrics for the Auto Scaling group as a whole. What configuration can be implemented to get the metrics as required?

A. Use Amazon CloudWatch metrics with detailed monitoring enabled and send them to the CloudWatch console, where all the metrics for an Auto Scaling group are aggregated by default.

B. Install a unified CloudWatch agent on all Amazon EC2 instances in the Auto Scaling group and use "aggregation_dimensions" in the agent configuration file to aggregate metrics for all instances

C. Install a unified CloudWatch agent on all Amazon EC2 instances in the Auto Scaling group and use "append-config" in the agent configuration file to aggregate metrics for all instances

D. Use Amazon CloudWatch metrics with detailed monitoring enabled and create a single dashboard to display metrics from all the instances
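For context on the agent settings the options refer to, a unified CloudWatch agent configuration file can roll custom metrics up to the Auto Scaling group level with `aggregation_dimensions`. This is a minimal sketch; the namespace and the collected metric are illustrative placeholders:

```json
{
  "metrics": {
    "namespace": "MyAppMetrics",
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}"
    },
    "aggregation_dimensions": [["AutoScalingGroupName"]],
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
```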

Question 3

A critical web application is deployed on multiple Amazon EC2 instances that are part of an Auto Scaling group. One of the Amazon EC2 instances in the group needs a software upgrade. The Operations Team is looking for your advice on performing this upgrade without impacting the other instances in the group. After the upgrade, the same instance should remain part of the Auto Scaling group.

What steps can be initiated to complete this upgrade?

A. Hibernate the instance and perform the upgrade in offline mode. After the upgrade, start the instance, which will be part of the same Auto Scaling group

B. Use cooldown timers to perform the upgrade on the instance. After the cooldown timers expire, the instance will be part of the same Auto Scaling group

C. Put the instance in Standby mode. After the upgrade, move the instance back to InService mode. It will be part of the same Auto Scaling group

D. Use lifecycle hooks to perform the upgrade on the instance. Once the hook timers expire, the instance will be part of the same Auto Scaling group

Question 4

An eCommerce website is hosted on two EC2 instances that sit behind an Elastic Load Balancer. The response time of the website has slowed dramatically. The analytics team found that one of the EC2 instances failed, leaving only one instance running. What is the best course of action to prevent this from happening in the future?

A. Change the instance size to the maximum available to compensate for the failure.

B. Use CloudWatch to monitor the instances via VPC Flow Logs.

C. Configure the ELB to perform health checks on the EC2 instances and implement auto-scaling.

D. Replicate the existing configuration in several regions for failover

Question 5

A third-party vendor based in an on-premises location needs to have temporary connectivity to database servers launched in a single Amazon VPC. The proposed connectivity for these few users should be secure, and access should be provided only to authenticated users.

Which connectivity option can be deployed for this requirement in the most cost-effective way?

A. Deploy an AWS Client VPN from the third-party vendor’s client machines to access databases in the Amazon VPC

B. Deploy AWS Direct Connect connectivity from the on-premises network to AWS

C. Deploy an AWS Managed VPN connectivity to a Virtual Private gateway from an on-premises network

D. Deploy an AWS Managed VPN connectivity to the AWS Transit gateway from the on-premises network

Question 6

Which of the following are the main functions of Amazon Route 53?

A. Register domain names

B. Route internet traffic to the resources for your domain

C. Load-balance traffic among geographically-distributed web servers.

D. Check the health of your resources

E. All of the above

Question 7

A company runs a microservices-based online shopping application on Amazon ECS. For different services, multiple tasks are launched on container instances using the EC2 launch type. The security team wants specific security controls for the tasks in the containers, along with granular network monitoring of each task using various tools.

What networking mode can be configured with Amazon ECS to meet this requirement?

A. Use host networking mode for Amazon ECS tasks

B. By default, an elastic network interface (ENI) with a primary private IP address is assigned to each task

C. Use awsvpc networking mode for Amazon ECS tasks

D. Use bridge networking mode for Amazon ECS tasks
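For reference, the networking mode named in the options is set per ECS task definition. A minimal, hypothetical task definition fragment using `awsvpc` mode, which attaches a dedicated elastic network interface to each task:

```json
{
  "family": "shop-service",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

The family name and container image here are placeholders, not values from the question.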

Question 8

A start-up firm is planning to deploy container-based applications using Amazon ECS. The firm wants the lowest latency from on-premises networks to the workloads in the containers. The proposed solution should be scalable and should support consistently high CPU and memory requirements. What deployment can be implemented for this purpose?

A. Create a Fargate launch type with Amazon ECS and deploy it in the AWS Outpost

B. Create a Fargate launch type with Amazon ECS and deploy it in the AWS Local Zone

C. Create an EC2 launch type with Amazon ECS and deploy it in the AWS Local Zone

D. Create an EC2 launch type with Amazon ECS and deploy it in the AWS Outpost

Question 9

A new application is deployed on an Amazon EC2 instance launched in a private subnet of an Amazon VPC. This application will fetch data from Amazon S3 as well as from Amazon DynamoDB. The communication between the Amazon EC2 instance and Amazon S3, as well as with Amazon DynamoDB, should be secure and should not traverse internet links. The connectivity should also support accessing data in Amazon S3 from an on-premises network in the future.

What design can be implemented to have secure connectivity?

A. Access Amazon DynamoDB from an instance in a private subnet using a gateway endpoint. Access Amazon S3 from an instance in a private subnet using an interface endpoint

B. Access Amazon S3 and Amazon DynamoDB from an instance in a private subnet using a private NAT gateway

C. Access Amazon S3 and Amazon DynamoDB from an instance in a private subnet using a public NAT gateway

D. Access Amazon S3 and Amazon DynamoDB from an instance in a private subnet using a gateway endpoint

Question 10

A static website named ‘whizexample’ is hosted in an Amazon S3 bucket. JavaScript on the web pages stored in the bucket needs to make authenticated GET requests to the bucket using its Amazon S3 API endpoint, example.s3.us-west-1.amazonaws.com.

What additional configuration will be required for allowing this access?

A. Create CORS configuration with Access-Control-Request-Header as GET using JSON and add CORS configuration to the bucket from the S3 console

B. Create CORS configuration with Access-Control-Request-Method as GET using JSON and add CORS configuration to the bucket from the S3 console

C. Create CORS configuration with Access-Control-Request-Method as GET using XML and add CORS configuration to the bucket from the S3 console

D. Create CORS configuration with Access-Control-Request-Header as GET using XML and add CORS configuration to the bucket from the S3 console
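For context, an S3 CORS configuration is a small document attached to the bucket from the S3 console. A minimal sketch allowing cross-origin GET requests (the origin URL below is an assumed placeholder for the website endpoint, not a value given in the question):

```json
[
  {
    "AllowedOrigins": ["http://whizexample.s3-website-us-west-1.amazonaws.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```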

Question 11

The HR Team uses Amazon S3 buckets to store employee details. Some users in the Development team have an IAM policy granting full access to S3 buckets. The HR head wants strict access control on the HR bucket to ensure that only legitimate members of the HR team can access it. The policy should also apply when new users are created with the IAM policy granting full access to S3 buckets.

What access control can be created for this purpose with the least administrative work?

A. Create an S3 bucket policy for the HR bucket with explicit ‘deny all’ to Principal elements (users and roles) other than users who require access to the Amazon S3 bucket

B. Create an S3 bucket policy for the HR bucket with explicit ‘deny all’ to Principal (only users) other than users who require access to the Amazon S3 bucket

C. Create an S3 bucket policy for the HR bucket restricting access only to the roles. Create a role with access permissions to the Amazon S3 bucket. HR Team users who require access to the bucket can assume this role

D. Create an S3 bucket policy for the HR bucket with an explicit deny and a NotPrincipal element (users and roles) that matches all users who require access, thereby denying all other users’ access to the bucket
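As background on the policy constructs named in the options, a bucket policy can deny everyone except a listed set of identities by combining an explicit `Deny` with `NotPrincipal`. A sketch with placeholder account ID, user, and bucket names:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/hr-admin",
          "arn:aws:iam::111122223333:root"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::hr-bucket",
        "arn:aws:s3:::hr-bucket/*"
      ]
    }
  ]
}
```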

Question 12

An online retail store uses Amazon Redshift as its data warehousing service, which analyzes petabyte-scale data. The Operations Head is concerned about the performance of the clusters and requires near real-time data that displays performance metrics every minute.

What actions can be initiated to get this monitoring data?

A. Create custom performance queries and view them in the Amazon CloudWatch console

B. Create custom performance queries and view them in the Amazon Trusted Advisor console

C. Create custom performance queries and view them in the Amazon CloudTrail console

D. Create custom performance queries and view them in the Amazon Redshift console

Question 13

You have created a VPC with an application load balancer and selected two EC2 instances as targets. However, when you are trying to make a request to the internet-facing load balancer, the request fails. What could be the reason?

A. The route table associated with the load balancer’s subnet does not have a route to the internet gateway.

B. Target EC2 instances are in a public subnet.

C. There is no elastic IP address attached to the load balancer.

D. Cross-zone load balancing is not enabled.

Question 14

An IT firm is planning to deploy microservices applications on Amazon ECS. The Project Team expects occasional bursts in application usage. The Amazon ECS clusters should scale to meet these bursts without any manual intervention. The relational database for this application should automatically scale to the application demand without any need to manage underlying instances.

What design can be recommended to have a scalable application?

A. Create Amazon ECS clusters with the Fargate launch type. For the database, use Amazon DynamoDB

B. Create Amazon ECS clusters with the Fargate launch type. For the database, use Amazon Aurora Serverless

C. Create Amazon ECS clusters with the EC2 launch type. For the database, use Amazon Aurora Serverless

D. Create Amazon ECS clusters with the EC2 launch type. For the database, use Amazon DynamoDB

Question 15

A start-up firm has deployed multiple Amazon EC2 instances for its web application. The operations team wants to retrieve the AMI ID of all these running instances and is seeking your help with the correct URL for this purpose.

Which URL can be used to get this detail?

A. Use http://169.254.169.254/latest/meta-data/ami-id

B. Use http://168.254.168.254/latest/metadata/ami-id

C. Use http://169.254.169.254/latest/user-data/ami-id

D. Use http://168.253.168.253/latest/dynamic/ami-id

Question 16

You are the Solutions Architect for a health insurance service company that wants to start building and supporting IoT devices for patients who recently signed new clauses in their contracts. This opens opportunities to expand its market but also introduces some restrictions. All services and data involved have to guarantee HIPAA compliance and include in-transit encryption. Because the data is highly sensitive, bypassing the public internet is crucial. The architecture already uses ELBs. The company wants to avoid DNS re-configuration and IP address caching issues on the IoT devices. Which combination of services is the closest option to address this challenge?

A. AWS ELBs, AWS IoT, and Amazon Route53 configured with geolocation or latency routing policies. This requires an interface endpoint (PrivateLink) for Route53 to stay inside the AWS backbone.

B. AWS ELBs, AWS IoT, and AWS Global Accelerator which provisions two anycast static IP addresses.

C. AWS ELBs, AWS IoT, and Amazon Route53 configured with geolocation or latency routing policies. This does not require an interface endpoint (PrivateLink) because Route53 is inside the AWS backbone.

D. AWS ELBs, AWS IoT, and AWS Global Accelerator which provisions one anycast static IP address.

Question 17

The fraud detection department in a financial analytics company using Amazon Web Services with recommended configurations needs to transfer data from their POSIX-compliant Lustre file system to an Amazon S3 bucket. In this context, which of the statements below are correct? (Select TWO)

A. AWS DataSync provides support for Amazon FSx for the Lustre file system

B. Amazon FSx for Lustre integrates natively with Amazon S3.

C. Amazon FSx for Windows File Server takes highly durable backups stored in S3.

D. If you link your Amazon FSx for Lustre to an Amazon S3 data lake, your content will appear as objects as soon as the attached block-storage is available.

Question 18

As a Solutions Architect working with the systems and networking teams, you have been tasked with providing disaster recovery (DR) and high-availability elements for a security planning document that involves FSx for Windows. Which of the following statements are true about Amazon FSx for Windows File Server? (Select TWO)

A. Amazon FSx for Windows File Server offers instant regional failover, fault-isolating design, and automatic traffic routing across multiple applications, multiple VPCs, accounts, or Regions.

B. Amazon FSx for Windows File Server allows you to access file systems from multiple Amazon Virtual Private Clouds (VPCs), AWS accounts, and AWS Regions via VPC Peering or AWS Transit Gateway.

C. Amazon FSx for Windows File Server offers single-AZ and multi-AZ deployment options with SSD and HDD storage options.

D. Direct Connect, VPN, VPC Peering, and AWS Transit Gateway are not supported.

E. Amazon FSx for Windows File Server is a fully POSIX-compliant filesystem.

Question 19

While analyzing billions of web pages in a company, you have noticed that some munging processes are exceeding SLAs even after using X1e instances suitable for HPC applications. After closely monitoring logs, trails, and traces, you noticed that write operations involving S3 content pre-processing are causing 80% of the bottleneck, while read operations for post-processing and LIST operations together account for the remaining 20% of the congestion. Which two options are recommended to increase performance in this scenario? (Select TWO)

A. Migrate S3 files to an RDS database with write-optimized IOPS.

B. Using Amazon S3 Transfer Acceleration, multipart upload, parallelized reading via byte-range fetches, and partitioned prefixes for distributing key names as part of naming patterns.

C. Instead of LIST operations, you can scale storage connections horizontally, since Amazon S3 is a very large distributed system rather than a single network endpoint. You can achieve the best performance by issuing multiple concurrent requests to Amazon S3.

D. Instead of LIST operations, you can build a search catalog to keep track of S3 metadata by using other AWS services like Amazon DynamoDB or Amazon OpenSearch Service.

E. Combining Amazon S3 and Amazon EC2 instances by migrating your current buckets to the region where the instances are available. This will help to reduce network latency and data transfer costs.

Question 20

A multinational logistics company is looking to modernize its tracking and auditing system for packages and shipments. They require a solution that provides immutable transaction history, real-time visibility into data changes, and seamless integration with their existing AWS infrastructure. Which AWS service would be most suitable for their use case?

A. Amazon Neptune

B. Amazon Quantum Ledger Database (Amazon QLDB)

C. Amazon ElastiCache

D. Amazon DynamoDB

Question 21

You are analyzing dynamic scaling activity defined to process messages from an Amazon SQS queue. You are using a target tracking policy. CloudWatch alarms are meant to invoke the scaling policy. However, you have noticed that the EC2 Auto Scaling group does not seem to be responding to a CloudWatch alarm. Which option may cause it?

A. Wrong CloudWatch metric is configured in the CloudWatch alarm.

B. The Auto Scaling group is under the cooldown period.

C. The minimum number of instances in the ASG is 0.

D. The desired number of instances in the ASG is 0.

Question 22

A project you are part of as a Solutions Architect has a latency-sensitive workload that prioritizes performance over availability. You have to build high-performance networking for their VPCs to meet SLAs already signed as part of the contract. Which two statements in this context are correct? (Select TWO)

A. A cluster placement group is suitable for this use case as long as the instances remain within a single Availability Zone

B. A cluster placement group is appropriate for this use case because of a higher per-flow throughput limit of up to 10 Gbps except for the traffic over an AWS Direct Connect connection to on-premises resources.

C. You can move an instance from one placement group to another, but you cannot combine placement groups.

D. You can migrate an instance from one placement group to another and merge placement groups.

E. A spread placement group is appropriate for this use case but if you start or launch an instance in the group and there is insufficient unique hardware, the request fails and you have to try again.

Question 23

You are building a solution for a security company that requires real-time monitoring and analysis of surveillance camera footage for detecting suspicious activities and intrusions. Which AWS service do you suggest for securely handling the visual data stream originating from multiple cameras dispersed across various locations?

A. Implement Amazon Elastic Transcoder for securely facilitating real-time detection of suspicious activities and intrusions

B. Enable Amazon Rekognition Video for analyzing the video streams and to detect objects, activities suitable for real-time video analysis and content moderation

C. Use Amazon SageMaker to trigger real-time analysis and alerts based on video stream data

D. Implement Amazon Kinesis Video Streams for securely facilitating real-time detection of suspicious activities and intrusions

Question 24

A web application uses Amazon CloudFront to deliver its static and dynamic web content. You want to customize the error page returned to the viewer when there is a problem with the origin server. For example, if the origin server returns a 500 Internal Server Error, CloudFront should present a page that you have prepared. How would you achieve this requirement in CloudFront?

A. Modify the application to store custom error pages. CloudFront can cache these error pages automatically.

B. Create a new CloudFront distribution to fetch error pages. Configure the original CloudFront to use the new one as its custom error responses.

C. Put the static error pages in an S3 bucket. Create custom error responses for the HTTP 5xx status code in the CloudFront distribution.

D. Upload custom error pages to the CloudFront distribution. Return the error pages when there is a server error.
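For reference, custom error responses are part of a CloudFront distribution's configuration. A sketch of the relevant fragment mapping an origin 500 to a prepared page (the response page path and caching TTL are placeholders):

```json
{
  "CustomErrorResponses": {
    "Quantity": 1,
    "Items": [
      {
        "ErrorCode": 500,
        "ResponsePagePath": "/errors/500.html",
        "ResponseCode": "500",
        "ErrorCachingMinTTL": 60
      }
    ]
  }
}
```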

Question 25

A CloudFront distribution delivers the web content of an application. Some static files are cached in the CloudFront distribution, and the TTL is set to 1 day. You upgrade the origin server, and a configuration JSON file is updated. However, when users access the website, the old file cached in CloudFront is returned, and some services are impacted. How would you resolve this problem?

A. Wait for a day and the file will be updated automatically.

B. Invalidate the object in CloudFront so that the object is removed from the CloudFront edge cache.

C. Modify the default TTL to be 0 in the CloudFront cache setting.

D. Upgrade the origin application again and add a cache-control header to inform CloudFront to remove the JSON file from its cache.

Question 26

You plan to migrate a Classic Load Balancer to an Application Load Balancer or a Network Load Balancer. For the new load balancer listener, you need to define two rules that route requests based on the host name in the host header. Then the traffic will be routed to two different target groups accordingly. How would you configure the new load balancer?

A. Create a Network Load Balancer and select the HTTP protocol in the listener. Configure the host-based routing in the listener.

B. Launch a new Network Load Balancer and choose the TCP protocol in its listener. Route the traffic based on the path.

C. Set up an Application Load Balancer and configure several rules in the listener to perform path-based routing.

D. Use an Application Load Balancer and configure host-based routing in the listener rule.

Question 27

Your team uses Amazon ECS to manage containers for several microservices. To save cost, multiple ECS tasks should run on a single container instance. When a task is launched, the host port should be dynamically chosen from the container instance’s ephemeral port range. The ECS service should use a load balancer that supports dynamic port mapping. Which types of load balancers are appropriate?

A. Application Load Balancer or Network Load Balancer.

B. Application Load Balancer only.

C. Network Load Balancer only.

D. Application Load Balancer or Classic Load Balancer.

Question 28

You plan to manage API keys in AWS Secrets Manager. The keys need to be automatically rotated to be compliant with the company policy. From Secrets Manager, applications can get the latest version of the API credentials. How would you implement the rotation of keys?

A. Use AWS Parameter Store to store and rotate the keys as Secrets Manager does not support it.

B. Directly add multiple keys in Secrets Manager for rotation and the keys will be rotated every year automatically.

C. Customize the Lambda function that performs the rotation of secrets in Secrets Manager.

D. Create two secrets in Secrets Manager to store two versions of the API credentials. Modify the application to get one of them.

Question 29

Your organization starts to store RDS credentials in AWS Secrets Manager. To be compliant with security regulations, all secrets stored in the Secrets Manager should automatically rotate. If rotation is not enabled for a secret, your team should get an email notification. Which method is the most appropriate?

A. Configure AWS Secrets Manager to enable the rotation for all existing and new secrets.

B. Create a CloudWatch Event rule that matches all events in Secrets Manager. Register an SNS topic as its target to provide notifications.

C. Enable Amazon GuardDuty that monitors services including Secrets Manager.

D. Add the rule “secretsmanager-rotation-enabled-check” in AWS Config to check whether AWS Secrets Manager has enabled the secret rotation. Register an SNS topic to provide notifications.

Question 30

The customer data of an application is stored in an S3 bucket. Your team would like to use Amazon Athena to analyze the data using standard SQL. However, the data in the S3 bucket is encrypted via SSE-KMS. How would you create the table in Athena for the encrypted data in S3?

A. You need to provide the private KMS key to Athena.

B. Athena decrypts the data automatically, and you do not need to provide key information.

C. You need to convert SSE-KMS to SSE-S3 before creating the table in Athena.

D. You need to disable the server-side encryption in S3 before creating the Athena table.

Question 31

Your organization stores customer data in an Amazon DynamoDB table. You need to use AWS Glue to create the ETL (extract, transform, and load) jobs to build the data warehouse. In AWS Glue, you need a component to determine the schema from DynamoDB, and populate the AWS Glue Data Catalog with metadata. Which of the following components should be used to implement it?

A. Table in AWS Glue.

B. Table in Amazon Athena.

C. Crawler in AWS Glue.

D. Classifier in AWS Glue

Question 32

You work in a start-up company as an AWS Solutions Architect. You have created a new AWS Organization that includes a large number of AWS accounts. You want a tool that triggers a notification whenever an administrator performs an action in the Organization. Which of the following AWS services would you use?

A. Amazon EventBridge

B. AWS Config Resources.

C. AWS CloudTrail.

D. AWS CloudWatch Logs.

Question 33

IoT sensors monitor the number of bags that are handled at an airport. The data is sent back to a Kinesis stream with default settings. Every alternate day, the data from the stream is sent to S3 for processing. However, it has been noticed that S3 is not receiving all of the data being sent to the Kinesis stream. What could be the reason for this?

A. The sensors probably stopped working on some days, hence data was not sent to the stream.

B. S3 can only store data for a day.

C. The default retention period of the data stream is set to 24 hours only, and hence the failure.

D. Kinesis streams are not meant to handle IoT-related data.

Question 34

Your company uses an S3 bucket to store data for an application. Sometimes the team also downloads the S3 files for further analysis. As the data is very important, you need to protect against accidental deletions initiated by someone or an application and restore the files when needed. Which of the following options is appropriate?

A. Enable the versioning feature in the S3 bucket.

B. Modify the S3 bucket to be read-only.

C. Use an S3 Lifecycle policy to transfer objects to a lower cost storage.

D. Enable the Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Question 35

You are working as an AWS Administrator for a software firm with a popular web application hosted on EC2 instances in various regions. You are using AWS CloudHSM to offload SSL/TLS processing from the web servers. Since this is a critical application for the firm, you need to ensure that proper backups of the data in AWS CloudHSM are performed daily. What does AWS CloudHSM use to perform a secure and durable backup?

A. An Ephemeral Backup Key (EBK) is used to encrypt the data, and a Persistent Backup Key (PBK) is used to encrypt the EBK before saving the data to an Amazon S3 bucket in the same region as the AWS CloudHSM cluster.

B. A Data Key is used to encrypt the data, and a Customer Managed Key (CMK) is used to encrypt the Data Key before saving the data to an Amazon S3 bucket in the same region as the AWS CloudHSM cluster.

C. An Ephemeral Backup Key (EBK) is used to encrypt the data, and a Persistent Backup Key (PBK) is used to encrypt the EBK before saving the data to an Amazon S3 bucket in a different region than the AWS CloudHSM cluster.

D. A Data Key is used to encrypt the data, and a Customer Managed Key (CMK) is used to encrypt the Data Key before saving the data to an Amazon S3 bucket in a different region than the AWS CloudHSM cluster.

Question 36

A company is planning on moving its applications to the AWS Cloud. They have some large SQL data sets that need to be hosted in a data store on the cloud. The data store needs to have features that support client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools. Which of the following services should be considered for this requirement?

A. Amazon DynamoDB

B. Amazon Redshift

C. Amazon Kinesis

D. Amazon Simple Queue Service

Question 37

You are developing an application that uses the Amazon Kinesis Producer Library (KPL) to put data records to an encrypted Kinesis data stream. However, when your application runs, there is an unauthorized KMS master key permission error. How would you resolve the problem?

A. Configure the application’s IAM role as the key administrator of the KMS key.

B. In the KMS key policy, grant the application permission to access the key.

C. Re-encrypt the Kinesis data stream with AWS/kinesis.

D. Configure the KPL not to encrypt the data records for the Kinesis data stream
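For context on the error in this question, a producer writing to an SSE-enabled stream needs `kms:GenerateDataKey` on the key. A sketch of a key policy statement granting that permission (the account ID and role name are assumed placeholders):

```json
{
  "Sid": "AllowKinesisProducer",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/kpl-producer-role"
  },
  "Action": "kms:GenerateDataKey",
  "Resource": "*"
}
```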

Question 38

Your company plans to host its development, test, and production applications on EC2 Instances in AWS. The team is worried about how access control would be granted to the relevant IT Admins for each of the above environments. As an architect, what would you suggest to manage the relevant access?

A. Add tags to the instances marking each environment and then segregate access using IAM Policies.

B. Add Userdata to the underlying instances to mark each environment.

C. Add Metadata to the underlying instances to mark each environment.

D. Add each environment to a separate Auto Scaling Group.

Question 39

You have a web application that processes customer orders. The frontend application forwards the order messages to an SQS queue. The backend contains an Elastic Load Balancer and an Auto Scaling group. You want the ASG to auto-scale depending on the queue size. Which of the following CloudWatch metrics would you choose to discover the SQS queue length?

A. ApproximateNumberOfMessagesVisible

B. NumberOfMessagesReceived

C. NumberOfMessagesDeleted

D. ApproximateNumberOfMessagesNotVisible
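For reference, an SQS queue-depth metric can drive a target tracking policy through a customized metric specification. A sketch of such a configuration (the queue name and target value are placeholders, not values from the question):

```json
{
  "TargetTrackingConfiguration": {
    "CustomizedMetricSpecification": {
      "MetricName": "ApproximateNumberOfMessagesVisible",
      "Namespace": "AWS/SQS",
      "Dimensions": [
        { "Name": "QueueName", "Value": "order-queue" }
      ],
      "Statistic": "Average"
    },
    "TargetValue": 100
  }
}
```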

Question 40

You are performing a Load Testing exercise on your application that is hosted on AWS. While testing your Amazon RDS MySQL DB Instance, you notice that your application becomes non-responsive when you reach 100% CPU utilization due to large read-heavy workloads. Which methods would help scale your data tier to meet the application’s needs? (Select TWO)

A. Add Amazon RDS Read Replicas, and have your application direct read queries to them.

B. Add your Amazon RDS DB instance to storage Auto Scaling, and set your desired maximum storage limit.

C. Use an Amazon SQS queue to throttle data going to the Amazon RDS Instance.

D. Use ElastiCache to cache common queries of your Amazon RDS DB.

E. Enable Multi-AZ for your Amazon RDS Instance.

Question 41

You create several SQS queues to store different types of customer requests. Each SQS queue has a backend node that pulls messages for processing. Now you need a service to collect messages from the frontend and push them to the related queues using the publish/subscribe model. Which service would you choose?

A. Amazon MQ

B. Amazon Simple Notification Service (SNS)

C. Amazon Simple Queue Service (SQS)

D. AWS Step Functions

Question 42

You need a new S3 bucket to store objects using the write-once-read-many (WORM) model. After objects are saved in the bucket, they cannot be deleted or overwritten for a fixed amount of time. Which option would you select to achieve this requirement?

A. Enable the Amazon S3 object lock when creating the S3 bucket.

B. Enable versioning for the S3 bucket.

C. Modify the S3 bucket policy to only allow the read operation.

D. Enable the WORM model in the S3 Access Control List (ACL) configuration.
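As background on the WORM model discussed here, S3 Object Lock must be enabled when the bucket is created, after which a default retention rule can be applied. A sketch of an object lock configuration (the mode and retention period are illustrative placeholders):

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 90
    }
  }
}
```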

Question 43

You are building a Lambda function that runs periodically and processes the data stored in a DynamoDB table. As the data in the DynamoDB table grows quickly, you need to estimate the cost of the Lambda function. Which of the following factors directly determine the monthly cost of Lambda? (Select TWO)

A. The programming language used by the Lambda function.

B. The Lambda function handler.

C. The memory allocated to the Lambda function.

D. The timeout value of the Lambda function.

E. The total number of requests for the Lambda function.
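
As a worked example of why options C and E are the cost drivers, Lambda bills per request plus per GB-second (allocated memory times billed duration). The unit prices below are assumptions based on published us-east-1 pricing and may differ by region or account.

```python
# Back-of-the-envelope Lambda cost: requests + GB-seconds.

PRICE_PER_REQUEST = 0.20 / 1_000_000     # USD per request (assumed pricing)
PRICE_PER_GB_SECOND = 0.0000166667       # USD per GB-second (assumed pricing)

def monthly_cost(requests: int, avg_duration_ms: int, memory_mb: int) -> float:
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 3M invocations/month, 200 ms average duration, 512 MB allocated:
print(round(monthly_cost(3_000_000, 200, 512), 2))   # 5.6
```

The programming language, handler name, and timeout setting never appear in the formula, which is why they are distractors.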

Question 44

You have a lifecycle rule for an S3 bucket that archives objects to the S3 Glacier storage class 60 days after creation. The archived objects are no longer needed one year after being created. How would you configure the S3 bucket to save more cost?

A. Configure a rule in S3 Glacier to place delete markers for objects that are one year old.

B. Configure the S3 lifecycle rule to expire the objects after 365 days from object creation.

C. Modify the S3 lifecycle rule to clean up expired object delete markers for one year old objects.

D. Modify the S3 lifecycle rule to use S3 Glacier Deep Archive which automatically deletes objects one year after creation.
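
Option B corresponds to a single lifecycle rule combining a transition and an expiration, sketched here as a boto3-style dict. The rule ID and prefix are illustrative.

```python
# Lifecycle rule sketch: archive at day 60, delete at day 365.

lifecycle_rule = {
    "ID": "archive-then-expire",
    "Filter": {"Prefix": "logs/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 60, "StorageClass": "GLACIER"},   # existing archival step
    ],
    "Expiration": {"Days": 365},   # S3 deletes the object, even once archived
}
```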

Question 45

You are working for an electrical appliance company that has a web application hosted in AWS. This is a two-tier web application with web servers hosted in a VPC and in an on-premises data center. You are using a Network Load Balancer in the front end to distribute traffic between these servers, with targets configured by instance ID. Some clients are complaining about delays in accessing this website.

To troubleshoot this issue, you are looking for a list of client IP addresses with longer TLS handshake times. You have enabled access logging on the Network Load Balancer, with logs saved in Amazon S3 buckets. Which tool could be used to quickly analyze a large number of log files, without any visualization, in a cost-effective way?

A. Use Amazon Athena to query logs saved in Amazon S3 buckets.

B. Use Amazon S3 console to process logs.

C. Export Network Load Balancer access logs to third-party application.

D. Use Amazon Athena along with Amazon QuickSight to query logs saved in Amazon S3 buckets.
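
A sketch of the Athena query behind option A, assuming an external table (here named `nlb_logs`) has already been created over the access logs in S3. NLB access logs do include a `tls_handshake_time` field; the table name, column names, and the threshold below are otherwise assumptions that depend on how your table DDL parsed the log format.

```python
# Hypothetical Athena SQL to find clients with the slowest TLS handshakes.

query = """
SELECT client_ip, tls_handshake_time
FROM nlb_logs
WHERE tls_handshake_time > 0.5
ORDER BY tls_handshake_time DESC
LIMIT 100;
"""
print(query.strip().splitlines()[0])   # SELECT client_ip, tls_handshake_time
```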

Question 46

You are requested to guide a large Pharma company. They are looking for a solution to store all their R&D test analysis data securely. A large number of reports are generated daily, and this data will be accessed from multiple R&D centers spread across the globe. The company requires this data to be instantaneously available to all users. Which of the following is the most suitable AWS storage option to provide low-latency access to users across the globe at the least cost?

A. Use Amazon EC2 instance with instance store to store data.

B. Use Amazon EFS volumes to store data.

C. Use Amazon EBS volumes connected to the EC2 instance to store data

D. Use Amazon S3 Standard storage class from Amazon S3 to store data.

Question 47

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings. The IT Security department has identified a DoS attack from a suspect IP. How would you protect the subnets from this attack?

A. Change the Inbound Security Groups to deny access from the suspect IP.

B. Change the Outbound Security Groups to deny access from the suspect IP.

C. Change the Inbound NACL to deny access from the suspect IP.

D. Change the Outbound NACL to deny access from the suspect IP.
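
Option C maps to a single NACL entry, sketched below as boto3-style parameters for `create_network_acl_entry` (shown as a plain dict, not a live call). The ACL ID and suspect IP are hypothetical; security groups cannot express this because they have no deny rules.

```python
# Inbound DENY entry, numbered below the default allow rules so it wins.

deny_entry = {
    "NetworkAclId": "acl-0123456789abcdef0",
    "RuleNumber": 50,                 # NACL rules are evaluated lowest-first
    "Protocol": "-1",                 # all protocols
    "RuleAction": "deny",
    "Egress": False,                  # inbound rule
    "CidrBlock": "203.0.113.25/32",   # the suspect IP
}
```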

Question 48

Your company has a set of 100 servers hosted on the AWS Cloud. There is a need to stream the Logs from the Instances for analysis purposes. From a security compliance perspective, additional logic will be executed to analyze the data for any sort of abnormal behaviour. Which of the following would be used to stream the log data?

A. Amazon CloudFront

B. Amazon SQS

C. Amazon Kinesis Data Streams (KDS)

D. Amazon SES (Simple Email Service)

Question 49

A startup company wants to launch an online learning portal on AWS using CloudFront and S3. They have different subscription models: one where all members have access to basic content, and another where the company provides premium content that includes access to multiple private files without changing their current links.

How should a Solution Architect design this solution to meet the requirements?

A. Design the learning portal using CloudFront web distribution to deliver the premium private content using Signed Cookies.

B. Design the learning portal using CloudFront web distribution to deliver the premium private content using Signed URLs.

C. Design the learning portal using CloudFront web distribution to deliver the premium private content using S3 pre-signed URLs.

D. Design the learning portal using CloudFront web distribution to deliver the premium private content using CloudFront geographic restrictions feature.
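
Signed cookies (option A) are built from a CloudFront policy document like the one below; a wildcard resource is what lets one set of cookies cover many private files without changing their links. The domain and expiry timestamp are hypothetical, and the actual signing step (signing this JSON with a CloudFront key pair) is omitted.

```python
# Custom policy sketch for CloudFront signed cookies over a path pattern.

import json

policy = {
    "Statement": [{
        "Resource": "https://d111111abcdef8.cloudfront.net/premium/*",
        "Condition": {
            "DateLessThan": {"AWS:EpochTime": 1767225600}   # access cutoff
        },
    }]
}
policy_json = json.dumps(policy, separators=(",", ":"))   # compact form for signing
```

Signed URLs (option B) would require generating a distinct URL per file, which changes the existing links; cookies leave them untouched.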

Question 50

A small company started using EBS-backed EC2 instances due to the cost improvements over running their own servers. The company's policy is to stop the development servers over the weekend and restart them each week. The first time the servers were brought back, none of the developers were able to SSH into them. What did the company most likely overlook?

A. The associated Elastic IP address has changed and the SSH configurations were not updated.

B. The security group for a stopped instance needs to be reassigned after start.

C. The public IPv4 address has changed on server restart and the SSH configurations were not updated.

D. EBS-backed EC2 instances cannot be stopped and were automatically terminated.

Question 51

An organization has an application using Amazon S3 Glacier to store large CSV objects. While retrieving these large objects, end users are observing performance issues. In most cases, users only need a small part of the data instead of the entire object.

A Solutions Architect has been asked to redesign this solution to improve the performance. Which solution is the most cost-effective?

A. Use Amazon Athena to retrieve only the data that users need.

B. Use S3 Select to retrieve only the data that users need.

C. Use Glacier Select to retrieve only the data that users need.

D. Use custom SQL statements and S3 APIs to retrieve only the data that users need.
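
Glacier Select (option C) is issued through an S3 restore request that carries a SQL expression, so only the matching rows of the archived CSV land in the output location. The sketch below shows the request body as a plain dict; the bucket, key schema, and SQL are illustrative, and availability of this feature may vary by account.

```python
# restore_object request sketch: Type "SELECT" runs SQL against the archive.

restore_request = {
    "Type": "SELECT",
    "Tier": "Standard",
    "SelectParameters": {
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "ExpressionType": "SQL",
        "Expression": "SELECT s.run_id, s.result FROM S3Object s WHERE s.lab = 'Berlin'",
        "OutputSerialization": {"CSV": {}},
    },
    "OutputLocation": {
        "S3": {"BucketName": "example-results-bucket", "Prefix": "selected/"},
    },
}
```

S3 Select (option B) uses the same SQL idea but only works on objects in non-archival storage classes, which is why it does not fit Glacier-stored data.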

Question 52

An IT firm is using AWS cloud infrastructure for its three-tier web application. They are using memory-optimized EC2 instances for application hosting and SQL-based database servers deployed in Multi-AZ with auto-failover. Recently, they have been observing heavy loads on the database servers, which impacts user data lookups from the application servers and results in slow access. As an AWS Consultant, you are asked for guidance to resolve this issue. Which of the following will provide a faster, scalable option to deliver data to users without impacting the backend servers?

A. Use Amazon ElastiCache to cache data.

B. Configure the Multi-AZ replicas to serve the read traffic.

C. Use Amazon CloudFront to save recently accessed data in cache.

D. Use on-host caching on memory-optimized EC2 instances.

Question 53

Your company has set up EC2 instances in a VPC for their application. They now have a concern that not all of the EC2 instances are being utilized. Which of the below services can help you find underutilized resources in AWS? (Select TWO)

A. Amazon CloudWatch

B. SNS

C. AWS Trusted Advisor

D. AWS CloudTrail

Question 54

A Solutions Architect has been asked to design a serverless media upload web application that will have the functionality to upload thumbnail images, transcode videos, index files, validate contents, and aggregate data in real-time. The architect needs to visualize the distributed components of his architecture through a graphical console. How can this be designed wisely?

A. Host a static web application using Amazon S3, upload images to Amazon S3, trigger a Lambda function when media files are uploaded, coordinate other media files processing Lambdas using several SQS queues, and store the aggregated data in DynamoDB.

B. Host a static web application using Amazon S3, upload images to Amazon S3, trigger a Lambda function when media files are uploaded, coordinate other media files processing Lambdas using Simple Workflow Service, and store the aggregated data in DynamoDB.

C. Host a static web application using Amazon S3, upload images to Amazon S3, trigger a Lambda function when media files are uploaded, process media files using various Lambdas, and store the aggregated data in DynamoDB.

D. Host a static web application using Amazon S3, upload images to Amazon S3, use S3 event notification to trigger a Lambda function when media files are uploaded, coordinate other media files processing Lambda using Step functions, and store the aggregated data in DynamoDB.

Question 55

Which of the following actions are required by the Lambda execution role to write logs into Amazon CloudWatch? (Select THREE)

A. logs:CreateLogGroup

B. logs:GetLogEvents

C. logs:CreateLogStream

D. logs:DescribeLogStreams

E. logs:PutLogEvents
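
The three required actions (A, C, E) match the shape of the managed AWSLambdaBasicExecutionRole policy, sketched below as a policy dict. The resource ARN is illustrative; in practice you would scope it to the function's log group.

```python
# Minimal CloudWatch Logs statement for a Lambda execution role.

logging_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",    # create the /aws/lambda/<function> group
            "logs:CreateLogStream",   # one stream per execution environment
            "logs:PutLogEvents",      # write the actual log lines
        ],
        "Resource": "arn:aws:logs:*:*:*",
    }]
}
```

`logs:GetLogEvents` and `logs:DescribeLogStreams` are read operations, so they are not needed just to write logs.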

Question 56

An organization has a distributed application running. This application is implemented with a microservices architecture using AWS services including Lambda, API Gateway, SNS, and SQS. What is the most cost-effective way to analyze, debug, and be notified of any issues that arise in production?

A. Use the CloudWatch dashboard to monitor the application, and create a CloudWatch alarm to notify of any errors.

B. Use CloudWatch events to trigger a lambda and notify.

C. Use X-Ray to analyze and debug the application and Enable insights notifications.

D. Use 3rd party tools to debug and notify.

Question 57

You have been hired as an AWS Architect at a global financial firm. They provide daily consolidated reports to their clients for trades in stock markets. For large amounts of data processing, they store daily trading transaction data in S3 buckets, which triggers an AWS Lambda function. This function submits a new AWS Batch job to the job queue. The queue uses EC2 compute resources with a customized AMI and Amazon ECS to complete the jobs.

You have been working on an application created using the above requirements. While performing a trial of the application, the job is stuck in the RUNNABLE state even though it has enough memory/CPU resources. Which of the following checks would help to resolve the issue?

A. Ensure that the awslogs log driver is configured on the compute resources.

B. AWS Batch does not support customized AMI, use ECS-optimized AMI.

C. Check dependencies for the job which hold it in the RUNNABLE state.

D. Use only On-Demand EC2 instance in compute resources

Question 58

You currently work for a company that specializes in baggage management. GPS devices are installed on all the baggage and deliver each unit's coordinates every 10 seconds. You need to collect and analyze these coordinates in real time from multiple sources. Which tool should you use to collect the data in real time for processing?

A. Amazon EMR

B. Amazon SQS

C. AWS Data Pipeline

D. Amazon Kinesis

Question 59

As an AWS Solutions Architect, you are helping the team to set up an AWS Site-to-Site VPN so that the AWS resources in a VPC can communicate with one remote site. You plan to use IPSec VPN tunnels in the VPN connections as they provide secure connections with redundancy. When creating the Site-to-Site VPN connection in AWS console, which of the following options can you configure for the VPN tunnels?

A. The number of VPN tunnels.

B. The encryption algorithms used by the VPN tunnels.

C. The memory used by the VPN tunnels.

D. The TCP ports allowed by the VPN tunnels.

Question 60

You are working as an AWS Architect for a media firm. The firm has large text files that need to be converted into audio files. They are using S3 buckets to store these text files. AWS Batch is used to process these files along with Amazon Polly. You have a mix of EC2 On-Demand and Spot Instances in the compute environments. Critical jobs must be completed quickly, while non-critical jobs can be scheduled during non-peak hours. While using AWS Batch, management wants a cost-effective solution with no performance impact. Which of the following Job Queue setups can be selected to meet this requirement?

A. Create a single Job Queue with EC2 On-Demand instances at a higher priority and Spot Instances at a lower priority.

B. Create multiple Job Queues, with one queue having EC2 On-Demand instances and a higher priority, while another queue has Spot Instances and a lower priority.

C. Create multiple Job Queues, with one queue having EC2 On-Demand instances and a lower priority, while another queue has Spot Instances and a higher priority.

D. Create a single Job Queue with EC2 On-Demand instances at a lower priority and Spot Instances at a higher priority.
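
The two-queue setup from option B can be sketched as the boto3-style `create_job_queue` parameters below (plain dicts, no live calls): critical jobs are submitted to the higher-priority On-Demand queue, non-critical jobs to the lower-priority Spot queue. The queue and compute environment names are hypothetical.

```python
# Two AWS Batch job queues with different priorities and backing
# compute environments.

ondemand_queue = {
    "jobQueueName": "critical-ondemand",
    "state": "ENABLED",
    "priority": 10,   # higher number = scheduled first
    "computeEnvironmentOrder": [
        {"order": 1, "computeEnvironment": "ondemand-ce"},
    ],
}

spot_queue = {
    "jobQueueName": "noncritical-spot",
    "state": "ENABLED",
    "priority": 1,    # Spot queue drains during non-peak hours
    "computeEnvironmentOrder": [
        {"order": 1, "computeEnvironment": "spot-ce"},
    ],
}
```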

Question 61

You have a requirement to get a snapshot of the current configuration of resources in your AWS Account. Which service can be used for this purpose?

A. AWS CodeDeploy

B. AWS Trusted Advisor

C. AWS Config

D. AWS IAM

Question 62

You work for a company that has a set of EC2 instances. There is an internal requirement to create another instance in a different Availability Zone and move one of the EBS volumes from an older instance to the new instance. How can you achieve this?

A. Detach the volume and attach to an EC2 instance in another AZ.

B. Create a new volume in the other AZ and specify the current volume as the source.

C. Create a snapshot of the volume and then create a volume from the snapshot in the other AZ

D. Create a new volume in the AZ and do a disk copy of contents from one volume to another

Question 63

Which of the following is a limitation of the Elastic Fabric Adapter (EFA)?

A. EFA-only interfaces support only the EFA device capabilities.

B. EFA traffic can’t cross Availability Zones or VPCs.

C. EFA traffic refers to the traffic transmitted through the EFA device of either an EFA (EFA with ENA) or EFA-only interface.

D. EFA supports Nvidia Collective Communications Library (NCCL) for AI and ML applications.

Question 64

A global media firm is using AWS CodePipeline as an automation service for releasing new features to customers, with periodic checks in place. All code is uploaded to an Amazon S3 bucket. Changes to files stored in the S3 bucket should trigger AWS CodePipeline, which will then initiate AWS Elastic Beanstalk to deploy additional resources. What additional configuration is required to trigger CodePipeline faster?

A. Enable periodic checks and create a Webhook which triggers pipeline once S3 bucket is updated.

B. Disable periodic checks, create an Amazon CloudWatch Events rule & AWS CloudTrail trail.

C. Enable periodic checks, create an Amazon CloudWatch Events rule & AWS CloudTrail trail.

D. Disable periodic checks and create a Webhook which triggers pipeline once S3 bucket is updated.

Question 65

You are working as an AWS Architect for a retail company using an AWS EC2 instance for a web application. The company is using Provisioned IOPS SSD EBS volumes to store the product database.

This is a critical database, and you need to ensure appropriate backups are taken every 12 hours. You also need to ensure that storage space is used optimally by removing older snapshots. Which of the following can help meet this requirement with the least management overhead?

A. Manually create snapshots and delete old snapshots for EBS volumes, as this is critical data.

B. Use Amazon CloudWatch events to initiate AWS Lambda which will create snapshot of EBS volumes along with deletion of old snapshots.

C. Use Amazon Data Lifecycle Manager to schedule EBS snapshots and delete old snapshots as per retention policy.

D. Use Third party tool to create snapshot of EBS volumes along with deletion of old snapshots.
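
Option C can be sketched as the policy details passed to Data Lifecycle Manager: snapshot every 12 hours and keep a bounded number of snapshots, so the oldest is removed automatically. This is a dict sketch rather than a validated `create_lifecycle_policy` call; the tag, schedule name, and retention count (14 snapshots keeps roughly one week at this interval) are illustrative.

```python
# DLM policy sketch: 12-hour snapshots with count-based retention.

policy_details = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],   # which volumes to snapshot
    "Schedules": [{
        "Name": "every-12-hours",
        "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS"},
        "RetainRule": {"Count": 14},   # oldest snapshot is auto-deleted
    }],
}
```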