Updated SAP-C01 VCE 2020

It is faster and easier to pass the Amazon-Web-Services SAP-C01 exam by using Pinpoint Amazon-Web-Services AWS Certified Solutions Architect - Professional questions and answers. Get immediate access to the up-to-date SAP-C01 exam, find the same core-area SAP-C01 questions with professionally verified answers, and PASS your exam with a high score now.

Free SAP-C01 Demo Online For Amazon-Web-Services Certification:

NEW QUESTION 1
AnyCompany has acquired numerous companies over the past few years. The CIO for AnyCompany would like to keep the resources for each acquired company separate. The CIO also would like to enforce a chargeback model where each company pays for the AWS services it uses.
The Solutions Architect is tasked with designing an AWS architecture that allows AnyCompany to achieve the following:
• Implement a detailed chargeback mechanism to ensure that each company pays for the resources it uses.
• AnyCompany can pay for AWS services for all its companies through a single invoice.
• Developers in each acquired company have access to resources in their company only.
• Developers in an acquired company should not be able to affect resources in any other company.
• A single identity store is used to authenticate Developers across all companies.
Which of the following approaches would meet these requirements? (Choose two.)

  • A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill only.
  • B. Create a multi-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company.
  • C. Create IAM users for each Developer in the account to which they require access. Create policies that allow the users access to all resources in that account. Attach the policies to the IAM user.
  • D. Create a federated identity store against the company’s Active Directory. Create IAM roles with appropriate permissions and set the trust relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store.
  • E. Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the company that creates each resource.

Answer: AD

NEW QUESTION 2
A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company’s data center require predictable network performance to applications running in a virtual private cloud (VPC) located in us-east-1, and a secondary VPC in us-west-2 within the same account. The company data center is colocated in an AWS Direct Connect facility that serves the us-east-1 Region. The company has already ordered an AWS Direct Connect connection, and a cross-connect has been established.
Which solution will meet the requirements at the LOWEST cost?

  • A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
  • B. Create private VIFs on the Direct Connect connection for each of the company’s VPCs in the us-east-1 and us-west-2 Regions. Configure the company’s data center router to connect directly with the VPCs in those Regions via the private VIFs.
  • C. Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 Region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 Regions, which are attached to the company’s VPCs in those Regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company’s data center router.
  • D. Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 Region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company’s data center. Establish private VIFs on the Direct Connect connections for each of the company’s VPCs in the respective Regions. Configure the company’s data center router to connect directly with the VPCs in those Regions via the private VIFs.

Answer: A

Explanation:
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
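
Answer A works because a Direct Connect gateway lets one private VIF reach virtual private gateways in multiple Regions. A hedged boto3 sketch of that wiring follows; the gateway name, VGW IDs, connection ID, and VLAN/ASN values are placeholders rather than values from the question.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Create the Direct Connect gateway (a global resource).
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dxgw",
    amazonSideAsn=64512,
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the virtual private gateways from both Regions.
for vgw_id in ["vgw-east-EXAMPLE", "vgw-west-EXAMPLE"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )

# Create one private VIF on the existing connection and attach it
# to the gateway instead of a single VGW.
dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "corp-private-vif",
        "vlan": 101,
        "asn": 65000,  # customer-side BGP ASN
        "directConnectGatewayId": gw_id,
    },
)
```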

NEW QUESTION 3
A company wants to move a web application to AWS. The application stores session information locally on each web server, which will make auto scaling difficult. As part of the migration, the application will be rewritten to decouple the session data from the web servers. The company requires low latency, scalability, and availability.
Which service will meet the requirements for storing the session information in the MOST cost-effective way?

  • A. Amazon ElastiCache with the Memcached engine
  • B. Amazon S3
  • C. Amazon RDS MySQL
  • D. Amazon ElastiCache with the Redis engine

Answer: D

Explanation:
https://aws.amazon.com/caching/session-management/ https://aws.amazon.com/elasticache/redis-vs-memcached/
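
To make the decoupling concrete, here is a hypothetical sketch of session storage in ElastiCache for Redis with a sliding TTL. The endpoint hostname, key layout, and TTL are assumptions for illustration only.

```python
import json
import uuid

import redis  # pip install redis

# Hypothetical ElastiCache for Redis primary endpoint.
r = redis.Redis(host="sessions.example.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes


def create_session(user_id: str) -> str:
    """Create a session record keyed by a random token."""
    token = uuid.uuid4().hex
    payload = json.dumps({"user_id": user_id, "cart": []})
    r.setex(f"session:{token}", SESSION_TTL_SECONDS, payload)
    return token


def load_session(token: str):
    """Return the session dict, refreshing its expiry, or None."""
    raw = r.get(f"session:{token}")
    if raw is None:
        return None
    r.expire(f"session:{token}", SESSION_TTL_SECONDS)  # sliding expiry
    return json.loads(raw)
```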

NEW QUESTION 4
A three-tier web application runs on Amazon EC2 instances. Cron daemons are used to trigger scripts that collect the web server, application, and database logs and send them to a centralized location every hour. Occasionally, scaling events or unplanned outages have caused the instances to stop before the latest logs were collected, and the log files were lost.
Which of the following options is the MOST reliable way of collecting and preserving the log files?

  • A. Update the cron jobs to run every 5 minutes instead of every hour to reduce the possibility of log messages being lost in an outage.
  • B. Use Amazon CloudWatch Events to trigger AWS Systems Manager Run Command to invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.
  • C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an outage.
  • D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.

Answer: C

Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
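
The agent in answer C is configured on the instance itself, but the effect of streaming log lines as they occur (instead of copying files hourly) can be sketched against the CloudWatch Logs API directly. The group, stream, and message below are placeholders.

```python
import time

import boto3
from botocore.exceptions import ClientError

logs = boto3.client("logs", region_name="us-east-1")

GROUP, STREAM = "/webapp/apache/access", "i-0abc123-access"

# Idempotently create the log group and stream.
for fn, kwargs in [
    (logs.create_log_group, {"logGroupName": GROUP}),
    (logs.create_log_stream, {"logGroupName": GROUP, "logStreamName": STREAM}),
]:
    try:
        fn(**kwargs)
    except ClientError as e:
        if e.response["Error"]["Code"] != "ResourceAlreadyExistsException":
            raise

# Ship a log line immediately instead of waiting for an hourly cron job.
logs.put_log_events(
    logGroupName=GROUP,
    logStreamName=STREAM,
    logEvents=[{"timestamp": int(time.time() * 1000),
                "message": "GET /index.html 200"}],
)
```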

NEW QUESTION 5
A company is implementing a multi-account strategy; however, the Management team has expressed concerns that services like DNS may become overly complex. The company needs a solution that allows private DNS to be shared among virtual private clouds (VPCs) in different accounts. The company will have approximately 50 accounts in total.
What solution would create the LEAST complex DNS architecture and ensure that each VPC can resolve all AWS resources?

  • A. Create a shared services VPC in a central account, and create a VPC peering connection from the shared services VPC to each of the VPCs in the other accounts. Within Amazon Route 53, create a privately hosted zone in the shared services VPC and resource record sets for the domain and subdomains. Programmatically associate other VPCs with the hosted zone.
  • B. Create a VPC peering connection among the VPCs in all accounts. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “true” for each VPC. Create an Amazon Route 53 private zone for each VPC. Create resource record sets for the domain and subdomains. Programmatically associate the hosted zones in each VPC with the other VPCs.
  • C. Create a shared services VPC in a central account. Create a VPC peering connection from the VPCs in other accounts to the shared services VPC. Create an Amazon Route 53 privately hosted zone in the shared services VPC with resource record sets for the domain and subdomains. Allow UDP and TCP port 53 over the VPC peering connections.
  • D. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “false” in every VPC. Create an AWS Direct Connect connection with a private virtual interface. Allow UDP and TCP port 53 over the virtual interface. Use the on-premises DNS servers to resolve the IP addresses in each VPC on AWS.

Answer: A

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-w
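
The "programmatically associate other VPCs with the hosted zone" step in answer A maps to two Route 53 calls when the VPCs live in different accounts: an authorization created by the zone owner, then the association completed by the VPC owner. The zone ID, VPC details, and profile name below are placeholders.

```python
import boto3

ZONE_ID = "Z0EXAMPLE"  # private hosted zone in the shared services account
TARGET_VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc123"}

# Run in the shared services account: authorize the other account's VPC.
r53_shared = boto3.client("route53")
r53_shared.create_vpc_association_authorization(
    HostedZoneId=ZONE_ID, VPC=TARGET_VPC)

# Run with credentials from the VPC-owning account: complete the association.
r53_member = boto3.Session(profile_name="member-account").client("route53")
r53_member.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=TARGET_VPC)
```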

NEW QUESTION 6
A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements:
• Data layer: A POSIX file system shared across many systems.
• Service layer: Static file content that requires block storage with more than 100k IOPS.
Which combination of AWS services will meet these needs? (Choose two.)

  • A. Data layer – Amazon S3
  • B. Data layer – Amazon EC2 Ephemeral Storage
  • C. Data layer – Amazon EFS
  • D. Service layer – Amazon EBS volumes with Provisioned IOPS
  • E. Service layer – Amazon EC2 Ephemeral Storage

Answer: CE

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html

NEW QUESTION 7
A Development team is deploying new APIs as serverless applications within a company. The team is currently using the AWS Management Console to provision Amazon API Gateway, AWS Lambda, and Amazon DynamoDB resources. A Solutions Architect has been tasked with automating the future deployments of these serverless APIs.
How can this be accomplished?

  • A. Use AWS CloudFormation with a Lambda-backed custom resource to provision API Gateway. Use the AWS::DynamoDB::Table and AWS::Lambda::Function resources to create the Amazon DynamoDB table and Lambda functions. Write a script to automate the deployment of the CloudFormation template.
  • B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and application files to the code repository. Use AWS CodePipeline to connect to the code repository and to create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in CodePipeline to deploy the solution.
  • C. Use AWS CloudFormation to define the serverless application. Implement versioning on the Lambda functions and create aliases to point to the versions. When deploying, configure weights to implement shifting traffic to the newest version, and gradually update the weights as traffic moves over.
  • D. Commit the application code to the AWS CodeCommit code repository. Use AWS CodePipeline and connect to the CodeCommit code repository. Use AWS CodeBuild to build and deploy the Lambda functions using AWS CodeDeploy. Specify the deployment preference type in CodeDeploy to gradually shift traffic over to the new version.

Answer: B

Explanation:
https://aws-quickstart.s3.amazonaws.com/quickstart-trek10-serverless-enterprise-cicd/doc/serverless-cicd-for-th https://aws.amazon.com/quickstart/architecture/serverless-cicd-for-enterprise/
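
Because SAM templates carry the AWS::Serverless transform, deploying them through the CloudFormation API goes via change sets rather than a plain create_stack call. A hedged sketch, assuming the template has already been packaged (for example with sam package); the stack and file names are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Assumes `sam package` (or `aws cloudformation package`) already uploaded
# the code artifacts and produced packaged.yaml.
with open("packaged.yaml") as f:
    template = f.read()

cs = cfn.create_change_set(
    StackName="orders-api",
    TemplateBody=template,
    ChangeSetName="deploy-1",
    ChangeSetType="CREATE",  # use "UPDATE" for an existing stack
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
)

# Wait for the change set to be computed, then execute it.
cfn.get_waiter("change_set_create_complete").wait(ChangeSetName=cs["Id"])
cfn.execute_change_set(ChangeSetName=cs["Id"])
```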

NEW QUESTION 8
A company has an application written using an in-house software framework. The framework installation takes 30 minutes and is performed with a user data script. Company Developers deploy changes to the application frequently. The framework installation is becoming a bottleneck in this process.
Which of the following would speed up this process?

  • A. Create a pipeline to build a custom AMI with the framework installed and use this AMI as a baseline for application deployments.
  • B. Employ a user data script to install the framework but compress the installation files to make them smaller.
  • C. Create a pipeline to parallelize the installation tasks and call this pipeline from a user data script.
  • D. Configure an AWS OpsWorks cookbook that installs the framework instead of employing user data. Use this cookbook as a base for all deployments.

Answer: A

Explanation:
https://aws.amazon.com/codepipeline/features/?nc=sn&loc=2
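
A minimal boto3 sketch of the bake-then-launch flow in answer A: create the AMI once from an instance that has already completed the 30-minute framework install, then launch deployments from that image so user data only handles fast, app-specific steps. The instance ID, names, and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bake: snapshot an instance that already ran the framework install.
image = ec2.create_image(
    InstanceId="i-0abc123EXAMPLE",
    Name="webapp-framework-v42",
    Description="Base image with in-house framework preinstalled",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Deploy: new instances boot with the framework already present.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
```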

NEW QUESTION 9
A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

  • A. Reconfigure Amazon EFS to enable maximum I/O.
  • B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
  • C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
  • D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.

Answer: C

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
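
The core of answer C is serving the videos from S3 through CloudFront instead of through the web fleet and EFS. A hedged sketch of creating such a distribution; the bucket domain name is a placeholder, and the legacy ForwardedValues cache settings are used here only for brevity.

```python
import time

import boto3

cf = boto3.client("cloudfront")

cf.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # must be unique per request
    "Comment": "Blog video assets",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "blog-videos-s3",
        "DomainName": "blog-videos-example.s3.amazonaws.com",
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "blog-videos-s3",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Legacy cache settings for brevity; cache policies are the
        # newer alternative.
        "ForwardedValues": {"QueryString": False,
                            "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
})
```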

NEW QUESTION 10
A company has released a new version of a website to target an audience in Asia and South America. The website’s media assets are hosted on Amazon S3 and have an Amazon CloudFront distribution to improve end-user performance. However, users are having a poor login experience because the authentication service is only available in the us-east-1 AWS Region.
How can the Solutions Architect improve the login experience and maintain high security and performance with minimal management overhead?

  • A. Replicate the setup in each new geography and use Amazon Route 53 geo-based routing to route traffic to the AWS Region closest to the users.
  • B. Use an Amazon Route 53 weighted routing policy to route traffic to the CloudFront distribution. Use CloudFront cached HTTP methods to improve the user login experience.
  • C. Use Lambda@Edge attached to the CloudFront viewer request trigger to authenticate and authorize users by maintaining a secure cookie token with a session expiry to improve the user experience in multiple geographies.
  • D. Replicate the setup in each geography and use Network Load Balancers to route traffic to the authentication service running in the closest region to users.

Answer: C

Explanation:
There are several benefits to using Lambda@Edge for authorization operations. First, performance is improved by running the authorization function using Lambda@Edge closest to the viewer, reducing latency and response time to the viewer request. The load on your origin servers is also reduced by offloading CPU-intensive operations such as verification of JSON Web Token (JWT) signatures. Finally, there are security benefits such as filtering out unauthorized requests before they reach your origin infrastructure.
https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-
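
A hedged sketch of the viewer-request pattern the explanation describes: inspect a session cookie at the edge and reject unauthenticated requests before they reach the origin. The cookie name and the is_valid() stub are placeholders; a real function would verify a signed token such as a JWT.

```python
# Lambda@Edge viewer-request handler (deployed from us-east-1).

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront lowercases header names; cookies arrive as header values.
    cookies = "; ".join(h["value"] for h in headers.get("cookie", []))

    # Placeholder check: a real implementation would validate a signed
    # session token (signature, expiry) instead of matching a substring.
    if "session-token=" in cookies and is_valid(cookies):
        return request  # authorized: continue to cache/origin

    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "headers": {"www-authenticate": [
            {"key": "WWW-Authenticate", "value": "Bearer"}]},
        "body": "Authentication required",
    }


def is_valid(cookies: str) -> bool:
    # Stub for token verification logic.
    return True
```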

NEW QUESTION 11
A company has an existing on-premises three-tier web application. The Linux web servers serve content from a centralized file share on a NAS server because the content is refreshed several times a day from various sources. The existing infrastructure is not optimized and the company would like to move to AWS in order to gain the ability to scale resources up and down in response to load. On-premises and AWS resources are connected using AWS Direct Connect.
How can the company migrate the web infrastructure to AWS without delaying the content refresh process?

  • A. Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server.
  • B. Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content.
  • C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content.
  • D. Create web server Amazon EC2 instances on AWS in an Auto Scaling group. Configure a nightly process where the web server instances are updated from the NAS server.

Answer: C

Explanation:
File gateway throughput is limited by the performance of its gateway instance, whether on EC2 or on premises, and its cache fills up quickly if it is not properly sized. For a large number of EC2 instances, EFS scales better. The bottom line is that File Gateway suits legacy applications, and the cost of large gateway instances must be added before comparing it to the same quantity of EFS storage. https://www.reddit.com/r/aws/comments/82pyop/storage_gateway_vs_efs/
https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html

NEW QUESTION 12
A company is planning the migration of several lab environments used for software testing. An assortment of custom tooling is used to manage the test runs for each lab. The labs use immutable infrastructure for the software test runs, and the results are stored in a highly available SQL database cluster. Although completely rewriting the custom tooling is out of scope for the migration project, the company would like to optimize workloads during the migration.
Which application migration strategy meets this requirement?

  • A. Re-host
  • B. Re-platform
  • C. Re-factor/re-architect
  • D. Retire

Answer: B

Explanation:
https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/

NEW QUESTION 13
A large global company wants to migrate a stateless mission-critical application to AWS. The application is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software) on a z/OS operating system.
How should the Solutions Architect migrate the application to AWS?

  • A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
  • B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.
  • C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform z/OS-based DB2 to Amazon RDS DB2.
  • D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to Amazon MQ.

Answer: B

Explanation:
https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now- https://aws.amazon.com/quickstart/architecture/ibm-mq/

NEW QUESTION 14
A Solutions Architect is designing a multi-account structure that has 10 existing accounts. The design must meet the following requirements:
• Consolidate all accounts into one organization.
• Allow full access to the Amazon EC2 service from the master account and the secondary accounts.
• Minimize the effort required to add additional secondary accounts.
Which combination of steps should be included in the solution? (Choose two.)

  • A. Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations and create an OU.
  • B. Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the requests and create an OU.
  • C. Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering connection.
  • D. Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU.
  • E. Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.

Answer: AD

Explanation:
There is a distinction between permission boundaries and actual IAM policies: the difference between what is allowed and what is granted. Boundaries come in three forms: 1. SCPs, 2. user/role permissions boundaries, 3. session boundaries (e.g., AssumeRole). Actual permissions are granted by: 1. identity policies, 2. resource policies. An SCP that allows full EC2 access therefore only sets the boundary; IAM policies in each account still grant the access.
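
A hedged boto3 sketch of the steps behind answers A and D: create the organization from the master account, invite an existing secondary account, and attach an SCP allowing full EC2 access to the OU. The account ID, OU ID, and names are placeholders.

```python
import json

import boto3

org = boto3.client("organizations")

# From the master (management) account: create the organization with
# all features enabled so SCPs are available.
org.create_organization(FeatureSet="ALL")

# Invite each existing secondary account.
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}
)

# SCP allowing full EC2 access; attach it to the OU holding the accounts.
scp = org.create_policy(
    Name="AllowFullEC2",
    Description="Full access to Amazon EC2",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "ec2:*",
                       "Resource": "*"}],
    }),
)
org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-root-EXAMPLE")
```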

NEW QUESTION 15
A Solutions Architect is building a containerized .NET Core application that will run in AWS Fargate. The backend of the application requires Microsoft SQL Server with high availability. All tiers of the application must be highly available. The credentials used for the connection string to SQL Server should not be stored on disk within the .NET Core front-end containers.
Which strategies should the Solutions Architect use to meet these requirements?

  • A. Set up SQL Server to run in Fargate with Service Auto Scaling. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server running in Fargate. Specify the ARN of the secret in AWS Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
  • B. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS database. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service in Fargate using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
  • C. Create an Auto Scaling group to run SQL Server on Amazon EC2. Create a secret in AWS Secrets Manager for the credentials to SQL Server running on EC2. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server on EC2. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
  • D. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS database. Create non-persistent empty storage for the .NET Core containers in the Fargate task definition to store the sensitive information. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be written to the non-persistent empty storage on startup for reading into the application to construct the connection string.

Answer: C
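
What makes these options workable is the secrets section of a Fargate task definition, which injects a Secrets Manager value into the container as an environment variable at startup so nothing lands on disk. A minimal boto3 sketch, with all ARNs, names, and the image URI as placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="dotnet-frontend",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    # This role lets the Fargate agent read the secret at container startup.
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web:latest",
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        # Injected as an environment variable; never baked into the image.
        "secrets": [{
            "name": "DB_CONNECTION_SECRET",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:"
                         "secret:sqlserver-credentials-EXAMPLE",
        }],
    }],
)
```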

NEW QUESTION 16
A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system:
• Lambda failures while processing orders lead to queue backlogs.
• The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
• Retain problematic orders for analysis.
• Send notifications if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?

  • A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification.
  • B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification.
  • C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification.
  • D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.

Answer: D
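
Two building blocks that appear across these options, a dead letter queue that retains problematic orders and a CloudWatch alarm on Lambda errors, can be wired up as below. Queue names, the function name, the SNS topic, and the thresholds are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

# After 3 failed receives, a message moves to the DLQ for analysis.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "VisibilityTimeout": "300",  # > Lambda timeout, to avoid duplicates
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}),
    },
)

# Notify when Lambda errors cross a threshold.
cw.put_metric_alarm(
    AlarmName="order-processor-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "order-processor"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```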

NEW QUESTION 17
A company runs a Windows Server host in a public subnet that is configured to allow a team of administrators to connect over RDP to troubleshoot issues with hosts in a private subnet. The host must be available at all times outside of a scheduled maintenance window, and needs to receive the latest operating system updates within 3 days of release.
What should be done to manage the host with the LEAST amount of administrative effort?

  • A. Run the host in a single-instance AWS Elastic Beanstalk environment. Configure the environment with a custom AMI to use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
  • B. Run the host on Amazon WorkSpaces. Use Amazon WorkSpaces Application Manager (WAM) to harden the host. Configure Windows automatic updates to occur every 3 days.
  • C. Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
  • D. Run the host in AWS OpsWorks Stacks. Use a Chef recipe to harden the AMI during instance launch. Use an AWS Lambda scheduled event to run the Upgrade Operating System stack command to apply system updates.

Answer: B
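
For the requirement to receive operating system updates within 3 days of release, the Patch Manager approval rules named in options A and C map directly: ApproveAfterDays delays auto-approval by a set number of days. A sketch with placeholder names:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

baseline = ssm.create_patch_baseline(
    Name="windows-bastion-baseline",
    OperatingSystem="WINDOWS",
    Description="Auto-approve security/critical updates 3 days after release",
    ApprovalRules={"PatchRules": [{
        "PatchFilterGroup": {"PatchFilters": [
            {"Key": "CLASSIFICATION",
             "Values": ["SecurityUpdates", "CriticalUpdates"]},
        ]},
        "ApproveAfterDays": 3,
    }]},
)

# Instances tagged with this patch group pick up the baseline.
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"],
    PatchGroup="rdp-bastion",
)
```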

NEW QUESTION 18
A company plans to move regulated and security-sensitive businesses to AWS. The Security team is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for teams to provision resources.
Which strategies should a Solutions Architect use to meet the business requirements and continuously assess, audit, and monitor the configurations of AWS resources? (Choose two.)

  • A. Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.
  • B. Use the Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using a pre-defined set of filter patterns that match mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for long-term retention and auditability.
  • C. Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs.
  • D. Use the Amazon CloudWatch Events near-real-time capabilities to monitor system event patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses.
  • E. Use CloudTrail integration with Amazon SNS to automatically notify unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.

Answer: AC

Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html https://docs.aws.amazon.com/en_pv/awscloudtrail/latest/userguide/best-practices-security.html
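
The custom rules in answer A follow a fixed Lambda contract: parse the configuration item delivered in the event, evaluate it, and report back with put_evaluations. A hedged skeleton in which the compliance check itself (an Owner tag requirement) is only a placeholder control:

```python
import json

import boto3

config = boto3.client("config")


def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Placeholder control: require an "Owner" tag on every resource.
    compliant = "Owner" in (item.get("tags") or {})

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```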

NEW QUESTION 19
During a security audit of a Service team's application, a Solutions Architect discovers that a username and password for an Amazon RDS database and a set of AWS IAM user credentials can be viewed in the AWS Lambda function code. The Lambda function uses the username and password to run queries on the database, and it uses the IAM credentials to call AWS services in a separate management account.
The Solutions Architect is concerned that the credentials could grant inappropriate access to anyone who can view the Lambda code. The management account and the Service team's account are in separate AWS Organizations organizational units (OUs).
Which combination of changes should the Solutions Architect make to improve the solution's security? (Select TWO)

  • A. Configure Lambda to assume a role in the management account with appropriate access to AWS
  • B. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation
  • C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials
  • D. Use an SCP on the management account's OU to prevent IAM users from accessing resources in the Service team's account
  • E. Enable AWS Shield Advanced on the management account to shield sensitive resources from unauthorized IAM access

Answer: BD
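
The selected changes combine as sketched below: the function pulls the rotated database credentials from Secrets Manager at runtime and uses STS to assume a scoped role in the management account instead of embedding IAM user keys. The secret ID, role ARN, and client choice are placeholders.

```python
import json

import boto3

secrets = boto3.client("secretsmanager")
sts = boto3.client("sts")


def handler(event, context):
    # Credentials live in Secrets Manager (with rotation enabled),
    # not in the function code.
    secret = json.loads(secrets.get_secret_value(
        SecretId="prod/orders/rds-credentials")["SecretString"])
    db_user, db_password = secret["username"], secret["password"]

    # Short-lived credentials for the management account via a role,
    # instead of long-lived IAM user keys embedded in code.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::999988887777:role/ServiceTeamAccess",
        RoleSessionName="orders-lambda",
    )["Credentials"]
    mgmt = boto3.client(
        "cloudwatch",  # placeholder management-account service client
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # ... use db_user/db_password for queries and `mgmt` for management calls
```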

NEW QUESTION 20
A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?

  • A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
  • B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
  • C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create an Amazon Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
  • D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 objects. Set the cache-control timeout for 15 minutes in the CloudFront distribution.

Answer: C

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
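
Answer C leans on a Lambda@Edge property: module-level state survives across invocations within the same execution environment, so each edge location can refresh its copy of the data at most every 15 minutes. A hedged sketch in which fetch_forecast() is a placeholder for the real origin query:

```python
import time

# Module-level state is reused across invocations in the same
# Lambda@Edge execution environment.
_cache = {"data": None, "fetched_at": 0.0}
TTL_SECONDS = 15 * 60  # the forecast is overwritten every 15 minutes


def handler(event, context):
    now = time.time()
    if _cache["data"] is None or now - _cache["fetched_at"] > TTL_SECONDS:
        _cache["data"] = fetch_forecast()  # placeholder: query the data store
        _cache["fetched_at"] = now
    return {
        "status": "200",
        "statusDescription": "OK",
        "body": _cache["data"],
    }


def fetch_forecast() -> str:
    # Stub: a real function would query the origin data store.
    return "{}"
```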

NEW QUESTION 21
An advisory firm is creating a secure data analytics solution for its regulated financial services users. Users will upload their raw data to an Amazon S3 bucket, where they have PutObject permissions only. Data will be analyzed by applications running on an Amazon EMR cluster launched in a VPC. The firm requires that the environment be isolated from the internet. All data at rest must be encrypted using keys controlled by the firm.
Which combination of actions should the Solutions Architect take to meet the users' security requirements? (Select TWO.)

  • A. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
  • B. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS.
  • C. Launch the Amazon EMR cluster in a private subnet configured to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for CloudHSM.
  • D. Configure the S3 endpoint policies to permit access to the necessary data buckets only.
  • E. Configure the S3 bucket policies to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.

Answer: AC
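
The isolation in answer A comes from two endpoint types: a gateway endpoint for Amazon S3 (a route table entry) and an interface endpoint for AWS KMS (an ENI in the private subnet), so neither service call leaves the AWS network. A boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: adds routes to the private subnet's route table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc123",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc123"],
)

# Interface endpoint for KMS: an ENI in the private subnet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc123",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0abc123"],
    SecurityGroupIds=["sg-0abc123"],
    PrivateDnsEnabled=True,
)
```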

NEW QUESTION 22
A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?

  • A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution, and configure Amazon Route 53 with an alias to the CloudFront distribution.
  • B. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them.
  • C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs.

Answer: A

NEW QUESTION 23
A company has several teams, and each team has their own Amazon RDS database that totals 100 TB. The company is building a data query platform for Business Intelligence Analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?

  • A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the queries.
  • B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definitions. Use Spark SQL to run the queries.
  • C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora PostgreSQL database.
  • D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.

Answer: C

NEW QUESTION 24
A development team has created a series of AWS CloudFormation templates to help deploy services. They created a template for a network/virtual private cloud (VPC) stack, a database stack, a bastion host stack, and a web application-specific stack. Each service requires the deployment of at least:
Each template has multiple input parameters that make it difficult to deploy the services individually from the AWS CloudFormation console. The input parameters from one stack are typically outputs from other stacks. For example, the VPC ID, subnet IDs, and security groups from the network stack may need to be used in the application stack or database stack.
Which actions will help reduce the operational burden and the number of parameters passed into a service deployment? (Choose two.)

  • A. Create a new AWS CloudFormation template for each service. Alter the existing templates to use cross-stack references to eliminate passing many parameters to each template. Call each required stack for the application as a nested stack from the new stack. Call the newly created service stack from the AWS CloudFormation console to deploy the specific service with a subset of the parameters previously required.
  • B. Create a new portfolio in AWS Service Catalog for each service. Create a product for each existing AWS CloudFormation template required to build the service. Add the products to the portfolio that represents that service in AWS Service Catalog. To deploy the service, select the specific service portfolio and launch the portfolio with the necessary parameters to deploy all templates.
  • C. Set up an AWS CodePipeline workflow for each service. For each existing template, choose AWS CloudFormation as a deployment action. Add the AWS CloudFormation template to the deployment action. Ensure that the deployment actions are processed to make sure that dependencies are obeyed. Use configuration files and scripts to share parameters between the stacks. To launch the service, execute the specific template by choosing the name of the service and releasing a change.
  • D. Use AWS Step Functions to define a new service. Create a new AWS CloudFormation template for each service. Alter the existing templates to use cross-stack references to eliminate passing many parameters to each template. Call each required stack for the application as a nested stack from the new service template. Configure AWS Step Functions to call the service template directly. In the AWS Step Functions console, execute the step.
  • E. Create a new portfolio for the services in AWS Service Catalog. Create a new AWS CloudFormation template for each service. Alter the existing templates to use cross-stack references to eliminate passing many parameters to each template. Call each required stack for the application as a nested stack from the new stack. Create a product for each application. Add the service template to the product. Add each new product to the portfolio. Deploy the product from the portfolio to deploy the service with the necessary parameters only to start the deployment.

Answer: AE
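
The pattern in answer A, where outputs of one stack become parameters of the next, can also be scripted to cut down manual parameter entry: read the network stack's outputs and feed them into the application stack. Stack names, the template file, and output keys below are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Read outputs (e.g., VPC ID, subnet IDs) published by the network stack.
outputs = {
    o["OutputKey"]: o["OutputValue"]
    for o in cfn.describe_stacks(StackName="network-stack")
                ["Stacks"][0]["Outputs"]
}

# Pass them straight into the application stack's parameters.
with open("webapp.yaml") as f:
    cfn.create_stack(
        StackName="webapp-stack",
        TemplateBody=f.read(),
        Parameters=[
            {"ParameterKey": "VpcId",
             "ParameterValue": outputs["VpcId"]},
            {"ParameterKey": "SubnetIds",
             "ParameterValue": outputs["SubnetIds"]},
        ],
    )
```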

NEW QUESTION 25
A company wants to replace its call system with a solution built using AWS managed services. The company call center would like the solution to receive calls, create contact flows, and scale to handle growth projections. The call center would also like the solution to use deep learning capabilities to recognize the intent of the callers and handle basic tasks, reducing the need to speak to an agent. The solution should also be able to query business applications and provide relevant information back to callers as requested.
Which services should the Solutions Architect use to build this solution? (Choose three.)

  • A. Amazon Rekognition to identify who is calling.
  • B. Amazon Connect to create a cloud-based contact center.
  • C. Amazon Alexa for Business to build conversational interface.
  • D. AWS Lambda to integrate with internal systems.
  • E. Amazon Lex to recognize the intent of the caller.
  • F. Amazon SQS to add incoming callers to a queue.

Answer: BDE
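
The selected services compose as follows: Amazon Connect hands the call to an Amazon Lex bot, and Lex invokes an AWS Lambda fulfillment function to query business applications. Below is a hedged sketch of a Lex (V1-style) fulfillment handler; the intent name, slot, and order lookup are invented for illustration.

```python
def handler(event, context):
    """Fulfillment hook for a hypothetical CheckOrderStatus intent."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent == "CheckOrderStatus":
        order_id = slots.get("OrderId")
        status = look_up_order(order_id)  # placeholder business-app query
        message = f"Order {order_id} is currently {status}."
    else:
        message = "Let me connect you with an agent."

    # Lex V1 fulfillment response format: close the intent with a message.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }


def look_up_order(order_id: str) -> str:
    # Stub: a real function would call an internal order system.
    return "out for delivery"
```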

NEW QUESTION 26
......

100% Valid and Newest Version SAP-C01 Questions & Answers shared by Exambible, Get Full Dumps HERE: https://www.exambible.com/SAP-C01-exam/ (New 179 Q&As)