How Many Questions Of DOP-C01 Training
Proper study guides for the Amazon Web Services AWS Certified DevOps Engineer - Professional certification begin with Amazon Web Services DOP-C01 preparation products, which are designed to deliver tested DOP-C01 questions and help you pass the DOP-C01 exam on your first attempt. Try the free DOP-C01 demo right now.
Free demo questions for Amazon-Web-Services DOP-C01 Exam Dumps Below:
NEW QUESTION 1
Which of the following CLI commands is used to spin up new EC2 Instances?
- A. aws ec2 run-instances
- B. aws ec2 create-instances
- C. aws ec2 new-instances
- D. aws ec2 launch-instances
The AWS Documentation mentions the following
Launches the specified number of instances using an AMI for which you have permissions. You can specify a number of options, or leave the default options. The following rules apply:
[EC2-VPC] If you don't specify a subnet ID, we choose a default subnet from your default VPC for you. If you don't have a default VPC, you must specify a subnet ID in the request.
[EC2-Classic] If you don't specify an Availability Zone, we choose one for you.
Some instance types must be launched into a VPC. If you do not have a default VPC, or if you do not specify a subnet ID, the request fails. For more information, see Instance Types Available Only in a VPC.
[EC2-VPC] All instances have a network interface with a primary private IPv4 address. If you don't specify this address, we choose one from the IPv4 range of your subnet.
Not all instance types support IPv6 addresses. For more information, see Instance Types.
If you don't specify a security group ID, we use the default security group. For more information, see Security Groups.
If any of the AMIs have a product code attached for which the user has not subscribed, the request fails. For more information on the EC2 run-instances command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html
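As a sketch of what the correct command takes, the parameters for a run-instances call can be assembled as below (boto3-style naming; the AMI ID, instance type, counts, and subnet are illustrative placeholders, not values from the question):

```python
def build_run_instances_params(ami_id, instance_type, count=1, subnet_id=None):
    """Assemble the keyword arguments for ec2.run_instances().

    If subnet_id is omitted, EC2 falls back to a default subnet in the
    default VPC (or the request fails when no default VPC exists), as
    the documentation excerpt above describes.
    """
    params = {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }
    if subnet_id is not None:
        params["SubnetId"] = subnet_id
    return params

params = build_run_instances_params("ami-12345678", "t2.micro", count=2)
# A real call would then be: boto3.client("ec2").run_instances(**params)
```

The equivalent CLI form would be along the lines of `aws ec2 run-instances --image-id ami-12345678 --count 2 --instance-type t2.micro`.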
NEW QUESTION 2
When deploying applications to Elastic Beanstalk, which of the following statements is false with regard to application deployment?
- A. The application can be bundled in a zip file
- B. Can include parent directories
- C. Should not exceed 512 MB in size
- D. Can be a war file which can be deployed to the application server
The AWS Documentation mentions
When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle. Your source bundle must meet the following requirements:
• Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
• Not exceed 512 MB
• Not include a parent folder or top-level directory (subdirectories are fine)
For more information on deploying applications to Elastic Beanstalk please see the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html
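The source-bundle rules above can be sketched in Python: zip the application directory with entry paths relative to the directory itself (so no parent folder is included) and enforce the 512 MB limit. The file names below are illustrative:

```python
import io
import os
import tempfile
import zipfile

def make_source_bundle(app_dir, max_bytes=512 * 1024 * 1024):
    """Zip an application directory into an in-memory source bundle:
    entries are stored relative to app_dir (no parent folder) and the
    result must not exceed 512 MB."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(app_dir):
            for name in files:
                path = os.path.join(root, name)
                # arcname relative to app_dir keeps the parent folder out
                zf.write(path, os.path.relpath(path, app_dir))
    if buf.getbuffer().nbytes > max_bytes:
        raise ValueError("source bundle exceeds 512 MB")
    return buf.getvalue()

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "index.html"), "w") as f:
        f.write("hello")
    bundle = make_source_bundle(d)
    names = zipfile.ZipFile(io.BytesIO(bundle)).namelist()
```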
NEW QUESTION 3
Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?
- A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
- B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online.
- C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
- D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.
Failover routing lets you route traffic to a resource when the resource is healthy or to a different resource when the first resource is unhealthy. The primary and secondary resource record sets can route traffic to anything from an Amazon S3 bucket that is configured as a website to a complex tree of records.
For more information on Route53 Failover Routing, please visit the below URL:
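A failover record pair of the kind described above can be sketched as entries for a Route53 change batch (boto3-style dicts); the domain, set identifiers, target addresses, and health check IDs are placeholders:

```python
def failover_record(name, set_id, role, target, health_check_id):
    """One half of a PRIMARY/SECONDARY failover pair, in the shape
    expected by route53.change_resource_record_sets()."""
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": target}],
        "HealthCheckId": health_check_id,
    }

# Two identical stateless API deployments in two regions
primary = failover_record("api.example.com", "us-east-1", "PRIMARY",
                          "203.0.113.10", "hc-primary")
secondary = failover_record("api.example.com", "eu-west-1", "SECONDARY",
                            "203.0.113.20", "hc-secondary")
```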
NEW QUESTION 4
Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? Choose 2 answers from the options below
- A. Deploy ElastiCache in-memory cache running in each availability zone
- B. Implement sharding to distribute load to multiple RDS MySQL instances
- C. Increase the RDS MySQL instance size and implement provisioned IOPS
- D. Add an RDS MySQL read replica in each availability zone
Implement Read Replicas and ElastiCache
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
For more information on Read Replicas, please visit the below link
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
For more information on Amazon ElastiCache, please visit the below link
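The cache-aside read pattern that ElastiCache enables can be illustrated with a plain dict standing in for the cache and a stub for the read-replica query; all names here are illustrative:

```python
cache = {}                # stand-in for an ElastiCache node
db_reads = {"count": 0}   # counts how often the replica is actually hit

def query_read_replica(key):
    """Stub for a SELECT against an RDS read replica."""
    db_reads["count"] += 1
    return f"row-for-{key}"

def get(key):
    """Serve repeated reads from the cache; only misses hit the replica,
    which is what relieves read contention on the database."""
    if key not in cache:
        cache[key] = query_read_replica(key)
    return cache[key]

get("user:1")
get("user:1")  # second read is a cache hit; no extra database read
```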
NEW QUESTION 5
You are a Devops engineer for your company. The company hosts a web application that is hosted on a single EC2 Instance. The end users are complaining of slow response times for the application. Which of the following can be used to effectively scale the application?
- A. Use Autoscaling Groups to launch multiple instances and place them behind an ELB.
- B. Use Autoscaling launch configurations to launch multiple instances and place them behind an ELB.
- C. Use Amazon RDS with the Multi-AZ feature.
- D. Use CloudFormation to deploy the app again with an Amazon RDS with the Multi-AZ feature.
The AWS Documentation mentions the below
When you use Auto Scaling, you can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As Auto Scaling adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group. For more information on Autoscaling and ELB, please refer to the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/autoscaling-load-balancer.html
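The Auto Scaling group behind an ELB can be sketched as the parameters you would pass when creating the group (boto3-style naming); every name and value below is a placeholder:

```python
def build_asg_params(name, launch_config, elb_name, min_size, max_size, azs):
    """Keyword arguments for autoscaling.create_auto_scaling_group(),
    attaching a classic load balancer so traffic is spread across all
    instances the group launches."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,
        "LoadBalancerNames": [elb_name],
        "MinSize": min_size,
        "MaxSize": max_size,
        "AvailabilityZones": azs,
        # Use the ELB health check so unhealthy instances are replaced
        "HealthCheckType": "ELB",
        "HealthCheckGracePeriod": 300,
    }

params = build_asg_params("web-asg", "web-lc", "web-elb",
                          2, 6, ["us-east-1a", "us-east-1b"])
```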
NEW QUESTION 6
You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:
- All log entries must be retained by the system, even during unplanned instance failure.
- The customer insight team requires immediate access to the logs from the past seven days.
- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.
How would you meet these requirements in a cost-effective manner? Choose three answers from the options below
- A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
- B. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
- C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
- D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
- E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
- F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.
Since all logs need to be stored indefinitely, Glacier is the best option for this. One can use lifecycle rules to move the data from S3 to Glacier.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as
• Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf. For more information on Lifecycle events, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html You can use scripts to put the logs onto a new volume and then transfer those logs to S3.
Moving the logs from the EBS volume to S3 requires custom scripts running in the background. In order to ensure the minimum memory requirements for the OS and the applications for the script to execute, we can use a cost-effective EC2 instance.
Considering the computing resource requirements of the instance and the cost factor, a t2.micro instance can be used in this case.
The following link provides more information on various t2 instances. https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/t2-instances.html
Question is "How would you meet these requirements in a cost-effective manner? Choose three answers from the options below"
So here the user has to choose the 3 options so that the requirement is fulfilled. So in the given 6 options, options C, E and F fulfill the requirement.
"The EC2 instances use EBS volumes, and the logs are stored on EBS volumes that are marked for non-termination" is one of the ways to fulfill the requirement. So this shouldn't be an issue.
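The seven-day Glacier transition can be sketched as an S3 lifecycle configuration, i.e. the dict you would pass to put_bucket_lifecycle_configuration; the rule ID and prefix are placeholders:

```python
# Hypothetical lifecycle configuration for the scenario above: log
# objects transition to Glacier seven days after creation, keeping the
# last week immediately readable and everything older in cold storage.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 7, "StorageClass": "GLACIER"}
            ],
        }
    ]
}
# A real call would then be: s3.put_bucket_lifecycle_configuration(
#     Bucket="my-log-bucket", LifecycleConfiguration=lifecycle)
```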
NEW QUESTION 7
Your company has recently extended its datacenter into a VPC on AWS. There is a requirement for on-premises users to manage AWS resources from the AWS console. You don't want to create IAM users for them again. Which of the below options will fit your needs for authentication?
- A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your members to sign in to the AWS Management Console.
- B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your members to sign in to the AWS Management Console.
- C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
- D. Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable members to sign in to the AWS Management Console.
You can use a role to configure your SAML 2.0-compliant IdP and AWS to permit your federated users to access the AWS Management Console. The role grants the user permissions to carry out tasks in the console.
For more information on aws SAML, please visit the below URL
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html
NEW QUESTION 8
What is the amount of time that the OpsWorks Stacks service waits for a response from an underlying instance before deeming it a failed instance?
- A. 1 minute
- B. 5 minutes
- C. 20 minutes
- D. 60 minutes
The AWS Documentation mentions
Every instance has an AWS OpsWorks Stacks agent that communicates regularly with the service. AWS OpsWorks Stacks uses that communication to monitor instance health. If an agent does not communicate with the service for more than approximately five minutes, AWS OpsWorks Stacks considers the instance to have failed.
For more information on the Auto Healing feature, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autohealing.html
NEW QUESTION 9
You are using Chef in your data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
- A. AWS Elastic Beanstalk
- B. AWS OpsWorks
- C. AWS CloudFormation
- D. Amazon Simple Workflow Service
AWS OpsWorks is a configuration management service that uses Chef, an automation platform that treats server configurations as code. OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments. OpsWorks has two offerings, AWS OpsWorks for Chef Automate, and AWS OpsWorks Stacks.
For more information on OpsWorks, please refer to the below link:
NEW QUESTION 10
You are working with a customer who is using Chef Configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
- A. Amazon Simple Workflow Service
- B. AWS Elastic Beanstalk
- C. AWS CloudFormation
- D. AWS OpsWorks
AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application's architecture and the specification of each component including package installation, software configuration and resources
such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.
For more information on Opswork, please visit the link:
NEW QUESTION 11
Which of the following is not a component of Elastic Beanstalk?
- A. Application
- B. Environment
- C. Docker
- D. Application Version
Answer - C
The following are the components of Elastic Beanstalk
1) Application - An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, versions, and environment configurations. In Elastic Beanstalk an application is conceptually similar to a folder
2) Application Version - In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application
3) Environment - An environment is a version that is deployed onto AWS resources. Each environment runs only a single application version at a time, however you can run the same version or different versions in many environments at the same time.
4) Environment Configuration - An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave.
5) Configuration Template - A configuration template is a starting point for creating unique environment configurations. For more information on the components of Elastic Beanstalk please refer to the below link
NEW QUESTION 12
Which of the following can be used in CloudFormation to coordinate the creation of stack resources? Choose 2 answers from the options given below
- A. AWS::CloudFormation::HoldCondition
- B. AWS::CloudFormation::WaitCondition
- C. HoldPolicy attribute
- D. CreationPolicy attribute
The AWS Documentation mentions the following
Using the AWS::CloudFormation::WaitCondition resource and CreationPolicy attribute, you can do the following:
Coordinate stack resource creation with other configuration actions that are external to the stack creation
Track the status of a configuration process. For more information on wait conditions, please refer to the below link:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html
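A CreationPolicy of the kind described can be sketched as a template fragment, expressed here as a Python dict mirroring the template JSON; the resource name and AMI ID are placeholders:

```python
# CloudFormation holds this resource in CREATE_IN_PROGRESS until it
# receives one success signal (typically sent from the instance with
# cfn-signal) or the 15-minute timeout expires.
resource = {
    "WebServer": {
        "Type": "AWS::EC2::Instance",
        "Properties": {"ImageId": "ami-12345678"},  # placeholder AMI
        "CreationPolicy": {
            "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
        },
    }
}
```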
NEW QUESTION 13
You are responsible for an application that leverages the Amazon SDK and Amazon EC2 roles for storing and retrieving data from Amazon S3, accessing multiple DynamoDB tables, and exchanging messages with Amazon SQS queues. Your VP of Compliance is concerned that you are not following security best practices for securing all of this access. He has asked you to verify that the application's AWS access keys are not older than six months and to provide control evidence that these keys will be rotated a minimum of once every six months.
Which option will provide your VP with the requested information?
- A. Create a script to query the IAM list-access-keys API to get your application access key creation date and create a batch process to periodically create a compliance report for your VP.
- B. Provide your VP with a link to IAM AWS documentation to address the VP's key rotation concerns.
- C. Update your application to log changes to its AWS access key credential file and use a periodic Amazon EMR job to create a compliance report for your VP.
- D. Create a new set of instructions for your configuration management tool that will periodically create and rotate the application's existing access keys and provide a compliance report to your VP.
The question focuses on IAM roles rather than access keys for accessing the services; since the application uses EC2 roles, AWS takes care of the temporary credentials provided through the roles when accessing these services.
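Although roles make key rotation moot here, the kind of report option A describes can be sketched as a check over list-access-keys output; the key IDs and dates below are made up for illustration:

```python
from datetime import datetime, timedelta

def stale_keys(key_metadata, now=None, max_age_days=180):
    """Return access key IDs older than max_age_days, mimicking a
    compliance report built from the per-key CreateDate fields that
    iam list-access-keys returns."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in key_metadata if k["CreateDate"] < cutoff]

now = datetime(2024, 1, 1)
keys = [
    {"AccessKeyId": "AKIAOLD", "CreateDate": datetime(2023, 1, 1)},
    {"AccessKeyId": "AKIANEW", "CreateDate": datetime(2023, 12, 1)},
]
report = stale_keys(keys, now=now)  # only the year-old key is flagged
```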
NEW QUESTION 14
You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with?
- A. Route53 Health Checks
- B. CloudWatch Health Checks
- C. AWS ELB Health Checks
- D. EC2 Health Checks
Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create
can monitor one of the following:
• The health of a specified resource, such as a web server
• The status of an Amazon CloudWatch alarm
• The status of other health checks
For more information on Route53 Health checks, please refer to the below link:
• http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
NEW QUESTION 15
You need to create a Route53 record automatically in CloudFormation when not running in production during all launches of a Template. How should you implement this?
- A. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production.
- B. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
- C. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record with a null string when environment is production.
- D. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.
The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value. Based on the result of that condition, you can conditionally create resources. If you have multiple conditions, separate them with commas.
You might use conditions when you want to reuse a template that can create resources in different contexts, such as a test environment versus a production environment. In your template, you can add an EnvironmentType input parameter, which accepts either prod or test as inputs. For the production environment, you might include Amazon EC2 instances with certain capabilities; however, for the test environment, you want to use reduced capabilities to save money. With conditions, you can define which resources are created and how they're configured for each environment type.
For more information on CloudFormation conditions please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
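The condition described above can be sketched as a template fragment (a Python dict mirroring the template JSON); the parameter, condition, and record names are illustrative:

```python
# Minimal template sketch: the Route53 record carries a Condition, so
# it is only created when the EnvType parameter is not "prod".
template = {
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["prod", "test"]}
    },
    "Conditions": {
        "IsNotProd": {"Fn::Not": [{"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}]}
    },
    "Resources": {
        "DnsRecord": {
            "Type": "AWS::Route53::RecordSet",
            "Condition": "IsNotProd",  # skipped entirely in production
            "Properties": {
                "Name": "test.example.com",
                "Type": "CNAME",
                "TTL": "60",
                "HostedZoneName": "example.com.",
                "ResourceRecords": ["target.example.com"],
            },
        }
    },
}
```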
NEW QUESTION 16
Of the 6 available sections on a CloudFormation template (Template Description Declaration, Template Format Version Declaration, Parameters, Resources, Mappings, Outputs), which is the only one required for a CloudFormation template to be accepted? Choose an answer from the options below
- A. Parameters
- B. Template Declaration
- C. Mappings
- D. Resources
If you refer to the documentation, you will see that Resources is the only mandatory field
Specifies the stack resources and their properties, such as an Amazon Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket.
For more information on cloudformation templates, please refer to the below link:
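A minimal accepted template, then, needs only a Resources section; sketched here as a Python dict mirroring the template JSON, with a placeholder bucket resource:

```python
import json

# The smallest valid CloudFormation template: a Resources section with
# at least one resource, and nothing else.
minimal = {
    "Resources": {
        "MyBucket": {"Type": "AWS::S3::Bucket"}
    }
}
body = json.dumps(minimal)  # the template body you would submit
```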
NEW QUESTION 17
Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?
- A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again.
- B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.
- C. Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning.
- D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.
The ideal case is that the right metric is not being used for the scale up and down.
Option A is not valid because the group stays scaled when traffic decreases, which means the metric threshold for scale-down is never being crossed in CloudWatch; a longer cooldown does not address that.
Option C is not valid because raising the CloudWatch alarm threshold will not ensure that the instances scale down when the traffic decreases.
Option D is not valid because the question does not mention any constraints that point to the instance size. For an example on using custom metrics for scaling in and out, please follow the below link for a use case.
• https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics- f396c16e5e6a
NEW QUESTION 18
Your development team is using an Elastic beanstalk environment. After a week, the environment was torn down and a new one was created. When the development team tried to access the data on the older environment, it was not available. Why is this the case?
- A. This is because the underlying EC2 Instances are created with encrypted storage and cannot be accessed once the environment has been terminated.
- B. This is because the underlying EC2 Instances are created with IOPS volumes and cannot be accessed once the environment has been terminated.
- C. This is because before the environment termination, Elastic Beanstalk copies the data to DynamoDB, and hence the data is not present in the EBS volumes
- D. This is because the underlying EC2 Instances are created with no persistent local storage
The AWS documentation mentions the following
Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage.
When the Amazon EC2 instances terminate, the local file system is not saved, and new Amazon EC2 instances start with a default file system. You should design your application to store data in a persistent data source.
For more information on Elastic Beanstalk design concepts, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.design.html
NEW QUESTION 19
Your application is experiencing very high traffic, so you have enabled autoscaling in multiple availability zones to suffice the needs of your application, but you observe that one of the availability zones is not receiving any traffic. What can be wrong here?
- A. Autoscaling only works for a single availability zone
- B. Autoscaling can be enabled for multi AZ only in the North Virginia region
- C. Availability zone is not added to the Elastic Load Balancer
- D. Instances need to be manually added to the availability zone
When you add an Availability Zone to your load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones.
For more information on adding AZs to a CLB, please refer to the below URL:
NEW QUESTION 20
As an architect you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying the applications in your company. Unfortunately, you have discovered that there is a
resource type that is not supported by CloudFormation. What can you do to get around this?
- A. Specify more mappings and separate your template into multiple templates by using nested stacks.
- B. Create a custom resource type using template developer, custom resource template, and CloudFormation.
- C. Specify the custom resource by separating your template into multiple templates by using nested stacks.
- D. Use a configuration management tool such as Chef, Puppet, or Ansible.
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack.
For more information on custom resources in Cloudformation please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
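A custom resource declaration can be sketched as below (a Python dict mirroring the template JSON); CloudFormation sends create, update, and delete events to the endpoint named by ServiceToken, which provisions the unsupported resource. The resource name, Lambda ARN, and extra properties are placeholders:

```python
# Hypothetical custom resource: the Lambda function behind ServiceToken
# receives the request (including any extra Properties) and reports
# success or failure back to CloudFormation.
custom = {
    "MyCustomThing": {
        "Type": "Custom::UnsupportedResource",
        "Properties": {
            "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:provision-thing",
            "Size": "large",  # arbitrary properties are passed to the handler
        },
    }
}
```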