Improve AWS Certified Big Data - Specialty BDS-C00 Test Engine

Exam Code: BDS-C00 (Practice Exam Latest Test Questions VCE PDF)
Exam Name: AWS Certified Big Data - Specialty
Certification Provider: Amazon Web Services
Free Today! Guaranteed Training - Pass the BDS-C00 Exam.

Online BDS-C00 free questions and answers of the new version:

NEW QUESTION 1
Which two AWS services provide out-of-the-box, user-configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers.

  • A. Amazon S3
  • B. Amazon RDS
  • C. Amazon EBS
  • D. Amazon Redshift

Answer: BD
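
Both correct options expose backup retention as an API setting. A minimal boto3 sketch (the instance and cluster names are hypothetical):

```python
import boto3

rds = boto3.client("rds")
redshift = boto3.client("redshift")

# Keep automated RDS backups (daily snapshots plus transaction logs) for 7 days.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",          # hypothetical instance name
    BackupRetentionPeriod=7,
    PreferredBackupWindow="03:00-04:00",
    ApplyImmediately=True,
)

# Keep automated Redshift snapshots for 7 days.
redshift.modify_cluster(
    ClusterIdentifier="analytics-cluster",     # hypothetical cluster name
    AutomatedSnapshotRetentionPeriod=7,
)
```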

NEW QUESTION 2
A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging
existing security controls. Which set of AWS services and features will meet the company’s requirements?

  • A. Virtual private network connection, AWS Directory Service, and ClassicLink
  • B. Virtual private network connection, AWS Directory Service, and Amazon WorkSpaces
  • C. AWS Directory Service, Amazon WorkSpaces, and AWS Identity and Access Management
  • D. Amazon Elastic Compute Cloud and AWS Identity and Access Management

Answer: B

NEW QUESTION 3
As part of your continuous deployment process, your application undergoes an I/O load performance
test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance.
Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

  • A. Ensure that the I/O block sizes for the test are randomly selected
  • B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test
  • C. Ensure that snapshots of the Amazon EBS volumes are created as a backup
  • D. Ensure that the Amazon EBS volume is encrypted
  • E. Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test

Answer: B
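
Pre-warming (initializing) means touching every block so that lazy loading from the snapshot does not skew the first timed run. A minimal sketch, assuming the volume is attached as /dev/xvdf (a hypothetical device path); run it on the instance as root before each test:

```python
# Touch every block of an EBS volume restored from a snapshot so the first
# timed test run is not penalized by lazy block loading.
CHUNK = 1024 * 1024  # read 1 MiB at a time

with open("/dev/xvdf", "rb") as device:
    while device.read(CHUNK):
        pass
```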

NEW QUESTION 4
A web-hosting company is building a web analytics tool to capture clickstream data from all of the websites hosted within its platform and to provide near-real-time business intelligence. This entire system is built on AWS services. The web-hosting company is interested in using Amazon Kinesis to collect this data and perform sliding window analytics. What is the most reliable and fault-tolerant technique to get each website to send data to Amazon Kinesis with every click?

  • A. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis PutRecord API. Use the SessionID as a partition key and set up a loop to retry until a success response is received.
  • B. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis Producer Library addRecord method.
  • C. Each web server buffers the requests until the count reaches 500 and sends them to Amazon Kinesis using the Amazon Kinesis PutRecord API call.
  • D. After receiving a request, each web server sends it to Amazon Kinesis using the Amazon Kinesis PutRecord API. Use the exponential back-off algorithm for retries until a successful response is received.

Answer: A
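
A minimal boto3 sketch of the correct option: PutRecord with the SessionID as the partition key and a retry loop until a success response is received (the stream name, helper name, and retry cap are illustrative assumptions):

```python
import json
import time

import boto3
from botocore.exceptions import ClientError

kinesis = boto3.client("kinesis")

def send_click(session_id, click_event, stream_name="clickstream", max_attempts=5):
    """Send one click to Kinesis, retrying until a success response is received."""
    record = json.dumps(click_event).encode("utf-8")
    for attempt in range(max_attempts):
        try:
            return kinesis.put_record(
                StreamName=stream_name,
                Data=record,
                PartitionKey=session_id,   # SessionID spreads load across shards
            )
        except ClientError:
            time.sleep(2 ** attempt)       # simple back-off between retries
    raise RuntimeError("click event could not be delivered")
```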

NEW QUESTION 5
A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?

  • A. Enable access logs on the load balancer
  • B. Enable AWS CloudTrail for the load balancer
  • C. Enable Amazon CloudWatch metrics on the load balancer
  • D. Install the Amazon CloudWatch Logs agent on the load balancer

Answer: A
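
Access logs on a Classic Load Balancer can be emitted at 5-minute intervals. A minimal boto3 sketch (the load balancer and bucket names are hypothetical):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="web-lb",                     # hypothetical load balancer
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # hypothetical bucket
            "S3BucketPrefix": "web-lb",
            "EmitInterval": 5,                     # publish logs every 5 minutes
        }
    },
)
```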

NEW QUESTION 6
Company A operates in Country X and maintains a large dataset of historical purchase orders that contains personal data of their customers in the form of full names and telephone numbers. The dataset consists of 5 text files, 1 TB each. Currently the dataset resides on-premises due to legal requirements for storing personal data in-country. The research and development department needs to run a clustering algorithm on the dataset and wants to use the Elastic MapReduce (EMR) service in the closest AWS Region. Due to geographic distance, the minimum latency between the on-premises system and the closest AWS Region is 200 ms.
Which option allows Company A to do clustering in the AWS Cloud and meet the legal requirement of maintaining personal data in-country?

  • A. Anonymize the personal data portions of the dataset and transfer the data files into Amazon S3 in the AWS Region. Have the EMR cluster read the dataset using EMRFS.
  • B. Establish a Direct Connect link between the on-premises system and the AWS Region to reduce latency. Have the EMR cluster read the data directly from the on-premises storage system over Direct Connect.
  • C. Encrypt the data files according to encryption standards of Country X and store them in the AWS Region in Amazon S3. Have the EMR cluster read the dataset using EMRFS.
  • D. Use an AWS Import/Export Snowball device to securely transfer the data to the AWS Region and copy the files onto an EBS volume. Have the EMR cluster read the dataset using EMRFS.

Answer: A

NEW QUESTION 7
You are designing a web application that stores static assets in an Amazon Simple Storage
Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

  • A. Use multi-part upload.
  • B. Add a random prefix to the key names.
  • C. Amazon S3 will automatically manage performance at this scale.
  • D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

Answer: B
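
At the time this question was written, S3 guidance was to randomize the left-most characters of key names so requests spread across index partitions (S3 has since raised per-prefix request limits). A minimal sketch of a hash-derived prefix (the function name is illustrative):

```python
import hashlib

def prefixed_key(original_key: str) -> str:
    """Prepend a short hash-derived prefix so keys spread across S3 index partitions."""
    prefix = hashlib.md5(original_key.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{original_key}"

# e.g. "assets/2014/img_0001.png" -> "f1a3/assets/2014/img_0001.png"
```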

NEW QUESTION 8
An Amazon Kinesis stream needs to be encrypted. Which approach should be used to accomplish this task?

  • A. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the producer
  • B. Use a partition key to segment the data by MD5 hash function, which makes it indecipherable while in transit
  • C. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the consumer
  • D. Use a shard to segment the data which has built-in functionality to make it indecipherable while in transit

Answer: A
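
A minimal sketch of producer-side (client-side) encryption before PutRecord. It uses the symmetric Fernet recipe from the third-party cryptography package purely for illustration; in practice the data key would typically come from an envelope-encryption scheme such as AWS KMS:

```python
import boto3
from cryptography.fernet import Fernet   # third-party 'cryptography' package

kinesis = boto3.client("kinesis")
key = Fernet.generate_key()              # in practice, manage this key securely (e.g. via KMS)
cipher = Fernet(key)

def put_encrypted(stream_name: str, partition_key: str, payload: bytes):
    """Encrypt on the producer, then send only ciphertext into the stream."""
    ciphertext = cipher.encrypt(payload)
    return kinesis.put_record(
        StreamName=stream_name,
        Data=ciphertext,
        PartitionKey=partition_key,
    )
```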

NEW QUESTION 9
In AWS, which security aspects are the customer’s responsibility? Choose 4 answers

  • A. Life-Cycle management of IAM credentials
  • B. Security Group and ACL settings
  • C. Controlling physical access to compute resources
  • D. Patch management on the EC2 instance’s operating system
  • E. Encryption of EBS volumes
  • F. Decommissioning storage devices

Answer: ABDE

NEW QUESTION 10
A company needs a churn prevention model to predict which customers will NOT renew their yearly subscription to the company’s service. The company plans to provide these customers with a promotional offer. A binary classification model that uses Amazon Machine Learning is required. On which basis should this binary classification model be built?

  • A. User profiles (age, gender, income, occupation)
  • B. Last user session
  • C. Each user’s time series of events in the past 3 months
  • D. Quarterly results

Answer: C

NEW QUESTION 11
A large grocery distributor receives daily depletion reports from the field in the form of gzip archives of CSV files uploaded to Amazon S3. The files range from 500 MB to 5 GB. These files are processed daily by an EMR job.
Recently it has been observed that the file sizes vary, and the EMR jobs take too long. The distributor needs to tune and optimize the data processing workflow with this limited information to improve the performance of the EMR job.
Which recommendation should an administrator provide?

  • A. Reduce the HDFS block size to increase the number of task processors
  • B. Use bzip2 or Snappy rather than gzip for the archives
  • C. Decompress the gzip archives and store the data as CSV files
  • D. Use Avro rather than gzip for the archives

Answer: B
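
gzip archives are not splittable, so a single mapper must read each multi-gigabyte file; bzip2 (or Snappy inside a container format) lets EMR split the input across tasks. A minimal re-encoding sketch (the file names are hypothetical):

```python
import bz2
import gzip
import shutil

def gzip_to_bzip2(src_gz: str, dst_bz2: str) -> None:
    """Re-encode a .gz archive as .bz2 so Hadoop/EMR can split it across mappers."""
    with gzip.open(src_gz, "rb") as src, bz2.open(dst_bz2, "wb") as dst:
        shutil.copyfileobj(src, dst, length=1024 * 1024)

# Example: gzip_to_bzip2("depletion-2016-01-01.csv.gz", "depletion-2016-01-01.csv.bz2")
```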

NEW QUESTION 12
A system needs to collect on-premises application spool files into a persistent storage layer in AWS. Each spool file is 2 KB. The application generates 1 million files per hour. Each source file is automatically deleted from the local server after one hour. What is the most cost-efficient option to meet these requirements?

  • A. Write file contents to an Amazon DynamoDB table
  • B. Copy files to Amazon S3 standard storage
  • C. Write file content to Amazon ElastiCache
  • D. Copy files to Amazon S3 infrequent Access storage

Answer: A
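
The selected option writes each 2 KB spool file as a DynamoDB item, avoiding one S3 PUT request per file. A minimal boto3 sketch (the table name and key schema are hypothetical):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("spool-files")        # hypothetical table with partition key "file_id"

def store_spool_files(files):
    """files: iterable of (file_id, 2 KB content) tuples."""
    with table.batch_writer() as batch:      # batches up to 25 items per request
        for file_id, content in files:
            batch.put_item(Item={"file_id": file_id, "content": content})
```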

NEW QUESTION 13
A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of all log data for at least 30 days.
What is the simplest architecture that will allow the architect to analyze the logs?

  • A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
  • B. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 events to load the events into Amazon Elasticsearch Service for analysis.
  • C. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
  • D. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.

Answer: A
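
A minimal boto3 sketch of the selected option, writing each log event straight to a Firehose delivery stream (the stream and helper names are hypothetical):

```python
import json

import boto3

firehose = boto3.client("firehose")

def log_event(event: dict, stream: str = "app-logs-to-redshift"):
    """Send one application log event straight to a Kinesis Firehose delivery stream."""
    firehose.put_record(
        DeliveryStreamName=stream,                       # hypothetical delivery stream
        Record={"Data": json.dumps(event) + "\n"},       # newline-delimited for Redshift COPY
    )
```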

NEW QUESTION 14
A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is
disabled. The user wants to now enable detailed monitoring. How can the user achieve this?

  • A. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
  • B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
  • C. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
  • D. Create a new Launch Config with detailed monitoring enabled and update the Auto Scaling group

Answer: D
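
Launch configurations cannot be modified in place, which is why a new one must be created and attached to the group. A minimal boto3 sketch (all names and the AMI ID are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable, so create a new one with detailed monitoring...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",          # hypothetical names
    ImageId="ami-12345678",
    InstanceType="m4.large",
    InstanceMonitoring={"Enabled": True},         # 1-minute CloudWatch detailed monitoring
)

# ...and point the Auto Scaling group at it.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)
```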

NEW QUESTION 15
A data engineer is about to perform a major upgrade to the DDL contained within an Amazon Redshift cluster to support a new data warehouse application. The upgrade scripts will include user permission updates, view and table structure changes as well as additional loading and data manipulation tasks. The data engineer must be able to restore the database to its existing state in the event of issues.
Which action should be taken prior to performing this upgrade task?

  • A. Run an UNLOAD command for all data in the warehouse and save it to S3
  • B. Create a manual snapshot of the Amazon Redshift cluster
  • C. Make a copy of the automated snapshot on the Amazon Redshift cluster
  • D. Call the waitForSnapshotAvailable command from either the AWS CLI or an AWS SDK

Answer: B
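
A minimal boto3 sketch of the selected option, taking the manual snapshot and waiting for it to become available before the DDL scripts run (the identifiers are hypothetical):

```python
import boto3

redshift = boto3.client("redshift")

# Take a manual snapshot before running the DDL upgrade scripts.
redshift.create_cluster_snapshot(
    SnapshotIdentifier="pre-ddl-upgrade",          # hypothetical identifiers
    ClusterIdentifier="dw-cluster",
)

# Block until the snapshot is available so the upgrade only starts once a
# restore point exists.
waiter = redshift.get_waiter("snapshot_available")
waiter.wait(
    ClusterIdentifier="dw-cluster",
    SnapshotIdentifier="pre-ddl-upgrade",
)
```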

NEW QUESTION 16
A city has been collecting data on its public bicycle share program for the past three years. The SPB dataset currently resides on Amazon S3. The data contains the following data points:
• Bicycle origination points
• Bicycle destination points
• Mileage between the points
• Number of bicycle slots available at the station (which is variable based on the station location)
• Number of slots available and taken at each station at a given time
The program has received additional funds to increase the number of bicycle stations available. All data is regularly archived to Amazon Glacier.
The new bicycle station must be located to provide the most riders access to bicycles. How should this task be performed?

  • A. Move the data from Amazon S3 into Amazon EBS-backed volumes and use an EC2-based Hadoop cluster with Spot Instances to run a Spark job that performs a stochastic gradient descent optimization.
  • B. Use the Amazon Redshift COPY command to move the data from Amazon S3 into Amazon Redshift and perform a SQL query that outputs the most popular bicycle stations.
  • C. Persist the data on Amazon S3 and use a transient EMR cluster with Spot Instances to run a Spark streaming job that will move the data into Amazon Kinesis.
  • D. Keep the data on Amazon S3 and use an Amazon EMR-based Hadoop cluster with Spot Instances to run a Spark job that performs a stochastic gradient descent optimization over EMRFS.

Answer: B

NEW QUESTION 17
A game company needs to properly scale its game application, which is backed by DynamoDB.
Amazon Redshift has the past two years of historical data. Game traffic varies throughout the year based on various factors such as season, movie release, and holiday season. An administrator needs to calculate how much read and write throughput should be provisioned for the DynamoDB table for each week in advance.
How should the administrator accomplish this task?

  • A. Feed the data into Amazon Machine Learning and build a regression model
  • B. Feed the data into Spark MLlib and build a random forest model
  • C. Feed the data into Apache Mahout and build a multi-classification model
  • D. Feed the data into Amazon Machine Learning and build a binary classification model

Answer: A
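
Whatever tool produces the weekly numeric forecast, applying it to the table is a single UpdateTable call. A minimal boto3 sketch (the table name, helper, and capacity numbers are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def apply_weekly_forecast(table_name: str, predicted_reads: int, predicted_writes: int):
    """Set next week's provisioned capacity from the regression model's prediction."""
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": predicted_reads,
            "WriteCapacityUnits": predicted_writes,
        },
    )

# e.g. apply_weekly_forecast("game-state", predicted_reads=1200, predicted_writes=400)
```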

NEW QUESTION 18
A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies
the security group of that DB. How can the user configure that?

  • A. It is not possible to get the notifications on a change in the security group
  • B. Configure SNS to monitor security group changes
  • C. Configure event notification on the DB security group
  • D. Configure the CloudWatch alarm on the DB for a change in the security group

Answer: C
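
A minimal boto3 sketch of the selected option, subscribing an SNS topic to configuration-change events on the DB security group (all names and the topic ARN are hypothetical):

```python
import boto3

rds = boto3.client("rds")

rds.create_event_subscription(
    SubscriptionName="db-secgroup-changes",                     # hypothetical names
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-alerts",
    SourceType="db-security-group",
    EventCategories=["configuration change"],
    SourceIds=["mydbsecuritygroup"],
    Enabled=True,
)
```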

NEW QUESTION 19
A solutions architect works for a company that has a data lake based on a central Amazon S3 bucket.
The data contains sensitive information. The architect must be able to specify exactly which files each user can access. Users access the platform through a SAML-federated single sign-on platform.
The architect needs to build a solution that allows fine-grained access control, traceability of access to the objects, and usage of the standard tools (AWS Console, AWS CLI) to access the data.
Which solution should the architect build?

  • A. Use Amazon S3 Server-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS to allow access to specific elements of the platform. Use AWS CloudTrail for auditing.
  • B. Use Amazon S3 Server-Side Encryption with Amazon S3-Managed Keys. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing.
  • C. Use Amazon S3 Client-Side Encryption with Client-Side Master Keys. Set Amazon S3 ACLs to allow access to specific elements of the platform. Use Amazon S3 access logs for auditing.
  • D. Use Amazon S3 Client-Side Encryption with AWS KMS-Managed Keys for storing data. Use AWS KMS to allow access to specific elements of the platform. Use AWS CloudTrail for auditing.

Answer: A
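
A minimal boto3 sketch of storing an object with SSE-KMS so that object access is governed by the KMS key policy and recorded by CloudTrail (the bucket, key, and CMK alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Store an object with Server-Side Encryption using an AWS KMS-managed key;
# decryption requests are then traceable through AWS CloudTrail KMS events.
with open("report.csv", "rb") as body:
    s3.put_object(
        Bucket="data-lake-central",                # hypothetical bucket
        Key="finance/2017/q1/report.csv",
        Body=body,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/data-lake-finance",     # hypothetical CMK alias
    )
```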

NEW QUESTION 20
A medical record filing system for a government medical fund is using an Amazon S3 bucket to
archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month. Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.
Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors spend a significant amount of time locating such files.
What is the most cost- and time-efficient collection methodology in this situation?

  • A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • B. Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • C. Use Amazon S3 event notifications to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
  • D. Use Amazon S3 event notifications to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.

Answer: D
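
Whichever store receives the metadata, it is typically populated by a Lambda function triggered by the S3 event notification. A minimal sketch that parses the fields auditors search on out of the object key (the key layout shown is purely an assumption):

```python
# Minimal Lambda handler fired by S3 event notifications. It pulls the searchable
# fields (date, physician, patient) out of the object key so they can be written
# to the chosen metadata store. Hypothetical key layout:
#   "2017/03/physician-123/patient-456/visit.pdf"

def handler(event, context):
    records = []
    for rec in event["Records"]:
        key = rec["s3"]["object"]["key"]
        year, month, physician, patient, filename = key.split("/")
        records.append({
            "bucket": rec["s3"]["bucket"]["name"],
            "key": key,
            "year": year,
            "month": month,
            "physician": physician,
            "patient": patient,
        })
    # load `records` into the metadata table (e.g. via batched inserts) here
    return records
```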

NEW QUESTION 21
You need to design a VPC for a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database. The entire infrastructure must be distributed over 2 Availability Zones.
Which VPC configuration works while assuring the database is not available from the Internet?

  • A. One public subnet for the ELB, one public subnet for the web servers, and one private subnet for the database
  • B. One public subnet for the ELB, two private subnets for the web servers, and two private subnets for RDS
  • C. Two public subnets for the ELB, two private subnets for the web servers, and two private subnets for RDS
  • D. Two public subnets for the ELB, two public subnets for the web servers, and two public subnets for RDS

Answer: C

NEW QUESTION 22
An organization uses Amazon Elastic MapReduce (EMR) to process a series of extract-transform-load
(ETL) steps that run in sequence. The output of each step must be fully processed in subsequent steps but will not be retained.
Which of the following techniques will meet this requirement most efficiently?

  • A. Use the EMR File System (EMRFS) to store the outputs from each step as objects in Amazon Simple Storage Service (S3).
  • B. Use the s3n URI to store the data to be processed as objects in Amazon S3.
  • C. Define the ETL steps as separate AWS Data Pipeline activities.
  • D. Load the data to be processed into HDFS and then write the final output to Amazon S3.

Answer: D

NEW QUESTION 23
You have written a server-side Node.js application and a web application with an HTML/JavaScript
front end that uses the Angular.js framework. The server-side application connects to an Amazon Redshift cluster, issues queries, and then returns the results to the front end for display. Your user base is very large and distributed, but it is important to keep the cost of running this application low. Which deployment strategy is both technically valid and the most cost-effective?

  • A. Deploy an AWS Elastic Beanstalk application with two environments: one for the Node.js application and another for the web front end. Launch an Amazon Redshift cluster, and point your application to its Java Database Connectivity (JDBC) endpoint.
  • B. Deploy an AWS OpsWorks stack with three layers: a static web server layer for your front end, a Node.js app server layer for your server-side application, and a Redshift DB layer for the Amazon Redshift cluster.
  • C. Upload the HTML, CSS, images, and JavaScript for the front end to an Amazon Simple Storage Service (S3) bucket. Create an Amazon CloudFront distribution with this bucket as its origin. Use AWS Elastic Beanstalk to deploy the Node.js application. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint.
  • D. Upload the HTML, CSS, images, and JavaScript for the front end, plus the Node.js code for the server-side application, to an Amazon S3 bucket. Create a CloudFront distribution with this bucket as its origin. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint.
  • E. Upload the HTML, CSS, images, and JavaScript for the front end to an Amazon S3 bucket. Use AWS Elastic Beanstalk to deploy the Node.js application. Launch an Amazon Redshift cluster, and point your application to its JDBC endpoint.

Answer: C

NEW QUESTION 24
......

P.S. Easily pass the BDS-C00 exam with 264 Q&As using the Simply pass Dumps & PDF version. Welcome to download the newest Simply pass BDS-C00 dumps: https://www.simply-pass.com/Amazon-Web-Services-exam/BDS-C00-dumps.html (264 New Questions)