Far Out Confluent Certified Developer For Apache Kafka Certification Examination CCDAK Test Preparation
Want to know Exambible CCDAK Exam practice test features? Want to learn more about the Confluent Certified Developer for Apache Kafka Certification Examination certification experience? Study guaranteed Confluent CCDAK answers to up-to-date CCDAK questions at Exambible. Get success with an absolute guarantee to pass the Confluent CCDAK (Confluent Certified Developer for Apache Kafka Certification Examination) test on your first attempt.
Also have CCDAK free dumps questions for you:
NEW QUESTION 1
Which of the following is not an Avro primitive type?
- A. string
- B. long
- C. int
- D. date
- E. null
Answer: D
Explanation:
date is a logical type
NEW QUESTION 2
A consumer has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group never committed offsets for the topic before. Where will the consumer read from?
- A. offset 2311
- B. offset 0
- C. offset 45
- D. it will crash
Answer: A
Explanation:
With no previously committed offsets, auto.offset.reset=latest means the consumer starts reading from the current end of the partition, so it only receives messages produced after it starts polling.
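The relevant consumer settings can be sketched with plain Java Properties (broker address and group id below are placeholders for illustration):

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    static Properties build() {
        Properties props = new Properties();
        // Hypothetical broker address and group id, for illustration only
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        // With no committed offsets for this group, "latest" starts
        // reading from the current end of each partition.
        props.put("auto.offset.reset", "latest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.offset.reset"));
    }
}
```

These Properties would then be passed to a KafkaConsumer constructor; the sketch only shows the configuration, not the client itself.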
NEW QUESTION 3
Which of the following Kafka Streams operators are stateful? (select all that apply)
- A. flatmap
- B. reduce
- C. joining
- D. count
- E. peek
- F. aggregate
Answer: BCDF
Explanation:
See https://kafka.apache.org/20/documentation/streams/developer-guide/dsl-api.html#stateful-transformations
NEW QUESTION 4
To continuously export data from Kafka into a target database, I should use
- A. Kafka Producer
- B. Kafka Streams
- C. Kafka Connect Sink
- D. Kafka Connect Source
Answer: C
Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases and Kafka Connect Source is used to import from external databases into Kafka.
NEW QUESTION 5
We want the average of all events in every five-minute window updated every minute. What kind of Kafka Streams window will be required on the stream?
- A. Session window
- B. Tumbling window
- C. Sliding window
- D. Hopping window
Answer: D
Explanation:
A hopping window is defined by two properties: the window's size and its advance interval (aka "hop"), e.g., a hopping window with a size of 5 minutes and an advance interval of 1 minute.
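The window-membership arithmetic can be sketched without the Streams API: with a size of 5 and an advance of 1, each event falls into size/advance = 5 overlapping windows. A minimal sketch (method names are illustrative, not Kafka Streams API):

```java
import java.util.ArrayList;
import java.util.List;

public class HoppingWindows {
    // Returns the start times of all hopping windows [start, start + size)
    // that contain the given event timestamp, where windows start at
    // multiples of `advance` (all values in the same time unit).
    static List<Long> windowsFor(long timestamp, long size, long advance) {
        List<Long> starts = new ArrayList<>();
        // smallest window start > timestamp - size, aligned to the advance
        long firstStart = Math.max(0, (timestamp - size + advance) / advance * advance);
        for (long start = firstStart; start <= timestamp; start += advance) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        // An event at minute 7 with size=5, advance=1 lands in 5 windows
        System.out.println(windowsFor(7, 5, 1).size());
    }
}
```

A tumbling window is just the special case where advance equals size, so each event belongs to exactly one window.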
NEW QUESTION 6
A kafka topic has a replication factor of 3 and min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=all can't produce?
- A. 2
- B. 1
- C. 3
Answer: B
Explanation:
acks=all with min.insync.replicas=2 means at least 2 of the 3 replicas must be in sync for the partition to accept writes, so only 1 broker can go down before the producer starts failing.
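The arithmetic behind this is simply the difference between the two settings; a minimal sketch:

```java
public class InSyncMath {
    // With acks=all, writes need minInsyncReplicas replicas up,
    // so the remaining replicas are the failures we can tolerate.
    static int tolerableFailures(int replicationFactor, int minInsyncReplicas) {
        return replicationFactor - minInsyncReplicas;
    }

    public static void main(String[] args) {
        System.out.println(tolerableFailures(3, 2)); // prints 1
    }
}
```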
NEW QUESTION 7
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    try {
        consumer.commitSync();
    } catch (CommitFailedException e) {
        log.error("commit failed", e);
    }
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s ",
            record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
}
What kind of delivery guarantee does this consumer offer?
- A. Exactly-once
- B. At-least-once
- C. At-most-once
Answer: C
Explanation:
Here the offset is committed before the messages are processed. If the consumer crashes after committing but before processing, those messages are lost when it comes back up, since it resumes from the committed offset.
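The effect of commit ordering can be simulated without a broker; the sketch below (purely illustrative, using an in-memory list of records) shows that committing before processing can drop a record on crash, while committing after processing can only re-deliver:

```java
import java.util.ArrayList;
import java.util.List;

public class DeliverySimulation {
    // Simulates a consumer that crashes at index `crashAt` and then
    // restarts from the last committed offset.
    static List<String> run(List<String> records, boolean commitFirst, int crashAt) {
        List<String> processed = new ArrayList<>();
        int committedOffset = 0;
        for (int i = 0; i < records.size(); i++) {
            if (commitFirst) {
                committedOffset = i + 1;       // commit before processing
                if (i == crashAt) break;       // crash: record i is never processed
                processed.add(records.get(i));
            } else {
                if (i == crashAt) break;       // crash before commit:
                processed.add(records.get(i)); // record i will be re-read
                committedOffset = i + 1;       // commit after processing
            }
        }
        // restart: resume from the committed offset
        for (int i = committedOffset; i < records.size(); i++) {
            processed.add(records.get(i));
        }
        return processed;
    }

    public static void main(String[] args) {
        List<String> records = List.of("a", "b", "c");
        System.out.println(run(records, true, 1));  // at-most-once: "b" is lost
        System.out.println(run(records, false, 1)); // at-least-once: nothing lost
    }
}
```

Moving consumer.commitSync() after the processing loop in the original snippet would turn it into an at-least-once consumer.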
NEW QUESTION 8
To allow consumers in a group to resume at the previously committed offset, I need to set the proper value for...
- A. value.deserializer
- B. auto.offset.resets
- C. group.id
- D. enable.auto.commit
Answer: C
Explanation:
Setting a group.id that is consistent across restarts allows consumers that are part of the same group to resume reading from where offsets were last committed for that group.
NEW QUESTION 9
You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, the topic has 2 partitions with replication factor of 3. How many tasks will you configure for the S3 connector?
- A. 10
- B. 6
- C. 3
- D. 2
Answer: D
Explanation:
You cannot have more sink tasks (= consumers) than the number of partitions, so 2.
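A connector configuration for this scenario might look like the sketch below (connector name, topic, bucket, and region are placeholders; a real deployment also needs storage/format classes and AWS credentials). Note tasks.max is set to 2 to match the partition count; a higher value would leave the extra tasks idle:

```json
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "tasks.max": "2",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "flush.size": "1000"
  }
}
```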
NEW QUESTION 10
Is KSQL ANSI SQL compliant?
- A. Yes
- B. No
Answer: B
Explanation:
KSQL is not ANSI SQL compliant; for now, there are no defined standards for streaming SQL languages.
NEW QUESTION 11
What is the default port that the KSQL server listens on?
- A. 9092
- B. 8088
- C. 8083
- D. 2181
Answer: B
Explanation:
The default port of the KSQL server is 8088.
NEW QUESTION 12
If you enable an SSL endpoint in Kafka, what feature of Kafka will be lost?
- A. Cross-cluster mirroring
- B. Support for Avro format
- C. Zero copy
- D. Exactly-once delivery
Answer: C
Explanation:
With SSL, messages will need to be encrypted and decrypted, by being first loaded into the JVM, so you lose the zero copy optimization. See more information here: https://twitter.com/ijuma/status/1161303431501324293?s=09
NEW QUESTION 13
In Avro, adding an element to an enum without a default is a schema evolution
- A. breaking
- B. full
- C. backward
- D. forward
Answer: A
Explanation:
Since Confluent 5.4.0, Avro 1.9.1 is used. Since a default value was added to the enum complex type, the schema resolution changed:
(<1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum, an error is signalled.
(>=1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum and the reader has a default value, that value is used; otherwise an error is signalled.
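An enum with a default looks like the sketch below (type and symbol names are hypothetical); with the "default" field present, a reader using this schema can resolve unknown writer symbols to UNKNOWN instead of failing, making the addition of new symbols a backward-compatible change:

```json
{
  "type": "enum",
  "name": "Status",
  "symbols": ["ACTIVE", "INACTIVE", "UNKNOWN"],
  "default": "UNKNOWN"
}
```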
NEW QUESTION 14
Your streams application is reading from an input topic that has 5 partitions. You run 5 instances of your application, each with num.streams.threads set to 5. How many stream tasks will be created and how many will be active?
- A. 5 created, 1 active
- B. 5 created, 5 active
- C. 25 created, 25 active
- D. 25 created, 5 active
Answer: D
Explanation:
Each partition is assigned to one thread, so only 5 threads will be active; 25 are created in total (5 instances with 5 threads each).
NEW QUESTION 15
Once sent to a topic, a message can be modified
- A. No
- B. Yes
Answer: A
Explanation:
Kafka logs are append-only and the data is immutable
NEW QUESTION 16
Kafka is configured with the following parameters: log.retention.hours = 168, log.retention.minutes = 168, log.retention.ms = 168. How long will the messages be retained for?
- A. Broker will not start due to bad configuration
- B. 168 ms
- C. 168 hours
- D. 168 minutes
Answer: B
Explanation:
When more than one of these retention settings is specified, the one with the smaller time unit takes precedence (ms over minutes over hours), so messages are retained for 168 ms.
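The broker configuration in question would look like this fragment of server.properties; only the last line takes effect:

```properties
# All three retention settings set at once; log.retention.ms has the
# smallest unit and takes precedence, so messages are kept for 168 ms.
log.retention.hours=168
log.retention.minutes=168
log.retention.ms=168
```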
NEW QUESTION 17
......
P.S. Certshared is now offering 100% pass-guaranteed CCDAK dumps! All CCDAK exam questions have been updated with correct answers: https://www.certshared.com/exam/CCDAK/ (150 New Questions)