
The configuration parameter to specify the log cleanup policy in Kafka brokers.
_ is the first broker that starts in a cluster and is responsible for electing the leader for a partition.
_ is the minimum amount of data needed in a topic partition for a consumer to request data from the broker.
The _ configuration parameter is used to specify the time between each heartbeat to the group coordinator.
_ is the configuration parameter that is used to specify the number of bytes the broker will return from each partition.
_ is the list of host:port pairs of brokers used by producers and consumers to initiate a connection to the Kafka cluster.
Choose the command used to get the topic, partition and leader details in the cluster.
In order to delete a message completely from the segment, the producer produces the same message again with its key having a null value.
In order to stay in sync with the leader, replicas send _ requests to the broker containing the leader of the partition.
In a Producer application the messages are sent to a _ to convert keys and values from strings to byte arrays before sending.
Log compaction is enabled using the _ configuration parameter.
_ maintains the list of brokers in a Kafka cluster.
Synchronous send() uses the Future.get() method, which waits for a reply from the broker. If the record is sent successfully, it returns a _ object.
The _ API provides a set of tools or scripts to manage topics, brokers, partitions and ACLs.
The _ configuration parameter specifies the maximum amount of time a replica can be delayed in replicating a new message.
The _ is the only consumer that has the complete list of all consumers in the consumer group with their partition assignments.
The _ is the replica that was the leader when that topic was originally created.
The _ is used to maintain a mapping from offsets to segment files and positions within the file.
The _ request is used by producer and consumer applications to find the leader of each partition replica.
The _ runs on each port the broker is listening on, accepts a client connection and hands over the connection for further processing.
The _ takes client connections from producers and transfers the connection to a request queue in the broker.
The _ threads are responsible for picking up and processing requests placed in the request queue by the processor thread.
The _ tool provided by the AdminClient API helps in operations like topic creation, modification, deletion and listing.
The _ API implements connectors to pull data from a source data system and push it to a sink data system.
The _ tool provided by the AdminClient API helps in overriding the retention period of a topic.
The follower replicas are used only for replication purposes and do not serve client requests.
The heartbeats are sent to the group coordinator when the consumer retrieves records from the Kafka broker using the _ method.
The Kafka topic partitions are again split into _, each of which is a file that contains messages and their offsets.
The log cleanup policy where messages are compacted by retaining, for each key, only the latest value.
The maximum time for which the group coordinator will wait for the consumer to send a heartbeat to maintain its membership in the group.
The producer send() method can use a callback function which gets triggered when the send() method receives a response from the broker.
The producer uses the _ request that contains the messages the producer wants to write to the brokers.
The replicas that keep up with the leader are called _.
The response obtained from a broker on a metadata request is cached, and the cached information is refreshed at the _ interval.
The value for the acks parameter that specifies that the leader will acknowledge the producer as soon as the record is written to its local log.
The value to be set for acks to specify that the leader should wait till it receives acknowledgement from the full set of In-Sync Replicas.
The _ configuration parameter is used to specify the amount of memory used to buffer records waiting to be sent to brokers.
When a consumer in a consumer group fails, the partitions it was assigned will be transferred to another consumer in the group; this is called _.
Which value for acks provides the least guarantee for successful message delivery, but is the fastest, providing high throughput?
When a consumer wants to join a consumer group it will send a request to a special broker in the cluster called the _.
The _ API allows real-time processing of streams of records from input topics and publishing them to other topics.
When a consumer wants to join a consumer group it will send a _ request to the group coordinator.
_ maintains the list of brokers in a Kafka cluster. - Zookeeper
The consumer maintains its membership in the consumer group by sending _ to the group coordinator.
In Kafka, each partition has an index which is a mapping of offsets to messages.
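The configuration parameters named in the questions above are typically gathered into a Properties object before constructing a producer or consumer. A minimal sketch using only the JDK; the keys come from the questions, but the values (and the localhost:9092 address) are illustrative assumptions, not recommendations:

```java
import java.util.Properties;

public class KafkaConfigSketch {
    // Collect the configuration parameters referenced by the quiz questions.
    // Keys are real Kafka parameter names; values here are only examples.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // host:port pairs used to initiate the connection
        props.put("acks", "all");                          // leader waits for the full set of in-sync replicas
        props.put("buffer.memory", "33554432");            // memory for records waiting to be sent to brokers
        props.put("fetch.min.bytes", "1");                 // minimum data needed for a consumer fetch
        props.put("heartbeat.interval.ms", "3000");        // time between heartbeats to the group coordinator
        props.put("max.partition.fetch.bytes", "1048576"); // bytes the broker returns from each partition
        props.put("metadata.max.age.ms", "300000");        // refresh interval for cached metadata
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("bootstrap.servers"));
    }
}
```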

In a Producer application the messages are sent to a _ to convert keys and values from strings to byte arrays before sending to topic partitions in a broker. - Serializer
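As a sketch of what the StringSerializer does before the producer sends a record: the real class is org.apache.kafka.common.serialization.StringSerializer; this standalone stand-in only mimics the string-to-bytes conversion.

```java
import java.nio.charset.StandardCharsets;

public class StringSerializerSketch {
    // Convert a String key or value to the byte array the producer actually sends.
    public static byte[] serialize(String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(serialize("order-42").length); // number of bytes sent on the wire
    }
}
```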
_ is the first broker that starts in a cluster and is responsible for electing the leader for a partition. - Controller

The replicas that keep up with the leader are called _. - ISR (In-Sync Replicas)
Log compaction is done on active segments of a partition log. - False (the active segment is never compacted)
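Compaction keeps only the latest value for each key, and a record with a null value (a tombstone) deletes the key, which is how resending a message's key with a null value removes it from the log. A hypothetical standalone simulation of that retention rule:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogCompactionSketch {
    // Simulate compaction over a sequence of (key, value) records:
    // keep only the latest value per key; a null value acts as a
    // tombstone and removes the key entirely.
    public static Map<String, String> compact(String[][] records) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] rec : records) {
            if (rec[1] == null) {
                latest.remove(rec[0]);      // tombstone deletes the key
            } else {
                latest.put(rec[0], rec[1]); // later value replaces the earlier one
            }
        }
        return latest;
    }
}
```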
The _ takes client connections from producers and transfers the connection to a request queue in the broker. - Processor thread
The follower replicas are used only for replication purposes and do not serve client requests. - False
The _ runs on each port the broker is listening on, accepts a client connection and hands over the connection for further processing. - Acceptor thread
The _ request is used by producer and consumer applications to find the leader of each partition replica. - Metadata
The _ threads are responsible for picking up and processing requests placed in the request queue by the processor thread. - IO threads

The _ is a producer configuration parameter that is set to specify the number of acknowledgements the leader needs to receive before responding to the producer. - acks
The value for the acks parameter that specifies that the leader will acknowledge the producer as soon as the record is written to its local log. - acks=1
The value to be set for acks to specify that the leader should wait till it receives acknowledgement from the full set of In-Sync Replicas. - acks=all (equivalent to acks=-1)
The producer send() method can use a callback function which gets triggered when the send() method receives a response from the broker; this is asynchronous send. - True
Which value for acks provides the least guarantee for successful message delivery, but is the fastest, providing high throughput? - acks=0
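The three acks settings above trade durability for latency. A small standalone sketch (the guarantee labels are informal descriptions, not Kafka API names):

```java
public class AcksSketch {
    // Map each acks setting from the questions above to its delivery guarantee:
    // "0": fire-and-forget, fastest, least guarantee.
    // "1": leader acknowledges once the record is in its local log.
    // "all"/"-1": leader waits for the full set of in-sync replicas.
    public static String guarantee(String acks) {
        switch (acks) {
            case "0":  return "none";
            case "1":  return "leader-only";
            case "all":
            case "-1": return "full-ISR";
            default:   throw new IllegalArgumentException("unknown acks: " + acks);
        }
    }
}
```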
The _ API implements connectors to pull data from a source data system and push it to a sink data system. - Connect
When a consumer wants to join a consumer group it will send a request to a special broker in the cluster called the _. - Group coordinator
_ is the minimum amount of data needed in a topic partition for a consumer to request data from the broker. - fetch.min.bytes
The _ API allows real-time processing of streams of records from input topics and publishing them to other topics. - Streams
When a consumer wants to join a consumer group it will send a _ request to the group coordinator. - JoinGroup
The heartbeats are sent to the group coordinator when the consumer retrieves records from the Kafka broker using the _ method. - poll()
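Because heartbeats keep the consumer's membership alive, heartbeat.interval.ms must be lower than the session timeout, commonly no more than a third of it, so several heartbeats fit inside one session window. A sketch of that sanity check (the one-third rule here is common guidance, not something Kafka enforces with exactly this formula):

```java
public class HeartbeatConfigSketch {
    // Check that at least three heartbeats fit within one session timeout,
    // so a single delayed heartbeat does not evict the consumer from the group.
    public static boolean isSane(int heartbeatIntervalMs, int sessionTimeoutMs) {
        return heartbeatIntervalMs * 3 <= sessionTimeoutMs;
    }
}
```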
Synchronous send() uses the Future.get() method, which waits for a reply from the broker. If the record is sent successfully, it returns a _ object. - RecordMetadata
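The synchronous pattern can be sketched with a plain CompletableFuture standing in for the Future that producer.send() returns; RecordMetadataSketch is a made-up stand-in for Kafka's RecordMetadata, which carries the offset of the written record.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class SyncSendSketch {
    // Stand-in for Kafka's RecordMetadata: exposes the offset of the written record.
    static class RecordMetadataSketch {
        final long offset;
        RecordMetadataSketch(long offset) { this.offset = offset; }
    }

    // Stand-in for producer.send(record): returns a Future that completes
    // with the metadata once the (simulated) broker replies.
    static Future<RecordMetadataSketch> send() {
        return CompletableFuture.completedFuture(new RecordMetadataSketch(42L));
    }

    // Synchronous send: Future.get() blocks until the broker's reply arrives,
    // then the metadata gives the offset of the message written to the broker.
    public static long sendAndWait() {
        try {
            return send().get().offset;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```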

The _ is the only consumer that has the complete list of all consumers in the consumer group with their partition assignments. - Group Leader
The response obtained from a broker on a metadata request is cached, and the cached information is refreshed at the _ interval. - metadata.max.age.ms
In order to stay in sync with the leader, replicas send _ requests to the broker containing the leader of the partition. - Fetch
The producer uses the _ request that contains the messages the producer wants to write to the brokers. - Produce
The _ API provides a set of tools or scripts to manage topics, brokers, partitions and ACLs. - AdminClient
The _ configuration parameter is used to specify the time between each heartbeat to the group coordinator. - heartbeat.interval.ms
The _ configuration parameter is used to specify the amount of memory used to buffer records waiting to be sent to brokers. - buffer.memory
log.cleaner.enable
Controller
fetch.min.bytes
heartbeat.interval.ms
max.partition.fetch.bytes
bootstrap.servers
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic Multibrokerapplication
1
Produce
Serializer
log.cleaner.enable
Zookeeper
RecordMetadata
AdminClient
max.block.ms
Group Leader
Preferred Leader
Partitions
Metadata
acceptor thread
processor thread
IO
kafka-topics.sh
Connect
kafka-configs.sh
1
poll()
Segments
compact
heartbeat.interval.ms
1
Fetch
ISRs
metadata.max.age.ms
acks=1
acks=0
buffer.memory
partition rebalance
acks=1
Group coordinator
Streams
JoinGroup

Heartbeats
0
0

The value to be set for acks to specify that the leader should wait till receiving acknowledgement from all followers. - acks=-1
The producer send() method uses a callback function which gets triggered when the send() method receives a response from the broker; this is asynchronous send. - True
The asynchronous send() method takes a _ object which is used to get the offset of the message written to the broker. - Callback


// Producer example (fragment):
kafkaProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(kafkaProps);
ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);
producer.send(record);
producer.close();

// Consumer example (fragment):
consumer.subscribe(Collections.singletonList(topics));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> rec : records) {
        System.out.println(rec.value());
    }
}
