Kafka provides the kafka-producer-perf-test.sh tool for load testing.
parameter            | explanation
messages             | Total number of messages the producer sends
message-size         | Size of each message in bytes
batch-size           | Number of messages sent per batch
topics               | Topic the producer sends to
threads              | Number of threads the producer uses to send concurrently
broker-list          | List of ip:port of the brokers running the Kafka service
producer-num-retries | Number of retries when a message fails to send
request-timeout-ms   | Timeout for a message request
./kafka-producer-perf-test.sh --topic log.business --num-records 1000000 --record-size 500 --throughput 1000000 --threads 100 --batch-size 4096 --producer-props bootstrap.servers=*****:9092 --sync
It then turned out that several of these parameters, including message-size, batch-size, threads, and sync, no longer exist in version 1.0.0, so the correct usage had to be worked out from the script's own help output.
Run command:
[udap@10 bin]$ ./kafka-producer-perf-test.sh
usage: producer-performance [-h] --topic TOPIC --num-records NUM-RECORDS
                            [--payload-delimiter PAYLOAD-DELIMITER]
                            --throughput THROUGHPUT
                            [--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]]
                            [--producer.config CONFIG-FILE] [--print-metrics]
                            [--transactional-id TRANSACTIONAL-ID]
                            [--transaction-duration-ms TRANSACTION-DURATION]
                            (--record-size RECORD-SIZE | --payload-file PAYLOAD-FILE)

This tool is used to verify the producer performance.

optional arguments:
  -h, --help             show this help message and exit
  --topic TOPIC          produce messages to this topic
  --num-records NUM-RECORDS
                         number of messages to produce
  --payload-delimiter PAYLOAD-DELIMITER
                         provides delimiter to be used when --payload-file is
                         provided. Defaults to new line. Note that this
                         parameter will be ignored if --payload-file is not
                         provided. (default: \n)
  --throughput THROUGHPUT
                         throttle maximum message throughput to
                         *approximately* THROUGHPUT messages/sec
  --producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]
                         kafka producer related configuration properties like
                         bootstrap.servers,client.id etc. These configs take
                         precedence over those passed via --producer.config.
  --producer.config CONFIG-FILE
                         producer config properties file.
  --print-metrics        print out metrics at the end of the test.
                         (default: false)
  --transactional-id TRANSACTIONAL-ID
                         The transactionalId to use if
                         transaction-duration-ms is > 0. Useful when testing
                         the performance of concurrent transactions.
                         (default: performance-producer-default-transactional-id)
  --transaction-duration-ms TRANSACTION-DURATION
                         The max age of each transaction. The
                         commitTransaction will be called after this time has
                         elapsed. Transactions are only enabled if this value
                         is positive. (default: 0)

  either --record-size or --payload-file must be specified but not both.

  --record-size RECORD-SIZE
                         message size in bytes. Note that you must provide
                         exactly one of --record-size or --payload-file.
  --payload-file PAYLOAD-FILE
                         file to read the message payloads from. This works
                         only for UTF-8 encoded text files. Payloads will be
                         read from this file and a payload will be randomly
                         selected when sending messages. Note that you must
                         provide exactly one of --record-size or
                         --payload-file.
Following the help output, the working command becomes:
./kafka-producer-perf-test.sh --topic log.business --throughput 100000 --num-records 1000000 --record-size 200 --producer-props bootstrap.servers=******:9092 acks=0
The parameters break down as follows:
--topic            the topic name
--num-records      total number of messages to send
--record-size      size of each record in bytes
--throughput       number of records to send per second
--producer-props bootstrap.servers=localhost:9092   producer-side configuration
There are far fewer dedicated parameters here than in earlier versions, which suggests that the remaining configuration goes through producer-props. The official documentation (http://kafka.apache.org/documentation/), under the Producer Configs section, lists useful properties such as acks, batch.size, and the ssl settings, all of which have default values. If you need different values, set them as the help output describes. The table is long, so it is not copied here; follow the path above to find it.
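As a sketch, a properties file passed via --producer.config that overrides a few of those Producer Configs defaults might look like the following. The property names are real Kafka producer settings; the values are purely illustrative, not recommendations:

```properties
# illustrative producer.config for kafka-producer-perf-test.sh --producer.config
bootstrap.servers=localhost:9092
acks=1
batch.size=16384
linger.ms=5
compression.type=lz4
```

Note that anything passed on the command line via --producer-props takes precedence over values in this file, per the help output above.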
By the way:
Kafka performance is closely tied to the size of each record written; record size seriously affects throughput. With the record size set to 200 bytes, single-node load testing approached 500,000 requests per second.
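As a rough sanity check on that result, the implied wire bandwidth is just bytes per record times requests per second, using the numbers above:

```shell
# ~500,000 requests/sec at 200 bytes per record
record_size=200
requests_per_sec=500000
bytes_per_sec=$((record_size * requests_per_sec))
# integer division down to mebibytes
echo "$((bytes_per_sec / 1024 / 1024)) MB/s"   # 95 MB/s
```

At larger record sizes the same request rate would imply proportionally more bandwidth, which is one reason per-record size dominates the benchmark numbers.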