Kafka Configuration Files

Kafka directory structure: http://donald-draper.iteye.com/blog/2396760
The post above covered Kafka's directory layout; today let's look at Kafka's configuration files. Given the current limits of my Kafka knowledge, the annotations below may contain some mistakes; if so, they will be corrected later.
[donald@Donald_Draper ~]$ ls
Desktop  Documents  Downloads  kafka_2.11-0.11.0.1  kafka_2.11-0.11.0.1.tgz  Music  Pictures  Public  Templates  Videos
[donald@Donald_Draper ~]$ 
[donald@Donald_Draper ~]$ cd kafka_2.11-0.11.0.1/
[donald@Donald_Draper kafka_2.11-0.11.0.1]$ ls
bin  config  libs  LICENSE  NOTICE  site-docs
[donald@Donald_Draper kafka_2.11-0.11.0.1]$ cd config/
[donald@Donald_Draper config]$ ls
connect-console-sink.properties    connect-file-source.properties  log4j.properties        zookeeper.properties
connect-console-source.properties  connect-log4j.properties        producer.properties
connect-distributed.properties     connect-standalone.properties   server.properties
connect-file-sink.properties       consumer.properties             tools-log4j.properties

These are mainly the producer and consumer configuration files, the log4j configuration files, the broker configuration file, and the Connect and bundled ZooKeeper configuration files.

Let's go through them one by one.
The broker configuration file:
[donald@Donald_Draper config]$ cat server.properties 
....
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
Server basics
# The id of the broker. This must be set to a unique integer for each broker.
The broker id; it must be a unique integer for each broker.
broker.id=0

# Switch to enable topic deletion or not, default value is false
Whether topic deletion is enabled. If true, topics can be deleted from the command line; otherwise they cannot.
#delete.topic.enable=true
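
With this switched on, a topic can be removed from the command line; a quick sketch (the topic name "test" is just an example):

bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test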


############################# Socket Server Settings #############################
Socket server settings
# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
The address the broker listens on. If not configured, it defaults to the value returned by java.net.InetAddress.getCanonicalHostName().
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
The hostname and port the broker advertises to producers and consumers. If not set, it uses the value of "listeners" if configured; otherwise it falls back to the value returned by java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092


# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
Maps listener names to security protocols; by default they are the same (e.g. PLAINTEXT maps to PLAINTEXT).
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
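
As a sketch of how the two listener settings interact (the host 192.168.1.10 is a made-up example, not from the original file), a broker that must be reachable from other machines might set both explicitly: bind on all interfaces, but advertise a routable address.

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.10:9092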


# The number of threads that the server uses for receiving requests from the network and sending responses to the network
The number of threads the server uses for receiving requests from the network and sending responses.
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
The number of threads the server uses for processing requests, which may include disk I/O.
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
The send buffer (SO_SNDBUF) used by the socket server.
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
The receive buffer (SO_RCVBUF) used by the socket server.
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
The maximum size of a request the socket server will accept (protection against OOM).
socket.request.max.bytes=104857600

############################# Log Basics #############################
Log basics
# A comma seperated list of directories under which to store log files
The directories (comma separated) in which to store log files.
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
The default number of log partitions per topic. More partitions allow greater consumer parallelism, but also result in more files across the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
The number of threads per data directory used for log recovery at startup and for flushing at shutdown. Increasing this value is recommended when the data directories are located on a RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
Internal topic settings
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
The replication factor for the internal group-metadata topics "__consumer_offsets" and "__transaction_state". For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1


############################# Log Flush Policy #############################
Log flush policy
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
Messages are written to the filesystem immediately, but by default the OS cache is only fsync()'d lazily. The settings below control flushing data to disk. There are a few important trade-offs:
#    1. Durability: Unflushed data may be lost if you are not using replication.
Durability: unflushed data may be lost if replication is not in use.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur, as there will be a lot of data to flush.
Latency: with a very large flush interval, a lot of data accumulates, so latency spikes can occur when the flush finally happens.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
Throughput: flushing is generally the most expensive operation, and a small flush interval may lead to excessive seeks.

# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
The settings below configure the flush policy to flush data after a period of time or after every N messages (or both). This is set globally here and can be overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
The number of messages to accept before forcing a flush of data to disk.
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
The maximum amount of time a message can sit in the log before we force a flush.
#log.flush.interval.ms=1000
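
The per-topic override mentioned above goes through the topic-level flush.messages and flush.ms configs; a sketch using kafka-configs.sh (the topic name "test" is hypothetical):

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --add-config flush.messages=10000,flush.ms=1000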

############################# Log Retention Policy #############################
Log retention policy
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
The settings below control the disposal of log segments. Segments can be deleted after a period of time, or once a given size has accumulated; a segment is deleted whenever either criterion is met. Deletion always happens from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
The minimum age of a log file before it is eligible for deletion.
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
A size-based retention limit for the log, independent of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
The maximum size of a log segment file; when this size is reached, a new log segment is created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
The interval at which log segments are checked against the retention policies.
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
ZooKeeper settings
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
The ZooKeeper connection string; multiple servers are comma separated, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
Timeout in ms for connecting to ZooKeeper.
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################
Group coordinator settings
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
The setting below specifies, in milliseconds, how long the GroupCoordinator delays the initial consumer rebalance. The rebalance is delayed by a further group.initial.rebalance.delay.ms each time a new member joins the group, up to a maximum of max.poll.interval.ms. The default is 3 seconds; it is overridden to 0 here for a better out-of-the-box development and testing experience. In production the 3-second default is more suitable, as it helps avoid unnecessary and potentially expensive rebalances during application startup.
group.initial.rebalance.delay.ms=0
[donald@Donald_Draper config]$ 
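
With server.properties reviewed, a single-node broker can be started against it (a minimal sketch; ZooKeeper must already be running):

bin/kafka-server-start.sh config/server.properties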


The ZooKeeper configuration file

[donald@Donald_Draper config]$ cat zookeeper.properties 
...
# the directory where the snapshot is stored.
The directory where snapshots are stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
The port clients connect to.
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
The per-IP connection limit is disabled (0), as this is a non-production config.
maxClientCnxns=0
[donald@Donald_Draper config]$
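
The bundled ZooKeeper is started against this file before the broker (again a minimal sketch):

bin/zookeeper-server-start.sh config/zookeeper.properties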


The producer configuration file
[donald@Donald_Draper config]$ cat producer.properties 
...
# see kafka.producer.ProducerConfig for more details

############################# Producer Basics #############################
Producer basics
# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
The list of brokers used for bootstrapping; for a cluster the format is host1:port1,host2:port2,...
bootstrap.servers=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4
The compression codec for all generated data: none, gzip, snappy, or lz4; here it is set to none (no compression).
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
The partitioner class for partitioning events; the default partitioner spreads data randomly.
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
The maximum amount of time the client will wait for the response of a request.
#request.timeout.ms=

# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
How long `KafkaProducer.send` and `KafkaProducer.partitionsFor` may block.
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
How long the producer waits to allow other records to accumulate, so that sends can be batched together.
#linger.ms=

# the maximum size of a request in bytes
The maximum size of a request in bytes.
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
The default batch size in bytes when batching multiple records sent to a partition.
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
#buffer.memory=
[donald@Donald_Draper config]$ 
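
These settings can be exercised with the console producer, which accepts this file via --producer.config (the topic name "test" is just an example):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties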


The consumer configuration file

[donald@Donald_Draper config]$ cat consumer.properties 
...
# see kafka.consumer.ConsumerConfig for more details

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
The ZooKeeper connection string; for a cluster, e.g. 127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002.
zookeeper.connect=127.0.0.1:2181

# timeout in ms for connecting to zookeeper
Timeout in ms for connecting to ZooKeeper.
zookeeper.connection.timeout.ms=6000

#consumer group id
The consumer group id.
group.id=test-consumer-group

#consumer timeout
The consumer timeout.
#consumer.timeout.ms=5000
[donald@Donald_Draper config]$ 
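
This file configures the old ZooKeeper-based consumer; a sketch of using it with the console consumer (topic "test" again hypothetical):

bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning --consumer.config config/consumer.properties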


The Connect standalone configuration file

[donald@Donald_Draper config]$ cat connect-standalone.properties 
...

# These are defaults. This file just demonstrates how to override some settings.
The broker address.
bootstrap.servers=localhost:9092

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user needs to configure these based on the format they want their data in when it is loaded from or stored into Kafka.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
As the class names show, the JSON converter is used here for both keys and values.
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
Converter-specific settings are passed by prefixing them with the converter they apply to; here schemas are enabled for both converters.
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# The internal converter used for offsets and config data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
The internal converters used for offset and config data; they must be specified, but most users will want the built-in default. Offset and config data in this format is never visible outside of Kafka Connect.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
The file in which connector offsets are stored.
offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
Flush much faster than normal; useful for testing/debugging.
offset.flush.interval.ms=10000

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include 
# any combination of: 
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples: 
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
[donald@Donald_Draper config]$ 
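
A standalone worker is started with this file plus one or more connector property files, e.g. the file source and sink shown below (a sketch):

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties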

The Connect distributed configuration file

[donald@Donald_Draper config]$ cat connect-distributed.properties 
...

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
The Kafka cluster bootstrap addresses.
bootstrap.servers=localhost:9092

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
A unique name for the Connect cluster; it must not conflict with consumer group ids.
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
The data converters.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
Schemas are enabled for both converters.
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# The internal converter used for offsets, config, and status data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset, config, and status data is never visible outside of Kafka Connect in this format.
The internal converters used by Kafka Connect.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
The topic, replication factor, and partition count for storing offsets.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
#offset.storage.partitions=25

# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
The topic and replication factor for storing connector and task configurations.
config.storage.topic=connect-configs
config.storage.replication.factor=1

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
The topic, replication factor, and partition count for storing statuses.
status.storage.topic=connect-status
status.storage.replication.factor=1
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
The offset flush interval; flushing much faster than normal is useful for testing/debugging.
offset.flush.interval.ms=10000

# These are provided to inform the user about the presence of the REST host and port configs 
# Hostname & Port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests.
#rest.host.name=
#rest.port=8083

# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers.
#rest.advertised.host.name=
#rest.advertised.port=

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include 
# any combination of: 
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples: 
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
[donald@Donald_Draper config]$ 
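
A distributed worker takes only this file; connectors are then submitted over the REST API on rest.port (8083 by default). A sketch, assuming a worker on localhost and the file source settings shown below:

bin/connect-distributed.sh config/connect-distributed.properties
curl -X POST -H "Content-Type: application/json" --data '{"name":"local-file-source","config":{"connector.class":"FileStreamSource","tasks.max":"1","file":"test.txt","topic":"connect-test"}}' http://localhost:8083/connectors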


The Connect console source configuration file


[donald@Donald_Draper config]$ cat connect-console-source.properties 
...

name=local-console-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
topic=connect-test
[donald@Donald_Draper config]$ 

The Connect console sink configuration file
[donald@Donald_Draper config]$ cat connect-console-sink.properties 
...
name=local-console-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=connect-test
[donald@Donald_Draper config]$ 


The Connect file source configuration file

[donald@Donald_Draper config]$ more connect-file-source.properties 
...
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
[donald@Donald_Draper config]$


The Connect file sink configuration file

[donald@Donald_Draper config]$ more connect-file-sink.properties 
...

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test
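
Together the file source and sink form a simple pipe through the connect-test topic; with the standalone worker from above running, lines appended to test.txt should show up in test.sink.txt after a moment (a sketch using the file names configured above):

echo "hello kafka connect" >> test.txt
cat test.sink.txt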


The Kafka log4j configuration file

[donald@Donald_Draper config]$ cat log4j.properties 
...

# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
The root log level is INFO.
log4j.rootLogger=INFO, stdout, kafkaAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the two lines below to adjust ZK client logging
log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
log4j.logger.org.apache.zookeeper=INFO

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
Request handling logs.
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
State change log.
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# Change to DEBUG to enable audit log for the authorizer
Authorizer audit log.
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

[donald@Donald_Draper config]$ 

The Connect log4j configuration file

[donald@Donald_Draper config]$ more connect-log4j.properties 
...
The log level is INFO, output to the console.
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.org.reflections=ERROR
[donald@Donald_Draper config]$ 


The tools log4j configuration file


[donald@Donald_Draper config]$ more tools-log4j.properties 
...
The log level is WARN, output to stderr.
log4j.rootLogger=WARN, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
[donald@Donald_Draper config]$ 