Fluentd Performance Tuning

We previously deployed the log collection system in the k8s cluster, but its performance was poor. With a resource limit of 800 MB of memory and 1 CPU, Fluentd could only process and push roughly 1500 logs per second, so performance tuning was needed.


1、HOW TO TEST

First of all, I'll describe how to test Fluentd's performance.

Create a simple Spring Boot project, or reuse an existing one, and modify its SpringBootApplication class.

1、Change SpringApplication

Change:

public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
}

To:

public static void main(String[] args) {
    while (true) {
        log.info("test test test test test test test test test test test");
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Thread.sleep(10) roughly controls the number of logs printed per second: 1 s = 1000 ms, so if each loop iteration sleeps 10 ms, about 100 logs are printed per second; if it sleeps 1 ms, about 1000 logs are printed per second. We can duplicate the log.info call to multiply the log rate. With the code below, about 20k logs are printed per second.

public static void main(String[] args) {
    while (true) {
        for (int i = 0; i < 20; i++) {
            log.info("test test test test test test test test test test test");
        }
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Then build the Java application into a Docker image and deploy it on the k8s platform. Once started, the application prints logs continuously. By looking at Fluentd's own logs, we can see how many logs are processed and pushed per second, which tells us Fluentd's throughput.

2、change log_level to debug

Change @log_level to debug in the <match **> @id kafka section of the Fluentd config file. Then Fluentd logs the number of messages pushed to kafka, like:

2022-01-01 03:31:41 +0000 [debug]: #0 [kafka] 2000 messages send.

The value of flush_interval in the <buffer> section of <match **> @id kafka controls how often the buffer is flushed, i.e. the push frequency.
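
For reference, a simplified sketch of the relevant parts of the kafka match block (only the options that matter for this measurement; the full block appears in the config at the end of this post):

<match **>
  @id kafka
  @type kafka2
  # debug level makes the kafka output log a "[kafka] N messages send." line per flush
  @log_level debug
  <buffer>
    # one flush, and therefore one "messages send." line, every 10 seconds
    flush_mode interval
    flush_interval 10s
  </buffer>
</match>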

3、Calculate the performance

If we see [kafka] 2000 messages send in the Fluentd log and flush_interval is 10s, that means Fluentd processed and pushed 2000 logs in 10 s, i.e. a throughput of 200/s.

2、TUNING IDEAS

After seeing the poor performance, I first looked up Fluentd's expected throughput, to determine whether the problem was Fluentd itself or our configuration. According to the official documentation, tuning only becomes necessary at around 5000 entries/s, so an untuned Fluentd should comfortably reach that. This means the problem was in our configuration, and the next step was to find the slow part and tune it.

3、PROBLEM POSITIONING

1、performance test without plugins

Our Fluentd configuration uses a lot of plugins: detect_exceptions to merge exception stack traces, concat to merge multi-line logs, kubernetes_metadata to add k8s cluster info, and record_transformer to add or delete fields. Could these plugins be the cause of the poor performance? In other words, if Fluentd only reads logs and pushes them, with no other logic, what peak throughput can it reach? flush_interval is kept at 10s for these tests.

Strip the config file down for this test, keeping only reading and pushing.
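
The exact test config is not reproduced here; a rough sketch of the idea, mirroring the tail source, JSON parsing, and kafka output from the full config at the end of this post (broker and topic names below are placeholders, and the time_format option discussed in the next section was still present at this stage):

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  format none
</source>

<filter raw.**>
  @type parser
  key_name message
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</filter>

<match **>
  @id kafka
  @type kafka2
  @log_level debug
  # placeholder broker and topic
  brokers kafka-broker:9092
  default_topic test-topic
  <buffer>
    flush_mode interval
    flush_interval 10s
  </buffer>
  <format>
    @type json
  </format>
</match>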

View the log file:

2022-01-01 04:14:26 +0000 [debug]: #0 [kafka] 43250 messages send.
2022-01-01 04:14:36 +0000 [debug]: #0 [kafka] 38101 messages send.
2022-01-01 04:14:47 +0000 [debug]: #0 [kafka] 43251 messages send.
2022-01-01 04:14:57 +0000 [debug]: #0 [kafka] 43250 messages send.
2022-01-01 04:15:07 +0000 [debug]: #0 [kafka] 39131 messages send.
2022-01-01 04:15:18 +0000 [debug]: #0 [kafka] 39131 messages send.
2022-01-01 04:15:28 +0000 [debug]: #0 [kafka] 42221 messages send.
2022-01-01 04:15:37 +0000 [debug]: #0 [kafka] 38101 messages send.
2022-01-01 04:15:48 +0000 [debug]: #0 [kafka] 43250 messages send.
2022-01-01 04:15:57 +0000 [debug]: #0 [kafka] 38102 messages send.

That is about 4000 entries/s, which is not great: the official documentation says optimization is only needed beyond 5000 entries/s, so 5000 entries/s should be easy to reach.

2、positioning the problem: JSON time parsing

After troubleshooting, the problem turned out to be JSON time parsing, specifically this line:

time_format %Y-%m-%dT%H:%M:%S.%NZ

2022-01-01 05:07:03 +0000 [debug]: #0 [kafka] 119892 messages send.
2022-01-01 05:07:19 +0000 [debug]: #0 [kafka] 509786 messages send.
2022-01-01 05:07:31 +0000 [debug]: #0 [kafka] 129708 messages send.
2022-01-01 05:07:39 +0000 [debug]: #0 [kafka] 44242 messages send.
2022-01-01 05:07:44 +0000 [debug]: #0 [kafka] 109599 messages send.
2022-01-01 05:07:59 +0000 [debug]: #0 [kafka] 469568 messages send.
2022-01-01 05:08:13 +0000 [debug]: #0 [kafka] 135742 messages send.
2022-01-01 05:08:16 +0000 [debug]: #0 [kafka] 53292 messages send.
2022-01-01 05:08:30 +0000 [debug]: #0 [kafka] 435380 messages send.
2022-01-01 05:08:43 +0000 [debug]: #0 [kafka] 134736 messages send.

Removing this line gives a very significant improvement: by the same calculation, the speed reaches about 21000/s. We are not sure whether this is Fluentd's actual peak, but it is roughly a 5x improvement.

What this line does is parse the time field using the %Y-%m-%dT%H:%M:%S.%NZ format. In fact, this field is not used in our scenario. The only place the time field matters is the Kibana page, where the kafka push time is used to calculate the kafka push delay in ms, and kafka's push time is [settable](https://docs.fluentd.org/output/kafka#use_event_time) to either the Fluentd event time or the current time.

Also, the Java application's log line itself already starts with a timestamp printed by the logging framework, so we don't need Fluentd to parse this field. We can delete the line to improve performance.
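
For reference, the change amounts to dropping that one option from the JSON <parse> section, something like:

# before: parse the record and also parse its time field on every log line
<parse>
  @type json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>

# after: keep the JSON parsing, drop the per-record time handling
<parse>
  @type json
</parse>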

3、performance test with time formatting removed

How much throughput can be reached after adding all the plugins back? If it can reach about 7000/s, no further tuning is needed.

After testing, it reaches only about 3000 entries/s, which means the plugins reduce performance a lot.

The next goal is clear: find out which plugins have the biggest impact.

2022-01-01 04:21:27 +0000 [debug]: #0 [kafka] 31921 messages send.
2022-01-01 04:21:37 +0000 [debug]: #0 [kafka] 33981 messages send.
2022-01-01 04:21:46 +0000 [debug]: #0 [kafka] 29862 messages send.
2022-01-01 04:21:56 +0000 [debug]: #0 [kafka] 32952 messages send.
2022-01-01 04:22:07 +0000 [debug]: #0 [kafka] 33980 messages send.
2022-01-01 04:22:16 +0000 [debug]: #0 [kafka] 29863 messages send.
2022-01-01 04:22:27 +0000 [debug]: #0 [kafka] 33981 messages send.
2022-01-01 04:22:36 +0000 [debug]: #0 [kafka] 30892 messages send.
2022-01-01 04:22:47 +0000 [debug]: #0 [kafka] 33980 messages send.
2022-01-01 04:22:56 +0000 [debug]: #0 [kafka] 31921 messages send.
2022-01-01 04:23:07 +0000 [debug]: #0 [kafka] 33981 messages send.
2022-01-01 04:23:19 +0000 [debug]: #0 [kafka] 35011 messages send.
2022-01-01 04:23:28 +0000 [debug]: #0 [kafka] 30891 messages send.

4、PLUGIN PERFORMANCE TEST

The approach is simple: remove all the plugins, keep only the input and output modules, then add our existing plugins back one by one and see which ones have the biggest impact on performance.

1、only keep detect_exceptions

2022-01-01 05:05:19 +0000 [debug]: #0 [kafka] 97003 messages send.
2022-01-01 05:05:24 +0000 [debug]: #0 [kafka] 91186 messages send.
2022-01-01 05:05:38 +0000 [debug]: #0 [kafka] 89283 messages send.
2022-01-01 05:05:48 +0000 [debug]: #0 [kafka] 93693 messages send.
2022-01-01 05:05:58 +0000 [debug]: #0 [kafka] 86792 messages send.
2022-01-01 05:06:07 +0000 [debug]: #0 [kafka] 79343 messages send.
2022-01-01 05:06:18 +0000 [debug]: #0 [kafka] 99481 messages send.
2022-01-01 05:06:28 +0000 [debug]: #0 [kafka] 86697 messages send.
2022-01-01 05:06:38 +0000 [debug]: #0 [kafka] 92506 messages send.
2022-01-01 05:06:48 +0000 [debug]: #0 [kafka] 93030 messages send

Keeping only the detect_exceptions plugin, the test shows about 9000/s. Compared to the peak of 21k/s, that is a reduction of over 50%, indicating that this plugin is expensive.
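
For reference, detect_exceptions was configured roughly like the standard Kubernetes example (the exact values in our config may differ):

<match raw.kubernetes.**>
  @type detect_exceptions
  # re-emit merged records without the raw. prefix
  remove_tag_prefix raw
  message log
  # merge multi-line stack traces for these languages
  languages java
  multiline_flush_interval 1
</match>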

2、only keep concat

2022-01-01 05:37:39 +0000 [debug]: #0 [kafka] 232269 messages send.
2022-01-01 05:37:48 +0000 [debug]: #0 [kafka] 136747 messages send.
2022-01-01 05:38:01 +0000 [debug]: #0 [kafka] 275506 messages send.
2022-01-01 05:38:10 +0000 [debug]: #0 [kafka] 123676 messages send.
2022-01-01 05:38:20 +0000 [debug]: #0 [kafka] 239307 messages send.
2022-01-01 05:38:29 +0000 [debug]: #0 [kafka] 143786 messages send.
2022-01-01 05:38:41 +0000 [debug]: #0 [kafka] 279529 messages send.
2022-01-01 05:38:50 +0000 [debug]: #0 [kafka] 119654 messages send.
2022-01-01 05:39:00 +0000 [debug]: #0 [kafka] 231264 messages send.
2022-01-01 05:39:10 +0000 [debug]: #0 [kafka] 174957 messages send.

Keeping only the concat plugin, the test shows about 19.6k/s. That is very close to the peak of 21k/s, indicating that this plugin is cheap.

3、only keep kubernetes_metadata

2022-01-01 05:13:00 +0000 [debug]: #0 [kafka] 85467 messages send.
2022-01-01 05:13:12 +0000 [debug]: #0 [kafka] 101555 messages send.
2022-01-01 05:13:23 +0000 [debug]: #0 [kafka] 105578 messages send.
2022-01-01 05:13:32 +0000 [debug]: #0 [kafka] 105577 messages send.
2022-01-01 05:13:40 +0000 [debug]: #0 [kafka] 92505 messages send.
2022-01-01 05:13:53 +0000 [debug]: #0 [kafka] 94517 messages send.
2022-01-01 05:14:05 +0000 [debug]: #0 [kafka] 97533 messages send.
2022-01-01 05:14:10 +0000 [debug]: #0 [kafka] 96527 messages send.
2022-01-01 05:14:26 +0000 [debug]: #0 [kafka] 100550 messages send.
2022-01-01 05:14:37 +0000 [debug]: #0 [kafka] 98538 messages send.

Keeping only the kubernetes_metadata plugin, the test shows about 9k-10k/s. This plugin's performance is mediocre.
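
For reference, the filter itself needs almost no options; a minimal sketch:

<filter kubernetes.**>
  # queries the Kubernetes API server (with caching) and attaches pod metadata to each record
  @type kubernetes_metadata
</filter>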

4、kubernetes_metadata + record_transformer

2022-01-01 05:14:52 +0000 [debug]: #0 [kafka] 70385 messages send.
2022-01-01 05:14:53 +0000 [debug]: #0 [kafka] 80440 messages send.
2022-01-01 05:15:06 +0000 [debug]: #0 [kafka] 94517 messages send.
2022-01-01 05:15:19 +0000 [debug]: #0 [kafka] 90495 messages send.
2022-01-01 05:15:24 +0000 [debug]: #0 [kafka] 80440 messages send.
2022-01-01 05:15:36 +0000 [debug]: #0 [kafka] 94517 messages send.
2022-01-01 05:15:48 +0000 [debug]: #0 [kafka] 89489 messages send.
2022-01-01 05:15:53 +0000 [debug]: #0 [kafka] 88484 messages send.
2022-01-01 05:16:08 +0000 [debug]: #0 [kafka] 87478 messages send.
2022-01-01 05:16:20 +0000 [debug]: #0 [kafka] 86473 messages send.

Because record_transformer adds or deletes fields that come from kubernetes_metadata, it has to be tested on top of kubernetes_metadata.

The throughput is about 8k-9k/s; compared to the run without record_transformer it is only about 10% lower, indicating that record_transformer is cheap.
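
For reference, the record_transformer filter under test was close to the one in the final config at the end of this post, along the lines of:

<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    # copy the raw log line into message and add a static field
    message ${record["log"]}
    source test
  </record>
  remove_keys $.log,$.stream
</filter>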

5、REMOVE PLUGINS

1、remove plugin detect_exceptions

detect_exceptions merges the multiple lines of an exception stack trace (for languages it can detect, such as Java) into a single log record.

concat matches log lines against a regular expression and merges multi-line logs into one record.

The principle of concat is very simple: match the beginning of each log line against a regular expression. Our plugin configuration uses:

/^[\[ ]*\d{1,4}[-\/ ]+\d{1,2}[-\/ ]+\d{1,4}[ T]+\d{1,2}:\d{1,2}:\d{1,2}/
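
In context, this is the multiline_start_regexp of the concat filter; the same block appears in the final config at the end of this post:

<filter raw.**>
  @type concat
  key log
  stream_identity_key container_id
  # a new log record starts at a line that begins with a timestamp
  multiline_start_regexp /^[\[ ]*\d{1,4}[-\/ ]+\d{1,2}[-\/ ]+\d{1,4}[ T]+\d{1,2}:\d{1,2}:\d{1,2}/
  separator ""
  flush_interval 1
  timeout_label @NORMAL
</filter>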

A normal log line starts with a timestamp. So if we see a line that starts with a timestamp, followed by several lines that do not, followed by another line that starts with a timestamp, there are really only two logs: the timestamped line plus the non-timestamped lines after it form one log, and the final timestamped line is the other.

For example:

2022-01-01 05:38:50 +0000 [debug]: console config
a=1
b=2
c=3
c=4
d=5
2022-01-01 05:38:50 +0000 [debug]: #0 [kafka] 119654 messages send.

There are really only two logs:

2022-01-01 05:38:50 +0000 [debug]: console config
a=1
b=2
c=3
c=4
d=5


2022-01-01 05:38:50 +0000 [debug]: #0 [kafka] 119654 messages send.

But when you think about it, isn't an exception stack trace exactly the same situation?

So the concat plugin already covers what detect_exceptions does. Only one thing needs special treatment:

the log format of applications in the k8s cluster must match the regular expression, otherwise multi-line logs will not be merged.

This is not a big problem: we can establish a convention that projects deployed on our k8s platform must format their logs in an agreed way, starting with a timestamp.

Then we can remove the detect_exceptions plugin to improve performance.

2、remove plugin kubernetes_metadata

Can we find a way to remove the kubernetes_metadata plugin?

What this plugin does is add some pod-related information to the logs:

{
  "docker": {
    "container_id": ""
  },
  "kubernetes": {
    "container_name": "",
    "namespace_name": "",
    "pod_name": "",
    "container_image": "",
    "container_image_id": "",
    "pod_id": "",
    "pod_ip": "",
    "host": "",
    "labels": {
      "pod-template-hash": "",
      "k8s_kuboard_cn/layer": "",
      "k8s_kuboard_cn/name": ""
    },
    "master_url": "",
    "namespace_id": "",
    "namespace_labels": {
      "alertmanager": "",
      "kubernetes_io/metadata_name": ""
    }
  }
}

But this pulls in a lot of k8s-cluster-related information that we don't need and would have to strip out again. All we really need is a tag to locate a pod's logs in Kibana, such as pod_id, container_name, namespace_name, and so on.

This information already exists in the tag field. The tag contains [podName]_[nameSpace]_[deploymentName]-[containerId], which is already enough.

We add tag ${tag_parts[4]} in the configuration file; then we can search logs in Kibana with tag : podName or tag : containerId.

Then we can remove the plugin kubernetes_metadata.

6、NEW CONFIG FILE TEST

We have removed both the detect_exceptions and kubernetes_metadata plugins, each of which cost almost 50% of the throughput, and adjusted the other plugins' configuration. The newest config file is below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      exclude_path ["/var/log/containers/kube-*.log", "/var/log/containers/test-*.log", "/var/log/containers/calico-*.log", "/var/log/containers/heapster-*.log", "/var/log/containers/etcd-*.log", "/var/log/containers/kubernetes-*.log", "/var/log/containers/fluentd-*.log"]
      follow_inodes true
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      format none
    </source>

    <filter raw.**>
      @type parser
      key_name message
      <parse>
        @type json
      </parse>
    </filter>

    <filter raw.**>
      @type concat
      key log
      stream_identity_key container_id
      multiline_start_regexp /^[\[ ]*\d{1,4}[-\/ ]+\d{1,2}[-\/ ]+\d{1,4}[ T]+\d{1,2}:\d{1,2}:\d{1,2}/
      separator ""
      flush_interval 1
      timeout_label @NORMAL
    </filter>

    <match **>
      @type relabel
      @label @NORMAL
    </match>

    <label @NORMAL>
      <filter kubernetes.**>
        @type record_transformer
        enable_ruby true
        <record>
          message ${record["log"]}
          # additional fields can be added here, e.g. source test
          source test
          tag ${tag_parts[4]}
        </record>
        remove_keys $.log,$.stream
      </filter>

      <match **>
        @id kafka
        @type kafka2
        @log_level debug
        # list of seed brokers
        brokers brokers
        topic_key topic_key
        default_topic topic_key
        use_event_time true
        # buffer settings
        <buffer>
          @type file
          path /var/log/fluentd-buffers/kubernetes.system.buffer

          flush_mode interval
          flush_thread_count 16
          flush_interval 10s
          retry_max_interval 30

          chunk_limit_size 256M
          queue_limit_length 16
          overflow_action drop_oldest_chunk
        </buffer>
        # data type settings
        <format>
          @type json
        </format>
        required_acks -1
        compression_codec gzip
      </match>
    </label>

After testing, the throughput is roughly 9k/s or more, which exceeds the expected target.

2022-01-01 05:16:48 +0000 [debug]: #0 [kafka] 98538 messages send.
2022-01-01 05:16:55 +0000 [debug]: #0 [kafka] 83456 messages send.
2022-01-01 05:17:01 +0000 [debug]: #0 [kafka] 94517 messages send.
2022-01-01 05:17:12 +0000 [debug]: #0 [kafka] 96527 messages send.
2022-01-01 05:17:22 +0000 [debug]: #0 [kafka] 91501 messages send.
2022-01-01 05:17:32 +0000 [debug]: #0 [kafka] 95522 messages send.
2022-01-01 05:17:42 +0000 [debug]: #0 [kafka] 88484 messages send.
2022-01-01 05:17:51 +0000 [debug]: #0 [kafka] 85468 messages send.
2022-01-01 05:18:01 +0000 [debug]: #0 [kafka] 92346 messages send.
2022-01-01 05:18:12 +0000 [debug]: #0 [kafka] 91542 messages send.

7、REVIEW & SUMMARY

After performance tuning, throughput improved several-fold (from about 1500/s to over 9000/s). The approach was straightforward: first, look up Fluentd's expected performance to decide whether the problem was Fluentd itself or our configuration; second, having confirmed it was our configuration, find the slow plugins and remove them or replace them with higher-performance alternatives. Step by step, the result exceeded the expected target.

Also, since Fluentd is written in Ruby, which has a GIL (Global Interpreter Lock) that only allows one thread to execute Ruby code at a time, there is an inherent single-process bottleneck. If you have higher performance requirements, refer to the official documentation and use the multi-process worker mode to improve performance further.
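
As a rough sketch (the worker count here is just an example), multi-process workers are enabled in the <system> section of the config; note that input plugins that do not support multiple workers may need to be pinned to a specific worker with a <worker N> directive:

<system>
  # run several independent worker processes, each with its own Ruby GIL
  workers 4
</system>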

