
Munin: CPU, disk throughput and load average reduction

The System

I run Munin 2.0.16 on CentOS 6 to monitor about 30 hosts.
I monitor roughly 25-30 parameters per host (this depends on the running services).
About half of the hosts are nearby (LAN latency); the other half are remote (about 250 ms network latency).

The Facts

I noticed that "munin-graph" and "munin-html" take a long time. Long enough to overlap with the next run if I keep the default interval (5 minutes).
I also noticed a very high load average.
One of the most annoying things is the Munin cron job exiting with an error (which sends a mail to the administrators) because the lock file from the previous run still exists. This generates a lot of noise and lowers the perceived importance of the messages Munin sends...
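
To confirm the overlap, one can time a full run by hand. This is a rough check; the munin user and the /usr/bin/munin-cron path below match the usual CentOS packaging, but verify them on your system:

    # Run one full cycle as the munin user and time it; if it takes
    # longer than 5 minutes, consecutive cron runs will overlap.
    # (-s is needed because the munin account has no login shell)
    time su -s /bin/bash -c /usr/bin/munin-cron munin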

Some Solutions

I decided to put both /var/lib/munin and /var/www/html/munin on tmpfs.
To limit the data loss on reboot, I make an hourly dump to the on-disk file system; this is not expensive.
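
Here is a minimal sketch of that setup. The 512 MB sizes, the .disk dump locations and the cron file name are my own choices for illustration; adjust them to your data volume:

    # /etc/fstab: hold Munin state and generated HTML in RAM
    tmpfs  /var/lib/munin       tmpfs  size=512m,mode=0755  0 0
    tmpfs  /var/www/html/munin  tmpfs  size=512m,mode=0755  0 0

    # Hourly dump to disk, e.g. in /etc/cron.d/munin-tmpfs (a file
    # name I made up): at most one hour of data is lost on a crash.
    0 * * * * root rsync -a --delete /var/lib/munin/ /var/lib/munin.disk/
    5 * * * * root rsync -a --delete /var/www/html/munin/ /var/www/html/munin.disk/

    # At boot (e.g. from /etc/rc.local), restore the last dump once
    # the tmpfs mounts are in place, before the first munin-cron run:
    rsync -a /var/lib/munin.disk/ /var/lib/munin/
    rsync -a /var/www/html/munin.disk/ /var/www/html/munin/
    chown -R munin:munin /var/lib/munin /var/www/html/munin

rsync --delete keeps each dump an exact mirror of the tmpfs; a plain cp -a would work too.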

Results

I did it step by step (the switch for each directory is sketched after the list):
  1. /var/www/html/munin first, then watched for a while
  2. /var/lib/munin next, and watched again
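
For each directory the switch looks roughly like this (a sketch; stopping crond outright is crude but simple, and the .disk suffix matches the dump location above). The same steps apply to /var/www/html/munin:

    # keep Munin from writing during the switch
    service crond stop

    # preserve the accumulated RRD files, then mount the tmpfs
    # (uses the /etc/fstab entry shown earlier) and copy them back
    mv /var/lib/munin /var/lib/munin.disk
    mkdir /var/lib/munin
    mount /var/lib/munin
    rsync -a /var/lib/munin.disk/ /var/lib/munin/
    chown -R munin:munin /var/lib/munin

    # resume polling
    service crond start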
About disk usage, this is what happened: [disk throughput graph]
About load average, this is what happened: [load average graph]

Conclusion

I just saved some disk I/O, mostly writes. Nothing more.

