linux nvidia fonts session

Fonts in the desktop environment change across reboots

On my Archlinux machine, I installed the proprietary nVidia drivers because of the presence of an nVidia card.
Unfortunately, each time I reboot, I have to set the fonts again through the desktop environment in order to get the correct font size.

It's a DPI detection problem

According to some community posts, it has to do with DPI detection.

Hard-setting the DPI

The solution I chose is to edit "/etc/X11/xorg.conf" and add
  Option "DPI" "96 x 96"
in the "Monitor" section.

The "Monitor" section becomes:
Section "Monitor"
 Identifier "Monitor0"
 VendorName "Unknown"
 ModelName "Unknown"
 HorizSync  28.0 - 33.0
 VertRefresh 43.0 - 72.0
 Option  "DPMS"
 Option "DPI" "96 x 96"
EndSection
Don't touch the other sections unless you know what you're doing.
Reboot and enjoy.
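The same edit can also be scripted with sed; a sketch, demonstrated here on a throwaway sample file rather than on the real /etc/X11/xorg.conf:

```shell
# Build a throwaway "Monitor" section to demonstrate the edit on.
cat > /tmp/xorg-monitor-demo.conf <<'EOF'
Section "Monitor"
 Identifier "Monitor0"
 Option  "DPMS"
EndSection
EOF
# Insert the DPI option right after the section header.
sed -i '/^Section "Monitor"/a Option "DPI" "96 x 96"' /tmp/xorg-monitor-demo.conf
grep -F 'Option "DPI"' /tmp/xorg-monitor-demo.conf
```

Run against the real file (as root), the same sed line adds the option without opening an editor.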


path to lines

Convert PATH to lines in order to grep it

I want to check with a regexp whether a given path is in my PATH environment variable.
There are many ways to achieve this, but this is the one I want to show you today:
# echo $PATH | awk '{gsub(":","\n",$0); print $0;}'

I can then "grep" what I want from this.
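An equivalent, arguably shorter pipeline uses "tr"; the -x flag makes grep match the whole line:

```shell
# Split PATH on ":" so each directory sits on its own line, then match exactly.
echo "$PATH" | tr ':' '\n' | grep -x '/usr/bin'
```

This prints "/usr/bin" if that directory is in PATH, and nothing otherwise.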


exim all mail catcher

All-mail catcher with Exim 4, on Debian 8

We have a bunch of "development" VMs that need to send mail via a relay, or MTA.

We usually achieve this by setting the "mail host" setting in the framework or CMS being used.

But for development purposes, there is no need to really send the message over the Internet: if the "mail host" catches it all and delivers it to a mailbox, the work is done.

Here is how to set up Exim 4 on Debian 8 so that it catches all mail for all destinations and always delivers it to a single local mailbox. That single local mailbox can then be accessed via IMAP so that the development team can check whether the message has been sent by the application.

Configuring with "debconf"

The first stage of configuration is done with debconf

# dpkg-reconfigure exim4-config

Then choose the following answers:

  • Internet site; mail is sent and received directly using SMTP
  • System mail name: (put the FQDN of this machine)
  • IP-addresses to listen on: (leave empty)
  • Other destinations for which mail is accepted: *
    (Remember, we want to catch all destinations)
  • Domains to relay mail for: *
    (Remember, we want to catch all destinations)
  • Machines to relay mail for:
    (Adjust to your subnet)
  • Keep number of DNS-queries minimal (Dial-on-Demand)? <yes>
    (Since we won't deliver to the outside world, we don't need to query DNS)
  • The remaining options are up to you

Modify the "debconf'd" configuration

The generated configuration is stored in

We need to copy this to

Then we edit it; the only line to change is in the "system_aliases:" router:
-  data = ${lookup{$local_part}lsearch{/etc/aliases}}
+  data = mihamina

This will route all messages to the "mihamina" Maildir: just install a Dovecot IMAP server and a webmail in order to see the messages.

Note that
  • the "mihamina" user must have been created;
  • we don't need to touch the "aliases" file.


openldap new configuration

OLC configuration (on-line configuration)

Historically, OpenLDAP was configured via "normal" text files: you edited them, then had to restart the server for the new configuration to be taken into account.

Since version 2.4, OpenLDAP uses a new system it calls OLC.

This document covers the initialization of an OpenLDAP server with this new system, with the following goals:
  • "dc=rktmb,dc=org" as the root suffix
  • "cn=admin,dc=rktmb,dc=org" as the super administrator
  • "rktmb" as the super administrator's password
This document is based on CentOS 7, but it applies to any other Linux distribution and even to the BSDs.

Importing the base schemas

In "/etc/openldap/schema/" there are several schemas to load, depending on the type of entries the directory will be populated with.
Common usage generally involves "core", "cosine" and "inetorgperson".

First, import the schemas with
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/core.ldif
Which outputs the following message:
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
adding new entry "cn=core,cn=schema,cn=config"
ldap_add: Other (e.g., implementation specific) error (80)
additional info: olcAttributeTypes: Duplicate attributeType: ""
The last line, which mentions an error, is not a problem: in most installations the "core" schema is already loaded, and this line simply tells us so.

The other schemas are loaded with:
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

Setting the root suffix "dc=rktmb,dc=org"

Create a file "0-base.ldif" containing:
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=rktmb,dc=org

Note that the trailing empty line is required.

Import the modification with the command:
ldapadd  -Y EXTERNAL -H ldapi:/// -f 0-base.ldif

Generating the hash of the "rktmb" password

The "slappasswd" command generates a hash of a password. This hash will be used to set the password of the directory's administrator user.
New password:
Re-enter new password:
The last line of its output is the desired hash.

Importing the administrator user

Create a file "1-root.ldif" with the content:
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=admin,dc=rktmb,dc=org

Note that the trailing empty line is required.

Import with the command:
ldapadd -Y EXTERNAL -H ldapi:/// -f 1-root.ldif

To assign the password "rktmb" to the administrator, create a file "2-password.ldif" with the content:
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}ft5mv5CzuIL/mMOzj8Mo/Mimfpa4MDuv

The trailing empty line is required; the hash is the one obtained above.

Import the modifications with:
ldapadd -Y EXTERNAL -H ldapi:/// -f 2-password.ldif

From this point on, the directory is accessible over the network:

Entries can then be added either from the command line or via a graphical interface.

Populating the directory

For this example, populating the directory requires
  • a "top" entry
    • a group
      • a user in that group
Create a file "3-top.ldif" with the content:
dn: dc=rktmb,dc=org
objectClass: dcObject
objectClass: top
objectClass: organization
dc: rktmb

Import with the command:
ldapadd -x -w rktmb -D cn=admin,dc=rktmb,dc=org -H ldapi:/// < 3-top.ldif

Note that access to the directory is now done over the network, by providing the credentials.

The entry is now created:

Create a file "4-group.ldif" with the content:
dn: ou=Users,dc=rktmb,dc=org
changetype: add
objectClass: organizationalUnit
objectClass: top
ou: Users

Import with the command:
ldapadd -x -w rktmb -D cn=admin,dc=rktmb,dc=org -H ldapi:/// < 4-group.ldif

The group is now created:

Finally, create a file "5-user.ldif" with the content:
dn: uid=mihamina.rakotomandimby,ou=Users,dc=rktmb,dc=org
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
uid: mihamina.rakotomandimby
sn: Rakotomandimby
cn: Mihamina

Import with the command:
ldapadd -x -w rktmb -D cn=admin,dc=rktmb,dc=org -H ldapi:/// < 5-user.ldif

The directory is now fully populated:


vmware net_device trans_start

VMWare Workstation 12 and Kernel 4.7

When recompiling vmware kernel modules on a kernel 4.7, I get this error:

error: ‘struct net_device’ has no member named ‘trans_start’;
did you mean ‘mem_start’?
    dev->trans_start = jiffies;

This seems to be an already-encountered problem:
I chose to replace the line instead of deleting it.

- dev->trans_start = jiffies;
+ netif_trans_update(dev);

I also noted that I had to re-tar the modified sources instead of leaving them untarred, because the compilation process only takes the archives.

On previous edits of these files, I just left the modified folders "vmnet-only/" and "vmmon-only/" expanded, without the need to re-tar them.


tomcat existing ssl

Tomcat: enabling HTTPS with existing SSL certificates

If an SSL certificate already exists, here is how to make Tomcat serve HTTPS with the existing certificates.

For this to work, you need to have in your possession:
  • The private key that was used to generate the CSR, usually a "*.key"
  • The certificate delivered by the registrar (what was delivered in response to the CSR), usually a "*.cert"
  • The authority's certificate, usually a "*.pem". For example, for Gandi it is https://www.gandi.net/static/CAs/GandiStandardSSLCA.pem, documented at https://wiki.gandi.net/en/ssl/intermediate
Note that the official Tomcat documentation covers certain use cases, but not this one. Indeed, https://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html deals with the case where you want to self-sign the certificate, or the case where you still have to generate the CSR from a private key, both of which are yet to be created.

Configuring the keys & certificates

Allow logging in as the user Tomcat runs as:
nano -w /etc/passwd
and give a shell to the Tomcat user. Then switch to that user:
su - tomcat8
We create a "pkcs12" certificate, because that is the format recognized by the Java tools. Creating this certificate involves the private key and the certificate (the one Gandi delivered). It asks for a passphrase: I use "rktmb" everywhere. This is bad practice, but for the tutorial it keeps things simple.
openssl pkcs12 -export -name tomcat \
  -in /usr/share/tomcat8/ssl.key/rktmb.crt \
  -inkey /usr/share/tomcat8/ssl.key/rktmb.key \
  -out  /usr/share/tomcat8/ssl.key/rktmb.p12
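To rehearse this step without the real Gandi files, a throwaway self-signed key/certificate pair goes through the same "openssl pkcs12" invocation (the /tmp/demo.* names are made up for the demo; the real inputs are the files under /usr/share/tomcat8/ssl.key/):

```shell
# Throwaway private key + self-signed certificate, for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt
# Package them as pkcs12 under the alias "tomcat", passphrase "rktmb".
openssl pkcs12 -export -name tomcat -in /tmp/demo.crt \
  -inkey /tmp/demo.key -out /tmp/demo.p12 -passout pass:rktmb
ls -l /tmp/demo.p12
```

The -passout option just avoids the interactive passphrase prompt; the interactive form used above is equivalent.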
We convert this "pkcs12" certificate into a keystore, whose path is ''/usr/share/tomcat8/.keystore'' (some documentation prefers to use a file with a ".jks" extension):
keytool -importkeystore -destkeystore /usr/share/tomcat8/.keystore \
                          -srckeystore /usr/share/tomcat8/ssl.key/rktmb.p12 \
                          -srcstoretype pkcs12 -alias tomcat
At this point, the keystore has been created and the entry is named "tomcat". There is no need to create it again. The CA certificate has not yet been imported; do so with:
keytool -import -alias root   -keystore /usr/share/tomcat8/.keystore -trustcacerts -file /usr/share/tomcat8/ssl.key/GandiStandardSSLCA.pem

Configuring Tomcat to use all this

In ''/etc/tomcat8/server.xml'', uncomment the 8443 connector and point it at the keystore created above (passphrase "rktmb", as set earlier):

<Connector  port="8443" 
              scheme="https" secure="true"
              keystoreFile="/usr/share/tomcat8/.keystore"
              keystorePass="rktmb"
              sslProtocol="TLS" />

Restarting the service and testing

systemctl status tomcat8
  systemctl stop tomcat8
  systemctl status tomcat8

systemctl start tomcat8
  systemctl status tomcat8

Browse to https://tomcat-ssl-test.rktmb.org:8443/


vmware hostif userif get_user_pages

Kernel 4.6 VMware Workstation 12 get_user_pages error

My Archlinux system just upgraded to kernel 4.6 and when compiling VMware Workstation 12 modules, I get:

error: too many arguments to function ‘get_user_pages’

Fortunately, this is a known problem, solved in the VMware Workstation Community forum.

The solution is to replace all "get_user_pages" calls with "get_user_pages_remote".

I had to replace:

  • 1 occurrence in "vmmon-only/linux/hostif.c"
  • 1 occurrence in "vmnet-only/userif.c"

This made it for me. Thanks go to "the community".

References: https://bugzilla.redhat.com/show_bug.cgi?id=1278896


dockerfile multiline to file

Outputting a multiline string from a Dockerfile

I mostly write Dockerfiles sourcing from a base distribution: CentOS or Debian.
But I also have a local mirror and would like to use it for package installation.

Especially on CentOS, that means many lines to write to the /etc/yum.repos.d/CentOS-Base.repo file.

Easiest way: one RUN per line

The first method that comes to mind is to issue one RUN per line to write.
Here you are:

RUN echo "[base]                                                                           "   >      /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - Base                                                   "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/base-reposync-7           "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "[updates]                                                                        "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - Updates                                                "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/updates-reposync-7        "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "[extras]                                                                         "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - Extras                                                 "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/extras-reposync-7         "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "[centosplus]                                                                     "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - Plus                                                   "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/centosplus-reposync-7     "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "[contrib]                                                                        "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - Contrib                                                "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/contrib-reposync-7        "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "[epel-mada]                                                                      "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "name=CentOS-$releasever - EPEL Mada                                              "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "baseurl=ftp://packages-infra.mg.rktmb.org/pub/epel/7/epel-reposync-7             "   >>     /etc/yum.repos.d/CentOS-Base.repo  
RUN echo "gpgcheck=0                                                                       "   >>     /etc/yum.repos.d/CentOS-Base.repo

This has one big drawback: it creates one layer just for one written line, which makes it very slow. Obviously, this is a no-go solution.

More subtle way: one multiline RUN

I found this solution in the Docker GitHub issue discussion about multiline strings.
Here is how it looks:

RUN echo  $'[base]                                                            \n\
name=CentOS-$releasever - Base                                                \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/base-reposync-7        \n\
gpgcheck=0                                                                    \n\
[updates]                                                                     \n\
name=CentOS-$releasever - Updates                                             \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/updates-reposync-7     \n\
gpgcheck=0                                                                    \n\
[extras]                                                                      \n\
name=CentOS-$releasever - Extras                                              \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/extras-reposync-7      \n\
gpgcheck=0                                                                    \n\
[centosplus]                                                                  \n\
name=CentOS-$releasever - Plus                                                \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/centosplus-reposync-7  \n\
gpgcheck=0                                                                    \n\
[contrib]                                                                     \n\
name=CentOS-$releasever - Contrib                                             \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/contrib-reposync-7     \n\
gpgcheck=0                                                                    \n\
[epel-mada]                                                                   \n\
name=CentOS-$releasever - EPEL Mada                                           \n\
baseurl=ftp://packages-infra.mg.rktmb.org/pub/epel/7/epel-reposync-7          \n\
gpgcheck=0                                                                    \n'\
> /etc/yum.repos.d/CentOS-Base.repo

Note the "$" on the first line.
This has the advantage of issuing only one RUN for all these lines, but be careful to escape Bash special characters. One minor drawback I found is that it breaks syntax highlighting & indentation. But it's minor.

COPY a file

The last method I will cover here (be careful, there are many other methods) is to have a CentOS-Base.repo file ready and just copy it this way:

COPY CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo

The only drawback I see with this is that if you change its content and rebuild your image, you have to rebuild it without using the cache (--no-cache). But it's fast and simple.
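For the COPY method, the CentOS-Base.repo file itself can be generated once, outside the Dockerfile, with a quoted heredoc; a sketch (written to /tmp here, and showing only the first repository section):

```shell
# The quoted 'EOF' keeps $releasever literal, which is what yum expects.
cat > /tmp/CentOS-Base.repo <<'EOF'
[base]
name=CentOS-$releasever - Base
baseurl=ftp://packages-infra.mg.rktmb.org/pub/centos/7/base-reposync-7
gpgcheck=0
EOF
grep -F 'name=CentOS-$releasever - Base' /tmp/CentOS-Base.repo
```

The quoting matters: with an unquoted heredoc delimiter, the shell would expand $releasever to an empty string before the file is written.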


ssh fingerprint authenticity prompt

The authenticity of host can't be established

I faced a weird problem today:

  • A Jenkins post-build job is configured to deploy via scp to a target server
  • Jenkins runs as the "integration" user
  • As the "integration" user, I already made sure the server is in "known_hosts", by manually SSH-ing to it (when SSH-ing to it, I'm not prompted about the target server's identity anymore)
  • The Jenkins job is still prompted about the target server's identity
What was really weird:
  • From the Jenkins job, the target server's fingerprint is RSA based and is d9:fa:90:e6:2b:d2:f7:92:8b:28:3f:94:1e:bf:1b:fa.
  • From an SSH session, the target server's fingerprint is ECDSA based and is 0d:2a:c3:3b:8f:f1:e9:bc:1f:5d:68:d3:84:6d:71:a8.

This is because

  • The Jenkins SSH plugin I use is not up to date and still uses weak, old-fashioned algorithms: the negotiation stops at a weaker one, RSA.
  • The SSH client (in an SSH session) negotiation ends up with a stronger algorithm, ECDSA.

This is proven by the following commands.

To force RSA algorithm:
ssh -o HostKeyAlgorithms=ssh-rsa-cert-v01@openssh.com,\
                         ssh-rsa,ssh-dss  integration@target-host.rktmb.org

The prompt is:

The authenticity of host 'target-host.rktmb.org (' can't be established.
RSA key fingerprint is d9:fa:90:e6:2b:d2:f7:92:8b:28:3f:94:1e:bf:1b:fa.

To let the negotiation go on and end up with ECDSA:

ssh integration@target-host.rktmb.org

The prompt is:

The authenticity of host 'target-host.rktmb.org (' can't be established.
ECDSA key fingerprint is 0d:2a:c3:3b:8f:f1:e9:bc:1f:5d:68:d3:84:6d:71:a8.

So, in order to add the target host to "known_hosts", I had to use the command forcing RSA:

ssh -o HostKeyAlgorithms=ssh-rsa-cert-v01@openssh.com,\
                         ssh-rsa,ssh-dss  integration@target-host.rktmb.org

And then issue the "yes" confirmation.

This way the Jenkins job can smoothly SSH-connect to the target host in order to deploy.
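A way to see that each fingerprint is tied to one specific key: compute a fingerprint yourself with ssh-keygen, sketched here on a freshly generated throwaway key (on recent OpenSSH, -E md5 selects the colon-separated hex format shown above):

```shell
# Generate a throwaway RSA key pair, no passphrase, no prompts.
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_rsa_key -q
# Print the public key's fingerprint in the old MD5 hex format.
ssh-keygen -lf /tmp/demo_rsa_key.pub -E md5
```

The same -E md5 option, applied to the target host's keys in /etc/ssh/, lets you check which key each prompt is actually showing.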

Thanks to http://askubuntu.com/a/217066 and https://blog.cloudflare.com/ecdsa-the-digital-signature-algorithm-of-a-better-internet/


solr jetty listen 0000

Make Solr Jetty listen on 0.0.0.0

I recently downloaded and installed Solr 5, and by default it was not reachable from other hosts.
To make it listen on 0.0.0.0, an edit is needed in "jetty-http.xml".

The line
        <Set name="host"><Property name="jetty.host" /></Set>
needs to become
        <Set name="host"><Property name="jetty.host" default="" /></Set>
This makes it accept requests from any client.
You should be careful if you enable this.


artifactory cfengine cache repository

Industrial Linux administration

I manage a bunch of servers, more or less 1000 VMs, running either Debian (lenny, wheezy, jessie) or CentOS (5, 6, 7).
In order to handle this, I use CFEngine.
I mostly:

  • Create the VM
  • Add CFEngine repository (apt or yum)
  • Install CFEngine (via apt or yum)
  • Bootstrap  CFEngine

State of the nation

I have performed the same steps for every VM I installed over the last 3 years.
The main problem I face is the fragmentation of agent versions: some old installations are still on CFEngine 3.5.x while the latest ones are on 3.8. This is not a bearable situation: I need to align the versions.

Methods and attempts

My attempts to upgrade CFEngine from within CFEngine did not pass the tests, mostly because promising the new version from within CFEngine removes the package (and kills the running process), leaving the system in an abnormal state.
I tried several other ways, and specifically for the CFEngine package I will manage it with a script launched outside CFEngine: it upgrades CFEngine (including restarting the daemon), then bootstraps from the hub, and we're done.

The CFEngine APT & Yum repositories

CFEngine is kind enough to provide a repository for their APT and Yum packages. The problem is that if I massively upgrade my thousand VMs against it, there is a small risk of disturbance. Since I already use Artifactory for the development activity, I decided to use its Yum & Apt component as a cache of the CFEngine repository.

What it looks like without Artifactory

Apt source list:

deb https://cfengine.com/pub/apt/packages stable main

Yum repo file


Configure Artifactory to cache those

I need to be an administrator of the Artifactory instance. First I go to the "Admin" section:

Then I go to the "Remote" tab, as I want to set up something related to a remote repository:

Next I create a new repository, where I can choose whether I want to set up an Apt or a Yum one:
Finally I enter

  • What I want as the local name for the repository
  • The URL of the root of the remote

Specifically for our case,

For the Apt repository:

  • Name: cfengine-debian
  • URL: https://cfengine.com/pub/apt/packages

For the Yum repository:

  • Name cfengine-centos
  • URL: http://cfengine.com/pub/yum/x86_64/

How to use this

Apt source list:
deb https://artifactory.rktmb.org/artifactory/cfengine-debian stable main

Yum repo file:

Note that the GPG check is enabled on both Debian and CentOS, and since this repository is just a cache, the needed key is the original CFEngine repository key: https://cfengine.com/pub/gpg.key

The local Artifactory GPG key is useless in our case.
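Under the assumptions above (local name "cfengine-centos", Artifactory at artifactory.rktmb.org), the Yum repo file would look something like:

```
[cfengine-centos]
name=CFEngine packages, cached by Artifactory
baseurl=https://artifactory.rktmb.org/artifactory/cfengine-centos/
gpgcheck=1
gpgkey=https://cfengine.com/pub/gpg.key
enabled=1
```

This is a sketch, not a copy of my production file: adjust the baseurl to your Artifactory instance and repository name.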

Jira workflow for new projects

Associated workflow creation

I'm a Jira Cloud user, and beginning with some version 6, I noticed that when I create a project, it automatically creates a Workflow and an Issue Scheme prepended with the project key, which is a copy of the default scheme.

I always had to clean up after creating a project.

Default workflow for new projects

I also miss a feature that would let me make a custom workflow (and, globally, custom project settings) the default for new projects I create.

Solution: Create with shared configuration

While searching, I noticed that with Jira Cloud, which is version 7.1.0 at the time of writing, there is a link at the bottom of the "Create project" wizard:

"Create with shared configuration" allows me to select the project I want the new one to share its configuration with.

  • The newly created project will use the same configuration as the project I select
  • No Workflow and Issue Scheme that I would need to clean up will be created

This feature is a solution to the reported issues:

  • https://answers.atlassian.com/questions/150807/how-can-i-change-default-workflow
  • https://answers.atlassian.com/questions/45156/how-do-you-change-the-default-workflow-to-a-custom-one

On a hosted Jira 6.4, the dialog is a bit different, and unfortunately all I avoid is the creation of the Workflow and Issue Scheme. I have to click on "Jira Default Schemes":

The new project will have the default configuration, which I will have to change manually, but no Workflow or Issue Scheme will be added to the system.


dpkg set-selections package database

When trying to reproduce the installed packages from one Debian (or Ubuntu) host on another, the built-in solution is to

  1. Get the installed packages on the first host and dump them to a file
  2. Copy that dump to the second host
  3. Mark the packages listed in the dump to be installed
  4. Run the installation

This is respectively achieved with

  1. $ dpkg --get-selections > /tmp/installed-software
  2. $ scp /tmp/
  3. # dpkg --set-selections < /tmp/installed-software
  4. # apt-get -u dselect-upgrade

But on step 3, I often run into a "dpkg: warning: package not in database at line X:".
The solution is to install "dselect" first:

# apt-get install dselect
Then perform step 3 and step 4 again.


jira datepicker date format

When using Jira, I often make use of date fields:

  • In "due date"
  • In Version "start" and "release" date
  • In many other fields

I mostly rely on the date picker to fill those date fields. But if the form expects one format and the date picker fills in another, you run into a bad-format error.

There are two places where date format must be set in a coherent way:

  1. /secure/admin/AdvancedApplicationProperties.jspa:
    • jira.date.picker.java.format : d/MM/yy
    • jira.date.picker.javascript.format : %e/%b/%y
  2. /secure/admin/LookAndFeel!default.jspa:
    • Day/Month/Year Format: d/MM/yy


vmware workstation 12 unable to load libvmwareui.so

Using VMWare Workstation on ArchLinux, it suddenly refused to launch.
When inspecting the logs, which BTW are in /tmp/vmware-<id>, I see:

2015-12-11T17:41:54.442+03:00| appLoader| I125: Log for appLoader pid=1727 version=12.0.1 build=build-3160714 option=Release
2015-12-11T17:41:54.442+03:00| appLoader| I125: The process is 64-bit.
2015-12-11T17:41:54.442+03:00| appLoader| I125: Host codepage=UTF-8 encoding=UTF-8
2015-12-11T17:41:54.442+03:00| appLoader| I125: Host is unknown
2015-12-11T17:41:54.448+03:00| appLoader| W115: HostinfoReadDistroFile: Cannot work with empty file.
2015-12-11T17:41:54.448+03:00| appLoader| W115: HostinfoOSData: Error: no distro file found
2015-12-11T17:41:54.448+03:00| appLoader| I125: Invocation: "/usr/lib/vmware/bin/vmware-modconfig --launcher=/usr/bin/vmware-modconfig --appname=VMware Workstation --icon=vmware-workstation"
2015-12-11T17:41:54.448+03:00| appLoader| I125: Calling: "/usr/lib/vmware/bin/vmware-modconfig --launcher=/usr/bin/vmware-modconfig --appname=VMware Workstation --icon=vmware-workstation"
2015-12-11T17:41:54.448+03:00| appLoader| I125: VMDEVEL not set.
2015-12-11T17:41:54.449+03:00| appLoader| I125: VMWARE_SHIPPED_LIBS_LIST is not set.
2015-12-11T17:41:54.449+03:00| appLoader| I125: VMWARE_SYSTEM_LIBS_LIST is not set.
2015-12-11T17:41:54.449+03:00| appLoader| I125: VMWARE_USE_SHIPPED_LIBS is not set.
2015-12-11T17:41:54.449+03:00| appLoader| I125: VMWARE_USE_SYSTEM_LIBS is not set.
2015-12-11T17:41:54.449+03:00| appLoader| I125: Using configuration file /etc/vmware/config.
2015-12-11T17:41:54.449+03:00| appLoader| I125: Using library directory:  /usr/lib/vmware.
2015-12-11T17:41:54.450+03:00| appLoader| I125: Shipped glib version is 2.24
2015-12-11T17:41:54.450+03:00| appLoader| I125: System glib version is 2.46
2015-12-11T17:41:54.450+03:00| appLoader| I125: Using system version of glib.
2015-12-11T17:41:54.450+03:00| appLoader| I125: Detected VMware library libvmware-modconfig.so.


2015-12-11T17:41:54.774+03:00| appLoader| I125: Loading shipped version of libvmwareui.so.
2015-12-11T17:41:54.834+03:00| appLoader| W115: Unable to load libvmwareui.so from /usr/lib/vmware/lib/libvmwareui.so/libvmwareui.so: /usr/lib/vmware/lib/libvmwareui.so/libvmwareui.so: undefined symbol: _ZN4Glib10spawn_syncERKSsRKNS_11ArrayHandleISsNS_17Container_Helpers10TypeTraitsISsEEEENS_10SpawnFlagsERKN4sigc4slotIvNSA_3nilESC_SC_SC_SC_SC_SC_EEPSsSG_Pi
2015-12-11T17:41:54.834+03:00| appLoader| W115: Unable to load dependencies for /usr/lib/vmware/lib/libvmware-modconfig.so/libvmware-modconfig.so
2015-12-11T17:41:54.834+03:00| appLoader| W115: Unable to execute /usr/lib/vmware/bin/vmware-modconfig.

I made it work with:
$ vmware


VMWare Workstation scripts

VMWare Workstation scripts to ease my life

I currently use VMWare Workstation on a daily basis in order to virtualize.
Doing DevOps and writing CFEngine promises, I often need to start from a virgin old OS and test upgrades and configuration deployment.
I wrote several simple scripts to ease my work: https://bitbucket.org/rakotomandimby/vmware-workstation-scripts/src


command line mail with accent

Send characters like é è à in a command line mail

I run CentOS 6, and made my first try with the "bsd-mailx" "mail" command line.

export LANG=fr_FR.UTF-8
export LC_ALL=fr_FR.UTF-8
echo "Tâches du $( date +"%a %d %b %Y" -d "+1days" )" | mail -s "$( date +"%a %d %b %Y" -d "+1days" )" toto@rktmb.org

When the current month is "August" ("Août" in French), it is just a big mess: characters are not encoded properly!

I tried with

mail -a "Content-Type: text/plain; charset=UTF-8"

The body then displayed correctly, but the subject was still garbled.

Then I switched to "mailx":

# yum remove bsd-mailx
# yum install mailx

I tried with

mail -S sendcharsets=utf-8,iso-8859-1 

And this works!


antlr java parser tree tokens

Use antlr to get a tree view of a Java piece of code

Let this loop be (save it under ~/workdir/LoopOne.java):
class LoopOne {
    public static void main(String[] args) {
        float x = Float.parseFloat(args[0]);
        while ( 0 - x <= 0){
            x = x - 1;
        }
    }
}

If you want to get it parsed, you can use Antlr (http://www.antlr.org/)

cd ~/workdir

curl -O http://www.antlr.org/download/antlr-4.5-complete.jar
curl -O https://raw.githubusercontent.com/antlr/grammars-v4/master/java8/Java8.g4

In ".bashrc":
export CLASSPATH=".:/home/mrakotomandimby/workdir/antlr-4.5-complete.jar:$CLASSPATH"
alias antlr4='java -Xmx500M -cp ".:/home/mrakotomandimby/workdir/antlr-4.5-complete.jar:$CLASSPATH" org.antlr.v4.Tool'
alias grun='java org.antlr.v4.runtime.misc.TestRig'

antlr4 Java8.g4 
javac LoopOne.java Java8*.java
grun Java8  compilationUnit -tree LoopOne.java
grun Java8  compilationUnit -gui LoopOne.java


archlinux lxc debian centos container

Create Debian8 and CentOS7 LXC containers on Archlinux host

Using LXC to create Debian 8 and CentOS 7 containers require to play with AUR.

In order to ease the work:

# pacman -S base-devel

In  /etc/pacman.conf
SigLevel = Never
Server = http://repo.archlinux.fr/$arch

# pacman -S lxc arch-install-scripts netctl
# pacman -S yaourt
$ yaourt -S debootstrap yum

This last command will pull several packages from AUR, so expect a long compile time.
I don't really know why, but I had to reboot to get everything working smoothly.
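Before creating containers, it may also be worth checking that the running kernel has everything LXC needs:

```
# lxc-checkconfig
```

Each required kernel feature should be reported as "enabled".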

Create a CentOS7 container

# export CONTAINER_NAME=c7-00
# lxc-create  -t centos --name ${CONTAINER_NAME} -- --release 7 --arch x86_64 \
               --repo YOUR_CUSTOM_REPO

Then edit the configuration:
# nano -w /var/lib/lxc/${CONTAINER_NAME}/config

Mine looks like this

lxc.rootfs = /var/lib/lxc/c7-00/rootfs
lxc.include = /usr/share/lxc/config/centos.common.conf
lxc.arch = x86_64
lxc.utsname = c7-00
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 =
lxc.network.ipv4.gateway =
lxc.network.hwaddr = 9e:32:da:11:44:5e

Start the container with the previous settings:
# lxc-start --name ${CONTAINER_NAME}

Enter the container
# lxc-attach --name ${CONTAINER_NAME}

# export PATH="/sbin:"${PATH}
# yum install -y nano

Install a Debian8 container

# export CONTAINER_NAME=d8-00
# lxc-create  -t debian --name ${CONTAINER_NAME} -- --mirror=YOUR_MIRROR/debian/ \
      --release=jessie --security-mirror=YOUR_MIRROR/debian-security/ --arch=amd64

Then edit the configuration:
# nano -w /var/lib/lxc/${CONTAINER_NAME}/config

Mine looks like this

lxc.rootfs = /var/lib/lxc/d8-00/rootfs
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.utsname = d8-00
lxc.arch = amd64
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 =
lxc.network.ipv4.gateway =
lxc.network.hwaddr = 9e:32:da:11:44:5a

Start the container with the previous settings:
# lxc-start --name ${CONTAINER_NAME}

Enter the container
# lxc-attach --name ${CONTAINER_NAME}

# export PATH="/usr/sbin:/sbin:/bin:"${PATH}

archlinux switch to bridge network

Switch from normal networking to bridged

Just after installation, you need to run networking the traditional way in order to do the initial setup (install some needed packages like SSH, netctl, ...).

"Traditional networking" here means using the "ethN" or "enoZZZZZ" interface and assigning an IP to that interface directly.

But if you want to switch to a bridged setup, you need to disable that traditional setting and activate the bridged one.

Traditional setting

  • Interface name: eth0
  • Address assignation: static


Start "systemd-networkd.service"
# systemctl start systemd-networkd

To enable it at boot,
# systemctl enable systemd-networkd

Switch to bridged setting

# pacman -S netctl

Then create a netctl profile for the bridge, stored as "/etc/netctl/lxcbridge", starting with:
Description="LXC bridge"
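For reference, a complete netctl profile for the bridge (saved as "/etc/netctl/lxcbridge") could look like the following; the bound interface and the addresses are assumptions to adapt:

```
Description="LXC bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=static
Address=('192.168.1.10/24')
Gateway='192.168.1.1'
```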

To disable traditional networking at boot,
# systemctl disable systemd-networkd

Enable the bridge
# netctl enable lxcbridge

Also enable the bridge at boot
# systemctl enable netctl-auto@br0.service


docker network centos debian

Docker Network configuration on Debian 8 and CentOS 7

My setup is to run a full VM and then launch several containers on it.
That VM can be a CentOS or a Debian.
As of writing, the current versions are Debian 8 and CentOS 7.

Debian 8

You can decide which subnet you want the containers to work in.
The default is "", and the "docker0" bridge belongs to that range.
By setting the "--bip" option in "/etc/default/docker" you can force the "docker0" range:

DOCKER_OPTS=" --bip= "

CentOS 7

CentOS has a slightly different configuration layout.
By setting the "--bip" option in "/etc/sysconfig/docker-network" you can force the "docker0" range:
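As a sketch, and assuming the 10.0.0.0/24 range (an example value, not the one from my setup), "/etc/sysconfig/docker-network" would contain:

```
DOCKER_NETWORK_OPTIONS="--bip=10.0.0.1/24"
```

Note that "--bip" takes the address of "docker0" itself in CIDR notation, not just the network.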



docker discovery useful commands

Some useful commands and settings when using Docker

# docker login
Will create a ".dockercfg" file which is reusable on other hosts.
Useful if you chose a PITA username and password, like I did.
# docker pull xxx
# docker run --name zzz xxx:yyy
Can be shortened to just
# docker run --name zzz [...] xxx:yyy
It will pull the image for you.
If the container zzz is ever stopped:
# docker start zzz
Show the full parameters of a container
# docker inspect zzz
You'll see it's JSON output, and if you just want to print some parameters:
# docker inspect --format '{{.NetworkSettings.IPAddress}}' zzz


debian packaging basic environment

When packaging with Debian, this is the minimal set of packages that should be installed:
  # apt-get install build-essential devscripts debhelper dh-make git subversion 
Go to the source directory
  $ mk-build-deps
This will generate a "<package-name>-build-deps_1.0_amd64.deb" file
Move that file one directory up
  $ mv "<package-name>-build-deps_1.0_amd64.deb" ../
as root:
  # dpkg -i /home/mrakotomandimby/<package-name>-build-deps_1.0_amd64.deb
This will output an error (don't worry)
But continue with
  # apt-get install -f


Archlinux db sig failed

Powerpill rsync leads to errors
Switching to powerpill led me to these errors:
rsync: link_stat "/archlinux/core/os/x86_64/core.db.sig" (in pub) failed: No such file or directory (2)
rsync: link_stat "/archlinux/extra/os/x86_64/extra.db.sig" (in pub) failed: No such file or directory (2)
rsync: link_stat "/archlinux/community/os/x86_64/community.db.sig" (in pub) failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1637) [generator=3.1.1]
Then my installs and upgrades fail.
In order to get rid of these, in "pacman.conf":
SigLevel = PackageRequired
LocalFileSigLevel = Optional

This solution was found in https://bbs.archlinux.org/viewtopic.php?id=153818


centos7 install postgresql

Using stock CentOS repositories

Installing PostgreSQL on CentOS7 can be done two ways:
  • From the PostgreSQL repositories
  • From the official CentOS repositories
The latter is described in this document.
[root@localhost ~]# yum install -y postgresql postgresql-server postgresql-libs postgresql-contrib
[root@localhost ~]# postgresql-setup initdb
Make sure it starts at boot
[root@localhost ~]# systemctl enable postgresql
Start the service
[root@localhost ~]# systemctl start postgresql
Switch to the service user
[root@localhost ~]# su - postgres
Make sure you can access it locally
-bash-4.2$ psql
User and database creation
-bash-4.2$ createuser --interactive --pwprompt jira
-bash-4.2$ createdb --owner=jira jira
Switch to the Jira user
[root@localhost ~]# su - jira
[jira@localhost ~]$ psql --username=jira --password
psql (9.2.10)
Type "help" for help.
If you want to access PostgreSQL without being the "jira" system user, you need to configure "pg_hba.conf" (and reload the service afterwards so the change is applied):
[root@localhost ~]# nano -w /var/lib/pgsql/data/pg_hba.conf
And add
host   jira jira     md5


archlinux list aur packages

List AUR installed packages

yaourt -Qm 

local/chromium-pepper-flash 1:
local/firefox-nightly 41.0a1.20150526-1
local/freshplayerplugin 0.2.4-1
local/gdk-pixbuf 0.22.0-12
local/glib 1.2.10-12
local/gnome-colors-icon-theme 5.5.1-2
local/gnome-colors-icon-theme-extras 5.5.1-2
local/gnome-themes-extras 2.22.0-3
local/google-chrome 43.0.2357.81-1
local/gtk 1.2.10-15
local/gtk-theme-numix-ocean 2.0.2-2
local/hal 0.5.14-22
local/hal-info 0.20091130-2
local/itcl3 3.4.1-1
local/libpurple-meanwhile 2.10.11-1
local/libxfcegui4 4.10.0-5
local/meanwhile 1.0.2-8
local/package-query 1.5-2
local/pdsh 2.29-2
local/prips 0.9.9-1
local/tcptraceroute 1.5beta7-8
local/thunar-shares-plugin 0.2.0.git-2
local/ttf-office-2007-fonts 1.0-2
local/ttf-win7-fonts 7.1-8
local/ubuntu-font-family-console 0.80-0
local/vmware-systemd-services 0.1-2 (vmware)
local/xfce-theme-murrine-unity 20110416-4
local/xfce4-quicklauncher-plugin 1.9.4-10 (xfce4-goodies)
local/yaourt 1.5-1 


Python self keyword not mandatory

A long time ago I subscribed to the Python language mailing list, and today I discovered in a discussion that "self" is not mandatory.
https://docs.python.org/2/tutorial/classes.html#random-remarks states:
Often, the first argument of a method is called self. This is nothing
more than a convention: the name self has absolutely no special
meaning to Python. Note, however, that by not following the
convention your code may be less readable to other Python
programmers, and it is also conceivable that a class browser program
might be written that relies upon such a convention.
It might be obvious to some, but to me it is worth noting.
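As a quick demonstration (the class and method names are made up for the example):

```shell
# "self" is only a convention: the first method parameter can have any name
out=$(python3 - <<'EOF'
class Greeter:
    def hello(this, name):  # "this" instead of "self" works fine
        return "hello " + name
print(Greeter().hello("world"))
EOF
)
echo "$out"
```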


Get UUID from VM and ESX

Sometimes one wants to match a VM in the ESX inventory with the system running inside it.
From ESX:
  # esxcli vm process list
  UUID: 56 4d 7c f7 f7 a6 a3 b1-fb 71 cd 3b 3c 34 65 0a

From Linux command line
  $ sudo dmidecode | grep UUID
  UUID: 564D7CF7-F7A6-A3B1-FB71-CD3B3C34650A

This is the relation; it's up to the user to reformat these outputs to compare them.
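To avoid reformatting by eye, the dmidecode form can be converted into the spaced, dash-split form that esxcli prints; a small sketch using the UUID from above:

```shell
# lowercase the hex, split into byte pairs, then re-insert the dash after byte 8
uuid="564D7CF7-F7A6-A3B1-FB71-CD3B3C34650A"
esx_uuid=$(echo "$uuid" | tr -d '-' | tr 'A-F' 'a-f' \
  | sed 's/../& /g; s/ $//' \
  | sed 's/^\(\(.. \)\{7\}..\) /\1-/')
echo "$esx_uuid"
```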


Archlinux MySQL Workbench result grid


I have been running ArchLinux for a while, and one of my key applications is MySQL Workbench.
Unfortunately, I could not get it working for a long time.
Don't worry, it's working now, and this is my attempt to collect the information, to try to understand what happened.

What did not work

Several weeks after getting a base Archlinux installation working, I managed to install MySQL Workbench from AUR.

The software behaved correctly, except that the result grid never displayed: I just got a blank grid, although when switching to the edit form the data were there. The data were correctly retrieved and could be edited; the problem was only with the result grid display.

It was version 6.1.6 at the time, around the end of June 2014.
I recompiled "mysql-connector-c++" and "ctemplates" before building mysql-workbench: no luck, no result grid.

I even tried to compile the MySQL Workbench development version (6.2 at that time) against the AUR "mysql-connector-c++" and "ctemplates", but no luck: no result grid.

Since I really needed a GUI to manipulate MySQL, I used SQuirreL SQL in the meantime.

What made it work

Some time later, in the MySQL Workbench AUR package comments, I noticed there had been a bunch of upgrades: MySQL Workbench was at 6.2.3, and there was a glib2 patch!
Wait... glib2? That glib2? Great, let's try it now!

I upgraded "glib2" via "pacman" (as I keep my system up to date, it was already the latest available), then rebuilt "mysql-connector-c++" and "ctemplates" (just to be safe), then rebuilt "mysql-workbench".

Guess what? I finally got the result grid!

What was wrong?

To be honest, I did not find any related information in either the mysql-workbench or the glib2 release notes. I don't understand what was incompatible, what did not work together, or what was conflicting.

I'm writing this blog post to collect information about what happened. So, folks, if you ever find out: please tell me!


Madagascar OpenData in consultation phase

The Malagasy government is starting to take an interest in opening up public data.
Here is the calendar of events:
I think this may interest quite a few people (individuals and organizations), so I am passing it along.
Note that I received the invitation on July 30th, via GOTICOM; that is a bit late to take part in the first sessions, but the ones that interest me most are the last ones.


Archlinux installation notes


I switched to Arch Linux, a totally new distribution for me, and these are some steps I had to perform in order to make things work.


The installation guide page is quite clear. It's a step-by-step set of instructions that an intermediate Linux user can blindly follow.
The only thing I missed is the syntax of "/etc/vconsole.conf", needed to get my French keyboard recognized. Instead of having it on another page, I'd rather it were on this one.
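For reference, a minimal "/etc/vconsole.conf" for a French keyboard could be (fr-latin9 is a common keymap choice; adjust as needed):

```
KEYMAP=fr-latin9
```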


At work we need Pidgin with Sametime support. I had to recompile it from ABS, and compile the "meanwhile" utilities from AUR.
The Pidgin wiki page misses the indication that libpurple-meanwhile has to be installed along with meanwhile.


I chose to have my network managed by NetworkManager.
As described in the guide, the installation instructions miss the step of enabling the NetworkManager service at boot.
My EVDO ZTE modem was not recognized until the service was started.
There is also a problem with the version of NetworkManager I installed: when you connect to a secured WiFi network, it doesn't prompt for the key. I had to enter it manually by editing the connection.


For the moment, I have only used it for a week. That is not enough to draw a decent conclusion. Let's see again in 3 months.


Debian automated installation equivalent to kickstart on CentOS

In the RedHat world and its derivatives, after installing a system you usually get a file named “/root/install.ks”.
This is not the case in the Debian world, but I think I found a way to get a “dump” of the answers I provided during the installation process:

### Preseeding other packages
# Depending on what software you choose to install, or if things go wrong
# during the installation process, it's possible that other questions may
# be asked. You can preseed those too, of course. To get a list of every
# possible question that could be asked during an install, do an
# installation, and then run these commands:
#   debconf-get-selections --installer > file
#   debconf-get-selections >> file

 I took that from


Batch editing LDAP passwords

I need to implement a groupware for my organization. In order to achieve this, I need to test each user login and run some interaction tests, especially for the free/busy features of the calendar.
Authentication is against an LDAP server.
I need to copy the LDAP server and batch-modify the passwords to all be the same in my test environment.
In order to achieve this, I need to list all the “dn”s and modify each “userPassword” value.

This is done in two steps:

ldapsearch -w 'admin-password' \
           -x -D 'cn=admin,dc=rktmb,dc=org' \
           -b 'ou=Users,dc=rktmb,dc=org' \
           -s one \
           -H ldap://localhost  dn 

To list the users
ldapsearch -w 'admin-password' \
           -x -D 'cn=admin,dc=rktmb,dc=org' \
           -b 'ou=Users,dc=rktmb,dc=org' \
           -s one \
           -H ldap://localhost  dn \
    | awk '/^dn: /{print $0"\nchangetype: modify\nreplace: userPassword\nuserPassword:: e3NoYX11MWRucUpaQ\n";}' \
    > modified.ldif

To write it to an output file.
ldapmodify -c -w 'admin-password' \
              -x -D "cn=admin,dc=rktmb,dc=org" \
              -H ldap://localhost \
              -f  modified.ldif -S modified.log

To make the changes.


Installing Tomcat 7 and Solr 4 via JNDI on CentOS 6


For sites with a large amount of content, one solution is to use Solr to index the content and speed up searches. This requires installing Tomcat and adding some modules to the Drupal installation, topped off with some configuration.
There is Solr installation documentation, but it is based on Tomcat 6. It's a wiki, so in theory I should be able to contribute; but first, I'm publishing this walkthrough on my blog.

Install Tomcat 7

We choose “/opt/tomcat/” as the root directory.
The version used is the latest release at the time of writing.
The binary distribution offered on the Tomcat website is perfectly fine for this exercise.
The directories we will use are:
  • /opt/tomcat/conf/Catalina/localhost/
  • /opt/tomcat/lib/

Install Solr

Solr can be downloaded from its website.
A “.tar.gz” archive is provided. We won't need all of its contents, but unfortunately it has to be downloaded in full.
Once downloaded, extract it; we will use the following directories and files:
  • example/solr
  • resources/*
  • lib/ext/*
  • dist/solr-X.Y.Z.war
As stated above, the rest is not needed...

We choose to use JNDI for this installation.
The directory used as the root of this Solr instance will be “/opt/solr/solr1/”.
In this root directory:
  • copy the “.war” and name it “solr.war”;
  • copy the entire “example/solr” directory;
    • this gives a directory named “/opt/solr/solr1/solr”, which we will call the “SolrHome”.
Back in the Tomcat tree, put the following in “/opt/tomcat/conf/Catalina/localhost/solr1.xml”:
<?xml version="1.0" encoding="utf-8"?>
<Context docBase="/opt/solr/solr1/solr.war" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/opt/solr/solr1/solr" override="true" />
</Context>

  • Copy "dist/solrj-lib/*.jar" into "/opt/tomcat/lib/"
  • "/opt/tomcat/lib/" must be added as the "java.library.path":
    • In "/opt/tomcat/bin/setenv.sh", put "JAVA_OPTS="-Djava.library.path=/opt/tomcat/lib""
Restart Tomcat with “/opt/tomcat/bin/shutdown.sh” then “/opt/tomcat/bin/startup.sh”; in the Tomcat Manager, you should then see an application named “solr1”.

Install Drupal and the modules

The ApacheSolr module is needed to interact with Solr, plus Devel and DevelGenerate to generate massive amounts of random content.
Install these modules and generate a lot of content: at least 500 nodes on a modern machine.

Configure Solr to use the Drupal content schema

Copy the contents of “/var/www/html/sites/all/modules/apachesolr/solr-conf/solr-4.x/” into “/opt/solr/solr1/solr/collection1/conf/”. The goal is to inform Solr of the structure of Drupal documents.
“/var/www/html/sites/all/modules/apachesolr/solr-conf/solr-4.x/” ships with the Drupal ApacheSolr module.

Configure Drupal

Configure the “Active Search Modules”.

Configure the Solr URL.

Trigger reindexing.


Exim 4 Client Authentication on CentOS 6

I want my Exim 4 MTA to use another host as SmartHost.
The remote SmartHost requires authentication, but on port 25.

The procedure is to:
  1. Disable direct delivery by MX DNS lookup (dnslookup section)
  2. Enable smarthosting (by telling it to use the remote_msa transport)
  3. Adjust the remote SMTP settings (remote_msa section)
  4. Set up client authentication

To achieve this, the configuration should be as follows (yes, you comment out the whole dnslookup section):

# dnslookup:
#  driver = dnslookup
#  domains = ! +local_domains
#  transport = remote_smtp
#  ignore_target_hosts = :
#  no_more


smarthost:
  driver = manualroute
  domains = ! +local_domains
  transport = remote_msa
  route_data = the.name.of.remote.smtp

remote_msa:
  driver = smtp
  port = 25
  hosts_require_auth = *


client_login:
  driver = plaintext
  public_name = LOGIN
  client_send = : test : P@ssw0rd

Note that the default Exim configuration suggests the CRAM-MD5 authentication method, but my example shows how to use LOGIN.
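To verify the routing without sending a message, Exim's address-testing mode helps (the recipient domain here is just an example):

```
# exim -bt someone@example.org
```

The output should show the address being handled by the manualroute router and the "remote_msa" transport, pointing at "the.name.of.remote.smtp".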

I took my inspiration from Exim authentication recipes.


Munin CPU Disk Throughput reduction load average

The System

I run Munin 2.0.16 on CentOS 6 to monitor about 30 hosts.
I monitor roughly 25-30 parameters per host (this depends on the running services).
About half of the hosts are nearby (LAN latency); the other half are far away (about 250ms of network latency).

The facts

I noticed that "munin-graph" and "munin-html" took a long time. Long enough to overlap themselves if I kept the default interval (5 minutes).
I also noticed a huge load average.
One of the most annoying things is the Munin cron job exiting with an error (leading to a mail sent to the administrators) because a lock file (the one from the previous run) still exists. This generates a lot of noise and lowers the perceived importance of the messages Munin sends...

Some solutions

So I decided to put both /var/lib/munin and /var/www/html/munin in tmpfs.
To reduce data loss on reboot, I make an hourly dump to the on-disk file system. This is not expensive.
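As a sketch, the tmpfs mounts and the hourly dump could look like this (mount sizes, dump destination and schedule are assumptions):

```
# /etc/fstab
tmpfs  /var/lib/munin       tmpfs  size=512m  0 0
tmpfs  /var/www/html/munin  tmpfs  size=512m  0 0

# /etc/cron.d/munin-dump: hourly copy back to disk to limit loss on reboot
0 * * * *  root  rsync -a --delete /var/lib/munin/ /var/backups/munin-lib/
```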


I made it step by step:
  1. /var/www/html/munin first, then watched for a while
  2. /var/lib/munin next, and observed
About disk usage, this is what happened:
About load average, this is what happened:


I just saved some disk I/O, mostly writes. Nothing more.


CFEngine's key principle

To understand and use CFEngine properly, you have to grasp its principle: CFEngine makes promises about the state of the system.

So with CFEngine, you don't list actions; you list promises.

You don't say:

  1. Add line XXXXX to a file
  2. Add user YYYY
  3. Start Apache

But rather:

  1. Ensure that line XXX is present in the file
  2. Ensure that YYYY exists
  3. Ensure that Apache is running

As a result:

  1. If line XXXX is removed from the file, CFEngine will put it back
  2. If user YYYY gets deleted, CFEngine will recreate it
  3. If Apache gets killed, CFEngine will restart it


munin apache server-status

When you want to monitor Apache with Munin, you have to let the Munin node access the extended server status.

The server status is available at http://server/server-status, and it has to be secured.

First, enable Extended Server Status.
In httpd.conf (or whatever file your installation uses):
ExtendedStatus On
Second, in the same file:

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from
</Location>

With these settings, you should get Apache access graphs in Munin.
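Once this is in place, you can check from the node itself that the Apache plugins can read the status page (the plugin name below is the stock one shipped with munin-node):

```
# munin-run apache_accesses
```

It should print a value line instead of an error.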

CentOS6 packaging basic environment

When packaging software that is not yet packaged for CentOS, I often start with a clean, minimal VM and then set up a new development and packaging environment.

This had become repetitive, so I needed a memo to copy/paste from.

Note that this is just the very basic packaging tools I need and this does not override any official packaging guide.

sudo yum -y install rpm-build rpmdevtools redhat-rpm-config make gcc autoconf automake gcc-c++ yum-utils
sudo yum -y groupinstall "Development Tools"

Then I install the complete build dependencies depending on the package I want to build.

After this, I have to setup a minimal build environment:
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
When building with "rpmbuild" I might encounter some unsatisfied build dependencies.
This is solved with:

yum-builddep /path/to/package.spec


Libvirt KVM fixed IP address

When playing with KVM/libvirt VMs, I often need them to have fixed IP addresses.
Ironically, I enjoy fixing them with DHCP.
Here is the “default” network configuration I use in order to pin them.
Redefining the "default" network is done with:

virsh --connect qemu:///system net-destroy default
virsh --connect qemu:///system net-undefine default
virsh --connect qemu:///system net-define /tmp/network.xml
virsh --connect qemu:///system net-start default

Feel free to get inspiration:
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0' />
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end='' />
      <host mac="52:54:00:9a:81:00" name="centos6-00.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:e0:0e:8a" name="centos6-01.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:c1:ff:12" name="debian7-00.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:06:a1:f4" name="debian7-01.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:91:d1:96" name="debian7-02.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:22:20:cd" name="fedora19-00.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:1d:0f:9e" name="fedora19-01.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:2a:a6:41" name="gitlab-c6-01.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:f7:30:03" name="gitlab-c6-02.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:86:a0:d6" name="win7-00.vm.mihamina.netapsys.fr" ip="" />
      <host mac="52:54:00:e8:a8:b8" name="win7-01.vm.mihamina.netapsys.fr" ip="" />
    </dhcp>
  </ip>
</network>


The cost of living in Madagascar

They say it's cheaper in Madagascar

In several conversations comparing Madagascar with Europe in general, I have heard that the cost of living in Madagascar is lower than in Europe. My own calculations, however, indicate the opposite. I investigated and reached a conclusion, which I argue in this post.

The profiles found in Madagascar

Madagascar is a prized destination for entrepreneurs who want to outsource labor (skilled or not) from France. This is because it is a French-speaking country and a large proportion of Malagasy students pursue their higher education in France, then "come back" to the country afterwards.
Given these elements, we end up with a population composed, on one side, of company executives of French origin looking for "cheap" labor, and on the other side, of young graduates trained in France who claim salaries based on the ranges practiced in France. During salary discussions this reference is dismissed by recruiters. In most cases the argument put forward is that the cost of living in Madagascar is lower than in France. Many candidates, worn down, eventually resign themselves and give in, agreeing to no longer refer to their initial criteria but to other, more local ones.

The desired lifestyle

Looking more closely, however, a young graduate who completed higher education in France and returns to Madagascar to work aspires to a specific lifestyle. During their studies they lived in a country where the Internet is cheap and of good quality, where you eat muesli for breakfast, where pay-TV channels are numerous and affordable (generally included in the Internet plan), where you can buy the latest smartphone in 3 interest-free installments, where a small second-hand city car is not very expensive (neither to buy nor to maintain) and wears out slowly, where going to Disneyland Paris is a train ride away...
Brought back to Madagascar, this same way of consuming, which is after all objectively "normal" for a higher-education graduate, costs more!
An Internet connection with the same bandwidth and quality as in France falls under "key account" enterprise offers; muesli is imported and therefore more expensive; satellite channel bundles are more expensive; buying on credit is not possible (nor even having an authorized overdraft at the bank); buying a vehicle (new or used) includes paying for shipping and import taxes, so the same vehicle costs more once it arrives in Madagascar, not to mention that fuel itself is more expensive, as is maintenance, which costs more because of the state of the road network and the resulting accelerated wear of the vehicle. To top off the comparison, for a Malagasy worker, spending a family weekend in the nearest Disney park (USA, France or elsewhere) would cost at least 20 times more...
My list is not exhaustive, but it reflects what a higher-education graduate wants to be able to do once in working life. And doing the math properly, this life is much more expensive in Madagascar than it costs in France, for the same activity.
Unless, of course, you settle for eating cassava morning, noon and night.

The entrepreneur's point of view

On the other hand, you also have to understand that an entrepreneur who decides to relocate to Madagascar is looking for cheaper labor. There is really no point in offshoring if, financially, the operation comes out the same. Starting from there, comparing salary scales is ruled out from the outset.
On top of having to cost less, it is also essential to compensate for certain other drawbacks: the distance and the economic & political context.
Working effectively with a team "scattered" across 12,000 km is not easy. The distance has to be worked around with facilities that simulate proximity. This means acquiring video-conferencing equipment, then renting an Internet connection that combines quantity and quality. That does not come cheap in Madagascar.
Finally, the economic and social context in Madagascar is far from reassuring. During 2013 the country found itself without a head of state, and public safety leaves much to be desired... Nothing is done to reassure investors about the durability of their business. This leads to an attitude of seeking quick profitability, to amortize quickly, in order to break even before something serious happens.
It is quite understandable to exclude from salary discussions any comparison or reference with France.

It's more expensive in Madagascar

It is therefore clear that investors come to Madagascar for the cost of labor and with the will to achieve short-term profitability. That is understandable, defensible, and an advantage. However, during salary discussions, hammering home the false idea that life costs less in Madagascar is, in my opinion, a big mistake. This argument is mainly put forward by company executives, and my intention is not to teach these people how to steer their ship. But as I have explained, everything is more expensive in Madagascar, starting with what they themselves consume...