Installing Graylog on CentOS 7

Let’s start out 2016 by setting up a logging system called Graylog. If you have not used Graylog before, I encourage you to check it out. It is an open-source log management system and is pretty flexible, as it can capture, index, and analyze almost anything. Once up and running, it can be scaled out into an enterprise-wide log management system; high availability, clustering, and replication are what Graylog thrives on. In this demo I am going to use two systems. One is the Graylog server, which also runs the web interface and a MongoDB database. The other is an Elasticsearch node, which is where the data is actually stored and indexed. For bigger, production-ready setups you just scale this out to separate systems.

Documentation is key here and Graylog has done an excellent job of it so here it is in case you need it:

I’m assuming that you already have CentOS 7 Minimal installed; if not, you can pick up the latest CentOS at . The CentOS systems also need a connection to the internet, and I am using just the root account.

I’m using two systems for this tutorial, so keep that in mind. Here are the names of the systems and their roles:

Graylog1 - Graylog/Web/Mongo server
Graylog2 - Elasticsearch node

First, let’s log in as the root account and update both systems, accepting any updates, before we do anything else.

yum update

Install my favorite text editor for Linux, called nano, along with wget on both systems. (We need to make modifications to some configuration files.)

yum install nano wget

We also need Java on both systems, so let’s install that.

yum install java-1.8.0-openjdk

Let’s start with server Graylog2, which will have the Elasticsearch service installed. This can get pretty big in size depending on the amount of logs you’re pushing into Graylog and the retention period you have (by default it is 20 indices). Also note that you have the ability to cluster these Elasticsearch nodes, which can add additional space and redundancy to Graylog. In this tutorial we are just doing one server, and currently Graylog does not support Elasticsearch 2.x, so we need to install Elasticsearch 1.7.x.

I downloaded the RPM package to root’s home directory on Graylog2. You can either use WinSCP or use wget to download the file. I then ran the following to install Elasticsearch 1.7.x. Remember to install the latest 1.7.x version.

rpm -Uvh elasticsearch-1.7.4.noarch.rpm

(Side conversation): By default, Elasticsearch stores its data in the /var/lib/elasticsearch folder. Depending on the size of the CentOS server you may want to change the default location, especially if you had CentOS auto-partition your drive. Usually the /home folder has the most space available, but this varies between installations, so check for yourself by running df -h to view your partitions. To change the save location for Elasticsearch, edit the elasticsearch.yml located at /etc/elasticsearch/. Look for the path.data setting, uncomment it, and set it to wherever you want Elasticsearch to store its indices. (It’s easier to do this now than to wait until your partition is full 🙂 )
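As a sketch of that change (the /home/elasticsearch-data path is just an example I’m using here, not a requirement), the relevant lines in /etc/elasticsearch/elasticsearch.yml would look like the following. Remember to create the directory first and make it owned by the elasticsearch user (chown -R elasticsearch:elasticsearch /home/elasticsearch-data), then restart the service:

```yaml
# /etc/elasticsearch/elasticsearch.yml
# Default data location (commented out):
#path.data: /var/lib/elasticsearch
# Example custom location -- point this at whichever partition has room:
path.data: /home/elasticsearch-data
```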

Set Elasticsearch to start on boot:

sudo systemctl enable elasticsearch.service

Start the Elasticsearch service:

sudo systemctl start elasticsearch.service

Verify Elasticsearch service is started:

[root@GRAYLOG2 ~]#  systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-12-21 21:00:42 MST; 52s ago
 Main PID: 2835 (java)
   CGroup: /system.slice/elasticsearch.service
           └─2835 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiating...

Dec 21 21:00:42 GRAYLOG2 systemd[1]: Started Elasticsearch.
Dec 21 21:00:42 GRAYLOG2 systemd[1]: Starting Elasticsearch...

Additional verification:

[root@GRAYLOG2 ~]# curl -XGET http://localhost:9200/
{
  "status" : 200,
  "name" : "Norrin Radd",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@GRAYLOG2 ~]#

Allow a range of ports on the CentOS system for management and Elasticsearch functions.

firewall-cmd --permanent --zone=public --add-port=9200-9400/tcp

Reload the firewall:

firewall-cmd --reload

Install the browser plugin for Elasticsearch.

cd /usr/share/elasticsearch/
bin/plugin -install mobz/elasticsearch-head

Point your browser at the Elasticsearch node, http://ip-address:9200; you should see the same information as when you issued the curl command. You can view the plugin that was installed by going to http://ip-address:9200/_plugin/head/. (This is very helpful for seeing the state of the Elasticsearch cluster.)

We now need to modify some configuration on the Elasticsearch node. We will be editing /etc/elasticsearch/elasticsearch.yml

nano /etc/elasticsearch/elasticsearch.yml

Edit the following (GRAYLOG1-IP and GRAYLOG2-IP are placeholders for your servers’ actual IP addresses):

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
cluster.name: graylog

#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
node.name: "Graylog2"

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
# 1. Disable multicast discovery (enabled by default):
discovery.zen.ping.multicast.enabled: false
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
discovery.zen.ping.unicast.hosts: ["GRAYLOG2-IP", "GRAYLOG1-IP"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
# You have to install the cloud-aws plugin for enabling the EC2 discovery.

Restart Elasticsearch service:

systemctl restart elasticsearch

Verify service is running:

systemctl status elasticsearch

Open your browser to http://ip-address:9200/_plugin/head/ and you should now see your Elasticsearch node named correctly.

Elasticsearch node up and running on server Graylog2

On server Graylog1:

Allow the installation of EPEL packages. We use EPEL to install pwgen; if you have another way of generating random passwords, you don’t need it.

To install the EPEL repository on CentOS 7 (the epel-release package is available from the CentOS Extras repository, which is enabled by default):

yum install epel-release

Install pwgen:

yum install pwgen
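If you would rather not add EPEL just for this, a roughly equivalent /dev/urandom one-liner (an alternative I’m suggesting here, not something this tutorial uses elsewhere) is:

```shell
# Generate a 96-character alphanumeric secret without pwgen
secret=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 96)
echo "$secret"
```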

Install the MongoDB software repository.

Create a /etc/yum.repos.d/mongodb-org-3.2.repo file so that you can install MongoDB directly using yum:

nano /etc/yum.repos.d/mongodb-org-3.2.repo

Copy the following into mongodb-org-3.2.repo (this matches the repo definition in MongoDB’s installation documentation):

[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc

Install Mongodb:

yum install mongodb-org

Once installed, start it up by running this command:

systemctl start mongod

Verify Mongo is up and running by running the mongo command. It should look something like this:

[root@GRAYLOG1 ~]# mongo
MongoDB shell version: 3.2.0
connecting to: test
Server has startup warnings:
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]

Make sure Mongo starts up when the server reboots:

chkconfig mongod on

We are now ready to install the Graylog server and the Graylog web interface.


Run the following commands; the rpm command installs the Graylog repository package so that we can then install Graylog using yum:

sudo rpm -Uvh
sudo yum install graylog-server graylog-web

When/if the server reboots, let’s make sure Graylog starts up with it:

systemctl enable graylog-server
systemctl enable graylog-web

Now we need to edit the graylog configuration:

nano /etc/graylog/server/server.conf

Make the following edits:
Note: For the shasum -a 256 command referenced in the config comments below, use sha256sum instead (shasum is not installed by default on a minimal CentOS system).
Example: echo -n yourpassword | shasum -a 256 becomes echo -n yourpassword | sha256sum
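As a quick sketch (“yourpassword” is just a placeholder), generating the hash and pulling out the digest field looks like this:

```shell
# Hash the desired root password; the first field of sha256sum's output
# is the 64-character hex digest that goes into root_password_sha2
hash=$(echo -n yourpassword | sha256sum | awk '{print $1}')
echo "$hash"
```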

# If you are running more than one instances of graylog2-server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting graylog2-server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = YOUR_GENERATED_SECRET_GOES_HERE

# The default root user is named 'admin'
#root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = YOUR_PASSWORD_HASH_GOES_HERE

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user.
# The configured time zone must be parseable by
# Default is UTC
#root_timezone = UTC


# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
elasticsearch_index_prefix = graylog

# Name of the Elasticsearch index template used by Graylog to apply the mandatory index mapping.
# # Default: graylog-internal
#elasticsearch_template_name = graylog-internal

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also:
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = true

# settings to be passed to elasticsearch's client (overriding those in the provided elasticsearch_config_file);
# all of these are optional
# this must be the same as the cluster name of your Elasticsearch cluster
elasticsearch_cluster_name = graylog

# you could also leave this out, but makes it easier to identify the graylog2 client instance
elasticsearch_node_name = graylog1

# we don't want the graylog2 server to store any data, or be master node
#elasticsearch_node_master = false
#elasticsearch_node_data = false

# use a different port if you run multiple Elasticsearch nodes on one machine
#elasticsearch_transport_tcp_port = 9350

# we don't need to run the embedded HTTP server here
#elasticsearch_http_enabled = false

elasticsearch_discovery_zen_ping_multicast_enabled = false
# Replace GRAYLOG2-IP with the IP address of your Elasticsearch node (Graylog2)
elasticsearch_discovery_zen_ping_unicast_hosts = GRAYLOG2-IP:9300

# Change the following setting if you are running into problems with timeouts during Elasticsearch cluster discovery.
# The setting is specified in milliseconds, the default is 5000ms (5 seconds).
#elasticsearch_cluster_discovery_timeout = 5000

Add the following firewall rule:

firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --permanent --zone=public --add-port=9201-9400/tcp
firewall-cmd --reload

Configure the Graylog web server configuration file located at /etc/graylog/web/web.conf.

# graylog2-server REST URIs (one or more, comma separated). For example: "http://127.0.0.1:12900/"
# (Point this at your Graylog server's REST listener; 12900 is the default REST port.)
graylog2-server.uris="http://127.0.0.1:12900/"

# Learn how to configure custom logging in the documentation:

# Secret key
# ~~~~~
# The secret key is used to secure cryptographic functions. Set this to a long and randomly generated string.
# If you deploy your application to several instances, be sure to use the same key!
# Generate one for example with: pwgen -N 1 -s 96
application.secret="YOUR_GENERATED_SECRET_GOES_HERE"

# Web interface timezone
# Graylog stores all timestamps in UTC. To properly display times, set the default timezone of the interface.
# If you leave this out, Graylog will pick your system default as the timezone. Usually you will want to configure it.
# timezone="Europe/Berlin"

# Message field limit
# Your web interface can cause high load in your browser when you have a lot of different message fields. The default
# limit of message fields is 100. Set it to 0 if you always want to get all fields. They are for example used in the
# search result sidebar or for autocompletion of field names.

# Use this to run Graylog with a path prefix

# You usually do not want to change this.

# Global timeout for communication with Graylog server nodes; default: 5s

# Accept any server certificate without checking for validity; required if using self-signed certificates.
# Default: true
# graylog2.client.accept-any-certificate=true

Start them up! Both graylog-server and graylog-web.

systemctl start graylog-server
systemctl start graylog-web

Verify both are running:

[root@GRAYLOG1 ~]# systemctl status graylog-server
● graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-12-23 20:05:38 MST; 11s ago
 Main PID: 11758 (graylog-server)
   CGroup: /system.slice/graylog-server.service
           ├─11758 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─11759 /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -XX:PermSize=128m -XX:MaxPermSize=256m -se...

Dec 23 20:05:38 GRAYLOG1 systemd[1]: Started Graylog server.
Dec 23 20:05:38 GRAYLOG1 systemd[1]: Starting Graylog server...
Dec 23 20:05:38 GRAYLOG1 graylog-server[11758]: OpenJDK 64-Bit Server VM warning: ignoring option Per...8.0
Dec 23 20:05:38 GRAYLOG1 graylog-server[11758]: OpenJDK 64-Bit Server VM warning: ignoring option Max...8.0
Hint: Some lines were ellipsized, use -l to show in full.
[root@GRAYLOG1 ~]# systemctl status graylog-web
● graylog-web.service - Graylog web interface
   Loaded: loaded (/usr/lib/systemd/system/graylog-web.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-12-23 20:05:43 MST; 13s ago
 Main PID: 11777 (graylog-web)
   CGroup: /system.slice/graylog-web.service
           ├─11777 /bin/sh /usr/share/graylog-web/bin/graylog-web
           └─11778 java -Xms1024m -Xmx1024m -XX:ReservedCodeCacheSize=128m -Dconfig.file=/etc/graylog/we...

Dec 23 20:05:43 GRAYLOG1 systemd[1]: Started Graylog web interface.
Dec 23 20:05:43 GRAYLOG1 systemd[1]: Starting Graylog web interface...
[root@GRAYLOG1 ~]#

You should be able to get to the Graylog web interface by going to http://ip-address:9000/. You should see a login screen; type in admin as the username, followed by the password that you created.

Graylog Login Screen

Once logged in, if you get an error about an exception (an “Oops” message), you are likely browsing to the IP address of the server. There is some kind of Java name-resolution issue that makes Graylog unhappy 😦, so to fix it you can do one of two things:

  • Use the DNS name of the server if you have local DNS servers; Graylog has to use these DNS servers as well.


  • Edit the local server’s hosts file, adding the hostname of the server and its IP address.

Edit /etc/hosts using nano. (In this example, <Graylog1-IP> stands in for the Graylog1 server’s IP address, and the hostname of this Graylog server is GRAYLOG1.)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
<Graylog1-IP>   GRAYLOG1

After you edit the hosts file, just refresh your browser session. You have now installed Graylog! 🙂

Graylog Web Interface

It’s a bit of an effort to get this thing up and running, but you now have a logging system that is ready to go. If you want this in production, please look at the documentation that Graylog offers, as there are things I did not even mention; this was a bare-bones installation and Graylog has a lot to offer. I hope this information was helpful. Now let’s kick off 2016!