Installing Graylog on CentOS 7


Let's start out 2016 by setting up a logging system called Graylog. If you have not used Graylog before, I encourage you to check it out. It is an open-source log management system and is pretty flexible, as it can capture, index, and analyze almost anything. Once up and running, this system can be scaled out into an enterprise-wide log management setup; high availability, clustering, and replication are what Graylog thrives on. In this demo I am going to have two systems. One is the Graylog server, which will also run the web interface and a MongoDB database. The other is an Elasticsearch node, which is where the actual data is stored and indexed. For bigger, production-ready setups you just scale this out to separate systems.

Documentation is key here, and Graylog has done an excellent job with theirs, so refer to the official docs if you need them. I'm assuming that you already have CentOS 7 Minimal installed; if not, you can grab the latest CentOS from the CentOS website. Both CentOS systems also need a connection to the internet and just the root account. I'm using two systems for this tutorial, so keep that in mind; here are the names of each system and what their role is:

  • Graylog1 – Graylog/Web/Mongo server
  • Graylog2 – Elasticsearch node

First, let's log in as root and update both systems, accepting any updates, before we do anything else.

yum update

Install my favorite Linux text editor, nano (don't judge me!). We also need wget on both systems.

yum install nano wget

We also need Java on both systems, so let's install that.

yum install java-1.8.0-openjdk

Let's start with server Graylog2, which will have the Elasticsearch service installed. This can get pretty big in size depending on the amount of logs you're pushing into Graylog and the retention period you have (by default Graylog keeps 20 indices). Also note that you have the ability to cluster these Elasticsearch nodes, which can add additional space and redundancy to Graylog. In this tutorial we are just doing one server, and currently (as of this post) Graylog does not support Elasticsearch 2.x, so we need to install Elasticsearch 1.7.x. I downloaded the RPM package to root's home directory on Graylog2; you can use either WinSCP or wget to download that file. I then ran the following to install Elasticsearch 1.7.x. Remember to install the latest 1.7.x version:

rpm -Uvh elasticsearch-1.7.4.noarch.rpm

(Side conversation): By default, Elasticsearch stores its data in the /var/lib/elasticsearch folder. Depending on the size of the CentOS server you may want to change the default location, especially if you had CentOS auto-partition your drive. Usually the /home folder has the most space available, but this varies between installations, so check on your own by running df -h to view partitions. To change the save location for Elasticsearch, edit elasticsearch.yml, located at /etc/elasticsearch/: look for the path.data setting, uncomment it, and put down where you want Elasticsearch to store indexes. (It's easier to do this now than to wait until your partition is full.)
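As a sketch of that change (the /home/elasticsearch path is just an example; pick a directory on whichever partition has the space), the relevant lines in /etc/elasticsearch/elasticsearch.yml would look like:

```yaml
# Path to directory where to store index data allocated for this node.
# The RPM default is /var/lib/elasticsearch.
path.data: /home/elasticsearch
```

If you do relocate the data, make sure the new directory is owned by the elasticsearch user (chown -R elasticsearch:elasticsearch /home/elasticsearch), or the service will not be able to write to it.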

Start Elasticsearch on bootup:

sudo systemctl enable elasticsearch.service

Start the Elasticsearch service:

sudo systemctl start elasticsearch.service

Verify Elasticsearch service is started:

[root@GRAYLOG2 ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-12-21 21:00:42 MST; 52s ago
     Docs:
 Main PID: 2835 (java)
   CGroup: /system.slice/elasticsearch.service
           └─2835 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiating...

Dec 21 21:00:42 GRAYLOG2 systemd[1]: Started Elasticsearch.
Dec 21 21:00:42 GRAYLOG2 systemd[1]: Starting Elasticsearch...

Additional verification:

[root@GRAYLOG2 ~]# curl -XGET http://localhost:9200/
{
  "status" : 200,
  "name" : "Norrin Radd",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.4",
    "build_hash" : "0d3159b9fc8bc8e367c5c40c09c2a57c0032b32e",
    "build_timestamp" : "2015-12-15T11:25:18Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@GRAYLOG2 ~]#

Allow a range of ports on the CentOS system for management and Elasticsearch functions.

firewall-cmd --permanent --zone=public --add-port=9200-9400/tcp

Reload the firewall:

firewall-cmd --reload

Install the browser plugin for Elasticsearch.

cd /usr/share/elasticsearch/
bin/plugin -install mobz/elasticsearch-head

Point your browser to the Elasticsearch node at http://ip-address:9200 and you should see the same information as when you issued the curl command. You can view the plugin that was installed by going to http://ip-address:9200/_plugin/head/ (this is very helpful for seeing the Elasticsearch cluster).

We now need to modify some configuration on the Elasticsearch node. We will be editing /etc/elasticsearch/elasticsearch.yml

nano /etc/elasticsearch/elasticsearch.yml

Edit the following:

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: graylog

#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "GRAYLOG2"


# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.

Restart Elasticsearch service:

systemctl restart elasticsearch

Verify service is running:

systemctl status elasticsearch

Open your browser to http://ip-address:9200/_plugin/head/ and you should now see your Elasticsearch node named correctly.

On server Graylog1: allow the installation of EPEL packages. We use EPEL to install pwgen; if you have another way of generating random passwords, you don't need it. To install the EPEL repository on CentOS 7:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Install pwgen:

yum install pwgen
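We will use pwgen below to generate the Graylog secrets. As an aside, if you'd rather not pull in EPEL at all, a 96-character secret can also be produced with openssl, which ships with CentOS (this is my alternative sketch, not the method from the Graylog docs):

```shell
# Generate random bytes, base64-encode them, strip non-alphanumeric
# characters, and keep the first 96 characters as the secret.
secret=$(openssl rand -base64 144 | tr -dc 'a-zA-Z0-9' | head -c 96)
echo "$secret"
```

Either way, hang on to the generated string; it goes into the Graylog configuration files later on.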

Install the MongoDB software repository (MongoDB's official install guide is the reference here). Create a /etc/yum.repos.d/mongodb-org-3.2.repo file so that you can install MongoDB directly using yum:

nano /etc/yum.repos.d/mongodb-org-3.2.repo

Copy the following into mongodb-org-3.2.repo (this is the stanza from MongoDB's install guide):

[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc

Install MongoDB:

yum install mongodb-org

Once installed start it up by running this command:

/etc/init.d/mongod start

Verify Mongo is up and running by running the mongo command. It should look something like this:

[root@GRAYLOG1 ~]# mongo
MongoDB shell version: 3.2.0
connecting to: test
Server has startup warnings:
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-12-22T18:30:06.804-0700 I CONTROL  [initandlisten]

Make sure Mongo starts up when the server reboots:

chkconfig mongod on

We are now ready to install Graylog server and Graylog web (Graylog's install docs are the reference). Run the following commands; the rpm command installs Graylog's repository package so we can install Graylog using yum:

sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-1.3-repository-el7_latest.rpm
sudo yum install graylog-server graylog-web

When/if the server reboots, let's make sure Graylog starts up with it:

systemctl enable graylog-server
systemctl enable graylog-web

Now we need to edit the graylog configuration:

nano /etc/graylog/server/server.conf

Make the following edits:

Note: where the documentation shows the shasum -a 256 command, use sha256sum on CentOS instead. Example: echo -n yourpassword | shasum -a 256 becomes echo -n yourpassword | sha256sum.
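For example (yourpassword is a stand-in; use your real admin password), hashing it on CentOS looks like this:

```shell
# sha256sum prints "<hex-digest>  -" for stdin; awk keeps just the
# 64-character hex digest that Graylog expects
hash=$(echo -n yourpassword | sha256sum | awk '{print $1}')
echo "$hash"
```

The printed hash is the value that goes on the root_password_sha2 line in server.conf.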

# If you are running more than one instances of graylog2-server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting graylog2-server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id

# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = YOUR_GENERATED_SECRET_GOES_HERE

# The default root user is named 'admin'
#root_username = admin

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = YOUR_PASSWORD_HASH_GOES_HERE

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user.
# The configured time zone must be parseable by
# Default is UTC
#root_timezone = UTC


# How many Elasticsearch shards and replicas should be used per index? Note that this only applies to newly created indices.
elasticsearch_shards = 4
elasticsearch_replicas = 0

# Prefix for all Elasticsearch indices and index aliases managed by Graylog.
elasticsearch_index_prefix = graylog

# Name of the Elasticsearch index template used by Graylog to apply the mandatory index mapping.
# Default: graylog-internal
#elasticsearch_template_name = graylog-internal

# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also:
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = true

# settings to be passed to elasticsearch's client (overriding those in the provided elasticsearch_config_file)
# this must be the same as for your Elasticsearch cluster
elasticsearch_cluster_name = graylog

# you could also leave this out, but makes it easier to identify the graylog2 client instance
elasticsearch_node_name = graylog1

# we don't want the graylog2 server to store any data, or be master node
#elasticsearch_node_master = false
#elasticsearch_node_data = false

# use a different port if you run multiple Elasticsearch nodes on one machine
#elasticsearch_transport_tcp_port = 9350

# we don't need to run the embedded HTTP server here
#elasticsearch_http_enabled = false

elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = ELASTICSEARCH_NODE_IP:9300

# Change the following setting if you are running into problems with timeouts during Elasticsearch cluster discovery.
# The setting is specified in milliseconds, the default is 5000ms (5 seconds).
#elasticsearch_cluster_discovery_timeout = 5000

Add the following firewall rules:

firewall-cmd --permanent --zone=public --add-port=9000/tcp
firewall-cmd --permanent --zone=public --add-port=9201-9400/tcp
firewall-cmd --reload

Configure the Graylog web server configuration file located at /etc/graylog/web/web.conf.

# graylog2-server REST URIs (one or more, comma separated)
# For example: "http://127.0.0.1:12900/,http://127.0.0.1:12910/"
graylog2-server.uris="http://127.0.0.1:12900/"

# Learn how to configure custom logging in the documentation:

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions. Set this to a long and randomly generated string.
# If you deploy your application to several instances be sure to use the same key!
# Generate for example with: pwgen -N 1 -s 96
application.secret="YOUR_GENERATED_SECRET_GOES_HERE"

# Web interface timezone
# Graylog stores all timestamps in UTC. To properly display times, set the default timezone of the interface.
# If you leave this out, Graylog will pick your system default as the timezone. Usually you will want to configure it.
# timezone="Europe/Berlin"

# Message field limit
# Your web interface can cause high load in your browser when you have a lot of different message fields. The default
# limit of message fields is 100. Set it to 0 if you always want to get all fields. They are for example used in the
# search result sidebar or for autocompletion of field names.

# Use this to run Graylog with a path prefix

# You usually do not want to change this.

# Global timeout for communication with Graylog server nodes; default: 5s

# Accept any server certificate without checking for validity; required if using self-signed certificates.
# Default: true
# graylog2.client.accept-any-certificate=true

Start them up! Both graylog-server and graylog-web.

systemctl start graylog-server
systemctl start graylog-web

Verify both are running:

[root@GRAYLOG1 ~]# systemctl status graylog-server
● graylog-server.service - Graylog server
   Loaded: loaded (/usr/lib/systemd/system/graylog-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-12-23 20:05:38 MST; 11s ago
     Docs:
 Main PID: 11758 (graylog-server)
   CGroup: /system.slice/graylog-server.service
           ├─11758 /bin/sh /usr/share/graylog-server/bin/graylog-server
           └─11759 /usr/bin/java -Xms1g -Xmx1g -XX:NewRatio=1 -XX:PermSize=128m -XX:MaxPermSize=256m -se...

Dec 23 20:05:38 GRAYLOG1 systemd[1]: Started Graylog server.
Dec 23 20:05:38 GRAYLOG1 systemd[1]: Starting Graylog server...
Dec 23 20:05:38 GRAYLOG1 graylog-server[11758]: OpenJDK 64-Bit Server VM warning: ignoring option Per...8.0
Dec 23 20:05:38 GRAYLOG1 graylog-server[11758]: OpenJDK 64-Bit Server VM warning: ignoring option Max...8.0
Hint: Some lines were ellipsized, use -l to show in full.

[root@GRAYLOG1 ~]# systemctl status graylog-web
● graylog-web.service - Graylog web interface
   Loaded: loaded (/usr/lib/systemd/system/graylog-web.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-12-23 20:05:43 MST; 13s ago
     Docs:
 Main PID: 11777 (graylog-web)
   CGroup: /system.slice/graylog-web.service
           ├─11777 /bin/sh /usr/share/graylog-web/bin/graylog-web
           └─11778 java -Xms1024m -Xmx1024m -XX:ReservedCodeCacheSize=128m -Dconfig.file=/etc/graylog/we...

Dec 23 20:05:43 GRAYLOG1 systemd[1]: Started Graylog web interface.
Dec 23 20:05:43 GRAYLOG1 systemd[1]: Starting Graylog web interface...
[root@GRAYLOG1 ~]#

You should be able to get to the Graylog web interface by going to http://ip-address:9000/. You should see a login screen; log in as admin with the password you created.

Once logged in, if you get an error about an exception and an "Oops" message, you are likely using the IP address of the server in your browser. There is some kind of Java name-resolution issue that makes Graylog unhappy :( so to fix it you can do one of two things:

  • Use the DNS name of the server if you have local DNS servers; Graylog has to use these DNS servers as well.
  • Edit the server's local hosts file and add the server's hostname and IP address.

Edit /etc/hosts using nano. (In this example, 192.168.148.130 is the Graylog1 server's IP address and the hostname of the Graylog server is GRAYLOG1.)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.148.130 GRAYLOG1

After you edit the hosts file, just refresh your browser session. You have now installed Graylog! :)

It takes a bit of effort to get this thing up and running, but you now have a logging system that is ready to go. If you want to run it in production, please look at the documentation that Graylog offers, as there are things I did not even mention; this was a bare-bones installation and Graylog has a lot more to offer! I hope this information was helpful. Now let's kick off 2016!