
How to Start a Client

To start the Client, Elasticsearch, Logstash, and Kibana need to be started.

How to start Elasticsearch:

To start Elasticsearch, go to

  • cd /usr/elk/elasticsearch/elasticsearch-6.4.2
  • /usr/elk/elasticsearch/elasticsearch-6.4.2/bin/elasticsearch -d -p pid

Note: Don’t run Elasticsearch as the root user.

This starts Elasticsearch as a background service. Wait a few minutes, then execute a curl command to test it.
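A quick way to test is to query Elasticsearch’s HTTP API (a sketch, assuming the default port 9200 on the same host; adjust host and port if your install differs):

```shell
# Query the root endpoint; a healthy node answers with cluster info as JSON.
curl -s http://localhost:9200 || echo "Elasticsearch is not reachable yet"
```

If the node is up, the response includes the cluster name and the version (6.4.2 for this install).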

How to start Kibana:

To start Kibana, go to

  • cd /usr/elk/kibana/kibana-6.4.2-linux-x86_64/
  • ./bin/kibana

This runs Kibana as a foreground service on the console.

How to start Logstash:

To start Logstash, go to

  • cd logstash/logstash-6.4.2
  • ./bin/logstash -f stdinout.conf

This starts Logstash and generates output in the format configured in stdinout.conf, which is a sample configuration file.
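A minimal stdinout.conf might look like the following (hypothetical contents for illustration; the actual sample file may differ). It echoes whatever you type on the console back as structured events, which is useful for verifying the Logstash install:

```conf
# stdinout.conf (illustrative sample): read lines from the console
# and print each event back in a readable debug format.
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}
```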

FileGPS Client setup sample files for the database and logs

FileGPS Client database config file:

input {
  jdbc {
    jdbc_driver_library => "path for jdbc driver library"
    # ORACLE driver class
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_default_timezone => "America/New_York"
    # ORACLE JDBC connection string to our database: jdbc:oracle:thin:@hostname:PORT/SERVICE
    jdbc_connection_string => "jdbc:oracle:thin:@cdldftff2-scan.es.ad.adp.com:1521/mft11d_svc1"
    # The user and password for database authentication, and the statement to fetch data from tables.
    jdbc_user => "username"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT trim(FGA._KEY) as processId, FGA.FILE_NAME as localFileName, FGA.FILE_SIZE as filesize, FGA.MAILBOX_PATH as srcmailboxPath, FGA.MAILBOX_PATH as mailboxPath > :sql_last_value"
  }
}

filter {
  mutate {
    add_field => { "read_timestamp" => "%{@timestamp}" }
    remove_field => [ "@version", "removekey", "type" ]
    rename => { "msgtxt" => "msgTxt" "processid" => "processId" "localfilename" => "localFileName" "remotefilename" => "remoteFileName" "clientid" => "clientId" "clientname" => "clientName" "nodeid" => "nodeId" }
  }
}

output {
  kafka {
    topic_id => "gps"
    bootstrap_servers => "ip-name(Kafka):9092"
    codec => json
  }
  stdout { codec => rubydebug }
}

Run Logstash with the database config file (./bin/logstash -f <config file>) and it will start and connect to the database.

In this database conf file, the event processing pipeline has three stages: inputs → filters → outputs.

  • Input: inputs get data into Logstash.
  • Filter: filters are intermediary processing devices in the Logstash pipeline.
  • Output: outputs are the final phase of the Logstash pipeline.
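The three stages can be seen as a skeleton that the database config file above fills in (an illustrative outline, with example plugin names in comments):

```conf
input {
  # plugins that get data into Logstash, e.g. jdbc, file, stdin
}
filter {
  # intermediary processing of events, e.g. mutate, grok, date
}
output {
  # final phase that ships events on, e.g. kafka, elasticsearch, stdout
}
```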
Updated on July 26, 2019
