All posts by Mehdi El-Filahi

Get rid of ZooKeeper and use KRaft

In this post, I will explain how to use KRaft and why you should use it.

KRaft is a consensus protocol that allows Kafka to run on its own, without ZooKeeper.

This simplifies the architecture: there is no longer a need for two different services (Kafka and ZooKeeper) to share metadata. It also makes Kafka failover control almost instant, so starting and stopping Kafka is faster.

The feature was introduced in Kafka 2.8 and is enhanced with every new version, BUT for the moment (June 2022) this solution is not yet production ready.

Of course, this slightly changes the infrastructure and the connection method.

With ZooKeeper -> without ZooKeeper:

  • Client and service configuration: zookeeper.connect=zookeeper:2181 -> bootstrap.servers=broker:9092
  • Schema Registry configuration: kafkastore.connection.url=zookeeper:2181 -> kafkastore.bootstrap.servers=broker:9092
  • Kafka admin tools: kafka-topics --zookeeper zookeeper:2181 -> kafka-topics --bootstrap-server broker:9092 … --command-config <properties to connect to brokers>
  • REST Proxy API: v1 -> v2 and v3
  • Get cluster ID: zookeeper-shell zookeeper:2181 get /cluster/id -> kafka-metadata-quorum, or view metadata.properties, or confluent cluster describe --url http://broker:8090 --output json

How to configure and start Kafka with KRaft

Generate a UUID:

./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g

Format the data directory to be compatible with KRaft (run this on each Kafka broker); do not forget to set the cluster ID.

./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs --cluster-id XXXXXXXXXX

In the file server.properties, do not forget to list all the controller hosts on the line "controller.quorum.voters".

process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
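
For a node acting as a broker, a minimal sketch of the corresponding properties could look like the following (hostnames, node IDs and paths are placeholders, not values from the original setup):

process.roles=broker
node.id=4
listeners=PLAINTEXT://broker1.example.com:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
log.dirs=/tmp/kraft-broker-logs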

Start Kafka, pointing to the KRaft config file:

./bin/kafka-server-start.sh ./config/kraft/server.properties

You are now ready to work with a broker that no longer depends on ZooKeeper to be healthy.

MongoDB and Python

In this post, I will provide the minimal requirements to work with MongoDB from Python and apply CRUD operations.

First of all, you will need Python 3 and the pymongo module.

After installing Python 3, please read this article to see how to install pymongo ==> https://www.mongodb.com/docs/drivers/pymongo/

You will also find the compatibility matrix between pymongo driver versions and MongoDB versions at the link above.

Now that you have installed pymongo, we are ready to start.

I suggest you install Visual Studio Code ==> https://code.visualstudio.com/download

My git repository is available here ==> https://github.com/djmhd/PythonMongoDB

Clone the repo and, if needed, update the property file "config.properties" with the correct MongoDB settings:

git clone https://github.com/djmhd/PythonMongoDB.git

The properties to check are:
  • Hostname and port
  • Credentials: user and password
  • Connection string query parameters
  • Database name

Run the file testCRUD.py

python3 testCRUD.py

This will create a collection named sampleCollection.

It will add two documents.

{
_id:6298b571f5643949130bda03
firstName: "John"
lastName: "Doe"
Address: "First addresss"
}
and
{
_id:6298b571f5643949130bda55
firstName: "Brandon"
lastName: "Don"
Address: "Secondaddresss"
}

The test script will wait for a key press on the command line.

You can then check that the two documents were inserted correctly.

Next, it will delete the second document.

The test script will again wait for a key press on the command line.

You can check that the second document was removed.

Then it will update the first document, changing the firstName field from "John" to "Marc":

{
_id:6298b571f5643949130bda03
firstName: "Marc"
lastName: "Doe"
Address: "First addresss"
}
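
For reference, here is a minimal pymongo sketch of the insert/delete/update flow described above; the connection string and database name are assumptions, not the exact contents of testCRUD.py:

# Minimal CRUD sketch with pymongo (assumed connection string and database name).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # replace with the values from config.properties
collection = client["sampleDatabase"]["sampleCollection"]

# Create: insert the two documents.
collection.insert_many([
    {"firstName": "John", "lastName": "Doe", "Address": "First addresss"},
    {"firstName": "Brandon", "lastName": "Don", "Address": "Secondaddresss"},
])

# Delete: remove the second document.
collection.delete_one({"firstName": "Brandon"})

# Update: change the firstName of the first document from "John" to "Marc".
collection.update_one({"firstName": "John"}, {"$set": {"firstName": "Marc"}})

print(list(collection.find()))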

RDBMS vs NoSQL

In this post, I will show the differences between RDBMS and NoSQL.

Topic by topic, RDBMS vs NoSQL:

  • Normalization: in an RDBMS, normalization must be respected to get consistent data; NoSQL does not need to respect normalization.
  • Integrity constraints: relational data relies on primary and foreign keys; NoSQL supports integrity but it is not mandatory.
  • Data structure: RDBMS data is composed of tables, rows and relations between data; NoSQL data consists of key-value pairs or JSON documents.
  • Schema and models: RDBMS data is less flexible and must live with fixed columns and data types; NoSQL data can be unstructured or can have a dynamic schema.
  • Scaling: for an RDBMS, vertical scaling is quite easy but horizontal scaling demands more effort; for NoSQL, both vertical and horizontal scaling are more flexible.

The next table will help you understand the transition from RDBMS to NoSQL.

RDBMS -> NoSQL
  • Database -> Database
  • Table -> Collection
  • Row -> Document
  • Index -> Index
  • Foreign key -> Reference
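
To illustrate the mapping, a row of a relational "customers" table (with columns id, firstName and lastName) would typically become a document like the following in a "customers" collection; the field names are only an example:

{
  "_id": 1,
  "firstName": "John",
  "lastName": "Doe"
}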

MongoDB Cloud Atlas

In this post, I will show you how to work with MongoDB Atlas, MongoDB's cloud offering. The good news is that MongoDB offers a free tier, limited to 512 MB of storage, which is a good starting point.

First go to https://cloud.mongodb.com/.

Log in with your google account or sign up.

You will be redirected to the home page.

First of all, we will create a new Organization.

An Organization helps to group your projects and databases, for instance by region, department or any other criterion you may have to separate your information. Click on the green button on the top left, "Create New Organization", and provide a name for your new organization.

Select your new organization and create a new Project. Click on the green button on the top left, "New Project", and provide a name for your new project.

Select your new project and create a new Database. Click on the green button on the top left “+ Create”.

During the creation process, keep the cluster tier configuration at 512 MB storage and change the cluster name. The cluster will be composed of 3 MongoDB servers (a replica set).

Wait until the cluster creation process is completed. You will end up with a cluster up and running.

If you click on the "Connect" button and choose "Connect your application", you will see code snippets for many different programming languages such as Java, Python, Go, Perl, C, C++, C#, …

In my demo I will choose Python; a future post will come with a git repo containing Python code examples to connect to MongoDB, create a collection and apply CRUD operations to JSON documents.
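
In the meantime, a minimal connection sketch with pymongo could look like this (the user, password and cluster host are placeholders taken from the "Connect" dialog, not real values):

# Minimal pymongo connection sketch for Atlas (placeholder credentials and host).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/?retryWrites=true&w=majority")
print(client.admin.command("ping"))  # returns {'ok': 1.0} if the connection works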

Now, if you select the cluster by clicking on it, you will arrive at the cluster overview page.

Select the “Metrics” tab and you will get an overview of the operation counters for the 3 servers.

Select the “Collections” tab to browse data documents.

Select the "Search" tab to run DSL queries.

The "Profiler", "Performance Advisor" and "Online Archive" tabs are only available with a paid plan, so let's skip those three options.

The last tab, "Cmd Line Tools", shows all the command-line tooling options:

  • Connect instructions: help you connect from the shell, from a programming language or with MongoDB Compass (a GUI to browse and manage your databases).
  • Atlas CLI: a tool to manage the database from the command line, installable from brew on macOS, from yum, apt, tar.gz, deb or rpm on Linux, or from msi or zip on Windows.
  • MongoDB Database Tools: a suite of useful command-line tools to install on macOS, Linux or Windows (example invocations are shown after this list).
  • mongorestore (binary executable): a tool to restore data coming from a dump (produced by mongodump).
  • mongodump (binary executable): a tool to create a binary dump of a database.
  • mongoimport (binary executable): a tool to import data from a CSV, JSON or TSV file.
  • mongoexport (binary executable): a tool to export data to a CSV or JSON file.
  • mongostat (binary executable): a tool to get status metrics of a MongoDB service.
  • mongotop (binary executable): a tool that reports the time spent on read and write operations of a MongoDB service.
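
For illustration, here are example invocations of some of these tools; the URI, database, collection and file names are placeholders:

mongodump --uri="mongodb+srv://<user>:<password>@<cluster-host>/sampleDatabase" --out=./dump
mongorestore --uri="mongodb+srv://<user>:<password>@<cluster-host>" ./dump
mongoimport --uri="mongodb+srv://<user>:<password>@<cluster-host>/sampleDatabase" --collection=sampleCollection --file=data.json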

In the DATA SERVICES section, you will see two different options:

Triggers: allow you to run code in response to CRUD operations that occur in a collection; you can also use a scheduled trigger to run code on a cron schedule.

Data API: allows you to perform operations on one or multiple collections.

In the SECURITY section, you will see three different options:

Database Access: this option is very important. This is where you manage your IAM. You can create users with user/password access or user/certificate access, or link your AWS IAM to manage your users. Each user can be assigned some basic roles.

Network Access: this option allows you to create an access list to filter the IP addresses that are allowed or denied access to the database.

Advanced: this option allows you to enable the LDAP feature, data encryption and auditing.

One last point I would like to mention: if you want a free way to access any MongoDB instance, I suggest you download the free tool "MongoDB Compass", which you can find here ==> https://www.mongodb.com/products/compass

This concludes this small topic about Cloud Atlas.

The MQ Explorer

In this post, I will help you understand what the Eclipse-based tool MQ Explorer is, how to use it and what options are available.

When you download the MQ for Developers software (you can find the link here: https://developer.ibm.com/articles/mq-downloads), it contains the MQ server and the MQ Explorer tool.

It allows you to visualize the configuration of queue managers, but also the data state and the metadata of each object (queues, topics, subscriptions, …).

It is useful both for developers, to debug and configure the objects needed for their developments, and for administrators, to configure objects or export configurations.

After installing the MQ for Developers edition, start the MQ Explorer software.

When the tool is running, you will see in the top left corner the Eclipse view "MQ Explorer - Navigator".

Let's begin by explaining each element:

  • Queue Managers: this is one of the most important elements. It allows you to create a new local queue manager, transfer a local queue manager to a remote server, connect to a remote queue manager, and also run tests on all the configured local and remote queue managers.
  • Queue Manager Clusters: this element allows you to create a queue manager cluster. It helps to configure all the sender and receiver channels needed to have an up-and-running cluster. The cluster can be configured with two different options:
    • Full repository: each queue manager member of the cluster syncs all objects and data.
    • Partial repository: each queue manager member of the cluster only syncs objects; data is not synchronized.
    • The choice between the two options can have an impact on network bandwidth. A partial repository fits the case where only the objects must be the same but the queue managers are separated by region (the Europe region will not hold the same data as the Asia region, but the applications putting data behave the same way in Europe and in Asia).
  • JMS Administered Objects: this element is useful to create a JNDI bindings file to share with developers. Each bindings file holds the connection details to connect to a queue manager, a queue manager HA pair or a queue manager cluster, as well as the destination queues/topics to send data to. This allows you to control and restrict which queues and topics are available.
  • Managed File Transfer: for those who have the MQ FT feature enabled on at least two different queue managers and have configured their agents, this option allows you to test, trace and configure file transfers. It is also possible to schedule file transfers from MQ Explorer.
  • Service Definition Repositories: this option is less used than before. It allows you to create documentation that becomes a WSDL to deploy, providing description and documentation information. It can be compared to IBM WSRR.

Create a local queue manager with MQ Explorer

Create a local queue manager with MQSC
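
As a reference, a minimal sketch of creating and populating a queue manager from the command line could look like this (QM1, DEV.QUEUE.1 and the listener name are placeholders):

crtmqm QM1        # create the queue manager
strmqm QM1        # start it
runmqsc QM1       # open the MQSC console

Then, inside runmqsc, define a local queue and a TCP listener:

DEFINE QLOCAL(DEV.QUEUE.1)
DEFINE LISTENER(L1) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(L1)
END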

Ingest Pipelines

In this post, I will explain what ingest pipelines are, what their use cases are and how to create them.

https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html

  • What is an ingest pipeline: it is a processing step applied to data entering an index, so documents can be transformed before being saved.
  • Possible actions: the transformation options available include removing a field, adding a field, enriching the value of a field and converting a field type.
  • The ingest pipeline option is located in the Stack Management section.
  • Use cases:
    • If you have Logstash between an agent or an application feeding data to Elasticsearch, you can use filters and/or grok to do the same actions as an ingest pipeline.
    • But if agents or applications feed data directly to Elasticsearch and you would like to manipulate the data before it is indexed, you can use an ingest pipeline to do the transformation.
      • It is also a good use case when you are not allowed to change the agent or the application that feeds the data.
  • How to use it:
    • In the image above, you see the home page of the ingest pipeline menu.
      • Click on the blue button "Create pipeline" and choose "New pipeline".
      • Give a relevant name to your new pipeline and a small description.
      • Click on the "Add processor" button. You can add many processors to the same pipeline.
      • In my example, I convert the type of a field from integer to string.
      • I will use the JSON field response.
      • Next, click on the "Add" button.
      • In front of the text "Test pipeline:", click on the link "Add documents".
      • Insert a JSON sample you would like to test and run the test with the "Run the pipeline" button.
      • Check the result to see whether the transformation worked.
      • When your pipeline is complete, you can save its configuration as an HTTP PUT request, which allows you to deploy it on other ELK environments or clusters (a sketch of such a request is shown after this list).
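
For reference, a sketch of such a PUT request for the convert example above could look like the following (the pipeline name is only an illustration):

PUT _ingest/pipeline/convert-response-to-string
{
  "description": "Convert the response field from integer to string",
  "processors": [
    {
      "convert": {
        "field": "response",
        "type": "string"
      }
    }
  ]
}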

Here is the JSON sample I used; notice the response field (an integer) below:

{
  "_index": "kibana_sample_data_logs",
  "_id": "l_zi9oAB8WFQcfknI5oN",
  "_version": 1,
  "_score": 1,
  "_source": {
    "agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
    "bytes": 4460,
    "clientip": "123.217.24.241",
    "extension": "",
    "geo": {
      "srcdest": "US:US",
      "src": "US",
      "dest": "US",
      "coordinates": {
        "lat": 42.71720944,
        "lon": -71.12343
      }
    },
    "host": "www.elastic.co",
    "index": "kibana_sample_data_logs",
    "ip": "123.217.24.241",
    "machine": {
      "ram": 11811160064,
      "os": "ios"
    },
    "memory": null,
    "message": "123.217.24.241 - - [2018-08-01T07:02:46.200Z] \"GET /enterprise HTTP/1.1\" 200 4460 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\"",
    "phpmemory": null,
    "referer": "http://nytimes.com/success/konstantin-kozeyev",
    "request": "/enterprise",
    "response": 200,
    "tags": [
      "success",
      "info"
    ],
    "timestamp": "2022-05-25T07:02:46.200Z",
    "url": "https://www.elastic.co/downloads/enterprise",
    "utc_time": "2022-05-25T07:02:46.200Z",
    "event": {
      "dataset": "sample_web_logs"
    }
  },
  "fields": {
    "referer": [
      "http://nytimes.com/success/konstantin-kozeyev"
    ],
    "request": [
      "/enterprise"
    ],
    "agent": [
      "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24"
    ],
    "extension": [
      ""
    ],
    "tags.keyword": [
      "success",
      "info"
    ],
    "geo.coordinates": [
      {
        "coordinates": [
          -71.12343,
          42.71720944
        ],
        "type": "Point"
      }
    ],
    "geo.dest": [
      "US"
    ],
    "response.keyword": [
      "200"
    ],
    "machine.os": [
      "ios"
    ],
    "utc_time": [
      "2022-05-25T07:02:46.200Z"
    ],
    "agent.keyword": [
      "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24"
    ],
    "clientip": [
      "123.217.24.241"
    ],
    "host": [
      "www.elastic.co"
    ],
    "machine.ram": [
      11811160064
    ],
    "extension.keyword": [
      ""
    ],
    "host.keyword": [
      "www.elastic.co"
    ],
    "machine.os.keyword": [
      "ios"
    ],
    "hour_of_day": [
      7
    ],
    "timestamp": [
      "2022-05-25T07:02:46.200Z"
    ],
    "geo.srcdest": [
      "US:US"
    ],
    "ip": [
      "123.217.24.241"
    ],
    "request.keyword": [
      "/enterprise"
    ],
    "index": [
      "kibana_sample_data_logs"
    ],
    "geo.src": [
      "US"
    ],
    "index.keyword": [
      "kibana_sample_data_logs"
    ],
    "message": [
      "123.217.24.241 - - [2018-08-01T07:02:46.200Z] \"GET /enterprise HTTP/1.1\" 200 4460 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\""
    ],
    "url": [
      "https://www.elastic.co/downloads/enterprise"
    ],
    "url.keyword": [
      "https://www.elastic.co/downloads/enterprise"
    ],
    "tags": [
      "success",
      "info"
    ],
    "@timestamp": [
      "2022-05-25T07:02:46.200Z"
    ],
    "bytes": [
      4460
    ],
    "response": [
      "200"
    ],
    "message.keyword": [
      "123.217.24.241 - - [2018-08-01T07:02:46.200Z] \"GET /enterprise HTTP/1.1\" 200 4460 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\""
    ],
    "event.dataset": [
      "sample_web_logs"
    ]
  }
}

And the JSON result: as you can see, the response field is now a string; see the field below:

{
  "docs": [
    {
      "doc": {
        "_index": "kibana_sample_data_logs",
        "_id": "l_zi9oAB8WFQcfknI5oN",
        "_version": "1",
        "_source": {
          "referer": "http://nytimes.com/success/konstantin-kozeyev",
          "request": "/enterprise",
          "agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
          "extension": "",
          "memory": null,
          "ip": "123.217.24.241",
          "index": "kibana_sample_data_logs",
          "message": "123.217.24.241 - - [2018-08-01T07:02:46.200Z] \"GET /enterprise HTTP/1.1\" 200 4460 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\"",
          "url": "https://www.elastic.co/downloads/enterprise",
          "tags": [
            "success",
            "info"
          ],
          "geo": {
            "coordinates": {
              "lon": -71.12343,
              "lat": 42.71720944
            },
            "srcdest": "US:US",
            "dest": "US",
            "src": "US"
          },
          "utc_time": "2022-05-25T07:02:46.200Z",
          "bytes": 4460,
          "machine": {
            "os": "ios",
            "ram": 11811160064
          },
          "response": "200",
          "clientip": "123.217.24.241",
          "host": "www.elastic.co",
          "event": {
            "dataset": "sample_web_logs"
          },
          "phpmemory": null,
          "timestamp": "2022-05-25T07:02:46.200Z"
        },
        "_ingest": {
          "timestamp": "2022-05-25T07:23:34.685600556Z"
        }
      }
    }
  ]
}

Alerting in Kibana

In this post, I will explain how to manage alerts based on data stored in indexes.

This page offers many different demos to help you understand alerting: https://www.elastic.co/webinars/watcher-alerting-for-elasticsearch?blade=video&hulk=youtube

There are two ways to create alerts:

  • Either from the Kibana interface:
  • Or from the Dev Tools:
    • To create an alert from the Dev Tools, we send an HTTP PUT request to the Elasticsearch Watcher API.
    • In this example, the alert is configured with a cron schedule, targets all the logstash indexes and searches for 404 responses in the JSON body field within a certain time range; if the condition matches, an email is sent.
PUT _watcher/watch/my-watch
{
  "trigger" : {
    "schedule" : { "cron" : "0 0/1 * * * ?" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [
          "logstash*"
        ],
        "body" : {
          "query" : {
            "bool" : {
              "must" : {
                "match": {
                   "response": 404
                }
              },
              "filter" : {
                "range": {
                  "@timestamp": {
                    "from": "{{ctx.trigger.scheduled_time}}||-5m",
                    "to": "{{ctx.trigger.triggered_time}}"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
  },
  "actions" : {
    "email_admin" : {
      "email" : {
        "to" : "admin@domain.host.com",
        "subject" : "404 recently encountered"
      }
    }
  }
}

Restrictions:

  • Alerting can really help to monitor messages passing through the logs, but there are some limitations.
  • To be able to use some connectors, the minimum subscription level is Gold.
  • With the free and Basic subscriptions, the only connectors available are Server log (write your alert message to a log file) and Index (write your alert message to an index).
  • So without a Gold subscription, I suggest not focusing too much on alerting, since the only available connector types will need another monitoring system to be notified.

Dashboards in Kibana

In this post, I will try to help you understand how to analyze existing data stored in Elasticsearch to create useful dashboards.

To feed some data into Elasticsearch, I will use the "Try sample data" link from the Kibana home page.

Next, choose one of the three sample data sets. For this example, I choose "Sample web logs" by clicking on the "Add data" button.

After the data insertion, click on the top left menu and select the Discover option to view the log data.

If you select one row, you will see it as a table, but you also have the choice to see it as raw JSON.

From there, you know which JSON elements are present and can be used to create a useful dashboard.

{
  "_index": "kibana_sample_data_logs",
  "_id": "_vzi9oAB8WFQcfknI5kN",
  "_version": 1,
  "_score": 1,
  "_source": {
    "agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
    "bytes": 1588,
    "clientip": "186.181.227.73",
    "extension": "deb",
    "geo": {
      "srcdest": "US:VN",
      "src": "US",
      "dest": "VN",
      "coordinates": {
        "lat": 44.63781639,
        "lon": -123.0594486
      }
    },
    "host": "artifacts.elastic.co",
    "index": "kibana_sample_data_logs",
    "ip": "186.181.227.73",
    "machine": {
      "ram": 20401094656,
      "os": "ios"
    },
    "memory": null,
    "message": "186.181.227.73 - - [2018-07-31T16:25:10.149Z] \"GET /apm-server/apm-server-6.3.2-amd64.deb HTTP/1.1\" 200 1588 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\"",
    "phpmemory": null,
    "referer": "http://www.elastic-elastic-elastic.com/success/dominic-a-antonelli",
    "request": "/apm-server/apm-server-6.3.2-amd64.deb",
    "response": 200,
    "tags": [
      "success",
      "info"
    ],
    "timestamp": "2022-05-24T16:25:10.149Z",
    "url": "https://artifacts.elastic.co/downloads/apm-server/apm-server-6.3.2-amd64.deb",
    "utc_time": "2022-05-24T16:25:10.149Z",
    "event": {
      "dataset": "sample_web_logs"
    }
  },
  "fields": {
    "referer": [
      "http://www.elastic-elastic-elastic.com/success/dominic-a-antonelli"
    ],
    "request": [
      "/apm-server/apm-server-6.3.2-amd64.deb"
    ],
    "agent": [
      "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24"
    ],
    "extension": [
      "deb"
    ],
    "tags.keyword": [
      "success",
      "info"
    ],
    "geo.coordinates": [
      {
        "coordinates": [
          -123.0594486,
          44.63781639
        ],
        "type": "Point"
      }
    ],
    "geo.dest": [
      "VN"
    ],
    "response.keyword": [
      "200"
    ],
    "machine.os": [
      "ios"
    ],
    "utc_time": [
      "2022-05-24T16:25:10.149Z"
    ],
    "agent.keyword": [
      "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24"
    ],
    "clientip": [
      "186.181.227.73"
    ],
    "host": [
      "artifacts.elastic.co"
    ],
    "machine.ram": [
      20401094656
    ],
    "extension.keyword": [
      "deb"
    ],
    "host.keyword": [
      "artifacts.elastic.co"
    ],
    "machine.os.keyword": [
      "ios"
    ],
    "hour_of_day": [
      16
    ],
    "timestamp": [
      "2022-05-24T16:25:10.149Z"
    ],
    "geo.srcdest": [
      "US:VN"
    ],
    "ip": [
      "186.181.227.73"
    ],
    "request.keyword": [
      "/apm-server/apm-server-6.3.2-amd64.deb"
    ],
    "index": [
      "kibana_sample_data_logs"
    ],
    "geo.src": [
      "US"
    ],
    "index.keyword": [
      "kibana_sample_data_logs"
    ],
    "message": [
      "186.181.227.73 - - [2018-07-31T16:25:10.149Z] \"GET /apm-server/apm-server-6.3.2-amd64.deb HTTP/1.1\" 200 1588 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\""
    ],
    "url": [
      "https://artifacts.elastic.co/downloads/apm-server/apm-server-6.3.2-amd64.deb"
    ],
    "url.keyword": [
      "https://artifacts.elastic.co/downloads/apm-server/apm-server-6.3.2-amd64.deb"
    ],
    "tags": [
      "success",
      "info"
    ],
    "@timestamp": [
      "2022-05-24T16:25:10.149Z"
    ],
    "bytes": [
      1588
    ],
    "response": [
      "200"
    ],
    "message.keyword": [
      "186.181.227.73 - - [2018-07-31T16:25:10.149Z] \"GET /apm-server/apm-server-6.3.2-amd64.deb HTTP/1.1\" 200 1588 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\""
    ],
    "event.dataset": [
      "sample_web_logs"
    ]
  }
}

If you check the sample dashboard called "[Logs] Total Requests and Bytes" and the data, there is a link between the world map and this part of the data:

"agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
"bytes": 1588,
"clientip": "186.181.227.73",
"extension": "deb",
"geo": { "srcdest": "US:VN", "src": "US", "dest": "VN", "coordinates": { "lat": 44.63781639, "lon": -123.0594486 } }

The bytes, clientip, agent and other fields group the requests used to build the map.

The geo.coordinates field places those bytes, clientip, agent and other values on the map, based on the latitude and longitude fields.

Visualization types:

To create a visualization, Kibana offers a wide list of options:

Vertical and horizontal bar charts, metrics, lines and areas, donuts and pies count and group data based on the fields you select.

A region map displays counts based on latitude and longitude and groups data based on the fields you select.