Sending Source Analytics to Elasticsearch
The following guide walks through how to collect Source Analytics logs from your Polarity Server using Elasticsearch.
Prior to setting up collection of your Polarity Source Analytics (PSA) logs, please ensure that source analytics collection is enabled on your Polarity Server.
Once source analytics are being collected on your Polarity Server, you can configure Elasticsearch to receive those logs.
Log in to your Elasticsearch Kibana instance.
Navigate to the "Management" -> "Fleet" page
Click on "Agent Policies"
Click on "Create agent policy"
Name the policy, for example "polarity-source-analytics".
Decide if you would also like to collect system logs and metrics (note that this is not required for Source Analytics collection)
The default "Advanced options" will work, but you may want to make changes depending on your organization.
For example, you might want to add an optional description or modify the "Default namespace".
Click on "Create agent policy"
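If you prefer to script this step, the same agent policy can be created through Kibana's Fleet API (a POST to /api/fleet/agent_policies with the kbn-xsrf header). A sketch of the request body, using the example name above (the description is illustrative):

```json
{
  "name": "polarity-source-analytics",
  "namespace": "default",
  "description": "Collects Polarity Source Analytics logs",
  "monitoring_enabled": []
}
```

Setting "monitoring_enabled" to an empty array corresponds to leaving system logs and metrics collection off, as noted above.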
Your new policy will be created but still needs to be configured.
Your new policy will show up in the Fleet list under "Agent Policies". Click on it to view the details:
Click on "Add integration"
Search for the "Docker" integration and click on it:
Click on the "Add Docker" button to add the Docker integration to your agent policy.
Under the Docker integration configuration, set the integration name:
Set the description.
Uncheck "Collect Docker Metrics"
Check "Collect Docker container logs" and then expand the "Change defaults" section.
Set the "Condition" variable to only collect logs from the polarity_platform container.
Click on "Advanced Options"
Leave the "Docker container log path" set to the default value, which should be:
Set the "Container parser's stream configuration" to "all"
Leave the "Additional parsers configuration" text area blank.
Under the "Processors" text area add the following configuration:
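The original configuration snippet is not reproduced here. As an illustrative sketch only (the processor choice and field names are assumptions, not Polarity's published configuration), a Beats-style processors entry that decodes the JSON body of each log line might look like:

```yaml
# Illustrative sketch: decode each log line's JSON "message" body into
# top-level fields so source-analytics attributes become searchable.
- decode_json_fields:
    fields: ["message"]
    target: ""
    overwrite_keys: true
    add_error_key: true
```

Consult your Polarity documentation for the exact processors configuration to use.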
Once you've finished configuring the integration, click on "Save and continue".
When prompted click on "Add Elastic Agent to your hosts"
Leave the default settings. Copy the "Linux Tar" command and run it on your Polarity server to install the fleet agent.
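For reference, the copied "Linux Tar" command follows the general shape below. Treat this as a sketch only: the exact version, download URL, Fleet Server URL, and enrollment token come from your Fleet UI.

```shell
# Placeholders: substitute the values shown in your Fleet UI.
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-<version>-linux-x86_64.tar.gz
tar xzvf elastic-agent-<version>-linux-x86_64.tar.gz
cd elastic-agent-<version>-linux-x86_64
sudo ./elastic-agent install --url=https://<fleet-server-host>:8220 --enrollment-token=<token>
```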
If you used the default namespace of "default", then your Polarity Source Analytics logs will be collected using the data stream logs-docker.container_logs-default. If you modified the namespace when configuring the Docker integration, your data stream will be named in the format logs-docker.container_logs-{namespace}.
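The naming convention above follows Elastic's {type}-{dataset}-{namespace} data stream scheme. A tiny sketch that computes the expected data stream name for a given namespace:

```python
def datastream_name(namespace: str = "default") -> str:
    """Return the expected data stream name for the Docker container logs
    dataset, following Elastic's {type}-{dataset}-{namespace} scheme."""
    return f"logs-docker.container_logs-{namespace}"

print(datastream_name())        # logs-docker.container_logs-default
print(datastream_name("prod"))  # logs-docker.container_logs-prod
```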
To find this data stream navigate to "Stack Management" -> "Index Management" -> "Data Streams":
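Alternatively, you can list matching data streams from the Kibana Dev Tools console using the data stream API (assuming the dataset name shown above):

```
GET _data_stream/logs-docker.container_logs-*
```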
If you do not see the data stream and your Agent is reporting as "Healthy", ensure you have PSA enabled on the server and that a search has been run since you enabled it.
To make your data stream searchable you have to create a "Data View". Navigate to "Kibana" -> "Data Views" and click on "Create data view".
Give the data view a name:
and then set the "Index Pattern":
You can leave the Timestamp field with the default setting of "@timestamp".
Click on "Save data view to Kibana"
You can view the raw source analytics by navigating to "Analytics" -> "Discover"
In the top left, filter to only show data from your newly created "Polarity Source Analytics" data view.
You should now see your Source Analytics data available in Kibana. To view the Source Analytics-specific fields, click on a log entry and then filter the fields by the term "Polarity".
You can optionally add a custom ingest pipeline to remove fields added by Elasticsearch. Elasticsearch adds these fields as part of the Docker integration and many of them do not provide any value and unnecessarily increase the size of your index.
To add a custom ingest pipeline, navigate to the "Agent Policies" page and find the agent policy you created earlier (e.g., "polarity-source-analytics"). Click on the name to edit the policy.
Click on the "Advanced Options" dropdown and then click on "Add custom pipeline".
Click on "Add a processor" and select "Remove" for the type of the processor.
Under the "Fields" option we recommend adding the following fields for removal:
Check the "Ignore missing" and "Ignore failures for this processor" options.
Once you're done, click on "Add processor".
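Because the exact field list depends on which Docker metadata your organization wants to keep, the equivalent Remove processor expressed as ingest pipeline JSON is sketched below; the field names are illustrative examples, not Polarity's recommended list:

```json
{
  "remove": {
    "field": ["agent.ephemeral_id", "container.labels"],
    "ignore_missing": true,
    "ignore_failure": true
  }
}
```

The "ignore_missing" and "ignore_failure" options here correspond to the two checkboxes described above.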