Deploying Polarity Server Virtual Machine on Azure
Set Up the Virtual Machine
The following guide walks through setting up a Polarity Server within the Azure Virtual Machine environment using Polarity-provided RPMs. Within Azure, navigate to Virtual Machines and add a new one.
Basics
| Option | Value |
| --- | --- |
| Virtual Machine Name | polarity-server |
| Region | Pick the appropriate region for your users |
| Availability options | No infrastructure redundancy required |
| Image | Red Hat Enterprise Linux 8.6+ |
| Size (pilot deployment) | Standard_D4s_v4 - 4 vCPUs, 16 GiB memory |
| Size (10-50 users) | Standard_D8s_v4 - 8 vCPUs, 32 GiB memory |
| Size (50+ users) | Standard_D16s_v4 - 16 vCPUs, 64 GiB memory |
| Authentication type | SSH public key |
| Username | azureuser |
| Inbound port rules | Allow selected ports |
| Select inbound ports | SSH (22), HTTP (80), HTTPS (443) |

*HTTP inbound access (port 80) is required for the Let's Encrypt SSL certificate. The server redirects port 80 to port 443, so port 80 does not need to be open if you are not using Let's Encrypt.
Disk
| Option | Value |
| --- | --- |
| OS disk type | Premium SSD |
| Encryption type | (Default) encryption at-rest with a platform-managed key |
| Data disks | Add two additional disks (Create and attach a new disk) |
| Data disk 1 | Name: polarity-server_DataDiskVar_0, Size: 128 GiB, Premium SSD |
| Data disk 2 | Name: polarity-server_DataDiskApp_0, Size: 64 GiB, Premium SSD |
*Note that while the Polarity server can be run off a single partition (we’d recommend a minimum of 128 GiB for the partition), better performance can be achieved if the PostgreSQL database runs on its own partition (polarity-server_DataDiskVar_0).
Networking
| Option | Value |
| --- | --- |
| Virtual Network | Use default |
| Subnet | Use default |
| Public IP | Use default |
| NIC network security group | Basic |
| Select inbound ports | SSH (22), HTTP (80), HTTPS (443) |
| Accelerated Networking | On |
| Load balancing | No |

*HTTP inbound access (port 80) is required for the Let's Encrypt SSL certificate. The server redirects port 80 to port 443, so port 80 does not need to be open if you are not using Let's Encrypt.
Management, Advanced
Use default settings for both the Management and Advanced configuration tabs unless otherwise required by your organization.
Tags
Apply system tags as required by your organization.
SSH Access
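Once the VM has been created, connect to it over SSH using the azureuser account and the SSH key pair configured above. A minimal example (the key path and hostname are placeholders; use your own key and the public IP or DNS name Azure assigned to the VM):

```bash
ssh -i ~/.ssh/id_rsa azureuser@<your-polarity-server>
```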
Configuring Disks
Most commands will need to be run as root, either via sudo or by changing to the root user.
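For example, you can switch to a root shell for the remainder of the setup:

```bash
sudo -i
```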
The following instructions explain how to set up two partitions on your Polarity Server. If you have other tools for partitioning your disks you can skip the instructions below. You will want /var on one partition for the database (128+ GiB recommended) and /app on another partition (64+ GiB recommended).
Once the VM is up and running, SSH into the server to begin the configuration process. To start, we will configure the two additional disks we added when creating the server.
We added two additional disks when setting up the VM. The first 128 GiB disk will be used to mount /var onto; the /var directory contains the PostgreSQL database and other system logs. The second 64 GiB disk will be used to mount /app, which contains the Polarity server code, Polarity server logs, integrations, and integration logs.
To set up the disks we will need to do the following:
1. Get the names of the disks
2. Create partitions using fdisk
3. Format the disks to XFS using mkfs.xfs
4. Move /var to the new 128 GiB disk and mount the disk
5. Mount the new 64 GiB disk to /app
Get Disk Names
To check the names of the disks, use the lsblk command:
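A sketch of the command and the kind of output to expect (the output shown is illustrative; your device names, sizes, and ordering may differ):

```bash
lsblk
# NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# sda      8:0    0   64G  0 disk
# ├─sda1   8:1    0  500M  0 part /boot
# └─sda2   8:2    0 63.5G  0 part /
# sdb      8:16   0   32G  0 disk
# sdc      8:32   0  128G  0 disk
# sdd      8:48   0   64G  0 disk
```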
In the output above you can see that our 128 GiB disk meant for /var is named sdc and our 64 GiB disk meant for /app is named sdd. The disks should also be represented as files in the /dev directory.
The disk names may be flipped depending on the order you added the disks to the Virtual Machine. If the disk names are different, ensure /var is mapped to the 128 GiB disk and /app is mapped to the 64 GiB disk.
Partition Disks
Now that we have the disk names we will create partitions on both disks. We’ll start with sdc:
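For example, using fdisk as root:

```bash
sudo fdisk /dev/sdc
```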
Type n to create a new partition and then p for a primary partition.
Use the default partition settings, which will create a single primary partition across the entire disk.
Finally, you will need to write the partition to the disk by typing w.
Repeat the same process for /dev/sdd.
When running lsblk you will now see the partitions sdc1 and sdd1.
Format Disks
Now that the disks are partitioned we can format them with xfs using the mkfs.xfs utility:
Mount the Partitions
The final step is to mount the new partitions we created. We’ll start with sdc1, the 128 GiB disk, as we first need to transfer the current /var contents.
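A sketch using a temporary mount point at /mnt/newvar:

```bash
sudo mkdir /mnt/newvar
sudo mount /dev/sdc1 /mnt/newvar
```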
Next we will copy the current contents of /var to the new location:
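A typical invocation (the flag set shown is one reasonable choice; see the note on the X flag below):

```bash
sudo rsync -avX /var/ /mnt/newvar/
```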
The flags on rsync are very important. In particular, the X flag ensures SELinux contexts are copied.
Now that the contents of /var have been copied we can unmount /mnt/newvar:
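For example:

```bash
sudo umount /mnt/newvar
```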
Next, we’ll mount our /app directory onto sdd1. Since this directory does not currently exist we do not need to copy any contents into it:
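For example:

```bash
sudo mkdir /app
sudo mount /dev/sdd1 /app
```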
Next we’ll want to get the UUID values for our two disks so we can permanently add the mount points to our /etc/fstab file. We can use the blkid command to get the UUIDs of the disks:
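A sketch of the command and example output (the UUIDs shown are placeholders):

```bash
sudo blkid /dev/sdc1 /dev/sdd1
# /dev/sdc1: UUID="aaaaaaaa-1111-2222-3333-444444444444" TYPE="xfs"
# /dev/sdd1: UUID="bbbbbbbb-5555-6666-7777-888888888888" TYPE="xfs"
```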
Copy down the UUID values for /dev/sdc1 and /dev/sdd1. Next, open the /etc/fstab file for editing.
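For example, with vi (any editor works):

```bash
sudo vi /etc/fstab
```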
Then add the following two lines to the bottom of the file and save the changes. Make sure the UUID for /app maps to the 64 GiB drive and the UUID for /var maps to the 128 GiB drive.
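A sketch of the two entries (the UUIDs are placeholders and the mount options shown are one reasonable choice):

```bash
UUID=aaaaaaaa-1111-2222-3333-444444444444  /var  xfs  defaults,nofail  0  2
UUID=bbbbbbbb-5555-6666-7777-888888888888  /app  xfs  defaults,nofail  0  2
```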
Your UUID values will be different from those above.
Restart the virtual machine to finish (can be done from the Azure web interface).
When the virtual machine restarts, SSH in and check to make sure all your partitions were mounted correctly.
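For example:

```bash
lsblk
```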
You should see the sdc1 partition (128 GiB) with a mount point of /var and the sdd1 partition (64 GiB) with a mount point of /app.
Install Polarity Server
Now that our disks are configured we’re ready to install the Polarity Server.
Please contact your Polarity customer success representative (customersuccess@polarity.io) for the latest installation instructions for your preferred operating system. Supported operating systems are RHEL 7, Amazon Linux 2, RHEL 8, and Oracle Linux 8.
Once the Polarity Software is installed you can continue with the below steps.
Install the Polarity License
At this point you will need to install the Polarity license provided to you by your Polarity support or customer success team. The license file will be named polarity.lic. Upload the polarity.lic license file to the Polarity Server.
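For example, from the machine where the license file was downloaded (the hostname is a placeholder):

```bash
scp polarity.lic azureuser@<your-polarity-server>:~/
```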
Copy the polarity.lic file to the license directory:
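A sketch, assuming an illustrative license directory; use the path given in your installation instructions:

```bash
sudo cp polarity.lic /app/polarity-server/license/   # path is illustrative
```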
Ensure the license file is owned by the polarityd user:
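For example (adjust the path to wherever you copied the license):

```bash
sudo chown polarityd:polarityd /app/polarity-server/license/polarity.lic   # path is illustrative
```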
Finally, restart your server so the license is loaded:
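A sketch, assuming the Polarity service unit is named polarityd; use the service name from your installation instructions:

```bash
sudo systemctl restart polarityd
```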
You can now navigate to the Polarity web interface by opening a browser (Chrome is recommended) and navigating to https://<your-polarity-server>. You can log in with the default user admin and the password PolarityServer2015!
Update FQDN in Polarity Config
By default, the Polarity server assumes the FQDN for your server matches the server hostname. If this is not the case, you will need to modify the Polarity Server config file to set the appropriate value. To modify the config, begin by opening the config file in an editor.
Find the setting rest.fullyQualifiedDomainName and set it to your FQDN:
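A sketch of the edit, assuming vi as the editor; the config file path below is illustrative, so use the location from your installation:

```bash
sudo vi /app/polarity-server/config/config.js   # path is illustrative
# Set rest.fullyQualifiedDomainName to your FQDN, e.g. polarity.example.com
```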
Save the change and restart the server:
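As before, assuming the Polarity service unit is named polarityd:

```bash
sudo systemctl restart polarityd
```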
Tune PostgreSQL
The following parameters were changed from their default values on the PostgreSQL database. To edit the settings, open the postgresql.conf file located at /var/lib/pgsql/13/data/postgresql.conf.
Adjust the settings per the tables below and then restart PostgreSQL and the Polarity Server (a restart example follows the tables).
Polarity Server with 16 GB of RAM
| Setting | Value |
| --- | --- |
| max_connections | 100 |
| shared_buffers | 1843MB |
| work_mem | 18MB |
| maintenance_work_mem | 896MB |
| random_page_cost | 1.5 (if using SSDs; otherwise leave unchanged) |
| max_wal_size | 4GB |
| effective_cache_size | 5500MB |
| cpu_tuple_cost | 0.0030 |
| cpu_index_tuple_cost | 0.0010 |
| cpu_operator_cost | 0.0005 |
Polarity Server with 32 GB of RAM
| Setting | Value |
| --- | --- |
| max_connections | 100 |
| shared_buffers | 4700MB |
| work_mem | 47MB |
| maintenance_work_mem | 2355MB |
| random_page_cost | 1.5 (if using SSDs; otherwise leave unchanged) |
| max_wal_size | 4GB |
| effective_cache_size | 14GB |
| cpu_tuple_cost | 0.0030 |
| cpu_index_tuple_cost | 0.0010 |
| cpu_operator_cost | 0.0005 |
Polarity Server with 64 GB of RAM
| Setting | Value |
| --- | --- |
| max_connections | 100 |
| shared_buffers | 10GB |
| work_mem | 100MB |
| maintenance_work_mem | 5GB |
| random_page_cost | 1.5 (if using SSDs; otherwise leave unchanged) |
| max_wal_size | 4GB |
| effective_cache_size | 30GB |
| cpu_tuple_cost | 0.0030 |
| cpu_index_tuple_cost | 0.0010 |
| cpu_operator_cost | 0.0005 |
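Once the settings are updated, restart PostgreSQL and then the Polarity server. A sketch, assuming the PGDG service name postgresql-13 (matching the /var/lib/pgsql/13 path above) and a polarityd service unit:

```bash
sudo systemctl restart postgresql-13
sudo systemctl restart polarityd
```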