Log Insight 2.5: Integrated Load Balancer

Log Insight 2.0 featured a scale-out option that relied on an external load balancer to support ingestion high availability. In Log Insight 2.5, an integrated load balancer is available. I would like to discuss the features of the integrated load balancer and offer a comparison between using an external load balancer and the integrated load balancer.


External Load Balancer

I covered everything about External Load Balancers (ELBs) including considerations and configurations in this post.

Integrated Load Balancer

The Integrated Load Balancer (ILB) is configured from the Cluster page of the Administration section.

NOTES:

  • The option to configure the ILB only appears when at least one worker has been joined.
  • All Log Insight nodes and the VIP address specified must be in the same layer 2 network. See the question mark for more information.

To enable the ILB, simply select the checkbox and enter the IP address that you wish to use as the Virtual IP (VIP):


IMPORTANT: After enabling the ILB you must unconfigure and reconfigure every vCenter Server integrated with Log Insight from the vSphere integration page or else the ILB will not be used for configured vSphere devices. Failure to do so may result in no ingestion high availability for configured vSphere devices.

Within 30 seconds, the VIP gets set and the ILB is configured. Once you receive the confirmation at the top of the page, you can test that the ILB is working by going to the specified VIP address in your web browser. You will then be directed to your Log Insight cluster. Once you have confirmed that the VIP is working properly, it is highly recommended that you configure an FQDN for your VIP. This FQDN should be used both to access the Log Insight UI and as the target for client devices (e.g. syslog or Ingestion API) forwarding events.
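As a quick sanity check beyond opening the VIP in a browser, you can confirm that clients can actually forward events through the VIP. The sketch below builds a minimal RFC 3164 syslog line and sends it to the VIP over TCP port 514; the `LI_VIP` FQDN is a hypothetical placeholder for whatever address you configured on the Cluster page, not a value from this post.

```python
import socket
from datetime import datetime

# Hypothetical FQDN for the VIP -- substitute your own.
LI_VIP = "loginsight-vip.example.com"

def build_syslog_message(hostname, appname, text, pri=14):
    """Build a minimal RFC 3164 syslog line (PRI 14 = facility user, severity info)."""
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    return "<{}>{} {} {}: {}".format(pri, timestamp, hostname, appname, text)

def send_test_event(vip, message, port=514):
    """Forward one event to the VIP over TCP syslog (Log Insight also listens on UDP 514)."""
    with socket.create_connection((vip, port), timeout=5) as sock:
        sock.sendall((message + "\n").encode("utf-8"))

# Example (requires network reachability to the VIP):
# send_test_event(LI_VIP, build_syslog_message("client01", "ilb-test", "hello via the ILB"))
```

If the event shows up in Interactive Analytics, the VIP is balancing ingestion traffic as expected.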

Comparison

Now you may be wondering what the differences are between the ELB and the ILB. Let me explain:

ELB

  • Pros
    • Multiple VIPs can be configured
    • Log Insight nodes do not have to be in the same L2 network
    • VIPs do not have to be in the same L2 as the Log Insight nodes
  • Neutral
    • Capable of being used for more than just Log Insight traffic – if you are just running Log Insight you probably do not care
    • Capable of doing stateful failover though can be challenging to configure and requires even more resources – only beneficial for TCP traffic where failover is longer than TCP retries
    • Offers SSL offloading – typically beneficial for DMZ configuration and Log Insight is typically not exposed through the DMZ
    • Active-Active VIPs are possible
  • Cons
    • Requires a third-party solution – most cost money and require a different support model; all require additional resources
    • Complex configuration required – easy for network teams to configure, but difficult to relay requirements for configuration
    • HA requires more configuration and more resources
    • Many do not offer UDP support
    • Most do not support message-based (L7) load balancing and those that do are difficult to configure
      EXCEPTION: Kemp has native support for Log Insight message-based load balancing
    • Often are run by a different team than the one running Log Insight – can introduce delays and overhead for cluster management, such as upgrades, maintenance, etc.

ILB

  • Pros
    • Runs on existing Log Insight nodes – no additional cost, no different support model, no additional resource requirements
    • Only requires an IP address for load balancer configuration
    • VIP is automatically failed over to another node if/when necessary – no additional configuration or resources
    • Offers support for all Log Insight protocols – no configuration necessary
    • Supports L7 load balancing – even if the integrated load balancer is not configured
    • Supports L4 load balancing for both ingestion and query traffic
    • Does not require another team to configure and maintain
  • Neutral
    • No stateful failover – given the short failover time this is of no concern
    • No SSL offloading – SSL is handled on a per Log Insight node basis
    • Active-Active VIP is not supported – the LB VIP is capable of supporting the maximum ingestion of a Log Insight cluster
    • Log Insight nodes have to be in the same L2 network – Log Insight has never supported geo-clusters so L3 clusters do not really make sense
    • The VIP has to be in the same L2 network as the Log Insight nodes – Log Insight has never supported geo-clusters so an L3 VIP does not really make sense
  • Cons
    • Only a single VIP can be configured (UPDATE: this is addressed in LI 3.3!)

Summary

The ILB feature in Log Insight 2.5:

  • Supports the maximum ingestion of a Log Insight cluster – complementing Log Insight technology
  • Supports L4 and L7 load balancing as well as automatic failover of the VIP – providing feature parity with external load balancers
  • Abstracts the complexity of load balancer configuration from the user requiring only that a VIP be set – reducing configuration complexity
  • Removes the dependency on third-party load balancers – saving money, removing dependencies on other teams and saving time

© 2015 – 2016, Steve Flanders. All rights reserved.

9 thoughts on “Log Insight 2.5: Integrated Load Balancer”

  1. Tony DiSalvo says:

    Question: If you set this up with an external load balancer, when you query Log Insight are both nodes independently parsing their logs in order to speed up the results of your query, or is only one? My concern is the amount of time it takes. I currently have a 2TB Log Insight instance and it can take up to 15 minutes to do an earliest-to-current query.

    • Thanks for the comment! The integrated load balancer and external load balancer work the same way in terms of ingestion and query — they balance ingestion traffic across nodes and send all query traffic to the master. The master then sends query requests to all workers, so all nodes participate in the query; the master is just responsible for aggregating the information and presenting it in the UI.

      If you are experiencing performance issues, this could be for a variety of reasons including: improperly sized cluster, querying over too large a time period, inefficient query (e.g. regular expression), etc.

  2. yvan says:

    Hi Steve

    For me it seems like even with the ILB configured, and then removed and vCenter re-added, the ESXi syslog targets get configured with the master Log Insight name and not the VIP or VIP FQDN?

    Thx

    • Hey Yvan — hmm this should not be the case. Is it possible the unconfigure did not work properly? When you remove the vCenter you should be presented with an option to remove the configuration from the ESXi hosts which you should select. Then next time you configure to collect from ESXi, the syslog target should be the VIP.

  3. Yvan says:

    Hi Steve
    Thanks for answering. Yes, I was able to unconfigure and selected the option to completely remove the configuration. Then I re-added vCenter and the ESXi hosts are configured with the FQDN of the master Log Insight node, not the ILB IP 🙁
    I opened an SR with support.
    KR
