Solving problems and high availability

Did you know that most of the time spent on problem resolution goes into identifying the root cause?

In yesterday’s webinar, “Solving problems and high availability,” Clark Everetts shared with the audience that it’s common to see over 80 percent of resolution time spent gathering information, recreating the problem, and analyzing the findings before a fix is ever put in place.

Consider that a recent study from Akamai found that a one-second delay in page response can result in a 7 percent reduction in sales – for a site making $100K per day, that’s a loss of roughly $2.5M per year! At that rate, you can’t afford to spend significant time fixing problems.

So how can you speed up problem resolution, avoid getting a bad rep for having unreliable applications, and save costs?

Clark covered four key areas to help you resolve problems and increase reliability in your enterprise-grade apps:

  1. Monitor for faults and know quickly when you have a problem
  2. Diagnose problems using the right tools
  3. Cluster to increase scalability
  4. Synchronize session data

The conclusion is that your reputation, and that of your organization, depends largely upon the reputation of your applications. Whether the applications are public-facing or internal, they’re still mission-critical and your ability to resolve problems quickly, and handle the processing needs of the application’s users, is paramount.

Audience poll results

Here’s where our audience stood on the topics of discovering problems, diagnosis, and clustering.
How do you discover problems in your applications?
An overwhelming share of the audience – 67% – still relies on the good old-fashioned phone call, email, or visit from co-workers to discover problems. While the result wasn’t really surprising, there are a lot of great tools available to help shorten that discovery window, so we encourage folks to check them out!

On the question of which tools they use to diagnose problems, we had a good spread of responses, but the majority of our audience still uses manual techniques like printf() and logging. Clark says this reflects what he sees at customer sites, so it wasn’t too surprising, but given the wealth of modern debuggers, monitoring tools, and analytics, we hope the trend moves toward more advanced tooling.


On clustering, the majority of the audience doesn’t currently use it; perhaps this session will convince them to consider how vital clustering is for improving the reliability and scalability of their applications.

Clark answers your questions

Could you use the same server for several nodes, or is efficacy then lost, so that you need more hardware for clustering?

If you have sufficiently powerful hardware, and you are using virtualization, then it makes sense to run a cluster in VMs on a smaller number of physical boxes, since you could scale out or in with VMs to handle current demand, while having physical resources available for other workloads.

It’s no different than what any cloud provider is doing, providing great flexibility at the relatively low cost of some performance due to the virtualization.

An advantage to using multiple VMs on the same physical box is capacity planning: you get to choose how much of the server to use for your applications, while keeping resources available for other tasks. Another benefit is that you create boundaries between the virtual servers, so a failure in one doesn’t cascade to the others.

Now, assessing the costs of virtual machines versus dedicated hardware can be both straightforward and complicated. Whether dedicated servers or virtual machines are better from a purely cost standpoint depends on – at a minimum – the number of servers, the load and bandwidth they consume, the costs and demands of managing the servers, and how quickly all those factors change over time.

Rising VM costs over time, driven by increased demand, could conceivably make dedicated servers (or at least some dedicated hardware, with VMs for spikes) a better approach, but there are so many other considerations that the short answer on cost is: there isn’t a one-size-fits-all answer.

How’s that for a response from a consultant?! Seriously, in the large majority of cases, virtualization is the way to go.

Is it possible to load balance a load balancer?

Excellent question! You want to prevent your load balancer from becoming a single point of failure. You can achieve this using DNS routing. As an example, one of our customers typically routes traffic from different geographic areas to different datacenters. In each datacenter they have a cluster with a load balancer.

Using multiple A records for your host name allows you to associate more than one IP address with the domain. Run nslookup on the host name and you’ll see several IPs. Run it again over time and you’ll see the IPs change, or be listed in a different order. It’s that change in ordering that results in balanced distribution of traffic across the different targets.

What are these targets? Load balancers. Your load balancers at these IPs can route traffic to any, or overlapping, clusters of servers.

This gives you redundancy for your load balancers. There are even more sophisticated network architectures that achieve load balancing without the use of load balancers, but that’s a good topic for another day. Again, great question!
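As a small illustration of the round-robin behavior described above, PHP’s dns_get_record() shows all the A records a client can choose from (the hostname below is a placeholder):

```php
<?php
// Observe DNS round-robin from the client side: fetch every A record for a
// host (example.com is a placeholder) and pick one target to talk to.

// Choosing among the returned IPs is what spreads traffic across the
// load balancers sitting behind those addresses.
function pickTarget(array $ips): string {
    return $ips[array_rand($ips)];
}

$records = dns_get_record('example.com', DNS_A) ?: [];
$ips = array_column($records, 'ip');   // one entry per A record
if ($ips) {
    echo 'Sending request to ' . pickTarget($ips) . PHP_EOL;
}
```

In practice the resolver and OS do this selection for you; the point is that each A record is one of the load-balancer targets described above.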

Could Zend Server monitoring be combined with different IDERA SQL base performance and quality analysis tools, like IDERA PowerShell tools?

When integrating Zend Server with other tools and services, one consideration is the direction in which the communication is to take place. Zend Server can send data to another tool, or it can be on the receiving end of commands and data.

Outbound communication from Zend Server

Using the Callback URL in a Zend Server Monitoring Event Rule means that when the event is triggered, Zend Server will post event data to that URL. Typically, at that URL is a PHP script you write, which can then build and send an appropriate API request to the relevant third-party web service (you could do anything you like with the data: log to file or db table, etc.).
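For illustration, a minimal callback script might look like this (the log path and the third-party URL are placeholders, and the exact payload fields depend on your installation):

```php
<?php
// Hypothetical Callback URL target for a Zend Server monitoring event rule:
// Zend Server posts event data here when the rule fires. Inspect the actual
// payload on your own installation before relying on specific fields.

// Build one log line from the posted event data.
function formatEvent(array $payload): string {
    return date('c') . ' ' . json_encode($payload);
}

$payload = $_POST;   // event data arrives as a form POST
@file_put_contents('/var/log/zs-events.log',
                   formatEvent($payload) . PHP_EOL, FILE_APPEND);

// Relay the event to a (hypothetical) third-party web service.
$ctx = stream_context_create(['http' => [
    'method'  => 'POST',
    'header'  => "Content-Type: application/json\r\n",
    'content' => json_encode($payload),
    'timeout' => 2,
]]);
@file_get_contents('https://third-party.example/api/events', false, $ctx);
```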

Some of IDERA’s products have APIs, which means there is the possibility of doing exactly that, but how to interact with the APIs would bear further study. See this page for some references to IDERA APIs.

Your question, however, specifically referred to IDERA PowerShell tools. You could use the Callback URL to post data to a PHP script that invokes a command in PowerShell. I make no claim to being a PowerShell expert, nor have I worked with IDERA software, but if PHP is available on the server running your IDERA PowerShell scripts, you can invoke PowerShell (I tested using PHP’s built-in webserver):
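A minimal sketch of such a script follows; the .ps1 path and the -EventJson parameter name are assumptions for illustration, not IDERA specifics:

```php
<?php
// Callback script that hands Zend Server event data to a PowerShell script.
// The script path and -EventJson parameter are hypothetical; PHP is only
// the glue here, and PowerShell does the actual work.

// Assemble the PowerShell command line; escapeshellarg() keeps the JSON
// payload from being interpreted as extra shell syntax.
function buildPsCommand(string $eventJson): string {
    return 'powershell -NoProfile -ExecutionPolicy Bypass'
         . ' -File C:\scripts\HandleZendEvent.ps1'
         . ' -EventJson ' . escapeshellarg($eventJson);
}

$payload = json_encode($_POST);   // event data posted by Zend Server
echo shell_exec(buildPsCommand($payload));
```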

One of my colleagues noted that there is a webserver written in PowerShell, available here, from which you can invoke PowerShell commands. Your monitoring event callback URLs could post to the PowerShell webserver.

I’ve never used it, however, and the web site states it supports PHP 5.3 and 5.4, so be exceedingly careful, from a security standpoint. Do not expose it directly to public internet.

Inbound communication to Zend Server

Remember I said Zend Server can be on the receiving end of commands and data? This is achievable using the WebAPI.

From our documentation:

“The Zend Server Web API allows external systems to connect to a programmatic, restful API that allows access to all of Zend Server’s management features. Using the Web API, a 3rd party system can automate cluster management, application deployment, and other development and integration tasks.

The Zend Server UI is both an example and a test case for the use of the Zend Server Web API. Almost every functionality in the UI is executed via the Web API.”

This means you could obtain monitoring event data from Zend Server any time you wish and integrate it in custom fashion with a web-based client or from PowerShell scripts. You can write PHP CLI scripts, invoked from PowerShell, that make HTTP requests to the Zend Server WebAPI, or you can use the ZendServerSDK, a command-line tool that makes interacting with the WebAPI very easy.

The ZendServerSDK is available on GitHub.
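For a direct call without the SDK, WebAPI requests must be signed with your API key. Here’s a hedged sketch – the host, key name, secret, API version, and the exact string being signed are all assumptions that you should verify against the WebAPI reference for your Zend Server release:

```php
<?php
// Sketch of a signed Zend Server WebAPI request. monitorGetEventsList is a
// real WebAPI method; the host, key name, secret, and API version below are
// placeholders, and the signed-string recipe should be checked against the
// WebAPI documentation for your release.
function zsSignature(string $host, string $path, string $userAgent,
                     string $date, string $apiSecret): string {
    // Assumed recipe: HMAC-SHA256 over "<host>:<uri>:<user-agent>:<date>".
    return hash_hmac('sha256', "$host:$path:$userAgent:$date", $apiSecret);
}

$host = 'zend-server.example.com:10081';
$path = '/ZendServer/Api/monitorGetEventsList';
$ua   = 'my-integration/1.0';
$date = gmdate('D, d M Y H:i:s') . ' GMT';
$sig  = zsSignature($host, $path, $ua, $date, 'your-api-secret');

$ctx = stream_context_create(['http' => [
    'header'  => "User-Agent: $ua\r\n"
               . "Date: $date\r\n"
               . "Accept: application/vnd.zend.serverapi+json;version=1.3\r\n"
               . "X-Zend-Signature: your-key-name; $sig\r\n",
    'timeout' => 2,
]]);
$json = @file_get_contents("http://$host$path", false, $ctx);
```

The ZendServerSDK wraps all of this signing for you, which is why it’s the easier route for scripting.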

Hopefully, I’ve given you enough info to get you started. Thanks for the question, and good luck!

    About Kara Howson

    Kara brings together her broad technical marketing experience and innovative project management skills to drive content marketing efforts at Rogue Wave Software.

    • Yuriy Prysyazhnyuk

      Hi, Clark! Thank you for your answer to the question about load balancers. Indeed, I was trying to prevent “the load balancer from becoming a single point of failure.”
      I thought about creating several A records for the domain, but then I ran into the problem of database replication: replicate too often and it creates too much load; too rarely and users (e.g. authorized users with profiles) won’t see changes in their accounts. I can only wonder how Google, Amazon, eBay, etc. copy data between different database servers. Do they use something like transactional replication?

      I understand that it’s a broad topic, but I’d be thankful if you can give the name of technique/approach or any source of information.

      Thank you!
      Best regards, Yuriy