Software applications have historically been built using monolithic architectures, meaning they consist of a single codebase that supports all of the application's modules and functionality. While a single codebase can offer advantages in development simplicity, traditional monolithic applications can't provide the scalability and agility needed in today's always-on, always-connected world.

In just the last 10–15 years, the explosion of technological advancements like cloud, mobile, and the Internet as a means of reliable transport has led to a paradigm shift in application architectures—away from monolithic to more modular, service-based architectures. With this approach, applications depend on many external third-party services, backend integrations, and cloud APIs. While this approach brings considerable advantages in terms of scale and the ability to add new functionality quickly, it also introduces a level of complexity that can make it very challenging to identify and resolve performance issues. A simple user interaction, such as adding an item to a cart on an e-commerce site, can involve countless interdependent internal and external services that need to work together, often over the Internet, to execute an application workflow.

At the heart of all this are web-based services that expose a specific functionality, typically via a REST-based API. Some examples might be Twilio for messaging, Stripe for payment processing, or Google Maps for geolocation. More and more, APIs form a critical part of today's modern applications. Understanding how APIs are performing, as well as their reachability over the Internet or cloud provider networks, is crucial to assuring overall application performance.

Figure 1: Modular Application Architectures have increased dependencies and complexity.

Using Browser Synthetics to Test Cloud Services

ThousandEyes Browser Synthetics uses Selenium WebDriver to emulate a complete user journey as users exercise key business transactions—measuring an entire multi-page workflow, including backend API services, and providing a comprehensive view of the transaction with performance timings for each page load and a detailed waterfall view of the sequential and cumulative exchanges. Using markers within the test to identify key tasks in the workflow, such as the time it takes to confirm a purchase on an e-commerce site, also allows you to monitor the performance of a backend API service indirectly. However, app workflows can trigger multiple backend API interactions, and not all of them require explicit front-end browser interactions, such as when a workflow involves a step that needs additional information, such as receipt of payment, user login, or the results of a previous search, to complete. Without that front-end step, a transaction test cannot fully exercise or validate the response. For example, to fully test the workflow for a user interacting with an e-commerce web app, you need some mechanism to respond with a receipt notification, conditional on a positive payment code response. A failure or performance issue in any one of these backend services will ultimately directly impact the customer.

Figure 2: User workflow that triggers backend API calls on multiple servers.
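The marker mechanism described above can be sketched as follows. The `markers` object here is a minimal stand-in for a transaction-scripting API, not the actual ThousandEyes interface; it simply brackets a key step with start/stop calls to capture its timing.

```javascript
// Sketch: using markers to time a key step within a transaction test.
// The markers object below is a hypothetical stand-in for a real
// transaction-scripting API.
const timings = {};
const markers = {
  start: (name) => (timings[name] = { t0: Date.now() }),
  stop: (name) => (timings[name].ms = Date.now() - timings[name].t0),
};

markers.start('confirm-purchase');
// ... drive the browser through the purchase-confirmation step here ...
markers.stop('confirm-purchase');

console.log(typeof timings['confirm-purchase'].ms); // "number"
```

Each named marker yields an elapsed time for just that step, which is how a multi-page workflow can be broken down into per-task timings.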

Introducing ThousandEyes Adaptive API Monitoring

While using browser synthetic transactions is a powerful way to test the performance and functionality of backend services, some workflows require more in-depth analysis. Application owners need to be able to test external APIs at a granular level directly, from within the context of their core application (instead of only through a front-end interaction), as well as understand the impact of the underlying network transport (typically an ISP or cloud provider network).

That’s why ThousandEyes is launching Adaptive API Monitoring. Adaptive API monitoring allows you to go beyond emulating user interactions via a customer-facing website to executing API calls directly on your API services. Its highly flexible synthetic testing framework emulates conditional backend application interactions with API endpoints.

Figure 3: Adaptive API monitoring configuration within the platform.

With this capability, application owners can now dynamically measure performance, differentiating timings between each iterative call, as well as validate the logic of complex workflows. This allows problems within a workflow to be confirmed quickly and surfaces potential optimization opportunities.

Figure 4: Performance timings for iterative calls across complete workflow

API monitoring tests can be run from vantage points that are external to the application environment (e.g., from ThousandEyes Cloud Agents), or from agents placed within the application hosting environment out to the API services. An advantage of this latter deployment approach is that the specific network paths and performance from the application to the API endpoints can also be monitored.

Let’s take a look at how adaptive API monitoring could be used.

In this example, an automotive e-commerce site displays listings for all its vehicles but wants to ensure that any search filter applied by the user executes as expected. The workflow begins when the user connects to the customer-facing website and enters their initial search criteria, for example, a 2002 Nissan Patrol. Having entered the search request, the web app makes a connection to the backend system to check the database and retrieve the initial results.

Example: ?year=2002&make=Nissan&model=Patrol

At this point, the user wants to refine the search and only wants to see used vehicles under a particular mileage threshold. This then generates a second API call to the backend, extending the initial query to return a further narrowed set of results.

Example: ?year=2002&make=Nissan&model=Patrol&mileage_lte=50000

Here, the API calls are iterative, and each is conditional on the result of the previous one. An adaptive API framework can use the output from the first endpoint as input to the second API call, creating a workflow that searches a list of records and then requests a further filtered set of details using a record from the initial response. In addition to checking the functionality and workflow, ThousandEyes Adaptive API Monitoring can validate the API response and raise an alert if the check fails. In this example, it would verify that none of the cars returned in the search has a mileage over 50,000 kilometers.
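The iterative queries and the validation step above can be sketched in a few lines. The field names mirror the example query strings; the result set is illustrative, since in a real test the refined query would be issued against the live API.

```javascript
// Sketch: composing the iterative search queries and validating results.
// The data below is illustrative, not from a real API.
const base = new URLSearchParams({ year: '2002', make: 'Nissan', model: 'Patrol' });

// The second, refined call extends the first query with a mileage filter.
const refined = new URLSearchParams(base);
refined.set('mileage_lte', '50000');
console.log(refined.toString());
// → year=2002&make=Nissan&model=Patrol&mileage_lte=50000

// Validation step: raise an alert if any returned record violates the filter.
const results = [{ id: 1, mileage: 38000 }, { id: 2, mileage: 47500 }];
const valid = results.every((car) => car.mileage <= 50000);
console.log(valid ? 'check passed' : 'ALERT: filter not honored');
```

The `every` check is the kind of content validation that an adaptive test would attach to the response before deciding whether to alert.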

Figure 5. Searching for a list of records, followed by requesting the details for a given record from the initial response

A synthetic transaction script emulating these adaptive API interactions can be extremely valuable in validating that only expected responses are received based on the business logic, as well as the performance of every interaction in a workflow. If either the performance or functionality of a workflow degrades, it's simple to isolate the issue to a specific interaction, provider, and API endpoint. Because the content of an API response can be validated, Adaptive API Monitoring can be used for a variety of use cases, including comparing content variation between a CDN and origin server and providing an automated approach to detecting caching issues.

Adaptive API Monitoring Framework

Adaptive API Monitoring uses the "fetch" and "net" Node.js libraries to create powerful and more granular tests that can expose issues that may otherwise not be caught by typical user workflows. The "node-fetch" library provides a scripting API for creating HTTP requests. Using the module, a transaction script can make requests against one or more HTTP API endpoints, chaining data from one to the next if required. The fetch function takes two parameters: the URL to fetch and a JavaScript object containing request options, including the HTTP method, custom headers, timeout, and others.

Figure 6. HTTP GET with the node-fetch module, using the requested option as input to next instruction

The net module provides a method for establishing TCP socket connections to network targets and then sending and reading data over that socket: for example, an email client that communicates with an IMAP, SMTP, or POP3 server, or an application that connects peer-to-peer or to external hardware. In addition, a TLS module is available for connecting to servers over TCP secured with TLS; its use is similar to the plain TCP socket connection.

Figure 7. Login using TCP socket connection

These libraries provide a flexible method of dynamically interacting with API endpoints, enabling contextual actions to be undertaken within the same workflow.

Application Assurance for Modern Applications

Gaining visibility into distributed, modular, API-centric applications will become even more critical as digital transformation projects accelerate the modernization of applications. Application architectures will increasingly become more modular and complex and rely on more internal and external APIs, working together to deliver application functionality. Visibility into API interactions is now essential to application assurance. With Adaptive API Monitoring, ThousandEyes delivers a powerful framework to address visibility across every composite application service and even multi-service interactions, enabling enterprises to manage every aspect of their application, even beyond their domain of ownership. The result is faster issue identification and resolution, and the data to validate functionality and optimize performance. Enterprises can now take advantage of best-of-breed API-based services while maintaining control of the end-to-end application experience.
