HTTP 404 for Missing API Resources

Should an API return an HTTP 404 status when the specified resource cannot be found? Of course, that’s exactly what 404 (Not Found) means. As RFC 2616 states pretty clearly:

The server has not found anything matching the Request-URI.

However, if you think APIs are like web pages, you might be perplexed by such an interpretation. Maybe 404 feels like an error because it connotes that something went wrong with the web app. Oh no, the page wasn’t found! The world is coming to an end! Run for the hills! Save the women and children!

But APIs aren’t web apps, are they? They aren’t pages at all. APIs are cacheable, uniformly-addressable, resource-oriented interfaces. Given the address for a resource, if it’s not available, the API should return the HTTP 404 status in most cases.

Unfortunately, there’s so much momentum around the idea of 404 being an error condition that it’s really hard to get developers to think differently about how the web is supposed to work. Monitoring tools, log indexers and server health checks don’t help much for changing minds. So many of these tools dutifully treat 404 as an error condition that well-meaning API developers can’t see outside of the little boxes into which they’ve been coaxed.

One of the most troubling manifestations of this page-oriented bias is Microsoft Azure’s App Service server health check. When you deploy a pure Web API to an Azure App Service, you’ll find that you’re subject to all sorts of page-specific constructs related to health. Most notably, if you return HTTP 404 from a server too often, the health checks will mark the instance as failing. Obviously, if pages are missing, something’s clearly wrong with that one naughty instance out of ten. The sheer narrow-mindedness of that idea just boggles my mind, given that all the servers run the same code and connect to the same resources by definition.

If this incorrect behavior meant that Azure simply recycled instances more often than it needed to, that wouldn’t be too high a price to pay for their pedantic interpretation of the HTTP specification. Unfortunately, we don’t stop paying the price there. The standard load balancer for Azure App Service seems to be completely perplexed when specific server instances are marked as unhealthy. Performance degrades quickly as more and more instances are quarantined and taken out of rotation while new ones have to be created and spun up. What a mess!

If you’re using a managed API gateway like Azure API Management (APIM), there’s a fairly elegant way to deal with this particular problem. Have your API return 200-series statuses for these conditions, keeping the health checks ignorant and happy, then translate the outbound HTTP statuses to the ones you really want to convey at the edge. Here’s some outbound policy you can add to your API to pick up a special response header named RealHttpStatus and return that instead.
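The original policy markup didn’t survive here, so the following is a minimal sketch of what such an outbound policy could look like. It assumes the custom header is named RealHttpStatus as described above; the variable name and the string comparison are illustrative choices, not the only way to write it.

```xml
<outbound>
    <base />
    <!-- Capture the special header injected by the back end (name per the article). -->
    <set-variable name="realHttpStatus"
                  value="@(context.Response.Headers.GetValueOrDefault("RealHttpStatus", ""))" />
    <!-- Remove the header so clients never see it. -->
    <set-header name="RealHttpStatus" exists-action="delete" />
    <choose>
        <!-- If the back end asked for a 404, return that status from the edge. -->
        <when condition="@(context.Variables.GetValueOrDefault<string>("realHttpStatus") == "404")">
            <set-status code="404" reason="Not Found" />
        </when>
    </choose>
</outbound>
```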

The policy begins by fetching the special response header called RealHttpStatus if it exists. The API should inject this header whenever it means to return something that might be misinterpreted or mishandled by the application server’s management tools. Next, the policy removes the special header so clients won’t see how our chicanery was perpetrated. Lastly, if the integer value of the special header is 404, the actual status returned by APIM will be 404, regardless of the HTTP status code the back end provided.

Of course, this policy’s <choose> element can be extended to include as many other interesting HTTP status types as you might require. Your API can go on respecting HTTP for all its beauty and prescience while keeping those fossilized ne’er-do-wells completely in the dark about your villainous plans.

Create SAS Tokens for Azure API Management with an Azure Function

Shared Access Signature (SAS) tokens are required to call Azure API Management’s original REST API. We can generate these manually on the Azure portal for testing. However, in production, if you want to invoke the APIM REST APIs programmatically, you’ll need to generate these tokens with a bit of code. There’s a snippet available in the APIM documentation that shows how to do this but it’s (possibly) got a flaw that I’ll address below. Moreover, with Azure Functions available these days, it makes sense to expose this token generator as a service. Here’s the code for an Azure Function to do just that:
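The function’s source didn’t survive here, so what follows is a sketch of such an HTTP-triggered Azure Function. The function and class names are hypothetical; the SAS format (uid, ex, sn fields signed with HMAC-SHA512 over "{id}\n{expiry}") follows the APIM documentation, and the zeroed-seconds expiry reflects the fix discussed below.

```csharp
using System;
using System.Globalization;
using System.Net;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;

public static class GenerateApimSasToken
{
    public static HttpResponseMessage Run(HttpRequestMessage req)
    {
        // APIM_SAS_ID and APIM_SAS_KEY come from the web application settings.
        string id = Environment.GetEnvironmentVariable("APIM_SAS_ID");
        string key = Environment.GetEnvironmentVariable("APIM_SAS_KEY");

        // Expire ten minutes from now, with the seconds zeroed out;
        // including seconds caused validation failures (see below).
        DateTime raw = DateTime.UtcNow.AddMinutes(10);
        DateTime expiry = new DateTime(raw.Year, raw.Month, raw.Day,
                                       raw.Hour, raw.Minute, 0, DateTimeKind.Utc);

        // Sign "{id}\n{expiry}" with HMAC-SHA512 using the APIM key.
        string dataToSign = id + "\n" +
            expiry.ToString("O", CultureInfo.InvariantCulture);
        using (var hmac = new HMACSHA512(Encoding.UTF8.GetBytes(key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(dataToSign)));
            string token = string.Format(
                "SharedAccessSignature uid={0}&ex={1:o}&sn={2}",
                id, expiry, signature);

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Content = new StringContent(token);
            return response;
        }
    }
}
```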

This Azure Function requires two web application settings named APIM_SAS_ID and APIM_SAS_KEY to be used in the hashing process. You can fetch those from the APIM publisher portal (and wherever they may be on the main Azure portal once APIM is fully integrated there). The token that gets generated from this code will be good for ten minutes. You can add more time if you like by modifying the line of code that calls DateTime.AddMinutes(). Currently, APIM SAS tokens can be generated to last up to thirty days although it’s not good practice to make them last that long.

The problem that I found with the snippet of code that was shown in the APIM documentation is that the inclusion of the seconds in the expiration time caused it to fail validation no matter how the middle (EX) portion of the SAS token was formulated. Perhaps I was doing something else wrong but I found that by setting the seconds to zero in the expiration date, I was able to generate SAS tokens that are honored by the APIM REST API. Here’s a GET operation that fetches the value of a property in APIM named SOME_APIM_PROP using the SharedAccessSignature Authorization schema:
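The request sample didn’t survive here, so this is an illustrative reconstruction. The host follows the management endpoint pattern for the original APIM REST API, the api-version shown is the preview version from that era, and the uid/ex/sn values are truncated placeholders.

```http
GET https://{servicename}.management.azure-api.net/properties/SOME_APIM_PROP?api-version=2014-02-14-preview HTTP/1.1
Authorization: SharedAccessSignature uid=53dd860e...&ex=2016-04-20T19:50:00.0000000Z&sn=9Qg5PW2...
```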

With this Azure Function in place (and the credentials to access it), I can generate SAS tokens for APIM any time I like using a simple, clean HTTP interface. Azure Functions are great architectural building blocks for any modern, API-centric design. If you agree or disagree with that assertion, let me know by reaching me on Twitter @KevinHazzard. Enjoy.

Extract JWT Claims in Azure API Management Policy

JSON Web Tokens (JWT) are easy to validate in Azure API Management (APIM) using policy statements. This makes integration with Azure Active Directory and other OpenID providers nearly foolproof. For example, one might add the following directive to the <inbound> policy for an API to ensure that the caller has attached a bearer token with acceptable audience, issuer and application ID values in the signed JWT:
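The policy markup didn’t survive here; this is a sketch of such a <validate-jwt> directive. The tenant, audience, issuer and application ID values are placeholders you would replace with your own.

```xml
<validate-jwt header-name="Authorization"
              failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized"
              require-scheme="Bearer">
    <!-- Keys and metadata are discovered from the OpenID provider. -->
    <openid-config url="https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration" />
    <audiences>
        <audience>https://my-api.example.com</audience>
    </audiences>
    <issuers>
        <issuer>https://sts.windows.net/{tenant-id}/</issuer>
    </issuers>
    <required-claims>
        <claim name="appid">
            <value>{application-id}</value>
        </claim>
    </required-claims>
</validate-jwt>
```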

That’s nice. A little bit of markup and all that nasty security plumbing is handled outside the API. But what if we want to pass some individual claims named inside the token on to the API backend? Unfortunately, Azure APIM doesn’t have that built into JWT token validation policy. Ideally, we’d be able to extract claims during validation into variables and pass them in HTTP headers before the request is forwarded to the backing API. Until that feature is added, here’s how you can do that:
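The policy markup didn’t survive here, so the following is a sketch of the approach. The header name X-User-Email and the claim name email are example choices; it uses the TryParseJwt extension available in APIM policy expressions.

```xml
<inbound>
    <base />
    <set-header name="X-User-Email" exists-action="override">
        <value>@{
            string email = "";
            // Fetch the Authorization header and check for a Bearer token.
            string authHeader = context.Request.Headers.GetValueOrDefault("Authorization", "");
            if (authHeader.StartsWith("Bearer "))
            {
                // TryParseJwt checks the token's structure and signature.
                Jwt jwt;
                if (authHeader.Substring(7).TryParseJwt(out jwt))
                {
                    // Extract the value of one specific claim.
                    string[] values;
                    if (jwt.Claims.TryGetValue("email", out values))
                    {
                        email = values.FirstOrDefault() ?? "";
                    }
                }
            }
            return email;
        }</value>
    </set-header>
</inbound>
```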

In this code, I’ve added some script inside the <set-header> policy statement to fetch the Authorization header from the request, check that it’s a Bearer type token, attempt to parse it (which checks the token’s signature), and finally extract the value of one specific claim. Most of that work already happens inside the <validate-jwt> policy, as you can imagine. Until there’s an easier way to extract JWT claims individually, the solution shown here works nicely. Enjoy.

If you agree with me that this feature should be built right into the <validate-jwt> policy, please upvote the feature request I wrote on the APIM feedback site.

The Simplest Possible Thing Principle

I mentor lots of young developers. It became a passion for me in the 1990s when I started teaching computer programming at a local college. I was not a good teacher for the first couple of years, admittedly. But I studied pedagogy and learned how to balance lecture and lab time to maximize the understanding of my students. More importantly, I learned how to prepare myself to help my students learn. Preparing yourself to teach often means boiling ideas down into simple, memorable principles. One of them is called the Simplest Possible Thing principle.


Invoking Stored Procedures from Azure Mobile Services

One of the questions I’m often asked is whether it’s possible to call SQL stored procedures from Azure Mobile Services. The answer is yes and it’s probably easier than you think. In case you don’t know, Azure Mobile Services is a way to very simply expose an HTTP service over data stored in an Azure SQL database. By default, Azure Mobile Services exposes the SQL tables directly as resources. So the HTTP methods GET, POST, PUT and DELETE will essentially be mapped to SQL operations on the underlying tables.

While this simple mapping mechanism is good for resource-oriented access to the tables, the logic to produce a usable Web API is often a bit more complex than that. Stored procedures can provide an interesting abstraction layer that allows us to use the efficiency of the SQL Server query engine to reduce round trips to and from Internet clients, for example. Stored procedures might also be used to hide normalization peculiarities from clients or to implement advanced parameter handling logic. Whatever the case may be, it would be helpful from time to time to be able to invoke stored procedures from the HTTP API that Azure Mobile Services provides.

Let’s start by assuming that an Azure Mobile Service exists with some data that we would like to expose via a stored procedure. For the purposes of this example, my service is called MobileWeatherAlert which contains a backing table in an Azure SQL Database named [MobileWeatherAlert_db]. It’s really helpful that Azure Mobile Services uses schema separation in the underlying database to manage all of its data. That schema separation allows us to expose many separate middle-tier services from one common database if needed. So, in my weather database, there’s a schema called [MobileWeatherAlert] corresponding to the name of the service that it supports. For the purposes of this example, that schema contains a table called [Observation] which is used to collect weather data by [City]. Figure 1 shows a very simple stored procedure called [GetObservationsForCity] that I’d like to be able to call from the service API.

Figure 1 – A simple stored procedure to fetch weather observations for a given city.
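Since the image in Figure 1 isn’t reproduced here, this is a sketch of what such a procedure could look like, based on the description above. The observation columns are assumptions; only the schema, table and procedure names come from the text.

```sql
-- Illustrative reconstruction of Figure 1; column names are assumed.
CREATE PROCEDURE [MobileWeatherAlert].[GetObservationsForCity]
    @city NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT [Id], [City], [ObservedAt], [Temperature]
    FROM [MobileWeatherAlert].[Observation]
    WHERE [City] = @city;
END
```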

There are a number of places where this procedure might be invoked. For this example, I’ll implement a custom API in the mobile service called observation. Figure 2 shows the dialog in the Azure management console where the custom API will be created.

Figure 2 – Creating the custom observation API for the MobileWeatherAlert service.

For this simple example, I’ll only implement the HTTP GET method in the API to invoke the stored procedure. For simplicity of the example, I’ll open up access to everyone to avoid having to pass any sort of credentials. Now I can add a bit of JavaScript to the API to make the stored procedure call. Figure 3 demonstrates adding that JavaScript to the API via the Azure management console.

Figure 3 – The JavaScript to invoke the GetObservationsForCity stored procedure with a URL-sourced city parameter.

Lines 1 through 9 in the script encompass the get function that will be invoked when the HTTP GET method is used to call the service. The parameters passed to the JavaScript function are the request and response objects. From the request object, line 2 shows how to obtain a reference to the mssql object which exposes a query function for making calls into the database. Line 3 demonstrates how to call the query function to execute the [GetObservationsForCity] stored procedure, passing a single parameter for the City by which to filter. It’s important to note here that the schema in which the stored procedure resides is not named in the EXEC call. This is counter-intuitive, in my opinion, and is likely to trip up novices as they experiment with this functionality. Since we are invoking the GET method for the MobileWeatherAlert service, there’s an implicit assumption used in the preparation of the SQL statement that objects will reside in a similarly-named database schema.

Notice also on Line 3 that the request object passed into the JavaScript function exposes a query property that conveniently contains an object named city which will be parsed directly from the URL. Figure 4 shows how that URL might be passed from PostMan, a really excellent Google Chrome plug-in that allows the invocation of nearly any sort of HTTP-oriented web service or API.

Figure 4 – Calling the new API via PostMan to get weather observations for the city of Richmond.

Finally, in lines 4 through 6 of the JavaScript method, the success function that processes the results of the SQL query logs the results and returns them to the caller with an HTTP 200 (OK) response. I’ve included a call to the console.log() function to show how easy it is to log just about anything when you’re debugging your JavaScript code in Azure Mobile Services. After invoking an API or custom resource method that logs something, check out the logs tab of the mobile service in the management console to see what got saved. Of course, you’ll want to do minimal logging in production, but while you’re testing and debugging, the log is a valuable resource.

In studying the URL and its output in Figure 4, remember that the JavaScript for the observation API didn’t have to do any special parsing of the row set returned by SQL Server to produce this result. Simply returning that data from SQL Server caused the API to emit JavaScript Object Notation (JSON) which has arguably become the lingua franca of the Internet for expressing data.

In closing, I’ll share a couple of thoughts. If you’re interested in building a simple query interface on top of a mobile service, you don’t have to use stored procedures as shown here. Azure Mobile Services implements fairly rich OData support directly on table resources. With OData, filtering, sorting and pagination of SQL data are built in, so to speak.

Also, the web way of doing services (sometimes called RESTful, based on Dr. Roy Fielding’s dissertation and the HTTP standards that flowed from it) assumes that we’ll use HTTP in the way it was intended: accessing and linking resources at a more basic level, using the HTTP methods GET, POST, PUT and DELETE as a complete, fully-functional language for accessing those resources. Database people inherently understand and respect this access pattern better than many programmers working in traditional programming languages like C# and Java. After all, we’re accustomed to using four basic methods to manipulate data in our databases: SELECT, INSERT, UPDATE and DELETE.

Yet, as database people, we also know that giving software developers strict table-level access can cause all sorts of performance problems. For those situations where you know that some complex database operation could be performed much more efficiently with a bit of T-SQL code, a stored procedure or a view may be just the prescription your developers need. Hopefully, this article has helped you understand how to invoke programmatic resources in an Azure SQL database, and perhaps it will help you along the way to making the correct architectural choices in the design of your modern, data-driven web applications.

Automating Windows Defender Updates

If you have automatic Windows Updates turned off, you may have noticed that you also don’t get Windows Defender updates automatically. There are a lot of reasons why you may want to disable automatic updates. Perhaps you want to pick and choose the ones you accept. Or maybe you don’t want to smash through your data plan limits so you wait until you’re on another network to get your updates. Whatever your reasons are for disabling automatic Windows Updates, not getting up-to-date virus and malware signatures every day is an unfortunate side effect. However, you can force Windows Defender to update its signature file with the following command:
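The command itself didn’t survive here; the one below is the standard way to trigger a signature update with the Windows Defender command-line utility (the path assumes a default installation).

```shell
"%ProgramFiles%\Windows Defender\MpCmdRun.exe" -SignatureUpdate
```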

Try it out at a command prompt to see it in action and to verify that it works as advertised. On my computers, I simply created a new task in Task Scheduler that executes the command daily. Now, I can do my Windows Updates manually and receive Windows Defender malware signatures (effectively) automatically.