Create SAS Tokens for Azure API Management with an Azure Function

Shared Access Signature (SAS) tokens are required to call Azure API Management’s original direct management REST API. We can generate these manually in the Azure portal for testing. In production, however, if you want to invoke the APIM REST APIs programmatically, you’ll need to generate these tokens with a bit of code. There’s a snippet in the APIM documentation that shows how to do this, but it has a possible flaw that I’ll address below. Moreover, with Azure Functions available these days, it makes sense to expose this token generator as a service. Here’s the code for an Azure Function to do just that:
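The original listing isn’t reproduced in this copy, so what follows is a sketch of the core of such a Function in the C# script (run.csx) style of the day. The uid/ex/sn token format and the HMAC-SHA512 signing of id + "\n" + expiry follow the APIM documentation; the rest (names, the response shape) is illustrative:

```csharp
// run.csx -- a sketch of the token-generating Azure Function.
// APIM_SAS_ID and APIM_SAS_KEY come from the Function App's application settings.
using System;
using System.Net;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;

public static HttpResponseMessage Run(HttpRequestMessage req)
{
    string id  = Environment.GetEnvironmentVariable("APIM_SAS_ID");
    string key = Environment.GetEnvironmentVariable("APIM_SAS_KEY");

    // Expire ten minutes from now, with the seconds zeroed out
    // (the workaround discussed in this post).
    DateTime now = DateTime.UtcNow.AddMinutes(10);
    DateTime expiry = new DateTime(
        now.Year, now.Month, now.Day, now.Hour, now.Minute, 0, DateTimeKind.Utc);
    string expiryText = expiry.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ");

    using (var hmac = new HMACSHA512(Encoding.UTF8.GetBytes(key)))
    {
        // Per the APIM docs, the string to sign is the identifier and the
        // expiration timestamp separated by a newline.
        byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(id + "\n" + expiryText));
        string signature = Convert.ToBase64String(hash);
        string token = $"SharedAccessSignature uid={id}&ex={expiryText}&sn={signature}";
        return req.CreateResponse(HttpStatusCode.OK, token);
    }
}
```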

This Azure Function requires two application settings named APIM_SAS_ID and APIM_SAS_KEY to be used in the hashing process. You can fetch those from the APIM publisher portal (and from wherever they end up on the main Azure portal once APIM is fully integrated there). The token this code generates is good for ten minutes. You can add more time if you like by modifying the line of code that calls DateTime.AddMinutes(). Currently, APIM SAS tokens can be generated to last up to thirty days, although it’s not good practice to make them last that long.

The problem I found with the snippet of code in the APIM documentation is that including the seconds in the expiration time caused validation to fail no matter how the middle (EX) portion of the SAS token was formulated. Perhaps I was doing something else wrong, but I found that by setting the seconds to zero in the expiration date, I was able to generate SAS tokens that the APIM REST API honors. Here’s a GET operation that fetches the value of a property in APIM named SOME_APIM_PROP using the SharedAccessSignature Authorization scheme:
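The request itself isn’t preserved in this copy, but against the classic direct management endpoint it would look roughly like this (the service name, API version and signature values are placeholders):

```http
GET https://{service-name}.management.azure-api.net/properties/SOME_APIM_PROP?api-version=2014-02-14-preview HTTP/1.1
Authorization: SharedAccessSignature uid={identifier}&ex=2016-05-01T12:10:00.0000000Z&sn={base64-signature}
```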

With this Azure Function in place (and the credentials to access it), I can generate SAS tokens for APIM any time I like using a simple, clean HTTP interface. Azure Functions are great architectural building blocks for any modern, API-centric design. If you agree or disagree with that assertion, let me know by reaching me on Twitter @KevinHazzard. Enjoy.

Extract JWT Claims in Azure API Management Policy

JSON Web Tokens (JWTs) are easy to validate in Azure API Management (APIM) using policy statements. This makes integration with Azure Active Directory and other OpenID providers nearly foolproof. For example, one might add the following directive to the <inbound> policy for an API to ensure that the caller has attached a bearer token with acceptable audience, issuer and application ID values in the signed JWT:
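The original policy markup isn’t preserved in this copy; a representative <validate-jwt> directive, with the tenant, audience and application ID values shown as placeholders, looks like this:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized. Valid bearer token required.">
    <openid-config url="https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration" />
    <audiences>
        <audience>https://myapi.example.com</audience>
    </audiences>
    <issuers>
        <issuer>https://sts.windows.net/{tenant-id}/</issuer>
    </issuers>
    <required-claims>
        <claim name="appid">
            <value>{application-id}</value>
        </claim>
    </required-claims>
</validate-jwt>
```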

That’s nice. A little bit of markup and all that nasty security plumbing is handled outside the API. But what if we want to pass some of the individual claims named inside the token on to the API backend? Unfortunately, Azure APIM doesn’t have that capability built into the JWT validation policy. Ideally, we’d be able to extract claims into variables during validation and pass them in HTTP headers before the request is forwarded to the backing API. Until that feature is added, here’s how you can do it yourself:
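The original markup isn’t preserved in this copy, but a <set-header> statement carrying a policy expression can do the extraction along the lines the post describes. The header name and the email claim below are illustrative; substitute whichever claim your backend needs:

```xml
<set-header name="X-User-Email" exists-action="override">
    <value>@{
        string email = "";
        string authHeader = context.Request.Headers.GetValueOrDefault("Authorization", "");
        string[] parts = authHeader.Split(' ');
        if (parts.Length == 2 &&
            parts[0].Equals("Bearer", StringComparison.OrdinalIgnoreCase))
        {
            Jwt jwt;
            // TryParseJwt verifies the token's signature as it parses.
            if (parts[1].TryParseJwt(out jwt))
            {
                email = jwt.Claims.GetValueOrDefault("email", "");
            }
        }
        return email;
    }</value>
</set-header>
```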

In this code, I’ve added some script inside the <set-header> policy statement to fetch the Authorization header from the request, check that it’s a Bearer token, attempt to parse it (which verifies the token’s signature), and finally extract the value of one specific claim. Most of that work already happens inside the <validate-jwt> policy, as you can imagine. Until there’s an easier way to extract JWT claims individually, the solution shown here works nicely. Enjoy.

If you agree with me that this feature should be built right into the <validate-jwt> policy, please upvote the feature request I wrote on the APIM feedback site.

The Simplest Possible Thing Principle

I mentor lots of young developers. It became a passion for me in the 1990s when I started teaching computer programming at a local college. I was not a good teacher for the first couple of years, admittedly. But I studied pedagogy and learned how to balance lecture and lab time to maximize the understanding of my students. More importantly, I learned how to prepare myself to help my students learn. Preparing yourself to teach often means boiling ideas down into simple, memorable principles. One of them is called the Simplest Possible Thing principle.


.NET Back to Basics – Delegates to Expression Trees

I led a talk at the Richmond, Virginia .NET User Group on February 4, 2016 about how delegates have evolved in .NET since 2002. We had about 40 in attendance by my rough count, and the discussion was energetic. Thanks to everyone who attended. Below you’ll find the links for the presentation and source code from the meeting. The slides are light on content, but they’ll at least help connect you to the ten examples in the attached source code.

Get the Source Code

Get the Slides

If you are a user group leader and would like me to deliver this presentation to your group, contact me on Twitter as @KevinHazzard. Enjoy!

Kevin’s Career Upgrade v6.0 – Amazon AWS

I started my career in software development 34 years ago. I was 16 years old and a relative needed a program that helped her with a file classification project at a local hospital. I wrote a really bad piece of software in BASIC that took hours to solve the problem each time it was needed. I refactored the program over and over again until I got it to run in a few seconds instead. The process of refactoring the software to improve it was exciting to me. It wasn’t a battle with the machine. It was a battle in my mind and I believed, step by step, that I could win through intelligence and sheer will. My relative showed the program to the hospital staff and they bought it from me for $50. I was ecstatic. Moreover, I was hooked on software development for life.

Over the past 3.5 decades, I’ve made four major shifts in my career. When I started out, I wrote code in assembly language and C language. If you don’t know what those are, no worries. Just suffice it to say that they are super low-level abstractions and close to the actual hardware. I was really good at understanding the machine architecture and using it to my advantage. I even claimed that for a few minutes in 1984, I knew everything there was to know about the PC. Since then, I’ve been slipping. That’s true of everyone in this field though. There is no person who can know everything about all the complex systems that make up the modern PC and the Internet. I know a lot more than the average developer given my background that reaches all the way back to those low-level language days. But truth be known, I rarely use that old knowledge about electrical engineering and processor architecture to get work done today.

The first major shift in my career, which I’ll call v2.0, happened in the mid-1980s. The out-of-date mainframe pre-compiler I used during the metamorphosis was called C with Classes but the language had already been renamed C++. The idea of object-orientation was dazzlingly cool to me. Being able to hide data inside of objects that expose safe access methods was liberating and empowering. I became a real expert in object-oriented design and in the use of the C++ Standard Template Library, riding that wave for more than a decade to create some very cool software.

Career v3.0 for me came in the late 1990s when Java appeared on the scene. This highly expressive but simpler derivative of C++ promised to make our code safer and portable from one processor architecture to another. I was working with a group within the Intel Architecture Lab (IAL) that was implementing a high-performance Java Virtual Machine for the Intel processors. My team was creating all sorts of system-level software in Java that was going to change everything about the PC ecosystem, we believed. Then, one day, high-level folks from Microsoft visited our campus in Hillsboro, Oregon and spent all day in conferences with our IAL managers. Within days, all of the Java projects in IAL, including the screamingly fast new Java compiler and virtual machine, were shut down. It was a real tragedy that changed the history of the PC forever. I left Intel and spent the next few years doing all sorts of interesting Java work that I enjoyed.

In 2001 while teaching C++ at a local college, a student asked what I thought of C# (pronounced C Sharp for those who don’t know). I had no idea what it was so I called a friend in IAL and he explained that the reason Microsoft had abandoned Java (and ostensibly got its partner Intel to do the same) was to make way for a new language that would compete with Java. I was intrigued so he put me in touch with someone at Microsoft who sent me a gold CD-R with “Cool” scrawled on it in red Sharpie ink. (Cool was the project code name for C#.) I popped the CD in a drive, ran the installer and started playing. In a matter of minutes, I could tell that C# was different from Java in some interesting ways. Over the next hour, I fell in love with it. Thus began v4.0 of my career. I dove head first into C# and into Microsoft’s .NET ecosystem. I was probably one of the first college professors to teach C# in the world, filling my first classroom with .NET fledglings in 2002. By 2008, I had become a Microsoft C# Most Valuable Professional (MVP) and stayed in the award program for the next seven years.

At the height of my experience as an MVP, I started work on a very large-scale database project. Every software developer works with data, but this project was really huge, at least one hundred times the size of any data project I’d worked on to date. I discovered that my skills as a programmer were massively inadequate in that environment. The big problem was that my programming brain was wired to be iterative and imperative. I had spent 20 years telling computers what to do. The problem with really big data is that it’s often so big and so unwieldy that you just can’t tell it what to do. Instead, you must work with languages that let you describe what you want done rather than how it must be done. This is called the declarative programming model, which is essentially the opposite of the imperative (command-based) programming I’d been doing for a couple of decades.

For the first time, I really needed to use declarative, set-based languages like Transact-SQL (T-SQL) to get my work done. There was really no other way to pull it off in C# (or any other language I knew at the time). My mind was transformed through the process. I could never see the world in the same ways again. My job, which had always been to virtualize the world through silicon, became a search for patterns instead. There were scores of data patterns to be discovered. Code patterns emerged at every turn when I started looking for them. I became a pattern junkie in v5.0 of my career. I even wrote a book with my friend Jason Bock called Metaprogramming .NET which focuses on techniques for generating code based on patterns to make software fault tolerant and adaptive to change.

Version 6.0 of my career starts tomorrow. As I write this, I am sitting in a hotel room in Las Vegas, Nevada waiting for the start of Amazon’s 2015 AWS re:Invent conference. I am convinced that most companies will shift their infrastructures to the cloud in the next few years. And I’m sure that, given the way Amazon AWS products are structured, they will capture and keep the majority of the cloud market for the next decade.

The reasons for my beliefs about the success of AWS are complex, but they boil down to this: AWS is all about micro-architecture at any scale. That may sound like gibberish to those who haven’t built an enterprise system, but it’s demonstrably true that all successful, complex systems are built from small parts working in concert to create value. AWS has generally designed its products this way, deliberately or otherwise. Each product is a kind of gear that can be connected to the others to create really interesting new things. There are serious integration challenges remaining, but the AWS products fit together quite well in general. Moreover, they scale dynamically, so there’s appeal to small companies and large ones alike.

We’re already seeing the leading edge of the conversion as small- and medium-sized companies test the waters of cloud computing in large numbers to gain operational advantages over their larger competitors. In the next five years, the motivation for moving to the cloud will shift from operational, financial and tactical to strategic. Services will become available in the cloud that we could never have imagined in our racks of single-purpose servers sitting in private data centers. Machine learning will give way to what’s next: predictive analytics that permeates all of our data and all of our code, naturally and automatically, whether it’s shaped and purposed for that or not. Companies trying to compete by running single-purpose, monolithic software will simply fall behind.

In reality, I started the shift to v6.0 of my career a while back. I’ve been using Microsoft Azure and Amazon AWS for a couple of years for my clients. But I’ve been using these services in the old-school way. When AWS Lambda was released a few months ago, the light bulb in my mind really lit up. AWS Lambda is the simplest of ideas. What if you could write functions that could be instantiated in microseconds to get any sort of generic work done? And what if the system could scale out the number of machines available for running those functions heuristically and automatically?

Lastly, what if that system of scalable functions could be triggered from anywhere? Saving a file might run a function. That code might write to a database which invokes other functions. Et cetera. Suddenly, I saw a way to compose complex, scalable systems with simple, interconnected building blocks. Moreover, this is all done without owning a single server in the classical sense. In 34 years, I’ve come full circle from wanting to master everything about the machines I use to not wanting any machines at all. Version 6.0 of my career will be just as exciting as v1.0 was to me. For that, I’m truly grateful to be in this profession.

Let me finish by saying that although I think Amazon AWS will dominate in this space, Microsoft and Google will also do quite well. I’m not ignoring them. But the elegance, simplicity and highly composable nature of Amazon’s AWS products will make them a great choice for my clients. Viva Las Vegas!

Schema Refactoring with Views

Database triggers are generally pretty awful. Anyone who has had to deal with a database heavily laden with DML triggers knows this and avoids using them. If anyone tells me that they must use a trigger, I can always prove that there’s a better way that doesn’t bury dependencies and business logic so deeply in the database. However, there’s one special case where I find triggers helpful.

Refactoring a schema to support new work is seemingly impossible when you must also serve legacy applications that cannot be easily changed. Views provide an interesting abstraction that can give you the flexibility you need in this case. Acting like façades over the table structures evolving beneath them, views may use instead-of triggers to translate the data back and forth between the old and new formats, making it seem to the legacy applications that the database hasn’t changed. Let me start with a simple Address table to demonstrate the idea.
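The original table listing isn’t preserved in this copy; a minimal version with separate latitude and longitude columns, using illustrative column names and types, might look like this:

```sql
CREATE TABLE dbo.Address
(
    AddressId INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Address PRIMARY KEY,
    Street    NVARCHAR(100) NOT NULL,
    City      NVARCHAR(50)  NOT NULL,
    Latitude  DECIMAL(9, 6) NOT NULL,
    Longitude DECIMAL(9, 6) NOT NULL
);
```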

Now suppose that we need to make two changes to this table without disrupting legacy applications that depend on the Address table remaining defined as it has been for some time. First of all, to separate the subject areas of the database better, we’ve decided that we must move the Address table from the [dbo] schema into a schema named [geo]. Next, the individual [Latitude] and [Longitude] attributes must be replaced with a single column of type GEOGRAPHY to support some enhanced geographic processing functions being added for new applications. The new Address table will be created like this:
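A sketch of the new table, with illustrative column names, holding the coordinates in a single GEOGRAPHY column:

```sql
CREATE SCHEMA geo;  -- if it doesn't already exist
GO
CREATE TABLE geo.Address
(
    AddressId INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_geo_Address PRIMARY KEY,
    Street    NVARCHAR(100) NOT NULL,
    City      NVARCHAR(50)  NOT NULL,
    Location  GEOGRAPHY     NOT NULL
);
```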

Moving the data into the new table is straightforward using the GEOGRAPHY::Point function:
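A sketch of that migration, using illustrative column names (4326 is the SRID for WGS 84):

```sql
SET IDENTITY_INSERT geo.Address ON;

INSERT INTO geo.Address (AddressId, Street, City, Location)
SELECT AddressId, Street, City,
       GEOGRAPHY::Point(Latitude, Longitude, 4326)
FROM dbo.Address;

SET IDENTITY_INSERT geo.Address OFF;
```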

After fixing up any foreign key references from the old Address table to the new one, we’re ready to drop the old table and create a new view in its place:
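A sketch of the replacement view; the GEOGRAPHY type’s Lat and Long properties recover the original columns:

```sql
DROP TABLE dbo.Address;
GO
CREATE VIEW dbo.Address
AS
SELECT AddressId,
       Street,
       City,
       Location.Lat  AS Latitude,
       Location.Long AS Longitude
FROM geo.Address;
```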

The new view is named exactly like the dropped Address table so SELECT queries work just as they did before:
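For example, a query like this (column names are illustrative) still resolves against the view:

```sql
SELECT AddressId, Street, City, Latitude, Longitude
FROM dbo.Address;
```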

Now let’s try to insert some data into the view:
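Something like this, with illustrative values:

```sql
INSERT INTO dbo.Address (Street, City, Latitude, Longitude)
VALUES (N'123 Main Street', N'Richmond', 37.5407, -77.4360);
```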

The insert fails with the message:

Update or insert of view or function ‘dbo.Address’ failed because it contains a derived or constant field.

The problem is that the view is based on some logic for parsing out latitude and longitude parts from a GEOGRAPHY type. The view doesn’t know how to convert the individual latitude and longitude components back into a GEOGRAPHY so let’s provide an instead-of trigger to do that:
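A sketch of such a trigger, using illustrative column names; it reassembles the GEOGRAPHY value from the latitude and longitude the caller supplied:

```sql
CREATE TRIGGER dbo.Address_IOI
ON dbo.Address
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO geo.Address (Street, City, Location)
    SELECT Street, City,
           GEOGRAPHY::Point(Latitude, Longitude, 4326)
    FROM inserted;
END;
```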

With the INSTEAD OF INSERT (IOI) trigger in place, the insert statement tried before now works. We should add INSTEAD OF UPDATE (IOU) and INSTEAD OF DELETE (IOD) triggers to the view to make sure those operations continue to work for legacy applications, too:
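Sketches of those two triggers, keyed on an assumed AddressId surrogate key:

```sql
CREATE TRIGGER dbo.Address_IOU
ON dbo.Address
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE a
    SET a.Street   = i.Street,
        a.City     = i.City,
        a.Location = GEOGRAPHY::Point(i.Latitude, i.Longitude, 4326)
    FROM geo.Address AS a
    JOIN inserted AS i ON a.AddressId = i.AddressId;
END;
GO
CREATE TRIGGER dbo.Address_IOD
ON dbo.Address
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DELETE a
    FROM geo.Address AS a
    JOIN deleted AS d ON a.AddressId = d.AddressId;
END;
```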

With those triggers in place, the following statements work as we had hoped:
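For example, statements like these (values are illustrative) now succeed against the view:

```sql
UPDATE dbo.Address
SET Latitude = 37.5538, Longitude = -77.4603
WHERE AddressId = 1;

DELETE FROM dbo.Address
WHERE AddressId = 1;
```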

In closing, I’ll admit that this pattern has some potential problems that you may need to address. If you’re using an Object-Relational Mapping (ORM) tool that dynamically inspects and validates metadata in the database, it may get confused by the use of a view where it once found a real table. Likewise, what the Microsoft Entity Framework (EF) refers to as Navigation Properties, representing the foreign key relationships between tables, may break using this pattern. Also, the UPDATE trigger as currently implemented provides no way to update the primary key value. That’s certainly possible using the [deleted] row set provided to the trigger; however, since modifying surrogate primary keys isn’t commonly expected or allowed, I didn’t provide that more complex implementation. Lastly, the further your evolving database design drifts from what the legacy applications use, the harder the views and their triggers will be to maintain. My Schema Refactoring Pattern, as I call it, is best employed when you have a firm date in hand by which the old schema can be deprecated. Triggers are still evil, so you should begin the refactoring process with a solid plan to stop using them as soon as possible.

Generating Filenames Dynamically in SSIS

A file’s name and location are often used to express what’s inside it. Filenames are not required to be meaningful to human beings but they often follow some sort of pattern for categorizing and describing the data inside them. In this way, we can think of the name of a file as being somewhat like metadata. In this article, I’ll focus on a simple example that follows this idea: generating a filename in SQL Server Integration Services (SSIS) that contains the time and date when the file was created. The file creation time is important metadata that other systems can use to make decisions in downstream ETL processes.

In SSIS, the Script Task is good for this sort of thing because a small bit of C# code can add just the kind of flexibility and reuse we require. For example, imagine that a file with Employee information will be created from the HR schema. Those bits of metadata can easily be embedded in the filename because they’re somewhat static with respect to the package. However, adding something dynamic like the current date and time to the filename requires some code. To satisfy the requirements in a reusable way, imagine a simple, string-based template that looks like this:
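The template itself isn’t preserved in this copy; based on the {yyyyMMdd} and {HHmmss} escape sequences the article works with, it would be something like this (the prefix and extension are illustrative):

```
HR_Employee_{yyyyMMdd}_{HHmmss}.csv
```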


The text between the curly braces is what we need to parse out to be replaced with the current date and time values. To find the escaped date and time sequences, a regular expression will do nicely. In Figure 1, observe the template string being evaluated in a popular, web-based regular expression tester.

Figure 1 – Using a web-based regular expression tool to find escaped date and time sequences.

Regular expressions are weird, but with a bit of study and tools like the one shown in Figure 1, it’s easy to experiment and learn quickly. There are many such regular expression testing tools online as well as a few that can be run natively on your computer. The simple expression tested here is \{\w+\}, which looks rather cryptic if you’re unaccustomed to regular expression syntax. However, it’s really quite simple. Reading left to right, the expression means we’re looking for:

  1. A starting curly brace followed by
  2. Any word sequence followed by
  3. An ending curly brace

As you can see in the target string near the bottom of Figure 1, both of the sequences in the template have been found using this regular expression. That means the regular expression will work in the C# code. All that’s needed now is the code that will find those sequences in the template and replace them with their current date and time values.

Before we look at that, however, I need to drag a new Script Task onto the control flow of my SSIS package. I also need to add two variables to the package that will be used to communicate with the script. Figure 2 shows the control flow with the new Script Task on the design surface, the two new variables that were added and the opened configuration dialog for the Script Task.

Figure 2 – Configuring a new Script Task in the SSIS package.

After dragging a Script Task object from the toolbox onto the control flow, double-clicking it shows the Script Task Editor dialog. To support the invocation of the C# code, two package-level variables called FilenameTemplate and GeneratedFilename were created. You can see them in the variables window near the bottom of Figure 2. Notice that the FilenameTemplate variable has the text with the escaped date and time parts tested earlier. In the Script Task Editor, the FilenameTemplate variable has been added to the ReadOnlyVariables collection and the GeneratedFilename variable has been added to the ReadWriteVariables collection. That’s important. Failing to add the variables to those collections means they won’t be visible inside the C# code and exceptions will be thrown when trying to use them.

Now we’re ready to write some script code. Clicking the Edit Script button in the Script Task Editor dialog will start a new instance of Visual Studio with the standard scaffolding to support scripting in SSIS. Find the function called Main() and start working there. The first line of code must fetch the contents of the FilenameTemplate variable that was passed in. Here is the line of C# code to do that:

string template = Dts.Variables["FilenameTemplate"].Value.ToString();

With the template in hand, we can convert and save the escaped date and time sequences with the following line of code:

Dts.Variables["GeneratedFilename"].Value = ExpandTemplateDates(template);

Of course, to make that work, we need to implement the ExpandTemplateDates() function, so the following code should be added inside the same class where the Main() function is defined.
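The original listing isn’t preserved in this copy; here’s a sketch of the method consistent with the article’s description, with a capture group added around \w+ so the format string between the braces can be handed straight to DateTime.ToString():

```csharp
using System;
using System.Text.RegularExpressions;

public string ExpandTemplateDates(string template)
{
    // Replace each {format} escape, e.g. {yyyyMMdd} or {HHmmss}, with the
    // current date and time rendered using that .NET format string.
    DateTime now = DateTime.Now;
    return Regex.Replace(template, @"\{(\w+)\}",
        match => now.ToString(match.Groups[1].Value));
}
```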

This method creates the \{\w+\} regular expression tested earlier and uses it to replace the matching sequences in the template parameter. That’s simple to do with .NET’s DateTime class which has a handy ToString() function that can accept the yyyyMMdd and HHmmss formatting strings found in the template. Figure 3 brings all the code together to help you understand.

Figure 3 – The script code to find and replace escaped date and time formatting sequences.

Before closing the C# code editor, it’s a good idea to run the Build command from the menu to make sure there are no syntax errors. To use the new dynamic filename generator, I’ll add one more variable to the package called Filepath. That will be concatenated with the GeneratedFilename to form the full path on disk where the output file from the package will be stored. The connection manager for that file needs to have its ConnectionString property modified at runtime so I’ll use the Expression Builder to do that.

Figure 4 – Using the Expression Builder dialog to modify the target ConnectionString.

From the properties for the connection manager, click the ellipsis (…) button next to the Expressions property and add an expression for the ConnectionString as shown in Figure 4. Once that expression is saved, the full path and name of the file to be saved will be assembled from the Filepath and the GeneratedFilename variables at runtime.

Bringing it all together, Figure 5 shows the results of running the package with a source table, the target flat file bearing the new ConnectionString expression and a Data Flow Task that moves some data from the source to the target. The data flow itself isn’t relevant so it isn’t shown here. What’s important to demonstrate is that the C# code correctly fetched the template variable, processed the regular expression, matched the sequences, replaced them with the date and time values and saved the new filename. The connection manager’s ConnectionString expression also correctly applied the newly generated filename to the path when saving the file to disk.

Figure 5 – A test run showing the dynamic filename that was generated and the file on disk with that name.

I marked up the screen shot with a red arrow pointing to the package log output showing the filename that was generated by the C# code when it ran. The blue arrow points to the actual target file on disk, showing that the two match.

There are other ways to do what’s been demonstrated here. However, I find this solution to be both simple and extensible. The example shown here can be easily modified to include many types of dynamic metadata other than dates and times. Moreover, this is a highly reusable pattern given that you need only copy the Script Task into a new SSIS package and set up a couple of package variables to use it anywhere you like. In the next article in this series, I’ll focus on consuming files with dynamically assigned filenames.

On Primary Key Names

If you use frameworks like Microsoft Azure Mobile Services or Ruby on Rails, then you’re accustomed to complying with a host of development conventions. Frameworks are often said to be opinionated, forcing certain design decisions on the developers who use them. Given that very few software design choices are perfect in every situation, the value of having an opinion is often more about consistency than it is about correctness.

“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do.” — Ralph Waldo Emerson, Essay: First Series on Self-Reliance

Emerson’s famous quote about the potential perils of standardization is sometimes misapplied. For example, I once attended a seminar by a software vendor where the speaker referred to Ruby on Rails developers as hobgoblins because of their unswerving reliance on programming conventions. Yet those who understand the Ruby on Rails framework know that it is loaded with touchstones that lead us to exhibit good and helpful behaviors most of the time. The seminar speaker’s broad derision of the Rails framework was based either on his misunderstanding or on an intent to misdirect the hapless audience for some commercial gain. In his famous essay, Emerson singles out only those friendly yet troublesome creatures of habit that lead to folly as the ones to be categorically avoided.

“One man’s justice is another’s injustice; one man’s beauty another’s ugliness; one man’s wisdom another’s folly.” — Ralph Waldo Emerson

Yet, it is true now and again that the conventions expressed in Rails, Azure Mobile Services, Django, CakePHP and many other frameworks can lead to unfortunate consequences, whether through the tools’ misapplication or through circumstances that simply could not be foreseen by the frameworks’ designers. Nowhere is this more true than in the area of data access. A data pattern that the framework designer considers beautiful in many situations may repulse some database administrators in practice. Application frameworks are often developed and sold for so-called greenfield solutions, those not naturally constrained by prior design decisions in the database and elsewhere. However, many real-world implementations of application frameworks are of the brownfield variety, mired in the muck and the gnarled, organic growth of the thousands of messy technical decisions that came before. In environments like that, the framework designer’s attempt to enforce one kind of wisdom may prove to be quite foolish.

Primary keys are one of the concerns that application frameworks tend to be opinionated about. Ruby on Rails, Microsoft Azure Mobile Services and Django all name their surrogate primary keys id by default, meaning identifier or identity. In fact, the word identity is based on the Latin word id, which means it or that one. So in these frameworks, when you use the primary key to identify a specific record, you’re sort of saying “I mean that one.”

Database developers and administrators often argue that relational databases aren’t so-called object databases and that naming primary keys the same for all tables leads to confusion and errors in scripting. It’s true that when you read the SQL code in a database that uses id for all the primary key names, it can be a bit confusing. Developers must typically use longer, more meaningful table aliases in their queries to make them understood. Ironically, when the application frameworks that desire uniformity in primary key names generate database queries dynamically, they often emit short table aliases or ones that have little or no relationship to the names of the tables they represent. Have you ever tried to analyze a complex query at runtime that has been written by an Object-Relational Mapping (ORM) tool? It can be positively maddening precisely because the table aliases typically bear no resemblance to the names of the tables they express.

Another problem with having all the primary keys named the same is that it sometimes inhibits other kinds of useful conventions. For example, MySQL supports a highly expressive feature in its JOIN syntax that allows you to write code like this:
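The original query isn’t preserved in this copy; the USING clause it demonstrates looks like this, with illustrative table and column names and OrderId serving as both the primary key and the foreign key:

```sql
SELECT o.OrderDate, li.ProductName, li.Quantity
FROM `Order` AS o
JOIN LineItem AS li USING (OrderId);
```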


In this case, because the Order table’s primary key is named the same as the LineItem table’s foreign key to orders, the USING predicate makes it really simple to connect the two tables. One has to admit that’s a very natural-feeling expression. The aforementioned application frameworks’ fondness for naming all primary keys id makes this sort of practice impossible.

Now that I’ve spent some time besmirching the popular application frameworks for the way they name primary keys, let me defend them a bit. As I said in the beginning, when it comes to frameworks, their opinions are oftentimes more about consistency than objective or even circumstantial correctness. For application developers working in languages like Ruby or C#, having all the primary keys named similarly gives the database a more object-oriented feel. When data models have members that are named consistently from one object to the next, it feels as though the backing database is somewhat object-oriented, with some sort of base table that has common elements in it. If such conventions are reliable, all sorts of time-saving and confusion-banishing practices can be established.

Having done a lot of data architecture work in my career and an equal amount of application development work, my opinion is that naming all database primary keys the same has more benefits than drawbacks across the ecosystem. My opinion is based on the belief that application developers tend to make more mistakes in their interpretations of data than database people do. I believe this is true because as stewards of information, database developers and administrators live and breathe data as their core job function while Ruby and C# developers use data as just one of many facets that they manage in building applications. Of course, this is the sort of argument where everyone is correct and no one is. So I’ll not try to claim that my opinion is authoritative. I’m interested in hearing your thoughts on the subject.

Invoking Stored Procedures from Azure Mobile Services

One of the questions I’m often asked is whether it’s possible to call SQL stored procedures from Azure Mobile Services. The answer is yes and it’s probably easier than you think. In case you don’t know, Azure Mobile Services is a way to very simply expose an HTTP service over data stored in an Azure SQL database. By default, Azure Mobile Services exposes the SQL tables directly as resources. So the HTTP methods GET, POST, PUT and DELETE will essentially be mapped to SQL operations on the underlying tables.

While this simple mapping mechanism is good for resource-oriented access to the tables, the logic needed to produce a usable Web API is often more complex than that. Stored procedures can provide an interesting abstraction layer that allows us to use the efficiency of the SQL Server query engine to reduce round trips to and from Internet clients, for example. Stored procedures might also be used to hide normalization peculiarities from clients or to apply advanced parameter-handling logic. Whatever the case may be, it’s helpful from time to time to be able to invoke stored procedures from the HTTP API that Azure Mobile Services provides.

Let’s start by assuming that an Azure Mobile Service exists with some data that we would like to expose via a stored procedure. For the purposes of this example, my service is called MobileWeatherAlert and is backed by an Azure SQL Database named [MobileWeatherAlert_db]. It’s really helpful that Azure Mobile Services uses schema separation in the underlying database to manage all of its data. That schema separation allows us to expose many separate middle-tier services from one common database if needed. So, in my weather database, there’s a schema called [MobileWeatherAlert] corresponding to the name of the service that it supports. In this example, that schema contains a table called [Observation] which is used to collect weather data by [City]. Figure 1 shows a very simple stored procedure called [GetObservationsForCity] that I’d like to be able to call from the service API.

Figure 1 – A simple stored procedure to fetch weather observations for a given city.
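The procedure from Figure 1 isn’t reproduced here, but based on the description above, a minimal sketch might look like the following. The column names in [Observation] beyond [City] are assumptions for illustration, not the article’s actual schema:

```sql
-- A sketch of [GetObservationsForCity]; [ObservedAt] and [Temperature]
-- are hypothetical column names standing in for the real ones.
CREATE PROCEDURE [MobileWeatherAlert].[GetObservationsForCity]
    @City NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT [City], [ObservedAt], [Temperature]
    FROM [MobileWeatherAlert].[Observation]
    WHERE [City] = @City;
END
```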

There are a number of places where this procedure might be invoked. For this example, I’ll implement a custom API in the mobile service called observation. Figure 2 shows the dialog in the Azure management console where the custom API will be created.

Figure 2 – Creating the custom observation API for the MobileWeatherAlert service.

For this simple example, I’ll only implement the HTTP GET method in the API to invoke the stored procedure. For simplicity of the example, I’ll open up access to everyone to avoid having to pass any sort of credentials. Now I can add a bit of JavaScript to the API to make the stored procedure call. Figure 3 demonstrates adding that JavaScript to the API via the Azure management console.

Figure 3 – The JavaScript to invoke the GetObservationsForCity stored procedure with a URL-sourced city parameter.

Lines 1 through 9 in the script encompass the get function that will be invoked when the HTTP GET method is used to call the service. The parameters passed to the JavaScript function are the request and response objects. From the request object, line 2 shows how to obtain a reference to the mssql object which exposes a query function for making calls into the database. Line 3 demonstrates how to call the query function to execute the [GetObservationsForCity] stored procedure, passing a single parameter for the City by which to filter. It’s important to note here that the schema in which the stored procedure resides is not named in the EXEC call. This is counter-intuitive, in my opinion, and is likely to trip up novices as they experiment with this functionality. Since we are invoking the GET method for the MobileWeatherAlert service, there’s an implicit assumption used in the preparation of the SQL statement that objects will reside in a similarly-named database schema.

Notice also on Line 3 that the request object passed into the JavaScript function exposes a query property that conveniently contains an object named city parsed directly from the URL. Figure 4 shows how that URL might be passed from Postman, a really excellent Google Chrome plug-in that allows the invocation of nearly any sort of HTTP-oriented web service or API.

Figure 4 – Calling the new API via PostMan to get weather observations for the city of Richmond.

Finally, lines 4 through 6 of the JavaScript method contain the success function that processes the results of the SQL query, logging them and returning them to the caller with an HTTP 200 (OK) response. I’ve included a call to the console.log() function to show how easy it is to log just about anything when you’re debugging your JavaScript code in Azure Mobile Services. After invoking an API or custom resource method that logs something, check out the logs tab of the mobile service in the management console to see what got saved. Of course, you’ll want to do minimal logging in production but while you’re testing and debugging, the log is a valuable resource.

In studying the URL and its output in Figure 4, remember that the JavaScript for the observation API didn’t have to do any special parsing of the row set returned by SQL Server to produce this result. Simply returning that data from SQL Server caused the API to emit JavaScript Object Notation (JSON) which has arguably become the lingua franca of the Internet for expressing data.

In closing, I’ll share a couple of thoughts. If you’re interested in building a simple query interface on top of a mobile service, you don’t have to use stored procedures as shown here. Azure Mobile Services implements fairly rich OData support directly on table resources. With OData, filtering, sorting and pagination of SQL data are built in, so to speak. Also, the web way of doing services (sometimes called RESTful, based on Dr. Roy Fielding’s dissertation and the HTTP standards that flowed from it) assumes that we’ll use HTTP in the way it was intended: accessing and linking resources at a more basic level, using the HTTP methods GET, POST, PUT, and DELETE as a complete, fully-functional language for accessing those resources. Database people inherently understand and respect this access pattern better than many programmers working in traditional programming languages like C# and Java. After all, we’re accustomed to using four basic methods to manipulate data in our databases: SELECT, INSERT, UPDATE, and DELETE. Yet, as database people, we also know that giving software developers strict table-level access can cause all sorts of performance problems. For those situations, where you know that some complex database operation could be performed much more efficiently with a bit of T-SQL code, a stored procedure or a view may be just the prescription your developers need. Hopefully, this article has helped you understand how to invoke programmatic resources in a SQL Azure database and perhaps it will help you along the way to making the correct architectural choices in the design of your modern, data-driven web applications.
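As a quick illustration of that built-in OData support, a filtered, paginated query against the example service’s table endpoint needs no custom code at all. A request like this would do it directly (the host name follows the standard azure-mobile.net pattern and the city column name is assumed):

```
GET https://mobileweatheralert.azure-mobile.net/tables/Observation?$filter=(city eq 'Richmond')&$top=10
```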