Beware of sensitive information leakage when doing HTTP actions inside Logic Apps

Using Azure Logic Apps for any sort of workflow automation or process orchestration is quite comfortable and easy, especially since almost any HTTP-related task boils down to adding a few action steps to the execution flow.

But have you ever had a closer look at the actual HTTP request data that is finally sent over the wire by the Logic App engine in the background? You would be surprised how much additional information (i.e. metadata) is automatically included in the final HTTP request. Let’s look at an example.

I created an empty Logic App and added an HTTP action calling a test endpoint over at Pipedream. Then I invoked it manually using the Run command. Just ignore the BadRequest error response; it doesn’t matter here.

The final HTTP request sent by the Logic App engine over the wire then looks like this:

Additional HTTP headers automatically added by Logic App engine to outgoing request

As you can see, a lot of platform-internal details leak through to the receiving party, including sensitive information like

  • the IP address of the client machine from which the Logic App run was triggered
  • the Azure subscription id
  • the Logic App resource id and the resource group name it is contained in
  • the Logic App’s name as well as the name of the HTTP action that caused the request

While this kind of (unencrypted) metadata exchange might be very helpful or even required in certain Azure solution scenarios, it must be ensured that it only happens in a secure environment where every receiving party is trusted. However, this is NOT the case when HTTP requests go out to an external service operated by a 3rd party.

Conclusion

When using HTTP actions in Logic Apps, ask yourself whether the receiving party is trustworthy and whether the information leakage via the proprietary HTTP headers shown above is tolerable. Otherwise, make sure to filter the questionable X-* headers out of the resulting request, for example by using a proxy service or by specifying policies on an Azure API Management resource.
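If you route such calls through API Management, stripping those headers can be done with a simple policy. The following is a minimal sketch using the Az PowerShell module; the API id is a placeholder, and the header names listed are only examples of what the Logic App engine typically adds, so verify in your own trigger history which headers actually appear.

```powershell
# Minimal sketch (assumption: Az.ApiManagement module is installed and Connect-AzAccount has been run).
# The API id "external-service" is a placeholder for the API that proxies the 3rd-party endpoint.
$ctx = New-AzApiManagementContext -ResourceGroupName "my-rg" -ServiceName "my-apim"

# Strip the Logic App metadata headers in the inbound section, i.e. before APIM forwards
# the request to the external backend. Header names may vary - check your own requests.
$policy = @"
<policies>
  <inbound>
    <base />
    <set-header name="x-ms-workflow-name" exists-action="delete" />
    <set-header name="x-ms-workflow-run-id" exists-action="delete" />
    <set-header name="x-ms-workflow-system-id" exists-action="delete" />
    <set-header name="x-ms-workflow-operation-name" exists-action="delete" />
    <set-header name="x-ms-client-tracking-id" exists-action="delete" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@

Set-AzApiManagementPolicy -Context $ctx -ApiId "external-service" -Policy $policy
```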

Headaches with CallbackRegistration entity, user permissions and CDS triggers in Logic App/Flow

Just a reminder: when creating an automated Flow with a CDS trigger (e.g. Create or Update of a certain entity), the service principal used to authorize the API Connection in the background needs to have full privileges on the system entity callbackregistration (German: “Rückrufregistrierung“). Privilege scope User is sufficient.

If you forget this, you might run into cumbersome trouble. 😉 The most obvious sign is that your Power Automate Flow or Logic App does not trigger (anymore). In the overview blade of your Logic App resource, no invocation is listed, so you need to dig deeper via See trigger history in order to see every invocation that took place. Entries with Status = Failed and no Fired flag indicate that this was no regular invocation (from CDS userland) but one that happened internally. Peeking into the trigger output data, we would then see an error message like this when the CDS trigger creation silently failed in the background:

"Principal user (Id=b21db7e8-2a64-40ea-96d9-ef0e023bf1c4, type=8, roleCount=3, privilegeCount=745, accessMode=4), is missing prvCreateCallbackRegistration privilege (Id=a916618b-c454-45bb-ae0a-c30446362191) on OTC=301 for entity 'callbackregistration'. context.Caller=801866e1-0fd9-43f0-9259-d964984c5446"

Or this one, when the CDS trigger invocation silently fails after a matching event happened in CRM/CDS:

"Principal user (Id=b21db7e8-2a64-40ea-96d9-ef0e023bf1c4, type=8, roleCount=3, privilegeCount=744, accessMode=4), is missing prvReadCallbackRegistration privilege (Id=a916618b-c454-45bb-ae0a-c30446362191) on OTC=301 for entity 'callbackregistration'. context.Caller=801866e1-0fd9-43f0-9259-d964984c5446"

Purpose of callbackregistration entity

In version 9.0, a new event handler type called WebHooks was introduced to the Power Platform (aka Dynamics 365). WebHooks can be registered similarly to, for example, Plugins, configured to listen for certain Messages like Create or Update, and also narrowed down to certain Attributes only. But compared to Plugins (custom code that is compiled, published as an assembly to CRM and executed right in the physical context of the underlying CDS/CRM database), a WebHook is actually just a mechanism to inform a remote party (represented by a URL-addressable HTTP endpoint) that a certain event has happened. The term WebHook was not invented by Microsoft in this context; rather, it is the name of a well-known HTTP communication pattern for doing a primitive form of Inter-Process Communication (IPC) by exchanging plain HTTP requests with a well-defined payload. The WebHook pattern is quite popular nowadays, and its origins reach back 10+ years, when it was conceived as a simple, versatile and powerful mechanism to enable a connected real-time web in which different web services can communicate with each other regardless of their underlying technology stacks.
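Stripped of all platform specifics, a WebHook delivery really is just an HTTP POST with a well-defined payload sent to the registered endpoint. Purely for illustration (the URL and payload below are made up and do not reflect the actual CDS WebHook payload format):

```powershell
# Illustration only: the essence of a WebHook delivery is a plain HTTP POST to a registered endpoint.
$payload = @{ entity = "account"; message = "Create"; recordId = [guid]::NewGuid().ToString() } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://receiver.example.com/hooks/account-created" `
                  -ContentType "application/json" -Body $payload
```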

The way WebHooks have been adopted by Microsoft in the Power Platform is quite straightforward. Represented by callbackregistration records, their definition basically consists of the entity type and Message a WebHook should be registered on, the Attributes that should be monitored, the systemuser responsible for it, and what data should be included in the WebHook payload.
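To see which WebHook registrations currently exist in an environment, for example to verify that a CDS trigger actually got registered, the callbackregistration records can be queried via the Web API. A minimal sketch follows; it assumes you already hold a valid bearer token for the environment, and the selected field names are my assumption of the entity’s schema, so verify them against your org’s metadata.

```powershell
# Sketch: list existing WebHook registrations (callbackregistration records) via the CDS/Dataverse Web API.
# Assumptions: $orgUrl points to your environment and $token is a valid OAuth bearer token for it.
# Note: if the calling user lacks prvReadCallbackRegistration, this call fails with the error shown above.
$orgUrl = "https://yourorg.crm.dynamics.com"   # placeholder
$token  = "<bearer token>"                     # placeholder

$headers = @{
    "Authorization"    = "Bearer $token"
    "OData-MaxVersion" = "4.0"
    "OData-Version"    = "4.0"
    "Accept"           = "application/json"
}

# The $select field names are assumptions based on the callbackregistration schema.
$uri = "$orgUrl/api/data/v9.0/callbackregistrations?`$select=name,entityname,message,filteringattributes"
$result = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

$result.value | Format-Table name, entityname, message, filteringattributes
```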

Summary

In our original case, the problem arose from using a Service Principal for the API Connection behind the CDS trigger of our Logic App or Flow that had insufficient privileges to Read the callbackregistration entity every time an entity record was Created or Updated for which a WebHook had been defined (this happens automatically in the background when a CDS trigger is created in a Logic App/Flow; likewise, the CDS trigger registration will silently fail when the Service Principal lacks the Create privilege on the callbackregistration entity).

This small but important detail is unfortunately mentioned only once, in a side sentence of a short paragraph, across the whole Microsoft documentation, and even asking Google doesn’t yield any further details. Especially when crafting Security Roles with only the really necessary privileges, such a tiny detail can easily be forgotten.

Thoughts about automated API Connection resource deployments as part of Logic App ARM templates

Recently I re-evaluated the current possibilities when tasked with fully automated ARM template deployments of Logic Apps that contain API Connections. TL;DR: as of today there is still no sophisticated method for automating the authorization process of OAuth-secured API Connections. Microsoft still recommends the same custom PowerShell approach as back in 2018, when I first evaluated this deployment aspect. Of course this should be understood as general inspiration rather than the ultimate solution. But in the year 2020 I would expect more sophisticated approaches to exist, ideally integrated into Azure DevOps release pipelines, similar to the Azure Key Vault release tasks or the neat parameter mapping features of the Azure Resource Group Deployment task.

You might ask why this is relevant in the days of Service Principal access methods (aka App Users, AAD App Registrations and so forth), Azure managed resource identities, or even simpler static API keys. Short answer: not all API Connections support these advanced authentication mechanisms yet.

How to authorize OAuth connections (as of today)

Quick recap: exemplary OAuth 2.0 authentication flow (Source)

In essence I see three choices:

  1. implement a semi-automated OAuth login process, e.g. using PowerShell, which needs manual intervention for doing the actual “login” (example; a condensed sketch follows below) → moderate initial effort; can be re-used
  2. implement a fully automated login process that e.g. follows the common OAuth 2.0 authentication flow. Login credentials could, for example, be stored in and retrieved from Azure Key Vault in order to avoid security leakage → rather complex and a relatively high initial investment, but can be re-used
  3. just open the affected API Connection resources after they have been deployed and authorize them manually → low effort

Keep in mind that #1 and #2 also need to be integrated into the automated deployment process with, for example, Azure DevOps releases. For #1 this imposes additional complexity, since the OAuth login dialog must somehow be invoked on the user’s client machine in order for them to take action. And strictly speaking, #1 and #3 contradict the philosophy of truly automated releases (CI/CD) since they involve manual intervention.
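For choice #1, a condensed sketch of what the Microsoft sample script essentially does is shown below. It assumes the Az module and an already deployed (but not yet authorized) API Connection resource; the original sample additionally opens an embedded browser window to capture the consent code, which is replaced here by a manual paste, and the redirect URL as well as the exact response shape should be double-checked against the sample.

```powershell
# Condensed sketch of the semi-automated consent flow (choice #1), modelled after Microsoft's sample script.
# Assumptions: Az module installed, Connect-AzAccount done, API Connection already deployed via ARM template.
$connectionId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Web/connections/<name>"

# Ask the connection resource for its OAuth consent link.
$consentParams = @{
    "parameters" = @( @{ "parameterName" = "token"; "redirectUrl" = "https://localhost" } )  # redirect URL: placeholder
}
$consent = Invoke-AzResourceAction -ResourceId $connectionId -Action "listConsentLinks" `
                                   -Parameters $consentParams -ApiVersion "2016-06-01" -Force

# Manual step: open the returned link in a browser, sign in with the account that should own the connection,
# and grab the consent code from the redirect (the original sample automates this via an embedded browser).
Write-Output $consent.value[0].link
$code = Read-Host "Paste the consent code returned after login"

# Confirm the consent code so the connection switches to the Connected state.
Invoke-AzResourceAction -ResourceId $connectionId -Action "confirmConsentCode" `
                        -Parameters @{ "code" = $code } -ApiVersion "2016-06-01" -Force
```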

My comment

In the end, one needs to weigh the costs/savings of following a strict philosophy (automation; CI/CD) against a rather pragmatic approach, where both sides share the same goal: streamlining the whole release process and making it as efficient as possible. For me, choice #2 would represent the most elegant solution, but it will only amortize in the mid to long term and is best suited for very mature customer/project contexts, where this very specific optimization would imho pay off the most. Choice #3, on the other hand, is the most pragmatic and also cheapest (in many ways) way to accomplish the same goal.

Logic App Finding: Multiple run-after configurations when joining parallel branches

Here is my finding of the day: when working with parallel branches, after the execution paths have been joined again, the next action step can have a different run-after configuration for each of the previously joined parallel branches! This ability is hidden behind some bad UI/UX, so I discovered it only today and rather “accidentally”. It might not be needed very often, but when it is, it can help keep the Logic App execution paths simpler, since no bulky if-else branching and redundant actions are needed.

Click on the previous action first, in order to configure individual state constraints in the run-after settings

Since Logic Apps and Power Automate (aka Flow) practically share the same platform foundation, different run-after configurations can also be created in Flows.
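In the underlying workflow definition (code view), this simply shows up as multiple entries in the joining action’s runAfter object, one per preceding branch, each with its own list of accepted statuses. A small illustrative fragment with placeholder action names:

```json
"Join_Branches": {
  "type": "Compose",
  "inputs": "branches joined",
  "runAfter": {
    "Last_Action_Of_Branch_A": [ "Succeeded" ],
    "Last_Action_Of_Branch_B": [ "Failed", "TimedOut" ]
  }
}
```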