Posts by Samuel Kastberg

Integration consultant working at Solidify Sweden. My opinions only, and not necessarily those of my employer.

Logic Apps Standard, Managed connections in Visual Studio Code

I gave a talk at Integrate 2022 in London about the lifecycle of Logic Apps Standard, and I thought it would be a good idea to write down some key takeaways as blog posts. Today (June 2022), handling managed connections while developing Logic Apps Standard in Visual Studio Code is not straightforward. The Visual Studio Code extension for Logic Apps Standard expects you to create the connections from within the editor. This is not ideal, as we increasingly strive to create our Azure resources with code (ARM | Bicep | Terraform), and you cannot use existing connections. Another problem is that when the connection is created, a token (connectionKey) is added to the local.settings.json file and is valid for only 7 days. If you want to work with a Logic App for longer than 7 days, or have a peer continue the work, the connections need to be recreated. The main idea to solve these problems is to create managed connections with Bicep and save the connectionKeys in a Key Vault. Once the connectionKeys are saved in Key Vault, your app settings can reference the secrets there. The image below shows the steps and references.

Overview

Saving the connection key

I prefer to create my resources with Bicep; that said, what I describe here can also be achieved with ARM and most likely Terraform (not tested). To create a connection resource you use the Microsoft.Web/connections resource type. I will not dive into creating the connections here; the documentation for that can be found here.

Note: I have written a script to help you generate Bicep for the connections. It's not bulletproof, but it saves time getting the main parts in place. You can find the script here.

One of the things the script does is add an additional section to the generated Bicep that saves the connectionKey to Key Vault if it is a lab or dev environment. Below you can see the code that gets and saves the connectionKey. Each time you deploy, a new version of the secret is created with a new connectionKey, valid for the number of days you set in the validityTimeSpan variable.


// Handle connectionKey
param baseTime string = utcNow('u')

var validityTimeSpan = {
  validityTimeSpan: '30'
}

var validTo = dateTimeAdd(baseTime, 'P${validityTimeSpan.validityTimeSpan}D')

var key = environment == 'lab' || environment == 'dev' ? connection_resource.listConnectionKeys('2018-07-01-preview', validityTimeSpan).connectionKey : 'Skipped'

resource kv 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = {
  name: kv_name
}

resource kvConnectionKey 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = if (environment == 'lab' || environment == 'dev') {
  parent: kv
  name: '${DisplayName}-connectionKey'
  properties: {
    value: key
    attributes: {
      exp: dateTimeToEpoch(validTo)
    }
  }
}
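If you ever need a fresh connectionKey outside a deployment, for example to extend access without redeploying, the same listConnectionKeys call can be made from Az PowerShell. A minimal sketch, with placeholder subscription, resource group and connection names:

# Sketch: request a new 30-day connectionKey for an existing managed connection
# (subscription, resource group and connection name are placeholders)
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/connections/bingmaps/listConnectionKeys?api-version=2018-07-01-preview"
$response = Invoke-AzRestMethod -Method POST -Path $path -Payload '{ "validityTimeSpan": "30" }'
($response.Content | ConvertFrom-Json).connectionKey

From there you could, for example, update the secret in Key Vault with Set-AzKeyVaultSecret if you don't want to run a full deployment.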

Using existing connections in Visual Studio Code

To use existing connections in Visual Studio Code you must write the managedApiConnections element in the connections.json file yourself. As you can see in the example below, the parameter element references an application setting.


{
  "managedApiConnections": {
    "bingmaps": {
      "api": {
        "id": "/subscriptions/---your subscription---/providers/Microsoft.Web/locations/westeurope/managedApis/bingmaps"
      },
      "authentication": {
        "type": "Raw",
        "scheme": "Key",
        "parameter": "@appsetting('bingmaps-connectionKey')"
      },
      "connection": {
        "id": "/subscriptions/---your subscription---/resourcegroups/---your resource group----/providers/microsoft.web/connections/bingmaps"
      },
      "connectionRuntimeUrl": "https://3b829fe9f0975a92.14.common.logic-westeurope.azure-apihub.net/apim/bingmaps/788bd6f6ab024898a829cb4e9b463d1d"
    }
  }
}

Locally, the application setting lives in your local.settings.json file, which should not be committed to source control as it contains secrets. In the example below you can see the bingmaps-connectionKey element referring to the connectionKey we saved in Key Vault.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WORKFLOWS_TENANT_ID": "---your tenant id---",
    "WORKFLOWS_SUBSCRIPTION_ID": "---your subscription---",
    "WORKFLOWS_RESOURCE_GROUP_NAME": "---your resource group---",
    "WORKFLOWS_LOCATION_NAME": "---your region---",
    "WORKFLOWS_MANAGEMENT_BASE_URI": "https://management.azure.com/",

    "bingmaps-connectionKey": "@Microsoft.KeyVault(VaultName=---your key vault---;SecretName=bingmaps-connectionKey)"
  }
}

With these steps taken you should be able to use the existing connection.
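If you want to double-check the stored key, or paste the raw value into local.settings.json instead of using a Key Vault reference, you can read the secret with Az PowerShell. A small sketch, with the vault name as a placeholder and the secret name from the example above:

# Sketch: read the stored connectionKey (requires a recent Az.KeyVault for -AsPlainText)
Get-AzKeyVaultSecret -VaultName '<your-key-vault>' -Name 'bingmaps-connectionKey' -AsPlainText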

Helper script

I have written a script to extract all connections in a resource group; you can find it here. Get-UpdatedManagedConnectionFiles.ps1 generates files containing connection information, based on the connections in a provided resource group, for use in Visual Studio Code.

It assumes you have stored the connectionKeys in Key Vault.

The script calls Generate-ConnectionsRaw.ps1 and Generate-Connections.ps1 and saves the files in a folder you provide. See the table below for details on the saved files.

File Description
.connections.az.json Shows how the managed connections should look in the deployment.
.connections.code.json Shows how the managed connections should look in Visual Studio Code. Copy the contents of the managedApiConnections element to your connections.json.
.connectionKeys.txt Lines you can use in your local.settings.json to match the connection information created in .connections.code.json. Here the raw connectionKeys are used, just as when the connections are created the normal way in VS Code.
.KvReference.txt Lines you can use in your local.settings.json to match the connection information created in .connections.code.json. Here Key Vault references are used instead, which is the better option as you don’t need to update local.settings.json when the keys are updated.
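A typical invocation could look something like the sketch below; the parameter names are hypothetical, so check the script in the repository for the actual ones.

# Hypothetical parameter names – verify against the script in the repository before running
.\Get-UpdatedManagedConnectionFiles.ps1 -ResourceGroupName 'rg-logicapps-dev' -OutputFolder 'C:\temp\connections'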

Conclusion

It is possible to work around the current limitations in handling managed connections when working with Logic Apps Standard in Visual Studio Code. I hope you find this handy.

Use MapForce transformations from Logic Apps and Azure Functions

Transformation is a key part of integration: information in one format needs to be used in another format and eventually processed in some way. While working with a client a while ago we did a successful PoC to move their integrations to Azure, and I want to share the findings regarding mapping. We needed to transform messages, and the alternatives Liquid and the Enterprise Integration Pack (EIP) did not fit well: the transformations were far too complex for Liquid, and maps with EIP did not offer a natural way to go between XML and JSON. Important for the decision was also that the customer already used MapForce from Altova in their current solution, so they already had an investment in licensing and knowledge. This post is intended to show how we solved the transformations with MapForce, not to be a detailed step-by-step description. We generated code from MapForce and performed the transformations in Azure Functions.

Example mappings

One of the things we needed in the PoC was a lookup that changes identifiers in one system to their counterparts in the other, so my examples are two flavors of lookups. All code can be found in the GitHub repository MappingFunctions. To give you a picture of the effort needed to get started: the maps in the example took me about two hours to figure out.

Steps

In short, these are the steps I took for the example.

  1. Create a MapForce project
  2. Create the transformations
  3. Generate code (C#)
  4. Create Azure Functions project
  5. Add the generated code to the Functions solution
  6. Write code to execute the mappings in the functions
  7. Call the functions from Logic Apps to test (excluded in the repository)

Simple lookup example

In the simple lookup example, the data is included in the value-map shape, so adding new values requires a new release. This can be OK when the data changes very seldom.

Simple summary map

Advanced lookup example

Sometimes data changes more often, and then we don’t want the data to be hard-coded in the map. In the advanced lookup example, the data is provided as a second input that could come from a database or a blob, making it possible to change the data without a new release. Note that one input is XML and the other is JSON.

Advanced summary map

The lookup is performed in a user function that uses the second input.

Lookup User Function

Code generation in MapForce

Code can be generated in several languages (C#, Java, XSLT, etc.); choose the one that suits your project best. As different languages have different feature sets, not all constructs work in all languages. In the PoC I wrote about we used Java, as that was the language the team preferred and used. The example project uses C# (.NET Core 3.1), with the settings in the image.

Code generation settings

The projects MapForce generated could be added to the Azure Functions solution without changes. This is good: if you need to update your transformation you can just overwrite them, and keep everything together in source control.

Highlighted projects that are generated with MapForce

The code to execute the transformation is quite straightforward. In the example the documents are small and the string variants are used; for larger documents there are variants that use streams.

    public static class SummaryAdvanced
    {
        [FunctionName("SummaryAdvanced")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("Starting SummaryAdvanced.");

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

            
            if (string.IsNullOrEmpty(requestBody))
            {
                return new BadRequestResult();
            }

            // TODO: Add more error handling
            
            // Create the mapper object
            SummaryLookupMapToCatalog_Summary_Schema mapper = new SummaryLookupMapToCatalog_Summary_Schema();
            
            // Sources 
            Altova.IO.Input booksSource = new Altova.IO.StringInput( requestBody);
            Altova.IO.Input shelfsSource = new Altova.IO.StringInput(MappingFunctions.Properties.Resources.Shelfs);
            
            // Mapping result 
            StringBuilder responseMessageBuilder = new StringBuilder();
            Altova.IO.Output Catalog_Summary_SchemaTarget = new Altova.IO.StringOutput(responseMessageBuilder);

            // Execute transformation
            mapper.Run(booksSource, shelfsSource, Catalog_Summary_SchemaTarget);

            
            return new OkObjectResult(responseMessageBuilder.ToString());
        }
    }
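To try the function locally you can run the Functions host and post a sample document to it. A sketch assuming the default local port and a sample books XML file:

# Sketch: call the locally running function; 7071 is the default Functions port and books.xml an assumed sample input
$body = Get-Content -Path '.\books.xml' -Raw
Invoke-RestMethod -Uri 'http://localhost:7071/api/SummaryAdvanced' -Method Post -Body $body -ContentType 'application/xml'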

Final thoughts

Having one tool that can assist in creating maps between different formats is valuable; it saves the time of switching tooling. I find this a good way to perform mappings, and it was a reasonable effort to get started. That said, the documentation could be better, and it can take some time to get up and running with complex transformations. I have not dived deep into all features, so see this post as a starting point. Generating code in different languages is good, but I assume a team will stick to what suits their environment best.
Note: These are my own thoughts and I don’t have any business contact with Altova. I used a 30-day trial version of MapForce that can be downloaded here.

Queueing Azure DevOps Pipelines from Logic Apps

Queueing pipeline execution in Azure DevOps is possible in several ways. A common reason to do it is to orchestrate a couple of pipelines to automate manual steps in a larger context. In this post I focus on doing it from Logic Apps Consumption using the Azure DevOps connector. Starting a pipeline with no parameters is straightforward: add the Queue a new build action, create the connection, fill in the required information and you’re ready to go.

Azure DevOps Queue a new build action in Logic Apps

If your pipeline has parameters that you would fill in when running it interactively, things get a bit more complicated and you need to understand how data flows in your pipeline. First of all, your parameters need to have default values; without defaults your pipeline fails.

Example of parameters when you run the pipeline interactively.

From your Logic App you can send a JSON dictionary with the values you want to use in the pipeline. With the example in the image, three values become available in your pipeline using the $(keyName) syntax.

Input JSON from the Logic App.
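To make the shape of that dictionary concrete, here is a sketch that queues the same kind of build from a script using the Azure DevOps REST API; the parameters property is exactly such a JSON dictionary, serialized as a string. Organization, project, definition id, PAT and parameter names are all placeholders.

# Sketch: queue a build with a parameters dictionary via the Azure DevOps REST API
# (organization, project, definition id, PAT and parameter names are placeholders)
$pat     = '<personal access token>'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body    = @{
    definition = @{ id = 42 }
    parameters = (@{ environment = 'test'; releaseTag = '1.0.42' } | ConvertTo-Json -Compress)
} | ConvertTo-Json
Invoke-RestMethod -Uri 'https://dev.azure.com/<organization>/<project>/_apis/build/builds?api-version=6.0' `
    -Method Post -Headers $headers -Body $body -ContentType 'application/json'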

That is nice, but to be able to start your pipeline both interactively and from your Logic App, you need to bind the incoming values to the parameters you have in the pipeline. You can do this in the default values of the parameters.

Parameters section in the pipeline definition.

In the rest of your pipeline, always use the normal syntax ${{parameters.%name%}}; that way the pipeline uses the right values regardless of whether you start it from your Logic App or interactively. The picture below shows how data flows from your Logic App into the parameters and is then used in the tasks.

Data flow from the JSON into the parameters, and then into the tasks.

I hope this gives you a better understanding of how you can start your pipelines with arguments and how the data flows.

Additional reading:

Azure DevOps – Connectors | Microsoft Docs

Use runtime and type-safe parameters – Azure Pipelines | Microsoft Docs

Some more on using the connector Automating Azure DevOps with Logic Apps – Simple Talk (red-gate.com)

Azure API Management validation

I had the opportunity to look at the new validation functionality in APIM. I'm summarizing my thoughts here to remember them and share with others. You can find the documentation here: https://docs.microsoft.com/en-us/azure/api-management/validation-policies

We have four ways to validate requests and responses: content, parameters, headers, and status code. Three actions can be taken: ignore, detect, or prevent. Ignore skips the validation, detect logs the validation error but does not interrupt execution, and prevent stops processing on the first error. Each validation has high-level settings that tell it what to do with specified or unspecified items.

Note: As stated in the documentation I needed to reimport my API using management API version 2021-01-01-preview or later. I did this with a PowerShell script that you can find here APIM-Examples/Validation at main · skastberg/APIM-Examples (github.com)

Content

Validates the size of a request or response and ensures that the body coming in or out follows the specification in the API description. For schema validation we’re limited to JSON payloads. The Content-Type is validated against the API specification. Size validation works with all content types, up to a 100 KiB payload size. This validation can be applied at all scopes in the inbound, outbound and on-error sections.

The policy name is validate-content; for details on usage see: https://docs.microsoft.com/en-us/azure/api-management/validation-policies#validate-content

Content validation examples

To ensure that the specified content-type is honoured, set unspecified-content-type-action to prevent; to limit the size of a request, set max-size and set size-exceeded-action to prevent.

<policies>
    <inbound>
        <base />
        <validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

APIM will return a status code 400 for any request with a body that exceeds the max-size.

Resulting Status Code 400 – max size exceeded

If the request doesn’t set the correct content-type, a status code 400 is returned. In this case the required type is application/xml but the provided type is text/xml.

Resulting Status Code 400 – unspecified content type
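To reproduce this yourself you can simply send a request with the wrong content type and inspect the response. A sketch with a placeholder APIM URL and subscription key:

# Sketch: send text/xml where the API expects application/xml – expect a 400 back (URL and key are placeholders)
try {
    Invoke-RestMethod -Uri 'https://<your-apim>.azure-api.net/<api>/<operation>' -Method Post `
        -ContentType 'text/xml' -Body '<test />' -Headers @{ 'Ocp-Apim-Subscription-Key' = '<key>' }
}
catch {
    $_.Exception.Response.StatusCode   # expect BadRequest (400) when the policy prevents the request
}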

Preventions generate exceptions that can be seen in Application Insights; you could also trace the errors. The image shows an Application Insights query with the exceptions joined to the request information.

Preventions will generate exceptions, here seen in Application Insights

In this example the policy validates size and content-type like the previous one, and in addition the content element specifies that the payload should be validated.

<policies>
    <inbound>
        <base />
        <validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
            <content type="application/json" validate-as="json" action="prevent" />
        </validate-content>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

If the payload doesn’t conform to the JSON schema, a status code 400 is returned with a descriptive message.

Resulting Status Code 400 – Invalid json

Parameters

Apart from the content (body), we receive data as parameters: headers, query, or path. This validation checks the incoming parameters against the API specification. Each parameter type has its own element in the policy; depending on your needs, one or more are used. The API specification describes how parameters are expected: their types and whether they are mandatory. This validation can be applied at all scopes in the inbound section.

The policy name is validate-parameters; for details on usage see: https://docs.microsoft.com/en-us/azure/api-management/validation-policies#validate-parameters

Parameter validation examples

Any parameters

With the following policy, which sets unspecified-parameter-action to prevent, any parameter of any kind in the request that is not in the API specification will be stopped.

<policies>
    <inbound>
        <base />
        <validate-parameters specified-parameter-action="ignore" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Below is the resulting status code 400 response from APIM. In this case it was an unspecified header, but it could be something else depending on your specification.

Resulting Status Code 400 – Unspecified header

Path

In this example an operation has a required path parameter, format, that we want to validate before sending the request to the backend. If a request with the wrong type for format is received, we get a 400 error from APIM. As we set specified-parameter-action and unspecified-parameter-action to ignore, other errors will be disregarded.

The required integer parameter in APIM
<policies>
    <inbound>
        <base />
        <validate-parameters specified-parameter-action="ignore" unspecified-parameter-action="ignore" errors-variable-name="requestParametersValidation">
            <path specified-parameter-action="detect">
                <parameter name="format" action="prevent" />
            </path>
        </validate-parameters>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

In this case the response is more generic, not specifying the incorrect parameter; that said, the exception seen in Application Insights is more specific.

Resulting Status Code 400 – with a generic error message

Headers

In this example an operation has a required header, spoken-language, that we want to validate before sending the request to the backend. The header is an enumeration, and we want to validate that the correct values are used.

The required header in APIM

To prevent the request from being forwarded if an invalid value is supplied in the spoken-language header, we can use the following policy.

<policies>
    <inbound>
        <base />
        <validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="ignore" errors-variable-name="requestParametersValidation">
            <headers specified-parameter-action="detect" unspecified-parameter-action="ignore">
                <parameter name="spoken-language" action="prevent" />
            </headers>
        </validate-parameters>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

A request with an invalid value returns status code 400.

Returned Status Code 400 telling the value is invalid for spoken-language

Query

In this example an operation has a query parameter, dayno, that we want to validate before sending the request to the backend. The parameter is an integer, and we want to validate that the correct type is used.

<policies>
    <inbound>
        <base />
        <validate-parameters specified-parameter-action="ignore" unspecified-parameter-action="ignore" errors-variable-name="requestParametersValidation">
            <query specified-parameter-action="detect" unspecified-parameter-action="ignore">
                <parameter name="dayno" action="prevent" />
            </query>
        </validate-parameters>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

A request with a non-integer value will be prevented and a status code 400 returned by APIM.

Returned Status Code 400 telling the request could not be processed

Headers

Just as we validate incoming parameters, it might be necessary to validate that our outbound data adheres to the API specification. This validation checks that the response headers are of the types specified in the API description. It can be applied at all scopes in the outbound and on-error sections.

The policy name is validate-headers; for details on usage see: https://docs.microsoft.com/en-us/azure/api-management/validation-policies#validate-headers

Example

In this example an operation has the Test-Header response header specified as an integer.

Header in the specification
Description of Test-Header

In this policy we detect specified headers but don’t act on any errors except Test-Header. Unspecified headers will be ignored.

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <validate-headers specified-header-action="detect" unspecified-header-action="ignore" errors-variable-name="responseHeadersValidation">
            <header name="Test-Header" action="prevent" />
        </validate-headers>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Returning a response with the wrong type results in a status code 400 with the generic text “The request could not be processed due to an internal error. Contact the API owner.” If you look in Application Insights the message is clearer: “header Test-Header cannot be validated. Input string ‘yourvalue’ is not a valid number. Path ”, line 1, position 3.”

Generic response with Status Code 400

Status code

An important part of our API specifications is the status codes we return. This validation checks the HTTP status codes in responses against the API schema. It can be applied at all scopes in the outbound and on-error sections.

The policy name is validate-status-code; for details on usage see: https://docs.microsoft.com/en-us/azure/api-management/validation-policies#validate-status-code

Example

In this example an operation has only status code 200 specified. If the requested session doesn’t exist, the backend returns status code 404. We want to validate all unspecified status codes except 404.

Frontend with status code 200 response specification
Only Status Code 200 is specified

To avoid the status code 502 that would be the result of validating all unspecified status codes, we add a status-code element for 404 with the action set to ignore.

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <validate-status-code unspecified-status-code-action="prevent" errors-variable-name="variable name">
            <status-code code="404" action="ignore" />
        </validate-status-code>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

The prevented response if we don’t ignore the 404 status code.

Status code 502 if an unspecified status code is returned

The exception in Application Insights

Summary

This is definitely a set of policies we can use to ensure that an API specification is honored. It will require some thinking to balance the trade-off between the added value and the performance implications of doing the validation.

Scripting Hosts, Instances and Handlers

Having a common set of hosts and handlers across different environments is quite common practice. Being able to script the setup is a great way to ensure you have the same set of hosts and settings everywhere. A good approach when you write scripts to handle hosts, instances and handlers is to make them re-runnable, only creating objects that are not already present.

Sample scripts can be found here: https://github.com/skastberg/biztalkps/tree/master/ScriptedInstallSamples

Create a host and instances

I use the PowerShell Provider that comes with BizTalk Server, which I find handy. If you prefer, you could instead use the WMI classes MSBTS_Host, MSBTS_HostInstance and MSBTS_ServerHost. The sample below uses the PowerShell Provider. Note that for hosts I do a Push-Location to “BizTalk:\Platform Settings\Hosts\”. To create objects with the provider, navigating to the right location is a must before calling New-Item.

To create a host

Create-Host

To create an instance

Create-HostInstance
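The screenshots above come from those functions; a condensed sketch of the same idea follows. The parameter names are as I recall them from the BizTalk PowerShell Provider, so treat them as assumptions and check BizTalk-Common.psm1 for the tested versions.

# Sketch of host and host instance creation with the BizTalk PowerShell Provider.
# Parameter names are assumptions – see BizTalk-Common.psm1 for the tested versions.
# Load the provider first, e.g. Add-PSSnapin BizTalkFactory.PowerShell.Extensions (the name can vary by BizTalk version).

# Create an in-process host; the provider requires you to be at the right location first
Push-Location 'BizTalk:\Platform Settings\Hosts'
if (-not (Test-Path '.\MyNewHost')) {
    New-Item -Path '.\MyNewHost' -HostType:InProcess -NtGroupName:'BizTalk Application Users' -AuthTrusted:$false
}
Pop-Location

# Create an instance of the host on this server
Push-Location 'BizTalk:\Platform Settings\Host Instances'
$credentials = Get-Credential -Message 'Host instance service account'
New-Item -Path '.\MyNewHostInstance' -HostName:'MyNewHost' -RunningServer:$env:COMPUTERNAME -Credentials:$credentials
Pop-Location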

The full samples can be found in the BizTalk-Common.psm1.

Making a host handler for an adapter

To configure handlers with WMI, use the classes MSBTS_ReceiveHandler and MSBTS_SendHandler2. Doing it with the PowerShell Provider is straightforward using New-Item.

Create-Handler
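A sketch of the same thing for a send handler follows; again, the path layout and parameter names are from memory of the provider, so verify against BizTalk-Common.psm1.

# Sketch: create a FILE send handler for MyNewHost (path layout and parameter names are assumptions)
Push-Location 'BizTalk:\Platform Settings\Send Handlers'
New-Item -Path '.\FILE_MyNewHost' -AdapterName:'FILE' -HostName:'MyNewHost'
Pop-Location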

The full sample can be found in the BizTalk-Common.psm1.

Exporting and importing Hosts, instances and handlers

A lot of customers have asked for a way to “copy” their setup from, let’s say, QA to Production. One tool you could use is the BizTalk Server Migration Tool, in which you can select the parts you want to move; that said, you might want to change some settings along the way. I have written a script (Export-BizTalkHostsRemote.ps1) to export hosts, host instances and handlers to a CSV file, and another (Configure-Hosts.ps1) to import the edited output from the export. I could have used the existing settings export but wanted to be able to easily edit the file in Excel. I have intentionally left out throttling settings. I hope you find them handy; if you find any issues, let me know.

Format of the file

Column Comment
HostName Name of the host
HostType Expected values: InProcess or Isolated
GroupName Windows group that controls access to the host
AuthTrusted Authentication Trusted setting in the properties dialogue.
IsTrackingHost Allow Host Tracking setting in the properties dialogue.
Is32BitOnly 32-Bit only setting in the properties dialogue.
MessagingMaxReceiveInterval Messaging polling interval for the host
XlangMaxReceiveInterval Orchestration polling interval for the host
InstanceServer A pipe (|) separated list of instances, e.g. Server1|Server2
InstanceUser Service account used for the host instances
InstanceUserPwd In the export, the same value as InstanceUser; for the import it expects the path to a password in a KeePass database
ReceiveHandler A pipe (|) separated list of receive adapters for this host, e.g. WCF-SQL|MSMQ|FILE
SendHandler A pipe (|) separated list of send adapters for this host. A * marks this host as the default handler, e.g. WCF-SQL|*MSMQ|FILE

 

Configure BizTalk Server with Script

To configure BizTalk Server with a script you need a configuration file that describes your choices. You can export it from an existing environment, or run Configuration.exe, make your selections and export. To reuse the file you will need to edit it to match the new environment; generally, I create template files with tokens that I replace at configuration time. You can read about the configuration framework here.
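As an illustration of the token approach, a minimal sketch (token names and paths are made up for the example):

# Sketch: turn a token-based template into an environment-specific configuration file (token names are made up)
$template = Get-Content -Path '.\BizTalkConfig.template.xml' -Raw
$template -replace '__SQL_SERVER__', 'SQLCLUSTER01' `
          -replace '__SERVICE_ACCOUNT__', 'CONTOSO\svc-bts' |
    Set-Content -Path '.\BizTalkConfig.xml' -Encoding UTF8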

Overview

The configuration files differ slightly between the first server, on which you create the group, and the secondary servers that join the group. In the image below you can see that the Selected attribute differs. The same kind of change applies to SSO if you create the group on the first server.

Compare Configs

Handling secrets

In the configuration files you enter passwords for service accounts and the SSO backup. One alternative is to keep them in encrypted KeePass files or in Azure Key Vault. That said, at some point you will have secrets in clear text, so make sure you delete the configuration file when you’re done. Below is a sample function to extract the secrets from a KeePass database using the PoShKeePass module. You can find information on extracting Key Vault secrets here.

Resolve-Secrets

The execution itself is quite straightforward: start Configuration.exe with the /S parameter.

ConfigureBizTalk
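A minimal sketch of that call; the paths are placeholders and should be adjusted to your BizTalk version and folder layout.

# Sketch: run the BizTalk configuration silently with an answer file (paths are placeholders)
$configExe  = 'C:\Program Files (x86)\Microsoft BizTalk Server 2020\Configuration.exe'
$answerFile = 'C:\Install\BizTalkConfig.xml'
Start-Process -FilePath $configExe -ArgumentList "/S `"$answerFile`" /L `"C:\Install\Logs\Configuration.log`"" -Wait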

Other configurations

Something I see quite often is that customers forget to configure the Backup and DTA Purge jobs in a timely manner, which leads to unnecessary database growth. You don’t need many lines of code to configure them.

DTA-Purge
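As a hint of how little is needed, here is a sketch that just enables the two SQL Agent jobs with Invoke-Sqlcmd. The server name is a placeholder, and the job step parameters (backup location, purge window, and so on) still need to be set to your own values.

# Sketch: enable the standard Backup and DTA Purge SQL Agent jobs (server name is a placeholder)
$sqlInstance = 'SQLCLUSTER01'
Invoke-Sqlcmd -ServerInstance $sqlInstance -Database msdb -Query "EXEC dbo.sp_update_job @job_name = N'Backup BizTalk Server (BizTalkMgmtDb)', @enabled = 1"
Invoke-Sqlcmd -ServerInstance $sqlInstance -Database msdb -Query "EXEC dbo.sp_update_job @job_name = N'DTA Purge and Archive (BizTalkDTADb)', @enabled = 1"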

Other potential targets are registering the BizTalk PowerShell provider, setting up WinSCP for the SFTP adapter and installing third-party software.

Sample scripts can be found here: https://github.com/skastberg/biztalkps/tree/master/ScriptedInstallSamples

 

Installing BizTalk Server binaries with script

Feature installation

When installing BizTalk Server with a script you need to tell the setup program which features to install. You can use the /ADDLOCAL parameter with a list of features as described here, or use a configuration file with the /S parameter. I prefer the latter, exporting the features from a reference installation or a previous-version installation (compare an export from previous versions with the list below, since some features are no longer valid).

This table shows an export of the features from the MSI files; I have matched them with a configuration export from a fully installed BizTalk Server. The Feature column contains the name that must be used, and it is case sensitive.

Feature Parent Feature Description
Documentation Selecting the Documentation component installs the core documentation, tutorials, UI reference (F1 help), programmer’s reference, and usage instructions for the SDK samples and utilities.
AdditionalApps Additional Software is a set of optional components that extend the functionality of Microsoft BizTalk Server.
BAMEVENTAPI AdditionalApps Select the BAM-Eventing Support component to install necessary software for the BAM-Eventing Interceptors for Windows Workflow Foundation and Windows Communication Foundation. Selecting this component also installs the BAM Event API that is used to send events to the BAM database from custom applications.  BAM-Eventing Support is part of the Business Activity Monitoring feature in Microsoft BizTalk Server.
FBAMCLIENT AdditionalApps Selecting the BAM Client component installs the necessary client side software that allows business users to work with the Business Activity Monitoring feature of Microsoft BizTalk Server.
MQSeriesAgent AdditionalApps Selecting the MQSeries Agent component installs the necessary software that enables Microsoft BizTalk Server to send and receive messages to an MQSeries message bus.
OLAPNS AdditionalApps Selecting the BAM Alert Provider component installs the necessary software that enables Microsoft BizTalk Server to provide Business Activity Monitoring (BAM) alerts.
ProjectBuildComponent AdditionalApps Project Build Component enables building BizTalk solutions without Visual Studio.
RulesEngine AdditionalApps Selecting the Business Rules Composer and Engine component installs the necessary software to compose policies that are consumed by the Business Rules Engine. The Engine component provides a mechanism for capturing dynamically changing business policies and the ability to implement those changes quickly within and across business applications.
SSOAdmin AdditionalApps Selecting the Enterprise Single Sign-On (SSO) Administration component installs the necessary software for administering, managing, and connecting to SSO Servers.
SSOServer AdditionalApps Selecting the Enterprise Single Sign-On (SSO) Master Secret Server component installs the necessary software that enables this server to become the master secret server, store the master secret (encryption key), and generate the key when an SSO administrator requests it.To use this feature, you must also install the following: Enterprise Single Sign-On (SSO) Server.
AdminAndMonitoring Selecting the Administration Tools component installs the necessary software to administer Microsoft BizTalk Server. To use this feature, you must also install the following: Enterprise Single Sign-On (SSO) Administration.
AdminTools AdminAndMonitoring This feature contains tools to monitor, administer, and deploy onto a Microsoft BizTalk Server solution. These tools include MMC snap-ins, Health and Activity Tracking, Deployment Wizards and other tools for monitoring, administration and deployment.
BAMTools AdminTools Business Activity Monitoring administration
BizTalkAdminSnapIn AdminTools Configure and manage Microsoft BizTalk Server.
HealthActivityClient AdminTools Health and Activity Tracking Client
MonitoringAndTracking AdminTools Health Monitoring, Reporting and Tracking tools
PAM AdminTools PAM
WcfAdapterAdminTools AdminAndMonitoring Windows Communication Foundation Administration Tools
Development Selecting the Developer Tools and SDK component installs samples and utilities that enable the rapid creation of Microsoft BizTalk Server solutions. This includes: SDK samples and supporting documentation, BizTalk Explorer, schema and map designers, and Visual Studio 2015 project templates. This component requires Visual Studio 2015.To use this feature, you must also install the following: Enterprise Single Sign-On (SSO) Administration.
DeploymentWizard Development Deploy, import, export and remove BizTalk assembly
Migration Development Migration
SDK Development Provides programmatic access to Microsoft BizTalk Server
SDKScenarios SDK SDK Scenarios
Note: I’m not sure of the usage, it is not included in a configuration export but it is accepted and noted in the logs when used.
TrackingProfileEditor Development Business Activity Monitoring Tools
VSTools Development Visual Studio Tools
WCFDevTools VSTools Windows Communication Foundation Tools
BizTalkExtensions VSTools Biz Talk Extensions
AdapterImportWizard BizTalkExtensions Adapter Import Wizard
BizTalkExplorer BizTalkExtensions Manage BizTalk Configuration databases
MsEDISchemaExtension BizTalkExtensions Microsoft EDI Schema Design Tools
XMLTools BizTalkExtensions XML Tools
Designer BizTalkExtensions Orchestration and Pipeline designers
OrchestrationDesigner Designer BizTalk Orchestration Designer
PipelineDesigner Designer BizTalk Pipeline Designer
MsEDISDK Development Microsoft EDI SDK
MsEDIMigration MsEDISDK Selecting the Microsoft EDI Migration Wizard installs the necessary software that enables migration of existing EDI documents to BizTalk.
BizTalk
WMI BizTalk WMI
InfoWorkerApps  The Portal Components are a set of services used by business people to communicate, collaborate, and reach decisions enabling them to interact, configure, and monitor business processes and workflows. To use this feature, you must also install Internet Information Services (IIS).
BAMPortal InfoWorkerApps  Selecting the Business Activity Monitoring component installs the necessary software that gives business users a real-time view of their heterogeneous business processes, enabling them to make important business decisions. To use this feature, you must also install Internet Information Services (IIS).
Runtime  Selecting the Server Runtime component installs the runtime services for Microsoft BizTalk Server. These runtime services are an essential part of the BizTalk Server platform. To use this feature, you must also install the following: Enterprise Single Sign-On (SSO) Administration, Enterprise Single Sign-On (SSO) Server.
Engine Runtime  The Engine feature contains components for performing messaging, orchestration, and tracking functions. This is the core runtime component of Microsoft BizTalk Server. This option also includes Enterprise Single Sign-On components to allow encryption of configuration data.
MOT Engine  Messaging, Orchestration, and Tracking runtime components.
MSMQ Engine BizTalk Adapter for Microsoft Message Queue Service.
MsEDIAS2 Runtime  Selecting the BizTalk EDI/AS2 Runtime components installs the necessary software that enables Microsoft BizTalk Server to process documents in the Electronic Data Interchange (EDI) format.
MsEDIAS2StatusReporting MsEDIAS2 Microsoft EDI/AS2 Status Reporting
WCFAdapter Runtime  Selecting the Windows Communication Foundation Adapter component installs the necessary software that enables Microsoft BizTalk Server to integrate with Windows Communication Foundation.

FeatureXml

Example of an installation execution in PowerShell

Start-Process -FilePath $fullPathToBTS -ArgumentList "/S $fullPathToConfig /CABPATH `"$FullPathToCab`" /norestart /l $logFullname /companyname CONTOSO /username CONTOSO" -Wait

Adapter Pack installation

The Adapter Pack can also be installed silently. The process consists of several steps, since you have the SDK and the adapters, and for each of them both 32- and 64-bit versions to include. In the setups I do with customers we tweak the installation to match the features that will be used in the specific installation. A detailed description of the parameters can be found here. In the samples you will find a function that installs both the SDK and the adapters.

Example of an installation execution in PowerShell that will install WCF-SQL and Oracle DB adapters.

Start-Process -FilePath "$cmd" -ArgumentList "/qn ADDLOCAL=SqlFeature,DbFeature CEIP_OPTIN=false" -Wait

CU and Feature Packs

Keeping your systems updated is good practice, and you can install both CUs and Feature Packs silently. In the samples you will find a function that installs a CU. Scripted installation of CUs and Feature Packs is straightforward, and you can get the required parameters by running the installer with /?.

SetupParameters

Example of an installation execution in PowerShell that will install a CU or Feature pack.

Start-Process -FilePath $fullPathToCu -ArgumentList "/quiet /s /w /norestart /log $logFullname" -Wait

Good to remember

Add a check to see if your virus scanner is enabled. The installation process will try to stop WMI, which many scanners use and therefore protect from being stopped, and that will cause the installation to fail.

While doing your test installations, you will probably want to do some retries. Most features can be uninstalled so you can write another script to uninstall, then you can start over with your installation.

$installedMsiObject = Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -like "*BizTalk*" }

if ($installedMsiObject) {
    try {
        $installedMsiObject.UnInstall() | Out-Null
    }
    catch {
        Write-Error "Error occurred: $_"
    }
}

Sample scripts can be found here: https://github.com/skastberg/biztalkps/tree/master/ScriptedInstallSamples

Automating BizTalk Server installation and configuration

Installing and configuring BizTalk Server can be complex, time consuming and error prone. The complexity comes not from the process itself but from all the different components and possible configurations. My objective is to share my experiences from working with several customers, and some techniques you can use to create your perfect installation. The objective is not to show “the perfect” process; perfect for me might not be perfect for you. In this post I share an overview; later I will follow up with more details and share some of the functions I use.

Before you start

Decide what your main drivers are and let them guide you through the creation. Repetition and control are generally the drivers for automation, and the goal is typically standardized developer machines, disaster recovery preparation, test environments, or ensuring the environments are configured identically across the Dev to Production pipeline.

Decide what your baseline is and document it; think about what could change in 6 months or a year. With one customer we created a brilliant setup that started with bare Windows installations, and within 2 hours a highly available solution was in place. Discussing it a year later, we concluded that creating the LUNs and volumes might have been overdoing it, since the underlying storage would be changed. It would have been better to have that as a requirement in our baseline, or as a distinct step in the process that could easily be changed or replaced.

Consider internal organization and politics; for example, if you will never be allowed to install SQL Server or create groups, put it in your baseline document as a prerequisite.

Set a timeframe for your work; otherwise you can spend weeks searching for perfection instead of reaching your goal. If you’re too ambitious you might end up with an overly flexible process that is just a parallel to the normal one, which is not a good idea.

Document the execution process. Write down the running order of the scripts and briefly what each one does.

Windows

Generally, the Windows installation is already taken care of and I see it as part of the baseline. That said, you should ensure that the features and configurations you need are in place. You will need the Windows installation source and can use PowerShell cmdlets like Enable-WindowsOptionalFeature or Install-WindowsFeature (the latter will not work on Windows 10). I find this post good for finding features and deciding which cmdlet to use.

Sample Script to enable Windows Feature
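A minimal sketch of what such a script can look like; the feature names below are only examples, so pick the ones your baseline requires.

# Sketch: enable a set of Windows features (feature names are examples, pick the ones your baseline needs)
$features = 'IIS-WebServerRole', 'IIS-WebServer', 'IIS-ASPNET45'
foreach ($feature in $features) {
    if ((Get-WindowsOptionalFeature -Online -FeatureName $feature).State -ne 'Enabled') {
        Enable-WindowsOptionalFeature -Online -FeatureName $feature -All -NoRestart
    }
}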

BizTalk Server

Setting up the BizTalk Server product consists of two parts: installation and configuration. Installation adds the binaries to the system. Configuration creates or joins a BizTalk Group and enables/configures other features such as the Rules Engine.

When running Setup.exe with the /S switch, it uses the list of InstalledFeature elements in the configuration file you specify. The silent installation details are documented here.

When running Configuration.exe with the /S switch, it uses the Feature elements. Each Feature element represents a section in the configuration dialog box. I will look at this in more depth in another post.

Sections of the configuration file

Additional software like the Adapter Pack, CUs/Feature Packs and WinSCP (needed for the SFTP adapter) can also be installed silently. Setting up hosts and handlers can be scripted as well.

SQL Server

SQL Server can be installed silently using configuration files with the settings you need. I leave this with a pointer to the documentation.

Things I have scripted post-installation are:

  • Setting up Availability Groups and creating empty BizTalk databases with the file layout I want.
  • Setting up the primary check for the Availability Groups.
  • Configuring the Backup and DTA Purge jobs.

Wrapping up

Basically, all parts of setting up a BizTalk Server environment can be scripted; your needs and environment set the limits. I believe scripting your environment is a good way to get to know the components you’re using. I will follow up with more posts that take a hands-on approach to the different parts.

I will do a session on this matter at Integrate 2019 in London: https://www.biztalk360.com/integrate-2019/uk See you there!

Test your Liquid transformations without deployment

I work on a project doing integration with Azure Integration Services and Azure Functions. As always in integration projects, mapping is a key area. From BizTalk development I’m used to being able to test my maps without deploying, which makes it easy to develop in an iterative manner. While working with Logic Apps I started using Liquid transformations and did not find any tool to help with that. Logic Apps transforms with the DotLiquid library (C# naming conventions). With that information in hand I created a program for testing. I separated it in two parts: a library, LiquidTransformationLib.dll, and a program, LiquidTransform.exe. The library makes it easy to use in automated tests.

Parameters for LiquidTransform.exe:

Parameter Description Required
-t | --template Full path to the Liquid template to use. Yes
-c | --content Full path to the content file. Yes
-d | --destination Full path to the destination file. Will overwrite an existing file. Yes
-r | --rootelement Root element to add before rendering. For Logic Apps you will need to use content. No
-u | --rubynaming Use the Ruby naming convention. Logic Apps uses C# naming conventions, which is the default. No
-? | -h | --help Show help information. No
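A typical run could look like the line below; the file names are made up, but the parameters are the ones documented in the table above.

# Example run – file names are made up, the parameters are the documented ones
.\LiquidTransform.exe --template .\OrderToInvoice.liquid --content .\order.json --destination .\invoice.json --rootelement content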

You can download the source code from the GitHub repo https://github.com/skastberg/LiquidTransformation

If you just want the binaries: https://github.com/skastberg/LiquidTransformation/tree/master/Binaries

More information about Liquid Transformation in Logic Apps:

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-enterprise-integration-liquid-transform

https://blogs.msdn.microsoft.com/logicapps/2017/12/06/introducing-the-liquid-connector-transform-json-to-json-json-to-text-xml-to-json-and-xml-to-text/