Azure Archives - Aric Levin's Digital Transformation Blog
http://aric.isite.dev - Microsoft Dynamics 365, Power Platform and Azure

Co-Authoring in Canvas Apps
http://aric.isite.dev/azure/post/coauthoring-canvas-apps-azure-devops/
Sun, 06 Feb 2022 06:55:00 +0000

In late November of last year, Microsoft released an experimental feature in Canvas apps that allows co-authoring, so that multiple users can work on the same app at the same time. Prior to this feature, if a user was working in Canvas Studio, any other user who tried to open the app in Canvas Studio would be blocked with a message stating that the app is locked by the previous user.

In this post, I will demonstrate how to enable co-authoring and show how collaboration works when multiple users are working on the app at the same time.

There are a couple of prerequisites to implement this: we need to connect Power Apps to a Git repository and share the app with the other users who will be co-authoring. Before we can connect to a Git repository, we first need to configure one. The currently supported providers are GitHub and Azure DevOps; in this post, we will use Azure DevOps to demonstrate how this works.

Configure Azure DevOps:

If you have worked with Azure DevOps before, this should be pretty simple. You will need to create a new repository to use for the Canvas App and generate a Personal Access Token.

Let’s start by configuring Azure DevOps. Navigate to dev.azure.com, open your organization, and then open the project where you want to create the repo. If you don’t have a project yet, you can create a new one. The URL on your screen should look like this:

https://dev.azure.com/ORG_NAME/PROJECT_NAME, where ORG_NAME is the name of your Azure DevOps organization, and PROJECT_NAME is the name of the project you are working on. The screenshot below shows what you should be seeing on the screen:

Canvas Apps Co-Authoring - Azure Dev Ops Configuration (Project View)

Click on Repos to start creating a new repo. Click on the plus sign next to the project name in the left navigation and select New Repository. This will open a panel on the right-hand side. Enter the name of the repo, make sure the Add a README checkbox is checked, and click the Create button. The screenshot below shows these steps.

Canvas Apps Co-Authoring - Azure Dev Ops Configuration (Create a Repo)

Once the repo has been created, copy its URL from the address bar; you will need it when connecting the app.

For instructions on generating a personal access token, you can read the following post from a couple of years ago:

/Azure/Post/calling-azure-devops-pipeline-flow-canvas

You will not need to use Git Bash to convert the token to Base64 for this implementation.

Next, let’s go back to Power Apps. If you have not already done so, share the app with the other users who will be co-authoring, and make sure it is shared with them as co-owners and not just as users.

Canvas Apps Co-Authoring - Power Apps - Share Ownership

Now, as the owning user, open the app in edit mode. Click on Settings from the app's main screen, or go to the File menu and click Settings from there. Within Settings, click on Upcoming features, select Experimental, and look for the Show the Git version control setting. Turn this on.

Canvas Apps Co-Authoring - Power Apps - Show Git Version Control Setting

Once this is turned on, a Git version control entry appears in the left navigation of the Settings popup. Click on it, which will show a window to connect to Git version control.

Canvas Apps Co-Authoring - Power Apps - Connect to Git Version Control

Click on the Connect button, and then enter the Git repository URL, branch and directory name.

Canvas Apps Co-Authoring - Power Apps - Git Version Control Repo Settings
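
For reference, the values in this dialog typically look something like the following (the repo and directory names here are just placeholders, not the actual values from my environment):

# Git version control connection settings (illustrative values only)
#   Git repository URL: https://dev.azure.com/ORG_NAME/PROJECT_NAME/_git/REPO_NAME
#   Branch:             main
#   Directory name:     CanvasAppSource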

Next you will get a repo authentication window requesting your username and password. Enter your user account email in the username field, and the Personal Access Token that you created and copied as the password. If you get a message that the directory was not found, just let the system create the directory for you.

Once you have finished configuring this, you will see that an additional button has been added to the Canvas app command bar, shown in the image below. This is the Commit changes and check for updates button.

Canvas Apps Co-Authoring - Power Apps - Toolbar Changes

Now that we have enabled Git version control in our app, we can try out co-authoring. You will probably have to log out and log back into the app for this to work.

The video below will show you the app open in two separate windows. We will make a change to the app in one of the windows, and click on the Commit changes and check for updates button. Once done, we will click on the same button on the other window (logged in as a different user), and you will see how the changes are updated in that window as well.
NOTE: If the video does not show fully in your browser, right-click the video and select Open in new tab.

This is one of the long-awaited features. Although it is still experimental and I am sure it will improve over time, it is a great start for co-authoring, and I know it is something that I will be using shortly.

Moving to Azure Synapse
http://aric.isite.dev/sql/post/move-to-azure-synapse/
Tue, 06 Jul 2021 02:14:00 +0000

I have been working with a client for the past year or so who has been using the Data Export Service to write data from their Dataverse environment to an Azure-hosted SQL Server.

The Data Export Service, for anyone who is not aware, is a Microsoft solution built on the Data Export Framework that moves data from Microsoft Dynamics 365/Dataverse to an Azure SQL Server (whether running on an Azure VM, Azure SQL Database or an Azure SQL Managed Instance). This solution works without requiring any custom development or SQL Server Integration Services.

The Data Export Service is a good solution, but it comes with its own drawbacks, especially for large customers. Those drawbacks include occasional sync failures and delays, complexity in troubleshooting, and the inability to copy the configuration from one environment to the next, as would be expected in an ALM process.

Last year, Microsoft introduced an alternative to the Data Export Service. This new alternative, called Azure Synapse Link (known at the time as Export to Data Lake), makes it easy to export data from your Microsoft Dataverse environment to Azure Data Lake Storage.

There is no deprecation notice for the Data Export Service, and there are a lot of Microsoft customers that are still using it, but it seems like the direction Microsoft is pushing customers toward is Azure Synapse Link (formerly Export to Data Lake) for syncing data between their Microsoft Dataverse environment and an Azure data source.

Azure Data Lake Storage (Generation 2) is an Azure offering that provides the ability to store big data for analytics purposes. It is built on Azure Blob Storage, making it cost effective and robust. The next few steps will show you how to configure the Azure Storage V2 account that is required for setting up the link between Microsoft Dataverse and Azure Synapse Link.

The first step is to log in to the Azure Portal by navigating to https://portal.azure.com, then click on the Create a resource link and look for Storage account. Click on the Create option for Storage account.

Create Resource - Azure Storage Account (Data Lake Gen 2)

Select a subscription and resource group, enter a storage account name, select the region, and leave the rest of the settings as is. The screenshot below shows the first tab of the storage account creation.

Create Resource - Azure Storage Account (Data Lake Gen 2) - Basics tab

Do not click on the Review and create button at the bottom of the screen, but rather the Next: Advanced button to move to the advanced tab.

You will notice in the Advanced tab that there is a section for Data Lake Storage Gen 2. Check the box to enable the hierarchical namespace. The image below shows this tab.

Create Resource - Azure Storage Account (Data Lake Gen 2) - Advanced tab
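
If you prefer scripting over the portal, a rough Azure CLI equivalent of these steps might look like the sketch below. The account and resource group names are placeholders, and it assumes you are already signed in with az login.

# Create a StorageV2 account with the hierarchical namespace enabled (Data Lake Storage Gen2)
az storage account create \
  --name mydataversestorage \
  --resource-group my-resource-group \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2 \
  --enable-hierarchical-namespace true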

You can skip the rest of the tabs and click on Review + create to finalize the creation of the Storage account. Once the Azure Storage account is configured, we can go ahead and start configuring the Azure Synapse Link in our Dataverse environment.

Create Resource - Azure Storage Account (Data Lake Gen 2) - Deployment Complete

Navigate back to the maker portal by going to https://make.powerapps.com. Within your environment, expand the Data menu and click on the Azure Synapse Link menu item. This will open the New link to data lake panel, where you can specify the subscription, resource group and storage account that you created for use with Azure Synapse Link. The image below shows this step.

Dataverse - Link to Data Link - Select Storage Account

You will notice that there is also an option to Connect to your Azure Synapse Analytics workspace, which is currently in preview. This allows us to bring the Dataverse data into Azure Synapse with just a few clicks, visualize the data within the Azure Synapse Analytics workspace, and then rapidly start processing the data to discover insights using features like serverless data lake exploration, code-free data integration, data flows for ETL pipelines and optimized Apache Spark for big data analytics.

Let’s go back to the Azure portal and create the Azure Synapse Analytics workspace so that we can configure both at the same time. In the Azure portal, click on the Create a resource link again, and this time search for Azure Synapse Analytics.

Create Resource - Azure Synapse Analytics

This will open the Create Synapse workspace page on the Basics tab. Select your subscription and Resource group. You will also have to enter a name for the managed resource group. A managed resource group is a container that holds ancillary resources created by Azure Synapse Analytics for your workspace. By default, a managed resource group is created for you when your workspace is created.

Enter the name of the workspace and the region. Then you will have to enter the name of the Data Lake Storage Gen2 account that we previously created and the name of the file system. If you do not have a file system yet, click on the Create new link under it and provide a name; this will create the file system for you.

Create Azure Synapse Analytics - Basics tab

Do not click on Review + create yet, but on the Next: Security button. You will have to provide a password for your SQL Server admin login.

Create Azure Synapse Analytics - Security tab

You can now click on Review + create, and then the Create button, to create the Synapse workspace. This process takes a little longer than the creation of the storage account, as more resources are being created. Once deployment is done, you can go to your new Azure Synapse Analytics resource by clicking on the Go to resource group button.

Create Azure Synapse Analytics - Deployment Complete
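
As with the storage account, the workspace creation can also be scripted. The sketch below is a rough Azure CLI equivalent of the steps above; all names and the password are placeholders, and it assumes a reasonably recent Azure CLI version with the Synapse commands available.

# Create the Synapse workspace on top of the Data Lake Storage Gen2 account and file system created earlier
az synapse workspace create \
  --name my-synapse-workspace \
  --resource-group my-resource-group \
  --storage-account mydataversestorage \
  --file-system dataverse \
  --sql-admin-login-user sqladminuser \
  --sql-admin-login-password '<strong-password>' \
  --location eastus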

Let’s go back to the maker portal and select the Azure Synapse Link option again, but this time we will also provide the information for the workspace.

Check the Connect to your Azure Synapse Analytics workspace (preview) checkbox, and enter your subscription, resource group, workspace name and storage account. For the resource group, make sure that you do not select the managed resource group, as that resource group does not have the workspace and the storage account associated with it.

Dataverse - Link to Data Link - Select Storage Account (with Workspace)

Once you click on the Next button, you can select the tables that you want to sync with Synapse Link. For this purpose we will only select a few tables (account and contact), but you can select as many tables as you want.

Dataverse - Link to Data Link - Add Tables

Finally, click on the Save button. This will create the connections and allow you to start syncing and reviewing data in Synapse Link and Synapse Analytics. Once completed, the page will refresh and you will see the Linked data lake showing the storage account that you previously created.

Now let’s go to Azure and see what happened when we clicked on the Save button. In the list of containers you will see various containers, one of them named after your Dataverse organization. Clicking on that container will show you a folder for each of the tables that you selected for synchronization, as well as a model.json file which contains the schema for the entities that you selected.

If you drill down into the selected entities, you will find a single CSV file containing the initial dump from Dataverse. You can view and edit the file directly in Azure Blob Storage or download it and open it in Excel. The data will contain all the fields that are part of the entity.

Azure Storage Account - Initial Sync results

Once we add an additional record, we will notice that a new file gets created corresponding to the month of creation. If the record already exists in our Azure Blob environment, a new file will not be created; instead, the existing record will be modified.

Azure Storage Account - Sync Results after new record created

When modifying existing records, the changed record gets updated in the corresponding file where it currently resides. In our case, depending on the record, the changes will be in either the 2021-06 or 2021-07 file.

Now that we see the files being created, let’s look at how we can view this data in Power BI. The first thing we are going to need is the endpoint of the Data Lake storage. Within your new storage account, in the left navigation under Settings, click on Endpoints. On the Endpoints page, under the Data Lake Storage section, you will see that there are Primary and Secondary endpoint URLs for Data Lake Storage. Copy the Primary endpoint URL; this will be used within Power BI. This is shown in the image below:

Azure Storage Account - Endpoints - Data Lake
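
If you only have the storage account name handy, the primary Data Lake Storage endpoint follows a predictable pattern, so you can also construct it yourself (the account name below is the placeholder used earlier):

# Primary Data Lake Storage Gen2 (DFS) endpoint pattern
DATALAKE_ENDPOINT="https://mydataversestorage.dfs.core.windows.net/"
echo $DATALAKE_ENDPOINT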

Next, you are going to need the access key for the Data Lake Storage Gen2 account. In the Azure Storage account, under the Security + networking section, click on Access keys. This will show you the page containing a list of access keys. Click on the Show keys button, and then copy the value of the first key. You will need this for configuring Power BI.

Azure Storage Account - Access Keys

In Power BI, click on the Get Data button in the ribbon, select Azure as the source of the data, and from the available Azure sources select Azure Data Lake Storage Gen2, as shown in the image below:

Power BI - Get Data - Azure Data Lake Storage Gen2

Click on the Connect button, and in the new pop-up window, paste the Data Lake Storage Primary endpoint URL that you copied in the previous step, select the CDM Folder View (Beta) option, and then click OK.

Power BI - Get Data - Url and Data View (CDM Folde View)

In the next window, you have the option to sign in using an organizational account or an account key. Click on Account key; this is the access key that you copied in the previous step. After you enter the access key, you will see the following window with the available data source results.

You will then see the navigator which provides you with a view of the model that you have in Azure Data Lake. Expand the Storage Container, and then expand the cdm hive. This will show you the list of entities that you have available there as tables. See the screenshot below.

Power BI - Get Data - Navigator

Finally from Power BI, you can start adding visualizations or fields, and customize your data as needed. In the screenshot below, we add a few fields from the accounts table in the Azure Storage account.

When we configured Azure Synapse Link, we checked the box for creating a Synapse workspace as well. If we navigate to the Synapse workspace that we created, we will be able to query the data from our Azure Storage account container from within the Synapse Analytics workspace. There are a lot of configuration options available in the Azure Synapse Analytics workspace, such as analytics pools, encryption, firewall configuration and more. Those can be further reviewed in the Microsoft documentation, but for our purposes, we are going to take a look at Synapse Studio. The image below shows the Azure Synapse workspace Overview page, where we can click on Open Synapse Studio to start querying our data.

Azure Synapse Analytics - Overview page

When Synapse Analytics Studio opens, there are a lot of links on how to use this Azure product, which might be overwhelming, but we are just going to review the basics of how to retrieve or query data from the data warehouse. There are a few options you can use to create a SQL script against Synapse Analytics. You can click on the New button at the top of the page and choose SQL script.

Azure Synapse Analytics Studio - New SQL Script from Home Page

You can also click on the Data icon in the left navigation, under the Home icon, which allows you to expand the data structure of the data store. From there, click on the account table, choose New SQL script and then choose Select TOP 100 rows. This will create the SQL script for you, and you can start editing it from there.

Azure Synapse Analytics Studio - New SQL Script from Account Table Data

The last option is clicking on the Develop icon in the left navigation, then clicking on the + button and selecting SQL Script as shown below:

Azure Synapse Analytics Studio - New SQL Script from Develop page

Once we have selected the query option that we want to use, we can go ahead and build our query. In our case we are just going to retrieve the list of accounts as shown in the screenshot below:

Azure Synapse Analytics - Create Query for Account table

When we click on the Run button we will be able to see the results as shown below.

Azure Synapse Analytics - Create Query Results for Account table

There are a lot more possibilities and options available for using Azure Synapse Link and Azure Synapse Analytics and for accessing the data from different sources. This article provides a basic review of how to configure them and make them work for your Dataverse environment. I hope this was helpful.

New Dataverse functionality in creation of App Users
http://aric.isite.dev/dynamics/post/https-www-ariclevin-com-powerapps-post-dataverse-app-users-ppac/
Tue, 08 Jun 2021 07:22:00 +0000

In late 2019, I wrote a blog article on how to configure OAuth authentication for Dataverse by creating an App Registration in Azure, and then configuring the App Registration/user account in your Dataverse environment so that it can be consumed as an Application User or Service Principal. The link to that article is shown below:

https://www.ariclevin.com/Azure/Post/configuring-oauth-cds

In recent weeks, I had to do the same for an additional user, but while going through the process of implementing this, I noticed some changes.

After creating the user account and registering the app in Azure AD, I went to create the Application User in my Dataverse environment. The username, full name (first and last names) and the email address were locked; the only setting that I was able to enter was the Client Id.

Dataverse - New Application User - Classic Interface

I even tried using God Mode so that I could enter the user name (for the AD account) that I specified, but when I saved the new Application User, the User Name field would store whatever name was entered in the App Registration record.

This change was implemented a few months back, as Microsoft was trying to simplify the creation of app users so that the user can be created by entering only the Client Id. After the user account has been created, we are able to modify the email address and the first name, but the user name (domain name) and the last name cannot be changed. The last name seems to be set to the App Registration name that is stored in AAD. I might need to play around with this a little, but if you have access to AAD, you should create this the right way from the start.

I asked around a little bit, and it seems that a few days ago there was a change in Microsoft Docs on how application users should be created. The link is provided below:

https://docs.microsoft.com/en-us/power-platform/admin/manage-application-users

The change is that Application Users can now be created right from the Power Platform Admin Center. As a prerequisite we have to register the app in Azure Active Directory, but once the app is registered, we can add it directly by following the steps below.

Navigate to the Power Platform Admin Center, select the environment, click Settings, and under Users + permissions select Application users, as shown in the image below.

Dataverse - PPAC - User and Permissions - App Users

In the Application users settings, you will see a list of all the app users that are currently configured for your Dataverse environment. Click on the New app user button in the command bar, as shown below:

Dataverse - PPAC - Environments - Settings - New App User

This will pop up a panel where you can start creating the new app user account. Underneath the App label, click on the Add an app link:

Dataverse - PPAC - New App User - Add an existing App

This will pop up an additional panel showing all of the apps that are registered in Azure Active Directory. Select the Microsoft Dynamics CRM (Dataverse) app registration that you previously configured, and click on the Add button.

Dataverse - PPAC - Select app from Azure Active Directory

Once the app registration is added, we will need to select the business unit and add the security roles. Click on the pencil icon next to Security role, which will pop up an additional panel showing the list of available security roles. Select one or more roles that need to be assigned to this user, as shown below:

Dataverse - PPAC - Add App User - Select Security Roles

The final page is shown below. Click on the Create button to create the app user in your Dataverse instance; it can be used after that.

Dataverse - PPAC - Create App User - Create

This is a great step forward, but I still wish the user account details could be set to an actual AAD user when the App User is created.

Calling an Azure Pipeline using a Cloud Flow or Canvas App
http://aric.isite.dev/azure/post/calling-azure-devops-pipeline-flow-canvas/
Mon, 22 Feb 2021 04:35:00 +0000

With everything that is going on around ALM and CI/CD and how to implement an automated deployment process with Azure DevOps or GitHub Actions, there are still many organizations that have a lot of work to do before they can flip the switch.

In this post, I will show how we can use Power Automate Cloud flows to initiate an Azure DevOps Pipeline, and in turn use a Canvas App to call the flow that will call the Azure DevOps pipeline.

There are a few prerequisite steps that must be done, and you might already have done them, but I would like to review them again in case they have not been completed.

In order to call the Azure DevOps REST API we need to have a Personal Access Token. That can be acquired by going to our Personal Account Settings, and selecting Personal access tokens as shown in the image below:

Azure DevOps Pipeline - Cloud Flow - Personal Access Token Menu

If you already have a personal access token, clicking on the above link will show you the list of available tokens; if you don’t, you will see a message saying that you don’t have a personal access token yet, and you can click on one of the New Token links to create a new access token.

Azure DevOps Pipeline - Cloud Flow - Personal Access Token - New Token Button

You will need to give your Personal access token a name, specify the organization and expiration date, as well as the Scope of the token. For the purpose of this demonstration, I have given Full access, but you can also provide a custom defined scope, which lets you determine whether you want to provide Read, Write and Manage access to the different objects that make up the API.

Azure DevOps Pipeline - Cloud Flow - Personal Access Token - New Token Dialog

Once you click on the Create button, you will see the success notification showing you the personal access token, and an icon that allows you to copy the token to the clipboard. It is important to copy it and store it in a safe place, as once the window is closed, this token will no longer be accessible.

Azure DevOps Pipeline - Cloud Flow - Personal Access Token - New Token Confirmation

Now that we have the token, we will need to convert it to Base64 so that it can be used in our HTTP call in the flow, where it is passed as a Basic authentication token. This can be done using Git Bash or any other utility that can convert to Base64. Open Git Bash by clicking on Start -> Git and then selecting Git Bash.

Azure DevOps Pipeline - Cloud Flow - Open Gitbash

Once Git Bash is opened, you will need to enter the following command in the Bash window:

$ echo -n :[PERSONAL_ACCESS_TOKEN] | base64
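# The leading colon represents an empty username, so the encoded value is the ":token" pair that Basic authentication expects.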

You will need to replace the [PERSONAL_ACCESS_TOKEN] with the real access token that you copied earlier. The following screenshot (with some blurring) shows the Git Bash command.

Azure DevOps Pipeline - Cloud Flow - Gitbash Command

Copy the result into Notepad++ or another text editor, as you will need it at a later time.

The next thing that we will need to get is the Pipeline Id that we are going to call. Navigate to Azure DevOps and click on Pipelines. This will give you the list of Pipelines as shown in the image below.

Azure DevOps Pipeline - Cloud Flow - Pipeline View

Click on the pipeline that you want to execute from your Cloud flow or Canvas app, and let’s examine the URL. You will notice that the end of the URL contains a definition Id. That is the Pipeline Id that we will need to use in order to execute the pipeline.

Azure DevOps Pipeline - Cloud Flow - Pipeline Url (Definition Id)

Next, we will be creating a table in Dataverse so that we can store the executions. Although not required, I like having a log of who executed these processes and when, mostly for historical purposes.

The columns that I added are the following, but additional columns can be added.

Azure DevOps Pipelines - Cloud Flow - Dataverse Execution Table

Let’s go ahead and look at the flow that I added to the solution. I start the flow with a Power Apps trigger and initialize three variables containing the Pipeline Name, Pipeline Id and the User Email of the user executing the flow. The image below shows these steps.

Azure DevOps Pipelines - Cloud Flow - PowerApps Trigger and Init Variables

You will notice that each of the variables being initialized uses the "Ask in PowerApps" option, so that the value is provided by my Canvas App. The next step is to call the REST API using an HTTP POST request. The URL below is the Microsoft Docs page containing the information about the Pipelines REST API:

https://docs.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.1

The document details the actual URL to be used in the HTTP request, which is:

https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.1-preview.1

You will notice the HTTP POST request below. The blurred section contains the organization and project from DevOps within the URI parameter, and in the header we paste the result that we got from converting our personal access token to Base64 in Git Bash.

Azure DevOps Pipelines - Cloud Flow - HTTP Action
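
Outside of Power Automate, the same request can be sketched with curl, which is a handy way to verify the token and pipeline id before wiring up the flow. The organization, project and pipeline id below are placeholders, and the empty JSON body (which I believe queues the run with its default parameters) is an assumption; adjust it if your pipeline needs parameters.

# Queue a pipeline run via the Azure DevOps REST API, using the Base64-encoded PAT as a Basic authentication header
curl -X POST \
  -H "Authorization: Basic <BASE64_ENCODED_PAT>" \
  -H "Content-Type: application/json" \
  -d '{}' \
  "https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.1-preview.1"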

At this point the execution will commence. The final steps that I demonstrate below are really optional, but nice to have. If I would like to write the results back to my table in my Dataverse instance, I can retrieve the Build Id and Build Url from the result of the HTTP action. I can do this by using a Parse JSON action, which will give me the properties that I need.

The Parse JSON action takes the Body from the HTTP request as its content, along with a copy of the schema. I can run the flow before adding the last two steps, copy the JSON output from the run results of the Cloud flow HTTP action step, and then paste it into the Generate from sample option, which will generate the schema for me.

Azure DevOps Pipelines - Cloud Flow - Parse JSON Action

You can find the actual needed schema for this step pasted below.

[Parse JSON Schema]

Finally, the last step is to add the record to Dataverse. We have the Pipeline Id, Pipeline Name and User Email being passed from the Canvas App. The Build Id and Build Url are the results from the Parse JSON action, which are basically body('Parse_JSON')?['id'] and body('Parse_JSON')?['url'].

Azure DevOps Pipelines - Cloud Flow - Create Dataverse Row Action

Once the flow is executed, we will see a new record in my Dataverse instance.

Azure DevOps Pipelines - Cloud Flow - Dataverse Execution Results

We will also see that the Azure DevOps pipeline has been initiated.

Azure DevOps Pipelines - Cloud Flow - Azure DevOps Execution Results

Now, in order to execute this, I created a Canvas App that lists all the pipelines that are part of my process. These might be temporary pipelines or automated and scheduled pipelines. The app is shown below.

Azure DevOps Pipelines - Cloud Flow - Canvas App Execution

When you click on the Run Pipeline button under one of the pipelines, it calls the Power Automate Cloud flow using the following Canvas App command:

InitializeDevOpsPipeline.Run("PublishExportUnpackGit", 3, User().Email)

This can of course be enhanced further, but for an initial execution of the pipelines this is a great help. It is the first step toward having an ALM process. As always, I hope this was helpful to some of our community members.

I also want to give special thanks to Paul Breuler from Microsoft for helping me out with some of these challenges.

Calling MS Graph API from Canvas Apps by using Power Automate and HTTP Request
http://aric.isite.dev/azure/post/http-request-msgraph-canvas-app-flow/
Mon, 14 Dec 2020 06:28:00 +0000

Recently, while working on some requirements, I noticed that one of the solutions the company had implemented was to replicate the Azure AD groups and members into their Dataverse environment. This seemed unnecessary to me, but sometimes, due to security restrictions, this might be the only way.

After further investigation, the only requirement from the business was to check whether particular users belonged to certain groups; there was no immediate requirement to have the AD members stored in our Dataverse environment, especially since we would have to continuously sync between AD and Dataverse.

I offered the business an alternative: what if you had an application where you could specify the name of a group and it would show you all of the users that belong to it, or even better, specify a user and it would show you all the groups the user belongs to? This seemed like a no-brainer, and from our point of view an easy solution, especially since we finally got access to use the Graph API (for users and groups only).

There are other alternatives to this of course, but this was going to work for us, especially since individual users did not have access to Graph API, but we had an App Registration with Client Id and Secret.

The following section briefly explains how to set up permissions for the Microsoft Graph API. Log in to Azure and click on App registrations. You will need to set up the API permissions and the client secret, and finally copy the information so that you can use it within your flow.

Once you have created the new app registration and given it a name, click on API permissions, select Microsoft Graph, and choose the Application permission type (not Delegated). You will need to add two permissions: Group.Read.All and User.Read.All, and then make sure that you grant consent, as these permissions require admin consent.

Microsoft Graph - App Registration - Api Permissions

Next, set up the client secret. Click on Certificates & secrets and select the option to add a new client secret. You can set the client secret to expire in 1 year, 2 years, or to never expire. After you have created the client secret, copy it into Notepad or another program; you will need it later. Once you leave the app registration, you will not be able to retrieve the client secret, so make sure that you store it for later use.

Microsoft Graph - App Registration - Client Secret

Now that you are done, go back to the Overview page of your app registration. You will need to copy the Application (client) ID and the Directory (tenant) ID, the same way you copied the Client Secret before. The following image shows the information on the Overview page.

Microsoft Graph - App Registration - Overview

Since I don’t really like designing things and prefer to start from predesigned templates, I took the Org Browser Canvas App template that is available from the Create app page.

The app contains more features than I was looking for, so I trimmed it down to a minimum so that it just contains the home screen and the search screen.

In the end, I had two screens. Let’s quickly go over these. I named the app AD Search. My home screen contains the title and logo, and two buttons: User Search and Group Search, both of which redirect to the Search screen after setting the actionType parameter to either Users or Groups.

The View my profile option at the bottom is still in progress; I have not yet decided what to include there.

Microsoft Graph - Canvas App - Home Screen

When the Search screen loads, it clears any previous search results by calling Clear on the ADSearchResults collection, so that every search starts fresh.

The form displays a search control, and when the search text is entered and the search icon is clicked, it calls a Power Automate flow to retrieve the user matching the email address or the groups matching the display name of the group.

The following screenshots show both scenarios.

Microsoft Graph - Canvas App - Search Screen

If we look at the source for the search icon OnSelect function, it will show us that we are adding the results from the GraphUserSearch flow or GraphGroupSearch flow into a new collection called ADUserResults.

If (actionType = "Users", 
ClearCollect(ADUserResults, GraphUserSearch.Run(txtInputSearchUser.Text, actionType)), 
ClearCollect(ADUserResults, GraphGroupSearch.Run(txtInputSearchUser.Text, actionType))) 

The gallery's Items property points to ADUserResults, and we show the initial, display name and title of each person in each gallery item.

Now, let’s look at the logic in Power Automate. But first, in case anyone is not aware, I would like to introduce Graph Explorer, which can help with composing Graph API requests. Graph Explorer can be accessed via https://developer.microsoft.com/en-us/graph/graph-explorer.

Both flows start the same way, and we could combine them into a single flow, but I split them to simplify this article. The trigger for this flow is Power Apps, and then we initialize four variables of type string. These variables are the Search String (containing the value of the search string from the Canvas App), the Action Type (containing the action from the Canvas App, which can be Users, Employees, Groups or whatever type of search we will be performing), the Query String, and the Search Results (a placeholder for the results). The image below illustrates this.

Microsoft Graph - Power Automate Flow - Trigger and Init Actions

Next, we set the Query String variable. This will contain the Graph API query string that will be called, as shown in the image below.

Microsoft Graph - Power Automate Flow - Query String Variable
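
The screenshot does not reproduce the actual query strings, but for the two scenarios described here they would typically be along these lines. The $filter expressions are my assumptions based on the search behavior described above, so test them in Graph Explorer first:

# Look up a user by email address
#   https://graph.microsoft.com/v1.0/users?$filter=mail eq '<search text>'
# Look up groups by display name, and then list the members of a matching group
#   https://graph.microsoft.com/v1.0/groups?$filter=displayName eq '<search text>'
#   https://graph.microsoft.com/v1.0/groups/<group-id>/members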

We can take that same query string and test it in Graph Explorer to make sure that it works before adding it to the flow. Next, we need to call the API using a GET request, passing the query string that we specified in the URI parameter. We add a Content-Type header with a value of application/json, as our results will be in JSON format.

We need to provide the authentication method to retrieve the results. Since we created an app registration with a client secret, we will use Active Directory OAuth. This is where we need the information that I previously mentioned you should write down.

We will provide the Directory (Tenant) Id, the Audience, the Application (Client) Id and the Client Secret. The image below illustrates the HTTP request.

Microsoft Graph - Power Automate - HTTP Request
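
For anyone curious what the Active Directory OAuth option does behind the scenes, it is essentially a client credentials token request followed by a bearer-token call to Graph. A rough curl sketch, with every value a placeholder taken from the app registration details copied earlier, would be:

# 1. Request an app-only access token for Microsoft Graph (client credentials grant)
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://graph.microsoft.com/.default" \
  -d "grant_type=client_credentials"

# 2. Call the Graph query with the access_token returned above
curl -G "https://graph.microsoft.com/v1.0/users" \
  -H "Authorization: Bearer <access-token>" \
  --data-urlencode "\$filter=mail eq '<search text>'"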

Finally, we need to store the results in the variable we instantiated earlier (called Search Results), and then pass the Response back to the Canvas App using the Response action (of the Request Connector).

Microsoft Graph - Power Automate - Search Results and Response

The value that is entered in the SearchResults variable is the value of the body of the previous step or:
body('HTTP_Graph_Users')?['value']

We enter that value in the Body of the response. We also need to specify the Response Body JSON Schema, which will contain the elements that will be returned to the Canvas App. The sample below shows this text.

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "id": {
                "type": "string"
            },
            "displayName": {
                "type": "string"
            },
            "givenName": {
                "type": "string"
            },
            "surname": {
                "type": "string"
            },
            "mail": {
                "type": "string"
            },
            "jobTitle": {
                "type": "string"
            },
            "userPrincipalName": {
                "type": "string"
            }
        },
        "required": [
            "id",
            "displayName",
            "givenName",
            "surname",
            "mail",
            "jobTitle",
            "userPrincipalName"
        ]
    }
}

When we want to use the same logic for querying groups, the flow is similar, but a few things change. After the initialization of the variables, we first need to query the Graph API to get the Id of the group that we are querying, and only then can we get the members of the group. This flow therefore contains two calls to the API. The image below illustrates these calls.

Microsoft Graph - Power Automate - Group Flow

The solution files have been posted to my GitHub repository:
ariclevin/MSGraph (github.com)

A YouTube video is available as well:
https://youtu.be/DqqpDmdaVxc

Special shoutout to Elaiza Benitez for her episode on Microsoft Graph API authentication on What the Flow:
How to authenticate as an application with Microsoft Graph API (youtu.be)

Deploy solutions across environments using Azure DevOps and Power Platform Build Tools
http://aric.isite.dev/dynamics/post/power-platform-build-tools-azure-devops/
Sun, 29 Nov 2020 08:08:00 +0000

In this article, I review the required steps for building and deploying your Power Apps solution from your development environment to your test and production environments. We will add an approval trigger from test to prod, unpack the solution, and publish your managed solution as an artifact so that it can be imported into the higher environments.

Although you can use either username/password authentication against your environments or service principal/client secret authentication, I prefer the service principal approach. In order to use it, I need to create a service principal account and app registration in Azure.

You can follow the link below to a previous blog post that I wrote about configuring OAuth and the service principal/app registration.

https://www.ariclevin.com/Azure/Post/configuring-oauth-cds

Adding the application user and assigning the role to it will have to be done in all environments where you want this configured.

Let’s first take a look at the solution that we created in our dev environment. The solution contains the Account and Contact entities, as well as a custom Lead entity that was created.

Azure DevOps - Power Apps Source Solution

Assuming that you already have a DevOps environment, let’s take a look at what we need, or what we have created.

We created a new DevOps instance called MSBizApps and within it the BizAppsALM project. We also created a new Repo for that project.

Azure DevOps ALM Process - New Project

Now we can get started. The first step is to create a pipeline that will export the solution file. Depending on your configuration, there are various steps that you might take; in the screenshot below I cover all the necessary ones. I duplicated the Export Solution and Unpack Solution tasks so that you can decide whether you will be using managed or unmanaged solutions, but the recommended approach is to use managed solutions.

https://docs.microsoft.com/en-us/power-platform/alm/solution-concepts-alm

Per the documentation, unmanaged solutions are used in development environments while you make changes to your application. Managed solutions are used to deploy to any environment that isn’t a development environment for that solution, such as test/QA, UAT, SIT and production.

You can export your unmanaged solution and unpack it, so that you have a copy of your unmanaged solution source in your DevOps repository.

Let’s start by creating a new pipeline by selecting Pipelines and choosing the New pipeline option. Select the Use the classic editor link on the New pipeline (Where is your code?) page, select the Azure Repos Git repository that you created, and on the Select a template page, click on the Empty job link. This will open up the new pipeline page. Start by giving the pipeline a name; you can use any name that you would like. In our case, I used BizAppsALM-PublishExportUnpackGit.

Azure DevOps ALM Process - New Pipeline

We will need to add various tasks to get the solution ready. Let’s go over these steps and the configuration one by one.

The first task, which is a prerequisite for anything that we do with the Power Platform Build Tools, is the Power Platform Tool Installer. There are no configuration requirements for this task.

Power Platform Build Tools - Tool Installer

The next task is to Publish the customizations. In this task too, there is not much configuration with the exception of setting the authentication type and the Connection String.

Power Platform Build Tools - Publish Customizations

If you have not set your connection string yet, you can do so by clicking on the + New button to the right of the Connection String drop down. This will ask you for the environment url, directory (tenant) id, application (client) id and client secret, as well as the name for the connection string. Once you create it you will be able to use the connection string in your code. You should create connection strings for Dev, Test and Prod (or whatever other environments you need).

The next task is to export our solution. We have two possibilities here: exporting as unmanaged or as managed. But before we export the solution, let’s create a variable that will store the solution name. We do this by clicking on the Variables tab, selecting Pipeline variables and creating a new variable. We will call the variable SolutionName and set its value to the name of the solution (as shown in the image below).

Azure DevOps ALM Process - Pipeline Variables

Now let’s continue with the task of exporting the solution. When exporting the solution as unmanaged, we will need to provide the authentication information, the solution name and the solution output file. We should also check the option to export the solution as an asynchronous operation. The screenshot is shown below. Notice that we use the variable $(SolutionName) instead of the actual name of the solution.

Power Platform Build Tools - Export Solution (Unmanaged)

The logic for exporting the solution as managed is very similar. The only differences are the solution output file name and that the Export as Managed solution checkbox has to be checked.

Power Platform Build Tools - Export Solution (Managed)

You will notice in the two tasks above that the Solution Output file contains the Build.ArtifactStagingDirectory variable. This is a predefined variable that can be used anywhere within your pipeline. The link below shows the list of predefined variables in Azure DevOps/Azure Pipelines.

https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables
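
As an example of how these variables combine, the Solution Output File values used in the two export tasks above would be along the lines of the following (the _managed suffix is simply the naming convention assumed in this walkthrough):

# Typical Solution Output File values for the export tasks
#   Unmanaged: $(Build.ArtifactStagingDirectory)\$(SolutionName).zip
#   Managed:   $(Build.ArtifactStagingDirectory)\$(SolutionName)_managed.zip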

The next task, which is optional if you would like to add it to your pipeline, is running the Power Platform Checker on your solution file. You can specify the name of your solution file here, as well as the rule set, whether that is the Solution Checker rule set or the Certification rule set for an AppSource certification. You can also specify whether this process should fail if there are errors. If this is only for your internal deployment, you should make sure that you select the Continue on error option.

Power Platform Build Tools - Solution Checker

The next task, which is the final task that uses the Power Platform Build Tools, is to unpack the solution. This will create individual files for all the components that make up the solution. You can unpack the unmanaged solution as well as the managed solution. These files will then be committed to source control and will contain the history of your solution deployments.

Let’s start with the unmanaged solution. We need to provide the solution name and the target directory name. Again in this case we will use some of the predefined variables to store the extracted files. The screenshot below contains the unmanaged configuration.

Power Platform Build Tools - Unpack Solution (Unmanaged)

The unpack task for the managed solution looks exactly the same, with the exception of the solution names and the type of solution.

Power Platform Build Tools - Unpack Solution (Managed)

When you do this, you will notice that you have the ability to combine these two together, as the type of solution contains options for Unmanaged, Managed or Both.

Power Platform Build Tools - Unpack Solution (Type of Solution)

As we want to deploy this later to our higher environments, the next task is to publish an artifact that will contain the managed solution. The Publish Pipeline Artifact task allows us to specify the location of the managed solution file that was produced by the export (as this is what we will use to deploy to the higher environments).

We create an artifact named drop, which will contain the managed solution, and then publish it to Azure Pipelines.

Azure DevOps ALM Process - Publish Pipeline Artifact

The final step is to commit this to our source control system. This will push your unmanaged or managed solution source based on what you created in the previous steps. If you created both managed and unmanaged, it will commit them both to source control.

Azure DevOps ALM Process - Command Line Script - Commit to Git

The script below contains the text that should be used in this task:

echo commit all changes
git config user.email "user@myorg.onmicrosoft.com"
git config user.name "Automatic Build"
git checkout master
git add --all
git commit -m "solution updates"
echo push code to new repo
git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin master

Now that we have finished configuring the pipeline, we can go ahead and queue it up and run it.

If we take a look at our source control after the pipeline has run, we will see that our repository contains two folders holding the source for the managed and unmanaged solutions, as seen in the screenshot below:

Azure DevOps - Power Apps Repo Solutions

The next part is to create the release pipeline. In our particular case we have three environments: dev, test and prod. Dev is our unmanaged environment, and test and prod are the managed environments. This part is actually simple and straightforward.

If you didn’t configure your connections for your higher environments, you should do it now. Let’s go ahead and start with a new release pipeline.

In Azure DevOps, under Pipelines, click on Releases, and select New to create a new release pipeline. The New release pipeline window will open with a panel on the right-hand side. In the panel, click on the Empty job link to start with an empty job (as shown below):

Azure DevOps ALM Process - New Release Pipeline

Start by giving your release pipeline a name, and adding an artifact. The artifact will be the name of the pipeline that you finished creating. Once you select the Build Pipeline, it will default to the latest version and you can provide a Source alias (friendly name) for this artifact.

Azure DevOps ALM Process - Release Pipeline - Add Artifact

Next, we will create a deployment stage for our test environment. By default, one stage will already be created. Give the stage a name, and click on the link that says 1 job, 0 tasks.

We will add two tasks to this stage. The first will be the Power Platform Tool Installer, and the second will be the Power Platform Import Solution task. Before we configure this, let’s take a look at the agent job. At the bottom of the agent job, under the Artifact download section, we can see the artifacts that will be downloaded; in our case, BizAppsSolution_managed.zip, which was added as an artifact in one of the previous steps.

Azure DevOps ALM Process - Release Pipeline - Artifact Download

Next, let’s look at the Power Platform Import Solution task. Here we will have to select the connection to use and the file or folder where the solution resides. When we click the … next to the solution name, a window pops up that allows us to select the artifact that was previously added to the pipeline.

Azure DevOps ALM Process - Release Pipeline - Import Solution Select Artifact

Once we have selected the solution input file, we need to check the Import as a Managed solution option and the option to run the import asynchronously.

The exact same steps need to be followed for the production environment by creating a new stage. Once we have created the stage for production as well, our final release pipeline will look like this:

Azure DevOps ALM Process - Release Pipeline - Artifacts and Stages

The only thing still needed is to add the approval for going from test to production. When we hover over the person icon to the right of the Deploy to Test stage, we will see the text Post-deployment conditions. When we click on this, we will see a panel that specifies different conditions, including approvers, gates and auto-redeploy triggers.

When we enable post-deployment approvals and set an approver, the approver will get an email notification specifying that they must approve this stage in order for the next stage to execute. If you have multiple environments for QA, UAT, SIT, etc. that require deployment prior to production, you can create these approvals at any of the stages.

Azure DevOps ALM Process - Release Pipeline - Post Deployment Conditions

When the pipeline is run and deployed to QA, it will send a notification email to the approver specifying that an approval is required, as shown in the image below. The approver can review the request and then approve it. Once approved, the pipeline will continue to the next stage.

Azure DevOps ALM Process - Release Pipeline - Approver Notification

After you are done with the pipeline and approvals, the pipeline will look as follows:

Azure DevOps ALM Process - Release Pipeline Completed

Now that you are done, you can configure triggers and schedules and create the release. I hope this post was educational and provided some insight into the benefits of using ALM as part of your solution deployment processes.

Using Azure DevOps for Power Platform with a Self-Hosted Agent
http://aric.isite.dev/azure/post/azure-devops-self-hosted-agent/
Sun, 22 Nov 2020 21:08:00 +0000

There are various organizations that, whether for trust or security reasons, are not comfortable using the Microsoft-hosted agents and would like to use self-hosted agents within their own corporate environment.

In this article, I will go through the steps of creating a self-hosted agent, and then configuring a pipeline that will move your solution between the different environments.

Let's go ahead and start by creating our agent. The first thing that we have to do is create a personal access token. A personal access token is used as an alternate password to authenticate to Azure DevOps. Personal access tokens are easy to create and to revoke if they are no longer required.

We start by clicking on the User Settings Icon on the top right corner of our DevOps environment, and selecting Personal Access Tokens from the menu.

Azure DevOps - User Settings

If you don't have any tokens, click on the New Token link.

Azure DevOps - Create New Token

This will pop up the Create a new personal access token panel. Provide a name for the token, then select the organization and an expiration date. You can create a custom-defined scope and set the appropriate permissions that the token will have, or give it Full access to your DevOps environment. For the purpose of this article, we will give it Full access, but you could also select just Agent Pools (read, manage) and Deployment group (read, manage). You can click on Show all scopes at the bottom of the panel to see all available authorization scopes for the personal access token. The image below shows the basic configuration.

Azure DevOps - Create New Access Token

Once the personal access token is created (and before you close your window), copy the token. You are going to need it later on. Microsoft does not store the token, so you will not be able to see it again.

Azure DevOps - New Token Success Confirmation

Next, we will go ahead and configure the Security for the Agent Pool. Azure DevOps organization owners and server administrators have permissions by default.

Navigate to the Agent Pools by clicking on the Azure DevOps logo on the top left, selecting Organization Settings on the bottom left, and then clicking on Agent Pools in the left-hand navigation. This will display the list of agent pools in the center pane. Select the pool that you want to configure, and click on the Security tab in the newly opened pane. If the user account that will be used is not shown, make sure that you or an administrator add that user account. You cannot add your own user account, but you should be able to add a group that you are a member of.

Azure DevOps - Agent Security

Now click on the Agents tab, and let’s create our agent. If there are no available agents, you will see a button on the screen (New agent) to create your first agent.

This will pop up a dialog where you can select your operating system and download the agent.

Azure DevOps - Download Agent

Now that you have downloaded the agent, we can go ahead and install and configure it. Extract the agent to a directory of your choice, and run the config.cmd command. In our case we extracted the agent files into the C:\DevOpsAgent directory.

Azure DevOps - Extracted Agent Source Files

Once you run the config.cmd batch file, you will be asked for the Azure DevOps services URL, which is https://dev.azure.com/{organization_name}, and then to enter or paste the personal access token that you created previously.

Azure DevOps - Agent Configuration PowerShell Script

To determine whether you want to use the agent in Interactive Mode or Service Mode, you can click on the link below and check the differences on the Microsoft Docs website:

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#interactive-or-service
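
If you prefer scripting the extraction and configuration over answering the interactive prompts, a sketch like the following should be close; treat the zip file name, organization, token, pool and agent names as placeholders for your own values, and add --runAsService if you decide on service mode:

  # Extract the downloaded agent package (the zip name is a placeholder for the version you downloaded)
  Expand-Archive -Path .\vsts-agent-win-x64-2.x.x.zip -DestinationPath C:\DevOpsAgent

  # Configure the agent unattended, using the personal access token created earlier
  Set-Location C:\DevOpsAgent
  .\config.cmd --unattended --url https://dev.azure.com/ORG_NAME --auth pat --token YOUR_PAT --pool Default --agent MySelfHostedAgent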

Before we activate the agent, let's go ahead and create our first pipeline. We will keep this pipeline simple: it will publish the customizations in our dev environment, export the solution as managed and unmanaged, and deploy it to our test environment as a managed solution.

Let's go ahead and create a few pipelines to test this process, starting with the first one. The first step is to connect to the Repo. We will use the classic editor to create a pipeline without YAML.

Azure DevOps - Connect to Source Code Repository

On the Select source page, we will choose Azure Repos Git, and select the Team Project, the Repository and the Default branch that will be used for our manual or scheduled builds.

Azure DevOps - Select Repo Source

On the Select template page, we will start with an Empty job.

Azure DevOps - Select Configuration Template

Next we will start configuring the pipeline and adding the tasks to be used with the agent. The image below shows the test pipeline that we created. The pipeline uses only two tasks: the Power Platform Tool Installer and Power Platform Publish Customizations.

Azure DevOps - Test with MS Agent

Of course this is not a very realistic pipeline, as we usually need something that does more than just publish customizations, but the purpose is only to verify that this works with a self-hosted agent. To make sure the pipeline itself works, we will first try it with the Microsoft-hosted agent.

I will go ahead and click the Save and Queue button and run this pipeline. After about 60 seconds, we see that the pipeline executed successfully.

Azure DevOps - Successful Test with MS Agent

Next, I will edit the pipeline, change the agent pool to the self-hosted agent pool that I created, click on Save and Queue, and then run the process.

Azure DevOps - Test with Self Hosted Agent

This should have been straightforward, but when running it we encountered a few issues with the Tool Installer.

Azure DevOps - Failed Test with Self Hosted Agent

To get past this (and after contacting Microsoft Support), we had to make some modifications to the pipeline, which included the installation of a NuGet package and a PowerShell script to unregister the package source before running the rest of the tasks.

Azure DevOps - Test with Self Hosted Agent including Fix
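
The exact script we used is not shown here, but a hypothetical sketch of that kind of unregister step could look like the following; the nuget.org source name is an assumption, so list the registered sources first to find the right one on your agent machine:

  # Inspect the currently registered package sources, then remove the one the Tool Installer conflicts with
  Get-PackageSource
  Unregister-PackageSource -Name 'nuget.org' -ErrorAction SilentlyContinue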

The issue being addressed here is related to the agent itself and not the Power Apps task. The following two links discuss this in further detail:

https://github.com/microsoft/PowerApps-Samples/issues/123#issuecomment-641053807

https://powerusers.microsoft.com/t5/Power-Apps-Pro-Dev-ISV/PowerPlatform-tool-installer-gives-me-a-nuget-error-on-private/m-p/607181/highlight/true#M2863

After making these changes, we were able to execute the pipeline correctly.

Azure Devops - Successful Test with Fixed Self Hosted Agent

Even though this solves our problem and enables us to go ahead and create a pipeline using the Power Platform Build Tools, the underlying problem still exists.

I received an update from Microsoft (Power Apps support) last week that they were able to reproduce the issue using a local build agent. Based on the diagnostics log, what seemed to happen is that the machine where the agent was installed (a Windows 10 machine) had PackageManagement and PowerShellGet at the older 1.0.0.1 versions, while the hosted Azure DevOps agents have more modern versions, 1.4.7 and 2.2.x respectively.

Once Microsoft removed the newer versions, they were able to reproduce the issue. As a temporary workaround, we were told that we could update these two modules for the build agent user and then retry. The following steps need to be followed:

  1. Install-PackageProvider -Name "NuGet" -Force -ForceBootstrap -Scope CurrentUser -MinimumVersion 2.8.5.208
  2. Install-Module -Name PackageManagement -MinimumVersion 1.4.7 -Scope CurrentUser -Force -AllowClobber
  3. Install-Module -Name PowerShellGet -MinimumVersion 2.2.5 -Scope CurrentUser -Force -AllowClobber
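
To confirm that the newer module versions are visible to the build agent user, a quick check like this can be run afterwards; the versions listed should be at or above the minimums from the steps above:

  # Show all installed versions of the two modules; the updated ones should appear alongside the built-in 1.0.0.1 versions
  Get-Module -ListAvailable PackageManagement, PowerShellGet | Sort-Object Name, Version | Format-Table Name, Version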

Since this issue was reported, a fix is supposed to be coming early this week (week of 11/23/2020), when a new version will be published to the Visual Studio Marketplace that will check and update the package providers.

Hopefully everyone who will be working with self-hosted agents can benefit from this update. I will update this post once the change has been implemented.

The post Using Azure DevOps for Power Platform with a Self-Hosted Agent appeared first on Aric Levin's Digital Transformation Blog.

]]>
Update Data in your CDS using Azure SQL Server, Azure Functions or Connectors from SQL Data http://aric.isite.dev/dynamics/post/cds-azuresql-azurefunc-connector/ Mon, 14 Sep 2020 00:30:00 +0000 https://aric.isite.dev/index.php/2020/09/14/update-data-in-your-cds-using-azure-sql-server-azure-functions-or-connectors-from-sql-data/ Recently, while working on a project that needed to update the exchange rates in CDS, I was tasked with finding a solution that would be able to retrieve data from a SQL Server hosted on an Azure Virtual Machine. There were many different approaches, and security was the main concern, but I decided to do a deep dive and test how Power Automate would be able to accommodate these requests.

The post Update Data in your CDS using Azure SQL Server, Azure Functions or Connectors from SQL Data appeared first on Aric Levin's Digital Transformation Blog.

]]>
Recently, while working on a project that needed to update the exchange rates in CDS, I was tasked with finding a solution that would be able to retrieve data from a SQL Server hosted on an Azure Virtual Machine. There were many different approaches, and security was the main concern, but I decided to do a deep dive and test how Power Automate would be able to accommodate these requests.

The first thing, of course, was creating the SQL Server database and adding the data to it. I created a table in SQL Server that would be updated on a regular basis containing the exchange rates that I needed. For the purpose of this post, I used Azure SQL Database, and not a managed instance or a SQL Server hosted on an Azure virtual machine. The image below shows the table and values that were added to the database.

Exchange Rates Solution - SQL DB Design

You can see in the image that we store the Exchange Rate Code, the Exchange Rate Date (in both date and string format) and the Exchange Rate value on that particular date.

Next, I created an entity in CDS to store that information. I called the new entity Transaction, and added fields for the Transaction Amount, the Exchange Rate and the Transaction Amount in US Dollars. When adding a new record to the Transaction table, we will only populate the Currency and the Transaction Amount.

The first test was to create a Power Automate flow that would be triggered on the creation of a new Transaction record, retrieve the code name of the currency, and then call Azure SQL (using a username and password connection) to get the rows that correspond to the query that we provided.

I then initialized a CurrentDate variable that would contain today's date in the format of the date string that I created in the database. The formula for that date was:
formatDateTime(utcNow(), 'yyyyMMdd')

I used the Get rows (V2) action, adding a Filter Query that specifies a condition where the Exchange Rate Code is the code I previously retrieved and the Date is the current date set in the previous step. The Select Query would return the Exchange Rate value that I needed.

The image below shows the steps to get here.

Exchange Rate Solution - Power Automate Flow #1

Next, although I knew I would have a single result, I still used an Apply to each, since I used Get rows for my SQL Server connection. I added the value property to the Apply to each action, and then added a Compose step to calculate the amount in US dollars. The calculation here is slightly more complex, but still fairly easy if you know the names of your actions and items. It uses the mul function to multiply two values, and converts each of the values to float to allow the multiplication to happen.

mul(float(items('Apply_to_each_3')?['ExRateValue']), float(triggerOutputs()?['body/nyc_transactionamount']))

The final step is to update the Transaction record, passing in the Exchange Rate value from the SQL query results and the Transaction Amount in US dollars. The image below shows the logic that is built in the Apply to each section.

Exchange Rate Solution - Power Automate Flow #2

Next we save the flow, and we need to test this logic. To test it, we navigate to our model-driven app and provide the Currency and the Transaction amount, as shown in the image below. As the flow was created to trigger on the creation of a new transaction record or an update of the total amount, it will execute. The image shows the information that was entered in the record before the flow executed; you will notice that the transaction number, transaction exchange rate and transaction amount (USD) are left empty.

Exchange Rates Solution - New Transaction record (CDS)

You can wait approximately 5-10 seconds and then see the results in the flow, as shown below:

Exchange Rates Solution - Completed Transaction record (CDS)

Now that we saw that we could do this by connecting to SQL Server directly, I wanted to test a few other alternatives. I created an Azure Function that would connect to the SQL Server and retrieve the same value for me. The main benefit is that this Azure Function would be available across the board, even to callers that do not have access to SQL Server. You will need to add the System.Data.SqlClient NuGet package to your Azure Function project and reference it in the header of your file. The function code expects two values as part of the query string: the currency code and the formatted date. The code is not fully optimized, but you can see it below:

Exchange Rate Solution - Azure Api function (Code)
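
The function itself is only shown as an image above (it uses the System.Data.SqlClient package). As a rough stand-in for the lookup it performs, here is a minimal PowerShell sketch; the table name (ExchangeRates), the ExRateCode and ExRateDateString column names, and the connection-string setting are all assumptions for illustration, with only the ExRateValue column taken from the flow expression referenced earlier:

  # Sketch of the exchange-rate lookup (assumed names throughout, not the original function code)
  param(
      [string]$Code = 'EUR',
      [string]$DateString = (Get-Date -Format 'yyyyMMdd')
  )

  $connectionString = $env:ExchangeRatesDb   # assumed app setting holding the Azure SQL connection string

  $query = 'SELECT TOP 1 ExRateValue FROM ExchangeRates WHERE ExRateCode = @code AND ExRateDateString = @date'

  $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
  $command    = New-Object System.Data.SqlClient.SqlCommand($query, $connection)
  [void]$command.Parameters.AddWithValue('@code', $Code)
  [void]$command.Parameters.AddWithValue('@date', $DateString)

  $connection.Open()
  try {
      $rate = $command.ExecuteScalar()   # the single exchange-rate value, or $null when no row matches
  }
  finally {
      $connection.Close()
  }

  $rate   # return the rate so the HTTP response can use it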

Now that we have created the Azure Function, let's take a look at what the modified Power Automate flow will look like. The trigger and the first three actions will be the same. Instead of Get rows, we would have an HTTP request, and we would pass the formatted URL containing the code and date parameters that we need to pass to the Azure Function. We would use a Compose action, similar to what we did before, to multiply the result from the Azure Function (the exchange rate) by the total amount of the base currency: mul(float(body('HTTP')), float(triggerOutputs()?['body/nyc_transactionamount']))

Finally we would update the record in CDS with the values of the Exchange Rate and the result of the multiplication. The image below shows the final flow:

Exchange Rate Solution - Power Automate flow calling Azure Api function

I ran the same tests in CDS and the results were the same, this time going through an API, so that the logic could be called from anywhere.

Now that I saw that this works, I wanted to add another option. I built a custom connector that would call the Azure API, so that it could be accessed directly from a flow. This worked nicely as well. The image below shows the test that I ran within the connector builder; the connector can be added to a Power Automate flow easily.

Exchange Rate Solution - Custom Connector

Although this entire process is doable, it is somewhat cumbersome, as you have to update your exchange rates on a daily basis and make sure that the data in your database stays in sync. There are also APIs available that allow you to simply call an external service and retrieve the data that you are looking for.

One of these solutions is called XE Currency Data, and for $800/year you can get 10,000 API requests per month. Additional requests are of course available at a higher price. I have not reviewed other vendors with a similar API, but this seems to be something that should be readily available and maybe even free for smaller request volumes.

The post Update Data in your CDS using Azure SQL Server, Azure Functions or Connectors from SQL Data appeared first on Aric Levin's Digital Transformation Blog.

]]>
Using the Maker Portal to Export to Data Lake http://aric.isite.dev/azure/post/cds-export-azure-data-lake/ Sun, 06 Sep 2020 06:46:00 +0000 https://aric.isite.dev/index.php/2020/09/06/using-the-maker-portal-to-export-to-data-lake/ In my current role the matter of using a data lake has come up, and I wanted to get ready in case this implementation becomes a need. I followed instructions from some of the published Microsoft blogs, and this blog post provides step-by-step instructions on how to configure the Azure Storage account to enable the Export to Data Lake feature, as well as how to run the process from within the Power Apps Maker Portal.

The post Using the Maker Portal to Export to Data Lake appeared first on Aric Levin's Digital Transformation Blog.

]]>
In my current role the matter of using a data lake has come up, and I wanted to get ready in case this implementation becomes a need. I followed instructions from some of the published Microsoft blogs, and this blog post provides step-by-step instructions on how to configure the Azure Storage account to enable the Export to Data Lake feature, as well as how to run the process from within the Power Apps Maker Portal.

First things first: create the Azure Storage account. Assuming that you already have a resource group, we will use mostly default settings after clicking the New button on the Storage accounts page. There we select the Subscription and Resource Group, provide a name for the new storage account, select a location (which has to be the same region as your Common Data Service instance), select the performance tier (Standard should be fine for testing), set the account kind to StorageV2 (general purpose v2), and for replication you should be good to go with RA-GRS, or read-access geo-redundant storage. Leave the blob access tier as Hot. The image below shows the settings that we used for this Storage account.

CDS Export to Data lake - Create Storage account (Basics)

Make sure that you don't press the Review + create button yet, as there is another important setting that has to be set before you create the Storage account. Navigate to the Advanced tab, and under the Data Lake Storage Gen2 section there is a setting called Hierarchical namespace. Set it to Enabled, as shown in the image below.

CDS Export to Data lake - Create Storage account (Advanced)
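
If you prefer to script the account creation rather than use the portal, an Az PowerShell sketch along these lines should produce an equivalent account; the resource group, account name and region are placeholders, and the important part is enabling the hierarchical namespace, which cannot be changed later:

  # Create a StorageV2 account with the Data Lake Storage Gen2 hierarchical namespace enabled
  New-AzStorageAccount `
      -ResourceGroupName 'my-resource-group' `
      -Name 'mydatalakestorage001' `
      -Location 'eastus' `
      -SkuName 'Standard_RAGRS' `
      -Kind 'StorageV2' `
      -AccessTier 'Hot' `
      -EnableHierarchicalNamespace $true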

You can now go ahead and create the Storage account. Once the Storage account creation has completed, click on the Go to resource button to navigate to your Storage account, and then under the Settings group of your Storage account click on Configuration. Verify here that under the Data Lake Storage Gen2 section the Hierarchical namespace is set to Enabled (this setting cannot be changed after the account has been created). See the screenshot below for the Configuration screen.

CDS Export to Data lake - Storage account (Configuration)

The next part is to configure the Export to data lake feature from within the Power Apps Maker portal. Navigate in your browser to make.powerapps.com (within the same tenant as your Azure subscription, in case you have multiple). Expand the Data section in the left navigation area and click on Export to data lake. You will see the screen shown below.

CDS Export to Data lake - Export (Home)

Click on the New link to data lake button to start the configuration. On the first page of the configuration, you will need to select the Azure Storage account that we just created. Notice that before the selection you see a message specifying where your environment is located and that you can only attach storage accounts within a particular location or locations. This is why we mentioned earlier to make sure the Storage account you created is within the same region as your Power Platform environment.

Select the Subscription, Resource group and Storage account as shown in the image below, and then click on the next button.

CDS Export to Data lake - Export (Select Storage account)

Next, you will need to select the entities that you want to add to your Azure Data lake. You can select all entities or only a subset. In our case, we selected a subset of entities, as this was only a test run. When you have selected the entities that you want, click on the Save button (as shown in the image below).

CDS Export to Data lake - Export (Add entities)

NOTE: The process started and I received a 503 (Forbidden) error. I tried to look online for any encounters of this error, but could not find anything concrete. I clicked on the Back button, and followed the process again, and this time it was successful.

Once the process completed, you will see the linked data lake in the list. You can click on More options (…) and select the Entities link to see the status of your synchronization.

CDS Export to Data lake - Export (Linked data lake)

CDS Export to Data lake - Export (Linked entities sync status view)

When the synchronization is complete, if you want to add more entities you can click on the More options menu of the linked data lake view and select Manage entities. This will give you a screen to add additional entities. You can also add entities to your data lake directly from within the Entities view under Data: simply click on the entity name, click on the drop-down arrow next to Export to data lake on the command bar, and select the name of the data lake that you previously created. This is shown in the image below.

CDS Export to Data lake - Add new entity to existing export

Now that we have finished the synchronization process and all the entities have been synchronized with Azure Data Lake, let's go back to Azure and see the results. In Azure, go back to the Storage account that you created for the data lake, and click on Storage Explorer (preview). You will see a few groups of available Azure Storage options: Containers, File Shares, Queues and Tables. Expand Containers within the tree, and select the container whose name contains the instance name that you used for the data lake; the name will be in the format commondataservice-environmentName-org-Id. In the screenshot below, you can see that we have the account, contact, lead and opportunity folders, which are the entities that we selected for the Azure Data Lake sync. You will also notice model.json, which contains the schema of all your data lake entities. You can download the schema, unminify it and view the metadata that is available in it. The image below shows the Azure Data Lake container.

CDS Export to Data lake - Storage Account Container View
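
If you would rather inspect the exported schema from PowerShell than through Storage Explorer, a sketch like the following could be used; the storage account and container names are placeholders, and it assumes the Az.Storage module with the Data Lake Gen2 cmdlets is installed:

  # Download model.json from the data lake container and list the entity names it describes
  $ctx = New-AzStorageContext -StorageAccountName 'mydatalakestorage001' -UseConnectedAccount
  Get-AzDataLakeGen2ItemContent -Context $ctx -FileSystem 'commondataservice-yourenv-org-id' -Path 'model.json' -Destination '.\model.json' -Force
  (Get-Content '.\model.json' -Raw | ConvertFrom-Json).entities | Select-Object name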

If we click on one of the folders within the Azure Data Lake Storage account, we will see a CSV file that contains the data from that entity, as well as a Snapshot folder which contains a point-in-time view of the data before any changes are made based on the creation, update or deletion of data within our Common Data Service. The image below shows the items within the account folder.

CDS Export to Data lake - Storage Account Folder View

I hope that this provides some insights for anyone who plans to implement synchronization with Azure Data Lake.

The post Using the Maker Portal to Export to Data Lake appeared first on Aric Levin's Digital Transformation Blog.

]]>
The Road to modern Virus Scanning http://aric.isite.dev/azure/post/road-virus-scan/ Thu, 02 Jul 2020 05:34:00 +0000 https://aric.isite.dev/index.php/2020/07/02/the-road-to-modern-virus-scanning/ I have been working in the Government space for a few years now, and most implementations of the Dynamics and Azure tenants and environments are hosted in the Government Cloud. This means that there are a lot of restrictions that we have to deal with, not only from Microsoft but also from internal IT policies.

The post The Road to modern Virus Scanning appeared first on Aric Levin's Digital Transformation Blog.

]]>
I have been working in the Government space for a few years now, and most implementations of the Dynamics and Azure tenants and environments are hosted in the Government Cloud. This means that there are a lot of restrictions that we have to deal with, not only from Microsoft but also from internal IT policies.

A few years back we launched our first Dynamics application where one of the requirements was the ability to scan files uploaded by end users, whether from the Dynamics application or from Dynamics Portals. These documents would be uploaded to an Azure Blob Storage container and, as they were uploaded, copied to a separate quarantine container until they could be scanned.

At the time our options for a virus scanning solution were limited. We had an on-premises McAfee virus scanning appliance available to us, and we ended up with a solution that would check every few minutes whether there were pending quarantined uploads, scan them, and then move them from the quarantine container back to the clean container.

The diagram below shows the high-level solution that was implemented.

Virus Scanning Solution - Scheduled Run

This solution worked fine when the traffic was not high, but at times of high traffic it would not finish processing the files in the allocated time, and we needed a different solution. The heavy traffic was mostly experienced in the last few months during the COVID-19 pandemic, when the number of applications that we received was substantially higher.

We needed a solution with a quicker turnaround. We have used the Azure Service Bus in previous projects to pass information between our Dynamics environment and our on-premises servers, so this should work. We would change the process to handle this in real time: as a file is uploaded to our Azure Storage container, we would immediately send an Azure Service Bus message.

I have written a few posts about Azure Service Bus in the past, so if you are interested in that implementation, click here.

This solution would call the Azure Service Bus listener as soon as the file is uploaded and send it to the virus scanner. We could also bypass the quarantine container entirely and only send the file there after the scan, if it turned out to be infected. The diagram below shows the new solution.

Virus Scan Solution - Real Time (Azure Service Bus)

As I mentioned, we are in GCC and there are a lot of limitations, both in the list of available connectors and in the implementations that can be done, but I wanted to address this as if it were done in the Commercial Cloud.

I noticed a couple of weeks ago that Microsoft announced the availability of a new virus scanner connector called Virus Total. I was not aware of other options, but when I did some searching I found three connectors that have the capability of scanning documents (or document streams): Virus Total, Cloudmersive Virus Scan and Microsoft Defender ATP. This was great; it would simplify the logic.

Regardless of which virus scanner you are using, you will need to get an API key from the vendor of the virus scanning connector in order to establish a connection. Depending on your scanning load, this can be free or cost you some money. I think most of these vendors offer about 4,000 free scans a month.

If you are using Dynamics Portals or Power Apps Portals, you can upload your documents to either an Azure Blob Storage container or SharePoint. The following flow executes when a new file is uploaded to a SharePoint folder, scans the file for viruses and creates a CDS record with the status of a successful or unsuccessful scan. Let's review this step by step.

The first part is our trigger. When a new document is uploaded to SharePoint (or Azure Blob), the flow will be triggered to get the content of that document. In the case of SharePoint, a single step will provide us with the content of the document. If using Azure Blob, we will need an additional step to get the content of the blob based on the path of the file.

Virus Scan Solution - Power Automate Trigger (SharePoint or Azure Blob)

Next, we will call the Scan File for Viruses action. In this case we used the action from Cloudmersive, but any of the connectors should work just fine for this.

Virus Scan Solution - Scanning via ISV Connector

After the scanning is complete, we will add a condition to our flow that checks the result of the scan. CleanResult will return True if there are no viruses and False otherwise. We can then determine what action we want to take: delete the file, move it to a quarantine container or folder, write a record, etc. Your choice. In our case I just wrote the result to CDS.

Virus Scan Solution - Post Scanning

That is basically it. You can add additional logic as needed, but this is all it takes. Again, if you are in the Government Cloud or your IT department is blocking certain connectors, this might not be the solution for you, but if you are able to implement it, it might save you a lot of trouble and headaches.

The post The Road to modern Virus Scanning appeared first on Aric Levin's Digital Transformation Blog.

]]>