Fine-Grained Access Control: Building Custom Roles in Azure via Terraform

When it comes to Azure roles, it is worth recalling the early days of Azure, when access was governed by just three roles: Account Administrator, Service Administrator, and Co-Administrator. Over time, Azure introduced a more modern approach to role management through Azure Role-Based Access Control (Azure RBAC). With around 70 built-in roles, including Owner, Contributor, Reader, and User Access Administrator, Azure RBAC offers a comprehensive system.

While a detailed discussion on Azure RBAC could span an entire article, I'll focus on specific aspects here. Beyond basic access control (managing users and groups), you can customize roles and define scopes. Microsoft Entra ID roles, formerly known as Azure Active Directory roles, such as Global Admin, Application Admin, and Application Developer, are also noteworthy.

For a detailed understanding of scopes, refer to this link. Additionally, Microsoft provides a helpful topology to visualize these roles.

The mentioned roles are built-in and provided by Azure for assignment. However, what if these roles don't offer the needed access? In such cases, custom roles become essential. Whether enforcing specific access restrictions or adhering to Azure RBAC best practices like the principle of least privilege, custom roles allow fine-grained control.

Now, returning to our primary objective: Azure RBAC is built on Azure Resource Manager, which provides access to Azure resources such as storage and compute via the SDK, PowerShell, the Azure CLI, or the Portal. We will use Terraform to deploy our custom role to our Azure environment, so Terraform will be our bridge to the cloud.

I'm tackling a scenario that spotlights the implementation of a specific custom role, let's call it data-factory-reader. The aim is to grant a select group of engineers in the tester Azure AD group access to certain data factories within our Azure subscriptions. Their mandate? Reading data, and nothing more. Let's zoom in on the code snippet for this functionality.

"data-factory-reader" = {
  description = "This role allows to read data factory instances and their child resources"
  permissions = {
    actions          = [...]
    not_actions      = [...]
    data_actions     = []
    not_data_actions = []
  }
}

In the snippet above, the first two lines set the name and description, both of which are mandatory. We then specify which actions are allowed and which are not. This custom role is defined in its own file, with the primary implementation residing in a separate file, following whatever naming convention makes it clear that a role is being built there.

Considering that the built-in roles offer more permissions than needed for our case (Data Factory has only a contributor role), we have tailored our own. To do so, we explored the Azure Resource Provider Operations page, pinpointing the operations used in built-in roles and selecting only those that align with our requirements.

In this context, our focus lies in defining a role with precision, with a set of permissions specified within its properties. The available properties are actions, not_actions, data_actions, and not_data_actions. For a detailed breakdown, refer to the documentation here.
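As a sketch of how these properties might be filled in for the data-factory-reader role, consider the following. The operation strings below are examples of read operations from the Microsoft.DataFactory resource provider; treat the exact list as an assumption to verify against the Azure Resource Provider Operations page before using it.

```hcl
# Hypothetical locals map holding the custom role definition.
# Operation names are illustrative; confirm them against the
# Azure Resource Provider Operations reference.
locals {
  custom_roles = {
    "data-factory-reader" = {
      description = "This role allows to read data factory instances and their child resources"
      permissions = {
        actions = [
          "Microsoft.DataFactory/factories/read",
          "Microsoft.DataFactory/factories/datasets/read",
          "Microsoft.DataFactory/factories/pipelines/read",
          "Microsoft.DataFactory/factories/linkedServices/read",
        ]
        not_actions = [
          "Microsoft.DataFactory/factories/delete",
        ]
        data_actions     = [] # control plane only: no data-plane permissions
        not_data_actions = []
      }
    }
  }
}
```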

Delving into specifics, our purview includes data factories, linked services, data pipelines, datasets, and tables. However, the primary objective is to safeguard against inadvertent deletions, especially concerning data factories and associated credentials. Imagine assigning this custom role to an Azure AD group for testers, granting them read access while reserving management responsibilities for another team, such as the DevOps AD group.

For a graphical representation of these properties, consult the visual guide below.

Actions and not_actions are quite easy to grasp: you define what is or is not allowed at the resource level. Right after Microsoft.DataFactory/ comes the resource type, such as datafactories/, and the last segment is the operation itself, which can be read, write, or delete; depending on the use case, we choose the desired action. For data_actions and not_data_actions the idea is the same, but we go one level deeper and define what is or is not allowed on the data that resides within the resource. Since we are managing only top-level resources, the control plane to be precise, we have defined everything within the actions property. Our main goal here is to give our engineers access to specific parts of the data factory so that they can read it; the data itself should not be altered, so we stay out of the data_actions area, where that data could be touched.
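To make the control-plane versus data-plane distinction concrete, here is a hedged sketch. The blob operation shown under data_actions comes from the storage resource provider and is included only for contrast; our Data Factory role leaves data_actions empty.

```hcl
# Illustrative fragment: control-plane vs. data-plane permissions.
permissions = {
  # Control plane: operations on the resource itself.
  actions = [
    "Microsoft.DataFactory/factories/read", # read the factory resource
  ]
  not_actions = []
  # Data plane: operations on the data inside a resource. A storage
  # role, for example, would list blob reads here; our role grants none.
  data_actions = [
    # "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
  ]
  not_data_actions = []
}
```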

A question arises: if we have, for example, a Reader role assigned to a user at the subscription level, which is inherited all the way down to the resource level, why do we need all of this? That role provides only basic read functionality, which is why we extend it with specifics. If you want to see the datasets within a data factory, or the data factory's resource health, these specific permissions need to be in place. Data Factory is one of the Azure resources with an entire ecosystem of sub-services within it.

In our journey of creating custom roles in Azure using Terraform, we've come across a crucial player in this process – the azurerm_role_definition. If you're curious to delve deeper into its intricacies, you can find more details here.
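Below is a minimal sketch of how azurerm_role_definition could be wired up, together with a matching assignment to the tester AD group. The resource names and scopes here are assumptions for illustration, not the article's exact implementation.

```hcl
# Sketch: define the custom role and assign it to the "tester" AD group.
data "azurerm_subscription" "current" {}

data "azuread_group" "testers" {
  display_name = "tester" # hypothetical group name
}

resource "azurerm_role_definition" "data_factory_reader" {
  name        = "data-factory-reader"
  scope       = data.azurerm_subscription.current.id
  description = "Allows reading data factory instances and their child resources"

  permissions {
    actions = [
      "Microsoft.DataFactory/factories/read",
      "Microsoft.DataFactory/factories/datasets/read",
    ]
    not_actions      = []
    data_actions     = []
    not_data_actions = []
  }

  assignable_scopes = [data.azurerm_subscription.current.id]
}

resource "azurerm_role_assignment" "testers_read_adf" {
  scope              = data.azurerm_subscription.current.id
  role_definition_id = azurerm_role_definition.data_factory_reader.role_definition_resource_id
  principal_id       = data.azuread_group.testers.object_id
}
```

Scoping the role definition at the subscription keeps it assignable anywhere below; narrowing assignable_scopes to specific resource groups would tighten this further.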

All in all, this article delves into the significance of crafting custom roles, addressing diverse motivations, such as operational needs and compliance requirements, within large organizations. Understanding the timing, purpose, and continual communication with role assignees are pivotal aspects, emphasizing the importance of both technical implementation and responsiveness to organizational needs.