Unhealthy event: SourceId=’FabricDCA’, Property=’DataCollectionAgent.DiskSpaceAvailable’, HealthState=’Warning’, ConsiderWarningAsError=false. The Data Collection Agent (DCA) does not have enough disk space to operate. Diagnostics information will be left uncollected if this continues to happen.

You will often see this warning almost immediately after your Service Fabric cluster comes up if you are using a VMSS whose VMs have smaller temporary disks (D:\).

So what’s going on here?

By default, the replicator logs for Service Fabric's reliable collections (used by the reliable system services) are stored in D:\SvcFab. To verify this, you can remote desktop into one of the VMs in the VMSS where this warning is coming from. Most people will only see this warning on the primary node type, because the services you create as a customer are generally stateless, so no stateful data logs are present on the non-primary node types.
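If you do remote in, a quick check like the one below shows how much of the temporary drive D:\SvcFab is using and how much free space is left (a minimal sketch to run on the node itself; D:\SvcFab is simply the default location mentioned above):

# Size of D:\SvcFab versus free space on the temporary drive
$svcFabGB = (Get-ChildItem -Path 'D:\SvcFab' -Recurse -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum / 1GB
$freeGB = (Get-PSDrive -Name D).Free / 1GB
"SvcFab size: {0:N2} GB, D: free: {1:N2} GB" -f $svcFabGB, $freeGB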

The default replicator log (reliable collections) size is 8192 MB, so if your temporary disk is 7 GB (Standard_D2_v2, for example) you will see the warning message in the cluster explorer as below-

Unhealthy event: SourceId=’FabricDCA’, Property=’DataCollectionAgent.DiskSpaceAvailable’, HealthState=’Warning’, ConsiderWarningAsError=false. The Data Collection Agent (DCA) does not have enough disk space to operate. Diagnostics information will be left uncollected if this continues to happen.

How to fix this?

You can change the default replicator log size by adding a fabric setting named "KtlLogger" to the ARM template, as shown below. Note that this log file size does not change (grow or shrink) once configured-

"fabricSettings": [
  {
    "name": "Security",
    "parameters": [
      {
        "name": "ClusterProtectionLevel",
        "value": "EncryptAndSign"
      }
    ]
  },
  {
    "name": "KtlLogger",
    "parameters": [
      {
        "name": "SharedLogSizeInMB",
        "value": "4096"
      }
    ]
  }
]
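Once the deployment has gone through, one way to sanity-check that the setting has been picked up is to pull the cluster manifest over PowerShell (a sketch, assuming the Service Fabric SDK is installed and the endpoint name is yours; add the certificate parameters for a secured cluster):

# Connect to the cluster management endpoint and dump the manifest;
# the KtlLogger section should show SharedLogSizeInMB = 4096
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000"
Get-ServiceFabricClusterManifest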


For VM temporary disk sizes and specs, see here- https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general

More info around configuring reliable services\manifest is here-


Service Fabric over Cloud Services, Why?

Here are some of the more interesting benefits of using Service Fabric (SF) over Cloud Services (and in general)-

  1. High Density- Unlike Cloud Services, you can run multiple services on a single VM, saving both cost and management overhead. SF will re-balance or scale out the cluster if resource contention is predicted or occurs.
  2. Any Cloud or Data Center- A Service Fabric cluster can be deployed in Azure, on-premises or even in a third-party cloud if you need to, due to an unforeseen change in your company's direction or regulatory requirements. It just runs better in Azure. Why? Because certain services are provided in Azure as a value addition, e.g. the upgrade service.
  3. Any OS- A Service Fabric cluster can run on both Windows and Linux. In the near future, you will be able to have a single cluster running both Windows and Linux workloads in parallel.
  4. Faster Deployments- As you do not create a new VM per service like in Cloud Services, new services can be deployed to the existing VMs as per the configured placement constraints, making deployments much faster.
  5. Application Concept- In the microservices world, multiple services working together form a business function or an application. SF understands the concept of an application, rather than just the individual services which constitute a business function. Unlike Cloud Services, SF treats and manages an application and its services as one entity to maintain the health and consistency of the platform.
  6. Generic Orchestration Engine- Service Fabric can orchestrate at both process and container level should you need to. One technology to learn to rule them all.
  7. Stateful Services- A managed programming model to develop stateful services following the OOP principle of encapsulation, i.e. keeping state and operations together as a unit. Other container orchestration engines cannot do this. And of course you can develop reliable stateless services as well.
  8. Multi-tenancy- Deploy multiple versions of the same application for multiple clients side by side, or do canary testing.
  9. Rolling Upgrades- Upgrade both applications and the platform without any downtime with a sophisticated rolling upgrade feature set.
  10. Scalable- Scale to hundreds or thousands of VMs if you need to, with auto scaling or manual scaling.
  11. Secure- Inter-VM encryption, cluster management authentication/authorisation (RBAC) and network-level isolation are just a few of the ways to secure your cluster in an enterprise-grade manner.
  12. Monitoring- Unlike Cloud Services, SF comes with a built-in OMS solution which understands the events raised by the SF cluster so that appropriate action can be taken. Both in-process and out-of-process logging are supported.
  13. Resource Manager Deployments- Unlike Cloud Services, which still run in the classic deployment model, SF clusters and applications use the Resource Manager deployment model, which is much more flexible and deploys only the artefacts you need.
  14. Pace of Innovation- Cloud Services is an old platform, still used by many large organisations for critical workloads, but it is not the platform which will get new innovative features in future.

More technical differences are here.

Service Fabric Log

Notes from the field on Azure Service Fabric (ASF) and some lesser-known facts-

FAQs-

  1. Why do we need a durability level of Silver or Gold? The Silver and Gold tiers allow SF to integrate with the underlying VMSS, resulting in the following features-
    • You can scale back the underlying VMSS after scaling it out, and ASF will recognise this change and will not mark the cluster as unhealthy. Also note, you cannot scale the nodes back to anything below 5, even though a Silver/Gold node type has only 3 nodes when you create the cluster.
    • Allows ASF to intercept and delay VM-level actions requested by the platform or a cluster admin, so that stateful services can maintain the minimum replica set/quorum at any point in time.
  2. Can I change the durability tier of an existing cluster/node type?
    • Yes, you can upgrade from lower levels to higher, and also go from Gold -> Silver.
  3. Why do we need a minimum of 5 nodes in the primary node type? Because-
    • To maintain quorum, you need a majority of the nodes running at any point in time in the primary node type. If you are upgrading the ASF binaries to a new version and Microsoft decides to update the host machine which hosts one of your 3 VMs, then in this situation you are 2 VMs down out of 3, which will impact the stateful system services. If you had 5 VMs, taking out 2 of 5 still leaves 3 (a majority) available.
  4. Does Microsoft support ASF cluster spanning across the multiple Azure DCs?
    • Generally speaking, it is not supported.
  5. Can I add/remove Node Types after the cluster is created?
    • Yes.
  6. Can I scale ASF?
    • Yes, via the VMSS auto/manual scale mechanism only at present (see the manual-scale sketch after this list).
  7. Can I make an unsecure cluster a secure cluster?
    • No.
  8. Can I scale stateful services?
    • Yes, by partitioning the data, which allows multiple parallel service type instances to receive the requests for their respective partitions.
  9. Each application instance runs in isolation, with its own work directory and process.
  10. Service Fabric process runs in kernel mode hence applications running under it will not be able to crash it easily.
  11. By default, the cluster certificates are added to the allowed Admin certificates list hence the template here secures both node-node and client-node comms. You can though add separate client certs for readonly and admin cluster roles.
  12. Scaling out VMSS causes other nodes in the same VMSS to change to stopping state. This is a superficial UI bug and will be fixed soon, VMs in the VMSS do not stop in reality.
  13. Can I create a SF cluster with small-sized VMs?
    • Yes, you can, but please bear in mind that when you do, the cluster may start to throw warnings; see this post to remove those warnings.
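On the scaling point above, here is a minimal manual-scale sketch with the AzureRM cmdlets (resource group and scale set names are placeholders; ASF picks the new instances up as nodes once they join):

# Manually scale out the VMSS that backs a node type
$rg   = "my-sf-cluster-rg"     # placeholder resource group name
$vmss = "nt1vm"                # placeholder scale set / node type name
$scaleSet = Get-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $vmss
$scaleSet.Sku.Capacity = 6     # desired instance count
Update-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $vmss -VirtualMachineScaleSet $scaleSet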

vNet Peering with ExpressRoute

Recently, I was working with a large healthcare provider to design their public-facing, mission-critical web platform. The customer already had an ExpressRoute connection with a 50 Mb line; not super fast, but it doesn't always need to be either.

Given that the design divided the environments into their respective vNets, we ended up creating multiple vNets, around 9, in Azure. Connecting each vNet to the on-premises network via ExpressRoute would mean consuming 9 connections/links out of 10 (20 for the premium SKU). This was sub-optimal, as it would leave no room for future projects to utilise the same circuit for on-premises connectivity, and expansion of the platform with additional environments in future would be limited as well.

So how could you avoid this problem?

VNet peering comes to the rescue here. The following diagram depicts the topology which can be used to achieve the above design-

[Diagram: vNet peering topology with ExpressRoute]

Other points-

  1. You can also use the transitive vNet ('transitive' is just a given name) for NVAs, implementing a hub-and-spoke model.
  2. vNet peering does not allow transitive routing, i.e. if three vNets are peered in a chain, vnet1, vnet2 and vnet3, then vnet1 cannot talk to vnet3.
  3. vNet peering always has to be created from both vNets to work, hence the above diagram has two arrows for each vNet peering.
  4. As all the vNets are connected to a single VPN gateway via ExpressRoute, the bandwidth between on-premises and the Azure vNets will be limited by the VPN gateway SKU selected for the ExpressRoute connection.
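For reference, this is roughly what one peering pair looks like in AzureRM PowerShell, assuming a 'transitive' vNet that holds the ExpressRoute gateway and a spoke vNet per environment (all names are placeholders):

# Peer a spoke (environment) vNet with the transitive vNet holding the gateway.
# Both halves of the peering must be created for it to become connected.
$transit = Get-AzureRmVirtualNetwork -ResourceGroupName "net-rg" -Name "vnet-transit"
$spoke   = Get-AzureRmVirtualNetwork -ResourceGroupName "net-rg" -Name "vnet-env1"

# Transitive -> spoke: let the spoke use this vNet's gateway
Add-AzureRmVirtualNetworkPeering -Name "transit-to-env1" -VirtualNetwork $transit -RemoteVirtualNetworkId $spoke.Id -AllowGatewayTransit

# Spoke -> transitive: send on-premises traffic via the transitive vNet's gateway
Add-AzureRmVirtualNetworkPeering -Name "env1-to-transit" -VirtualNetwork $spoke -RemoteVirtualNetworkId $transit.Id -UseRemoteGateways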


RESTful World Starting to Make Sense

Like many of you, I've come from the classic SOAP world, and the lack of standardisation in the RESTful world used to give me shivers. But that's not the case anymore; it's all hanging together rather nicely now.

When you start to work on the RESTful design of your APIs, the following questions arise very quickly-

  1. How do I describe my API to external parties without having to maintain documentation all the time, which can go out of sync very quickly?
  2. How do I share just the contracts with the other development teams, allowing them to work independently in a decoupled manner?
  3. How do I validate the request/response from/to the client?
  4. How do I create a test harness with load tests automatically?

So here are the answers which will hopefully give you some comfort-

Describe your API-

Swagger is the WSDL of the RESTful world. Swagger has recently been adopted by the Open API Initiative (OAI) for its OpenAPI Specification, making it a vendor-neutral industry standard. OAI is backed by key industry players like Apigee, IBM, Google and, of course, Microsoft. Notably, AWS is not a member yet, but its API Gateway is compatible with Swagger.

Type System-

JSON Schema is the XSD of the RESTful world. The OAI specification (formerly known as Swagger) currently uses the standard type system/schema language defined by JSON Schema Draft 4. JSON Schema is backed by the IETF, again making it a vendor-neutral industry standard.
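To make that concrete, here is a tiny, throwaway sketch: newer PowerShell versions (6.1 and later) ship a Test-Json cmdlet that can validate a payload against a schema of this kind (the property names below are made up for illustration):

# A draft-4 style schema for a simple request body, and a payload checked
# against it. Test-Json returns $true when the payload conforms.
$schema = @'
{
  "type": "object",
  "required": [ "id", "email" ],
  "properties": {
    "id":    { "type": "integer" },
    "email": { "type": "string" }
  }
}
'@
$payload = '{ "id": 42, "email": "ben@contoso.com" }'
Test-Json -Json $payload -Schema $schema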

Tooling-

The JSON Schema website provides a bunch of links to popular/useful tools here. If you are a .NET shop then the Newtonsoft Json Schema SDK is your best option; you can't go wrong with it.

In Azure world, Swagger is fully supported as well. As an example- API Management, API Apps and Logic Apps, they all support Swagger (now known as OpenAPI Specification).

Equally, the ARM template schemas use the JSON Schema Draft 4 specification to describe the resource templates, making them open, vendor-neutral and easy to integrate with any external tooling. This is also a great example if you want to see a production-grade schema and its design, e.g. types, constraints etc.

That said, further improvements and refinements in this space are inevitable, but I am confident the direction of travel will not change much anymore, so it's safe (in my opinion) to use the above for your RESTful design and evolve together as an industry.

Can’t Create VM/Resources in Resource Group

My customer recently ran into this problem, which will come up when you try to configure your environment properly i.e. create a resource group and give only the required access to the resources in your organisation, following the principle of least privilege. The structure looks like below-

[Diagram: subscription, resource group and user access structure]

What’s going on here?

Objective: Anthony is a subscription admin and he wants to ensure role-based access control is applied to the resource groups. He takes the following steps to achieve this-

  1. He creates a resource group called A and gives 'contributor' access to a user called 'Ben'.
  2. He then informs Ben to go ahead and start using the resource group for the project.
  3. Ben logs into the portal with his credentials and tries to create a resource.
  4. Resource creation fails with the error which looks like below- Registering the resource providers has failed. Additional details from the underlying API that might be helpful: ‘AuthorizationFailed’ – The client suneet.xxx@xxx.com’ with object id ‘b8fe1401-2d54-4fa2-b2dd-26c0b8eb69f9’ does not have authorization to perform action ‘Microsoft.Compute/register/action’ over scope ‘/subscriptions/dw78b73d-ca8e-34b9-89f4-6f716ecf833e’. (Code: AuthorizationFailed)

This will stump most people, as expected. Why? Because if you have contributor access to a resource group, surely you can create a resource, e.g. a virtual machine. So what went wrong here? Look carefully at the error message and focus on 'Microsoft.Compute/register/action' over scope '/subscriptions/dw78b73d-ca8e-34b9-89f4-6f716ecf833e'. What does this say? It's not an authorisation error for creating a resource; it is an authorisation error for registering a resource provider. This is expected: we don't want a resource-group-level identity to register/unregister resource providers at the subscription level. So how do we solve it?

Option 1

  1. Log into Azure with an identity which has a subscription level access to register a resource provider e.g. admin/owner.
  2. Using PowerShell (PoSh) register the resource providers you need at the subscription level. You can also see which providers are available and registered already. Sample script is given below-
Login-AzureRmAccount

$subscriptionId= "<Subscription Id>"
Select-AzureRmSubscription -SubscriptionId $subscriptionId

#List all available providers and register them
Get-AzureRmResourceProvider -ListAvailable | Register-AzureRmResourceProvider
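If you would rather not register every available provider, you can target just the one named in the error message (Microsoft.Compute in this case):

# Register only the provider referenced by the error, then check its state
Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.Compute"
Get-AzureRmResourceProvider -ProviderNamespace "Microsoft.Compute"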

Option 2

  1. Let the subscription admin/owner create the resource e.g. a VM.
  2. This will implicitly register the resource provider for the resources created.

Hope this was helpful.

I’ll be talking to the engineering team to see if we can improve this user experience.

Azure, a Safe Innovation Environment

Microsoft Azure allows segmentation of applications, their tiers and the respective staging environments using controls already built into Azure. Segmentation can be achieved at both the network and the user access level in Azure.

Segmentation is introduced for the following primary reasons-

  • Security
  • Performance
  • Deployment/Release Management
  • Isolated/Safe Innovation Environment

Security
To secure the platforms/applications it is important to-
Separate the application tiers into different Azure virtual network subnets with NSGs (Network Security Groups/firewalls).
This helps mitigate the impact of a breach, as only limited access is provided to the layer down the stack. It also provides a safe container for the components within a tier/subnet to interact with each other, for example a SQL/MongoDB cluster.

User Defined Routes (UDRs) in Azure can force traffic to route via a virtual intrusion detection appliance for enhanced security.
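As an illustration only, a UDR that pushes all outbound traffic from a subnet through a virtual appliance might look like this in AzureRM PowerShell (names and IP addresses are placeholders):

# Route all outbound traffic via a network virtual appliance (NVA)
$route = New-AzureRmRouteConfig -Name "default-via-nva" -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.100.4"
$routeTable = New-AzureRmRouteTable -ResourceGroupName "net-rg" -Location "westeurope" -Name "rt-web-tier" -Route $route
# The route table is then associated with the subnet via Set-AzureRmVirtualNetworkSubnetConfig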

Employ the principle of least privilege (POLP) using Azure ARM.
This helps ensure only the minimum required access is provided to the users supporting the application/platform. For example, only the infosec team will have access to manage the credentials; applications will only have read access and will not store credentials on the file system at any time. This also limits the impact of a breach in any tier.
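A hedged example of what that looks like in practice with AzureRM PowerShell, granting a team contributor rights only on their own resource group rather than on the whole subscription (all names are placeholders):

# Scope the role assignment to a single resource group
New-AzureRmRoleAssignment -SignInName "ben@contoso.com" -RoleDefinitionName "Contributor" -ResourceGroupName "project-a-rg"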

Performance
To ensure each application tier, and the applications themselves, provide guaranteed performance and quality of service (QoS), it's important to-
Implement the SoC (Separation of Concerns) principle and avoid mixing different workloads on the same tier/VM.
Understand disk IOPS thresholds and segment the storage accounts accordingly.
Understand networking/bandwidth thresholds and separate production traffic from dev/test to maintain network QoS.

Deployment/Release Management
Azure natively and fully supports agile methodology, including the concepts of continuous integration/deployment, blue-green deployments, A/B testing and canary releases. By clearly segmenting and demarcating the boundaries of the services/APIs and their environments, following microservices principles, we can deploy and upgrade the services with minimal impact on the platform. Azure natively supports microservices architecture and provides a fully managed platform (Service Fabric) for running highly sophisticated microservices in the cloud.

Isolated/Safe Innovation Environment
To ensure developers, testers and release management teams get a secure environment in which to deploy applications and the platform, it's important to implement the above-mentioned security concepts, i.e. NSGs, UDRs, and ARM RBAC/policies. A well-designed environment gives developers a safe place to try out new technologies without hindrance, so they can continue to innovate and deliver business value.

[Diagram: segmentation overview]

Migrate from AWS RDS/SQL using ADF

Recently, I came across a requirement from my customer to migrate data from the AWS RDS/SQL service to Azure for some big data analysis. The obvious choice for this sort of activity in Azure is Azure Data Factory (ADF). Now, there are many examples of ADF on MSDN covering various data sources and destinations, but a few are missing, one of which is AWS RDS.

So how do you achieve it? Simple, treat AWS RDS/SQL as an on-prem SQL Server and follow the guidance for this specific scenario using Data Management Gateway.

Essentially, you need to do the following from a very high-level perspective (a rough PowerShell sketch follows the list)-

  1. Create an instance on EC2 in AWS and configure the relevant firewall rules (as specified in the guidance).
  2. Deploy the Data Management Gateway on the above instance.
  3. Test the RDS/SQL access via the Data Management Gateway tool from the above instance.
  4. Create an ADF data factory that reads from a SQL Server linked service via the gateway.
  5. Do the mapping of the data.
  6. Store it in the destination of your choice (e.g. Blob storage).
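The ADF side of steps 4-6 looks roughly like this with the ADF v1 AzureRM cmdlets (a sketch only; names, the location and the linked service JSON file are placeholders, and the Data Management Gateway itself is installed and registered on the EC2 instance):

$rg  = "adf-rg"
$adf = "aws-to-azure-adf"

# Create the data factory and the logical gateway that the EC2-hosted
# Data Management Gateway registers against
New-AzureRmDataFactory -ResourceGroupName $rg -Name $adf -Location "WestEurope"
New-AzureRmDataFactoryGateway -ResourceGroupName $rg -DataFactoryName $adf -Name "aws-dmg"

# Linked services, datasets and the pipeline are then defined in JSON files,
# e.g. the RDS/SQL connection that goes via the gateway
New-AzureRmDataFactoryLinkedService -ResourceGroupName $rg -DataFactoryName $adf -File ".\RdsSqlLinkedService.json"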

Adding Authentication via ARM for API Apps/Gateway

[API Apps Preview 2 has changed the auth model described below; please refer here for details about what's changed.]

This one was left out for a long time, I must admit. Since I joined Microsoft, I have been kept very busy learning about my new role, the organisation and the on-boarding process. Today is the first weekend I have had some breathing space to revisit this, but in the meanwhile I had some excellent pointers from Gozalo Ruiz (Lead CSA in my team), which led me to resolve this faster than I would have otherwise.

Here's the problem: I had a fully automated ALM pipeline configured to build, test and deploy an API App to Azure from VS Team Services (previously known as VS Online), except that there was no easy way to configure the authentication identity for the gateway. For those who don't know how API App authentication works (this is set to change now; the gateway will not be a requirement in future), each API App is fronted by a gateway which manages the authentication for every API App within the same resource group. I needed to secure my API via Azure AD, so I used Azure Active Directory as a provider in the gateway (see this post if you want to learn a bit about the authentication mechanism in API Apps; it's a topic in itself though).

Here’s the screenshot of the configuration which the gateway should have been populated with via ARM deployment.

[Screenshot: gateway identity/authentication configuration]

The solution is simple: populate the relevant appSettings for this configuration when you create the API App with the gateway. These weren't easy to find (wish they were), but here they are for your use. Refer to the complete template here.

"appSettings": [
 {
 "name": "ApiAppsGateway_EXTENSION_VERSION",
 "value": "latest"
 },
 {
 "name": "EmaStorage",
 "value": "D:\\home\\data\\apiapps"
 },
 {
 "name": "WEBSITE_START_SCM_ON_SITE_CREATION",
 "value": "1"
 },
 {
 "name": "MS_AadClientID",
 "value": "21EC2020-3AEA-4069-A2DD-08002B30309D"
 },
 {
 "name": "MS_AadTenants",
 "value": "mycompany.com"
 }
]

If you are using identity providers other than AAD, you could use one of these instead (I've not tested these, but they should work in theory)-

MS_MicrosoftClientID
MS_MicrosoftClientSecret

MS_FacebookAppID
MS_FacebookAppSecret

MS_GoogleClientID
MS_GoogleClientSecret

MS_TwitterConsumerKey
MS_TwitterConsumerSecret