
Running performance audits on a public-facing website is essential; in the past, these audits were conducted manually. Recently, I was asked to propose a solution for generating the Google Lighthouse report automatically.


What is Lighthouse?

Lighthouse is an open-source tool that analyzes web apps and web pages, collecting modern performance metrics and insights on developer best practices. You can find the repo here.

Its documentation mentions that you can run the report automatically with the Node CLI. Great start! I can run it on my machine, but how do I share the reports with other people, i.e. the business, as well as integrate them with Power BI for reporting purposes?
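For example, a minimal run from the command line might look like this (the URL and output path below are just placeholders):

npm install -g lighthouse
lighthouse https://www.example.com --output html --output-path ./report.html --chrome-flags="--headless"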


After googling around, I didn't find anything useful, so I decided to come up with my own solution.


Proposed solution

Boom! Here is the proposed solution.

Build the report on an Azure build agent, and publish it into blob storage. Simple, right?! With this approach, no dedicated Node server is required. In addition, reports stored in blob storage can easily be shared with stakeholders and integrated with Power BI.
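As a rough sketch (the account, container, and file names below are placeholders), the build agent can run Lighthouse and then push the report to blob storage with the Azure CLI, e.g. inside an AzureCLI pipeline task:

# Generate the report on the build agent
lighthouse https://www.example.com --output html --output-path ./report.html --chrome-flags="--headless"

# Upload the report to blob storage (assumes the az session is already authenticated)
az storage blob upload \
     --account-name <storageaccount> \
     --container-name lighthouse-reports \
     --name report-$(date +%Y%m%d).html \
     --file ./report.html \
     --auth-mode login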

Brilliant. The completed architectural diagram is shown below. It's a small implementation, but it still follows the Well-Architected Framework.


[Image: architectural diagram]


Operational Excellence

Triggering the report generation via Azure DevOps allows me to set up a scheduled pipeline. It provides insight into when the pipeline is triggered and sends a notification if it fails. With an infrastructure-as-code mindset, all code is managed in Azure Repos (Git) and deployed via a CI/CD pipeline.
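A minimal schedule trigger in the pipeline YAML might look like this (the cron expression and branch are placeholders):

schedules:
- cron: "0 2 * * 1-5"          # 02:00 UTC, Monday to Friday
  displayName: Nightly Lighthouse audit
  branches:
    include:
    - main
  always: true                 # run even when there are no code changes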


Security

Azure DevOps integrates with Azure AD for authentication, and RBAC is used to segregate duties within the team for performing the jobs, i.e. updating the pipeline and setting up scheduling.


Reliability

Microsoft guarantees at least 99.9% availability for the Azure DevOps service, and a self-hosted agent can be used as a failover plan for high availability.


Performance Efficiency

A single blob supports up to 500 requests per second. Since my project will not receive a massive number of requests, I'm not worried about performance at all. Yet, if you want to tune performance for your project, you can always use a CDN (content delivery network) to distribute operations on the blob, or even use a block blob storage account, which provides a higher request rate, or IOPS.


Cost Optimization

Compared with a VM-based solution, I believe this solution delivers at scale for the lowest price. Storage only costs AUD $0.31 per GB.


Hopefully you like this solution; share your thoughts if you have better options. All comments/suggestions are welcome.


The challenge in the past was that every time you developed a new web app or bot that required authentication, you had to go through all the steps, i.e. creating a service principal, granting permissions, setting credentials, storing credentials on resources, rotating credentials, etc. Now there is a better solution: managed identities for Azure resources.


One example of where you can adopt managed identities is when you want to build a web application that accesses Azure Blob Storage without having to manage any credentials.


How to create

Managed identities use a service principal under the hood. Once you have created the user-assigned identity in the Azure portal, the same way as you create other Azure resources, you can go to the target resource, i.e. blob storage, and assign a permission, i.e. a contributor role, to the user-assigned identity you just created. The last step is to go to the Azure resource from which you want to access the target blob storage, i.e. an Azure Function, and in the Identity blade, add the user-assigned identity that you just created.
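The same steps can also be scripted with the Azure CLI. Below is only a rough sketch; the resource group, identity, storage account, and function app names are placeholders:

# Create the user-assigned identity
az identity create --resource-group my-rg --name my-identity

# Grant the identity access to the target storage account (role and scope are examples)
az role assignment create \
     --assignee <principal-id-of-my-identity> \
     --role "Storage Blob Data Contributor" \
     --scope /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/<storageaccount>

# Attach the identity to the resource that needs access, e.g. a function app
az functionapp identity assign \
     --resource-group my-rg \
     --name my-function-app \
     --identities /subscriptions/<sub-id>/resourcegroups/my-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity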


Below is a minimal sketch (the storage account, container, blob, and client ID are placeholders) demonstrating how to authenticate a BlobClient from the Azure.Storage.Blobs client library using DefaultAzureCredential with a user-assigned managed identity configured.
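using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// Configure DefaultAzureCredential to pick the user-assigned managed identity
// (the client ID below is a placeholder).
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ManagedIdentityClientId = "<client-id-of-user-assigned-identity>"
});

// The storage account, container, and blob names are placeholders.
var blobClient = new BlobClient(
    new Uri("https://<storageaccount>.blob.core.windows.net/reports/report.html"),
    credential);

// Example call: check whether the blob exists.
Console.WriteLine(blobClient.Exists().Value);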

The DefaultAzureCredential discovery mechanism allows you to run the code with your signed-in account when you are running it locally, and automatically switches to the user-assigned identity when the code is deployed in Azure. Please note that your local account will need the same permissions as the user-assigned identity in Azure.

Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. Using a managed identity, you can authenticate to any service that supports Azure AD authentication without managing credentials. You can find the list of available services here.


If you are learning AZ-303, chances are you will encounter the same error when following the Exercise – Create an NVA and virtual machines (units 5-7), at least at the time I'm writing this document (11/07/2021).


I have reported this issue with the Exercise, but it might not be fixed anytime soon. So, just in case you need help completing the Exercise, I have documented the issue and the solution for resolving it.


What’s the issue

The requirement is to create a vnet with 3 subnets:

  • 10.0.0.0/24
  • 10.0.1.0/24
  • 10.0.2.0/24

[Image: vnet and subnet design]


In Unit 5 of 7, it provides the code below for creating the first VM in the dmzsubnet subnet. The problem with the command is that it doesn't specify the subnet address prefix, so by default it will be 10.0.0.0/24. You won't get any error at this stage, as it's the first subnet, although it already doesn't match the design.

[Image: the az vm create command from Unit 5, without --subnet-address-prefix]
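For reference, the Unit 5 command looks roughly like the fix further below but without --subnet-address-prefix (the resource group and VM name here are illustrative, not copied from the Exercise):

az vm create \
     --resource-group <resource-group> \
     --name nva \
     --vnet-name vnet \
     --subnet dmzsubnet \
     --image UbuntuLTS \
     --admin-username azureuser \
     --admin-password <changeme123>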

You will then get an error when you follow the Exercise along to Unit 6 of 7 with the code below: you can't create a subnet with a conflicting address.

[Image: the Unit 6 command and the resulting subnet address conflict error]

What's the solution

Add "--subnet-address-prefix" when you create the VMs. There are more optional parameters for creating a VM, which you can find here.

Here is the example code you can use:

az vm create \
     --resource-group learn-4bd2e66c-7759-446c-9a49-071b27237a7f \
     --name public \
     --vnet-name vnet \
     --subnet publicsubnet \
     --image UbuntuLTS \
     --admin-username azureuser \
     --no-wait \
     --custom-data cloud-init.txt \
     --subnet-address-prefix 10.0.2.0/24 \
     --admin-password <changeme123>