Free Weekly Newsletter
The Newsletter for AI Governance Professionals

ClariGov Newsletter

Clarity on AI governance. Every Tuesday.

Most AI governance content tells you what happened. It does not tell you what to do about it. ClariGov Newsletter takes one development that matters each week — a regulatory update, a real governance failure, a framework shift — and gives you the analysis you need to understand it, explain it, and use it in your next client conversation or governance review.

Five to seven minutes. Every Tuesday. Free, and always will be.

Every Tuesday
5–7 min read
Free always
No sponsored content

No spam. Unsubscribe any time. Read by practitioners in the UK, UAE, and India.

THE SIGNAL PROBLEM

There is no shortage of AI governance content. There is a severe shortage of useful AI governance content.

You already know this. You have subscribed to things that turned out to be vendor marketing in a newsletter format. You have read regulatory commentary that summarised the headline without touching what it requires you to actually do. You have followed accounts that produced confident takes on frameworks they had not applied in practice.

ClariGov Newsletter exists for the practitioners who are done with that.

01

Vendor publications have an agenda

Every platform producing AI governance content has a product to sell. The analysis is shaped by what serves the vendor, not what equips the practitioner. This newsletter has nothing to sell you. It has a perspective, and the credentials to back it.

02

Regulatory commentary is written by lawyers, for lawyers

Understanding what a regulation says is different from understanding what it requires you to actually do. Most commentary stops at the first question. ClariGov Newsletter answers the second — the one you need answered in client meetings, board papers, and governance reviews.

03

LinkedIn commentary confuses opinion with analysis

Hot takes are easy to produce and popular to consume. They are not the same as structured analysis from someone who has applied the framework inside a real organisation under real regulatory scrutiny. The difference is apparent the moment you try to use either in practice.

04

Volume does not equal value

A newsletter that covers five developments shallowly is less useful than one that covers one development completely. ClariGov Newsletter is built on the second model. One topic. Covered properly. Every week.

COVERAGE

Seven domains. One issue that moves you forward.

Every issue draws from the seven domains of AI governance mastery. The domain rotates each week — foundations, risk, regulation, standards, frameworks, organisational practice, and the professional craft of doing this work. Nothing is covered for its own sake. Everything connects to what a practitioner needs to do their job better.

Domain 01

AI Foundations for Governance

What you need to understand about how AI systems work — LLMs, agentic AI, the model lifecycle — to govern them with credibility. Not engineering. The governance-relevant understanding that lets you hold your own when a developer pushes back, or when a CRO asks whether the hallucination risk has been assessed.

Domain 02

AI Risk — What Failure Looks Like

Real incidents, structured risk taxonomy, and how to assess risk in situations without a playbook. The IBM AI Risk Atlas and MIT AI Risk Repository, applied to current developments — so you understand risk as a discipline, not a list of worries.

Domain 03

Regulations and Law

EU AI Act. UK regulatory guidance from the FCA, ICO, and PRA. UAE frameworks. Not what the regulation says — what it requires you to do, in your jurisdiction, for your sector, in plain language that holds up when you are the one explaining it to a board.

Domain 04

Standards and Certifications

ISO 42001 in practice. What auditors actually look for. How standards interact with regulatory requirements — and the honest assessment of what certifications prove and what they leave entirely unaddressed.

Domain 05

Frameworks and Principles

The NIST AI RMF, the OECD AI Principles, the EU HLEG Trustworthy AI framework, and the A.C.E. Framework — applied to real situations rather than described in the abstract. Governance architecture as something you use, not something you cite.

Domain 06

Organisational AI Governance

How to build governance that works inside a real organisation with real constraints. Use case evaluation, shadow AI management, policy development, board reporting, and stakeholder engagement. The domain most courses neglect entirely. The one practitioners find hardest.

Domain 07

The Professional Craft

The craft of doing this work well: advising clients, briefing boards, explaining positions under challenge, and building a sustainable practice. The domain that turns governance knowledge into a governance career.

FORMAT

One issue. One thing understood completely.

Each issue arrives on Tuesday. Five to seven minutes. One development in AI governance that matters right now — analysed from first principles, connected to the regulatory and framework landscape, closed with a specific question that moves your practice forward.

No roundups. No link dumps. No 'ten things that happened in AI this week.' One topic. Covered properly.

THE DEVELOPMENT

The specific regulation, incident, framework update, or capability shift at the centre of this issue — stated clearly, with the context a practitioner needs to place it in the broader landscape.

THE ANALYSIS

What this development means. For organisations deploying AI. For the regulatory picture. For your work. Not a summary of what happened. An analysis of what it changes — and for whom.

THE FRAMEWORK CONNECTION

How this development connects to the relevant standards, frameworks, and regulations. And how to use those connections in conversations with clients, colleagues, or the board — so the analysis is immediately usable, not just interesting.

THE CLARIGOV QUESTION

The one question this issue leaves you with. To take into your next client conversation, your next governance review, or your next cycle of applied learning. The point at which reading becomes doing.

THE PRIMARY SOURCE

The regulation text, standards document, or original research this issue draws from. So you can go to the source yourself. Always the source. Never a summary of a summary.

SAMPLE ISSUE

What a typical issue looks like.

The following is an excerpt from a recent issue. Subscribe to read the full archive.

ClariGov Newsletter · Issue 14 · Tuesday, 11 March 2026

What Article 13 of the EU AI Act actually requires — and what most deployers are missing

Article 13 of the EU AI Act requires providers of high-risk AI systems to design them so that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately. Most commentary on this requirement has focused on what providers must disclose about their systems. Considerably less attention has been paid to what deployers must do with that information once they have it — and the gap between Article 13's provider obligations and the deployer obligations set out in Article 26 is precisely where most mid-market organisations are currently exposed.

The deployer's side of this goes further than simply filing whatever documentation the provider supplies under Article 13(3). Article 26 requires deployers to use high-risk systems in accordance with the instructions for use and to assign human oversight to people with the necessary competence and authority. That is an active obligation. It requires the deployer to assess whether the system's outputs are genuinely interpretable in the context of their specific use case, and to implement additional measures where they are not...

EU AI Act · Deployer Obligations · Transparency
Subscribe to read in full.
WHAT YOU GET

What reading this consistently actually does for your practice.

Forty-eight issues a year. Five to seven minutes each. That is four to six hours of structured, practitioner-grade analysis annually — at no cost, with no agenda, from someone who applies this knowledge in real organisations every week. Here is what it compounds into over time.

01

You stay current without the noise

Every relevant regulatory development, standards update, and framework shift — filtered for what actually matters to a practitioner in a regulated environment. Nothing that does not change how you work makes it into an issue.

02

You build a structured knowledge base

Each issue connects to the seven domains of AI governance mastery. Over time, the issues build into a coherent body of applied knowledge. Not a folder of disconnected articles. A map you can navigate when a client or a board asks you something you did not expect.

03

You speak with authority in the right rooms

The analysis in each issue gives you what you need to brief a board, advise a CRO, or assess a client's regulatory exposure — with the primary sources to back every position you take. Authority comes from preparation. This is preparation, delivered weekly.

04

You apply what you read, not just consume it

Every issue closes with the ClariGov Question — the one application prompt that turns reading into doing. If you cannot use it in practice, it is not knowledge. It is noise. Every issue is written with that test applied from the first word.

WHO THIS IS FOR

Serious professionals. Any background. One shared commitment.

ClariGov Newsletter is written for experienced professionals building genuine capability in AI governance — not for people collecting credentials or tracking the AI news cycle. The background varies. The commitment to doing this properly does not.

Experienced professionals from technology, risk and compliance, data governance, GRC, or enterprise architecture who are building AI governance expertise and need structured signal, not aggregated noise.

AI governance practitioners and consultants who advise regulated organisations across financial services, healthcare, professional services, or public sector — and need to stay a step ahead of the developments their clients will ask them about.

Risk and compliance professionals in regulated industries building AI governance capability into existing frameworks — and navigating the junction between traditional model risk management and the new demands of generative AI.

Heads of AI governance, responsible AI directors, and AI risk managers who need to stay current without reading everything that crosses their screen.

Senior leaders — CROs, CTOs, Heads of Risk — who need practitioner-grade analysis rather than executive summaries written from 40,000 feet.

Professionals in the UK, UAE, and India navigating the specific regulatory landscapes of those jurisdictions — where the overlap between domestic frameworks and international standards creates obligations that generic commentary routinely misses.

WHO IT IS NOT FOR

If you want a weekly roundup of AI news, this is not it. If you are at the very beginning of understanding what AI is, the AI Foundations resources on this site are the right place to start before subscribing. ClariGov Newsletter assumes you already work in or near AI governance. Its job is to deepen and sharpen that work — not to introduce it.

ON WHY THIS IS FREE

ClariGov Newsletter is free. It will stay free. In the Sikh tradition of Dasvandh — giving 10% of what you have — a substantial part of what I build is given openly. Not as a lead magnet. Not as a conversion tactic. As a genuine expression of the responsibility that comes with having knowledge that others need. If it is useful to you, it has done its job.

FROM THE AUTHOR
GD
Gurpreet Singh Dhindsa · AI Governance Practitioner
  • Responsible AI Director, Aligne
  • Co-founder, Altrum AI
  • IBM Subject Matter Expert (6 years)
  • ISO 42001 Implementor Certified
  • AIGP · CIPP/E — IAPP
  • AI Ethics, Oxford Saïd Business School
  • Author, Trusted Intelligence (2026)

"I spent three years learning AI governance the hard way — navigating fragmented regulations, abstract frameworks, and certifications that measured what I could recall rather than what I could do in a client meeting. I built a system to get through it. This newsletter is part of that system, made available to anyone who is serious about doing this properly."

I am a practitioner, not a commentator. I advise regulated enterprises across the UK and UAE on AI governance every week. I hold the ISO 42001 Implementor certification, the AIGP, and the CIPP/E. I spent six years as an IBM Subject Matter Expert. I have been inside the organisations where governance fails quietly — before it reaches the regulator, before it becomes a headline, when it is still the kind of problem that gets minimised in internal meetings because nobody wants to be the one who raised it.

What I write in this newsletter comes from primary sources and applied experience. I have navigated the EU AI Act, the PRA's SS1/23, the UAE's AI governance frameworks, and ISO 42001 with real clients under real regulatory pressure. That is not a credential to display. It is the context that makes the analysis in each issue usable rather than just credible.

The one test I apply to every issue: if you cannot use it in practice, it is noise. Every sentence is written with that test in mind. Not knowledge for its own sake. Applied wisdom — which in Sanskrit and Punjabi is called Pragya. That distinction is the whole point.

gurpreetdhindsa.com · London · Dubai · India
SUBSCRIBE — IT'S FREE

One issue, every Tuesday. No noise. No agenda. No cost.

Join practitioners in the UK, UAE, and India who read ClariGov Newsletter every week to stay current, build structured knowledge, and do their work with more confidence.

No spam. Unsubscribe any time. Read in 5–7 minutes every Tuesday.