17 April 2026

The mad science of AI: risk, responsibility and reality

A few months ago, I missed an online meeting of local AI professionals, but thankfully, I was sent the AI-generated recap via email. When I clicked the session link (yes, I do have link verification security in my inbox), a Chrome window opened, and before watching the meeting, I was prompted to accept the AI use request (in this case, the AI application is called read.ai). I watched the meeting and followed the AI notes. All good so far.

A few days later, I held a meeting with a client using Microsoft Teams. The read.ai AI agent joined us, and since I hadn’t subscribed to it, I thought it belonged to the other party — until the meeting was over, and I received a recap email from the agent via my domain. Odd.

A few days later, the same scenario played out again. Time to look a little deeper.

I learned that the app belongs to an approved Microsoft 365 third-party provider. When I clicked OK in the email to review the recorded meeting, I was also inadvertently granting this app permission to add itself to my tenant without Microsoft prompting me for approval, as it does with other applications I add. Researching how to remove it seemed straightforward: locate it in Teams Apps, right-click, and select uninstall. Easy enough… or so I thought.

It reappeared, requiring more research.

OK, so now I must find it in the Teams admin portal and remove it from there too. Done… nope, it reappeared again — more research.

In the end, I learned that I also had to sign in to the Azure Portal (all with administrator-level access), navigate to Enterprise applications, find it there, and delete it. It had a level of access to my environment that I didn’t think was possible.
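For administrators who would rather script that last step than click through the portal, the same cleanup can be sketched against the Microsoft Graph API. This is a minimal sketch only: the display name "Read" is illustrative, and the admin bearer token and HTTP calls (e.g. via the requests library) are assumed, not shown.

```python
# Hypothetical sketch: building the Microsoft Graph requests that locate and
# remove a third-party service principal (enterprise application) from a tenant.
# An admin access token with Application.ReadWrite.All is assumed; the
# display name "Read" is illustrative.
import urllib.parse

GRAPH = "https://graph.microsoft.com/v1.0"

def find_sp_url(display_name: str) -> str:
    """Build the Graph query that lists service principals whose
    displayName starts with the given string."""
    flt = urllib.parse.quote(f"startswith(displayName,'{display_name}')")
    return f"{GRAPH}/servicePrincipals?$filter={flt}"

def delete_sp_url(sp_id: str) -> str:
    """Build the DELETE endpoint for a given service principal id."""
    return f"{GRAPH}/servicePrincipals/{sp_id}"
```

In practice you would GET `find_sp_url("Read")` with an `Authorization: Bearer <token>` header, confirm the returned `id` really is the unwanted app, then issue a DELETE to `delete_sp_url(id)` — the scripted equivalent of deleting it under Enterprise applications in the portal.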

So, what’s the risk here?

An application that I did not approve added itself to my Microsoft 365 tenant, with the following core permissions:

  • Data access: It can access personal information in an active message, such as phone numbers, addresses, and URLs, and may send this data to a third-party service. However, it cannot access other mailbox items (e.g. sub-inbox folders).

  • Meeting access: It can add itself to any Teams meeting it is invited to. The vendor states that it cannot join meetings on its own, but my experience doesn’t always bear this out: it adds itself and waits for permission, which can be granted when another attendee clicks yes on the consent-to-record notification.

  • Meeting information: It can provide real-time transcripts and engagement metrics.

  • Data sharing: It can send meeting summaries and other data to other platforms and can create custom automated workflows. I don’t know who the company may be sharing my information with (see the example below).

  • Granular permissions: It does not have carte blanche access, but it did add itself to some critical areas in my tenant.

A real-life example of how this can go wrong

A viral phone-call recording app, Neon, recently demonstrated the risk. The company pays users to let its AI app record live conversations and then sells that audio to companies training AI models. They promised anonymisation — the app was downloaded 75,000 times in one day — but on 25 September 2025, a flaw was discovered that allowed anyone to access the phone numbers, call recordings, and transcripts of any other user. The app went offline immediately.

Could this happen to read.ai? Or any other app of its kind?

What if I were to share a document from one of my SharePoint folders? Could it then have access to my SharePoint space? Could it share that information with others without my knowledge?

Perhaps.

Who is responsible for what happened?

In my case, there were four parties accountable:

  • The host and I, as end-users, for a lack of due diligence.

  • Microsoft, for allowing third-party vendors to have such abilities without deeper ethics checks.

  • The vendor, for producing a product with such high-level permissions.

  • And fundamentally: me again — because at the end of the day, it’s my data, and my responsibility.

I’m usually careful about such matters. Lesson learned.

The bigger picture

The reality is that clever AI products are being produced at an incredible pace. We all face the risk of taking on something too fast because it looks amazing and valuable for our organisations. We all have the responsibility to perform due diligence before we give away too much power to AI.

I am certified in ISO 27001 (information security) and ISO 42001 (AI management) implementation. Please feel free to contact me if you want to discuss how I can help you define risks within your organisation in these areas.

Finally, I want to thank Damon Harvey for his commitment to the Hawke’s Bay business community. Damon and The Profit magazine are consistent advocates for our region, and I appreciate everything that he does.

I want to wish you all a pleasant holiday season, and I look forward to seeing you all in 2026.

Tom is the owner of Govern. He has over 18 years in the cybersecurity and IT industry at management level, and for the past 6 years has been a lecturer in cybersecurity at the Eastern Institute of Technology. He has earned certifications in ISO 27001 Lead Auditing, Lead Implementation, SOC2, and Ethical Hacking. These certifications are considered the international gold standard for business security.
