Singapore model for AI governance

You can watch this post on video if you want to see whether my bonsai tree is still alive.

A while back we talked about who determines negligence, and mentioned some of the best-known governing bodies and agencies in the United States: the Department of Health and Human Services, the Federal Trade Commission, and so on.

It doesn’t happen all the time, but right now we have some brand-new technologies where governance is just emerging. Artificial intelligence, in its various forms, is one of those areas.

Recently the Personal Data Protection Commission of Singapore put together a proposed model governance framework for artificial intelligence, and they are seeking feedback from people working in AI who would be affected by these new rules. If your organization is using artificial intelligence, it’s worth looking at this framework and sending in your comments to let them know what you think.

The guiding principles are that any decisions made by artificial intelligence should be explainable, transparent, and fair, and that AI systems should be human-centric, which sounds important - but how would you even make a ruling on that? The site has a PDF of the whole framework; take a look and weigh in with your feedback. The pilot is running for a few months.

Personally, I speculate that this is just the beginning, and that we will see AI behaviors become so problematic that a new field emerges: Forensic Psychology for AI.
