It was a fascinating Data Science and AI Meetup this month. I gave a couple of book reports; it is a really good way to cement my reading, and I am also a sucker for an audience. We also had a fascinating review of a software category of which I was not aware.
I did AI: A Guide for Thinking Humans. I had a few nodding heads from the audience, so others must also have read it and agreed with my summary.
I also did The Future of the Professions, because it has been in the news a lot recently, as the on-ramp for young professionals has already been eaten up by large language models.
The main event was led by Elvira Perez Vallejos, Professor of Digital Technology for Mental Health at the University of Nottingham. It was itself broken into two sections. Kudos to Professor Vallejos for bringing the work out for public review.
There was a section on responsible AI. Professor Vallejos ran through how a risk assessment might be conducted. She also showed some prompting cards that might help a brainstorming session. It reminded me a bit of Edward de Bono's Six Thinking Hats. (1)
I think there is a danger in fragmenting the thinking on risk assessment. SSC has been following and developing expertise in the NIST AI Risk Management Framework and how that plays into international frameworks like ISO/IEC 42001 and the EU AI Act. There is a danger that an academic venture might consider its obligations fulfilled, only to find that it runs afoul of laws and frameworks when it enters the commercial landscape.
Professor Vallejos then went on to reiterate that we have to examine whether all possible futures are futures we wish to participate in or impose on our children.
The centerpiece of Professor Vallejos's presentation was on the subject of Deathtech. It was claimed to be a £100 billion market.
While there are many subcategories within the Deathtech market, such as Posthumous Avatars and EmpathyBots, it was the Virtual Grief Counselor that was the centerpiece of the demo.
We discussed and explored this as an offshoot of virtual therapy. I have seen evidence that some people prefer interacting with a virtual therapist. I can see that you may be able to get answers to problems without having to expose yourself to judgement. (2)
However, there were many questions from the floor: how would the study be conducted, how would success be measured, over what sort of period would studies run, and how do you prevent bad actors from preying on vulnerable people?
I feel we are bringing a very sophisticated device into a setting that normally requires a trained therapist. If this interaction were happening through the skin, it would attract the protocols for evaluating a medical device. Because it is happening through our eyes and keyboards, we do not consider it medical. This seems especially concerning given that it appears to have included a "spiritual dimension".
My last thought on this was that the grieving process is a quintessentially human process (species of corvids notwithstanding). (3) To introduce self-guided machinery into this process seems fraught with unintended consequences.
References:
1) Six Thinking Hats - Wikipedia
3) https://www.animalsaroundtheglobe.com/13-wild-animals-that-mourn-their-dead-2-320652/?