I saw this as a throwaway line in a Forbes article. It actually stopped me in my tracks.
I do a lot of work in analytics at the moment.
I started off as a Management Accountant. Accounting records have existed for centuries.
There was limited data outside of finance because systems were not automated. Once systems are automated, you at least get data.
For cost accounting, the transactions created by shop-floor automation were very valuable: time spent on a job, quantity of raw material issued.
Now we have sensors on the machine tools telling us the angle of the tool, the torque on the spindle and the vibration of the workpiece.
As we have gone from a paucity of data to a deluge of data, the visual domain has risen in importance. When there is so much data that I am overwhelmed, having someone organize it visually so I can see the signal in the noise is very useful. However, I do think our visual systems are tied to quite elementary parts of our psychology, wired into our lizard brain. We like visuals that make action obvious: evade predators, hunt prey.
However, when communicating financial results, you always had to provide some narrative. The patterns that might be obvious to the person who prepared the reports might need to be spelled out for busy executives.
A more human layer that is fashionable in data science now is storytelling. Again, I don't think this is that new. Whenever we designed analytics systems, we would always have to highlight a salient point at a high level, while ensuring that the drill-down to the explanation was straightforward. And the stories in financial analysis might be more than aggregation: variance analysis lets us tell a story of responsibility, accountability and the ability to plan effectively.
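As a concrete illustration of that kind of story, here is a minimal Python sketch of the classic material-cost variance split. The numbers are invented for the example, not taken from any real plan.

# Classic material-cost variance analysis (illustrative numbers only).
# The total variance splits into a price variance (a purchasing story)
# and a usage variance (a production story): each points at a different
# owner, which is what makes this a story of responsibility.

standard_price, standard_qty = 5.00, 1000   # plan: 1,000 kg at 5.00/kg
actual_price, actual_qty = 5.50, 950        # actual: 950 kg at 5.50/kg

price_variance = (actual_price - standard_price) * actual_qty  # adverse if positive
usage_variance = (actual_qty - standard_qty) * standard_price  # favourable if negative
total_variance = price_variance + usage_variance

print(f"Price variance: {price_variance:+.2f}")  # +475.00 (adverse)
print(f"Usage variance: {usage_variance:+.2f}")  # -250.00 (favourable)
print(f"Total variance: {total_variance:+.2f}")  # +225.00 (adverse)

Each number invites a different conversation: the price variance with whoever buys, the usage variance with whoever produces.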
However, I feel that conversation as an interface is genuinely something new. It is a type of interaction that I have not been providing for executives until now. I have gained a better appreciation of the possibilities by talking to consultant doctors about how they interact with junior doctors. My own clinical experience is limited to Saint John Ambulance. We have recently transitioned to electronic patient report forms. I presume the record keeping is for insurance purposes. The form has some possibilities as a checklist and also as a coach, but the interface is primarily an ex-post record of events. You have to remember the training you received in your unit while under the stress of patient interaction.
When I hear my consultant friends describe their interactions with junior doctors, I see the potential on a larger scale. A consultant needs to impart knowledge and experience, but must ensure that the junior doctor feels in control of and responsible for the outcome. A consultant is keen to ensure that the diagnosis is progressing along established patterns, but is also grateful for the counter-arguments and other perspectives that another professional can give. Cautions are given on what to avoid. Tips for situations that might occur are imparted. Progressing as a conversation means that assumptions are made explicit and checks are performed. In a private setting, colleagues can challenge each other without diminishing each other. This seems a very different part of human cognition, something much more recent. This is not our lizard brain.
At first, I thought the clinical world and the executive office were bad analogues. I was thinking "First, do no harm". However, my doctor friends observed, "You don't want to go bankrupt."
We have a vision of executives in the pre-analytics age making bold decisions on gut feel. Until now, analytics has focused on making a course of action clear and uncluttered. But what is the role of the executive if the course of action is prescribed? Machines are better than humans where that much is already known.
Where humans will be needed is in formulating the problem: probing, working through, finding blind alleys, finding out how it sounds when I say it out loud. This is real human stuff, but maybe conversation as an interface can be a bicycle for the higher brain.
SSTC has started working on "conversations" within an Enterprise Applications context: Constructing a Financial Plan, Developing a Marketing Plan, Interpreting Financial Results, Negotiating with a Supplier, Setting up a Project. The model is to have a coach at your side, guiding you, probing you and testing your assumptions. If you would like to be involved in our developments, please use the contact form or email info@softwareStrategyConsulting.co.uk.
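To make the coach idea concrete, here is a deliberately toy Python sketch of an assumption-testing loop. Every name, number and benchmark in it is hypothetical and for illustration only; it is not SSTC's actual design.

# Toy sketch of a "coach at your side" for a financial plan: make each
# assumption explicit, compare it to a benchmark range, and probe the
# ones that look out of line, rather than prescribing an answer.

assumptions = {
    "revenue growth (%)": 25.0,
    "gross margin (%)": 60.0,
    "headcount growth (%)": 5.0,
}

# Illustrative ranges; a real system might derive these from history or peers.
benchmarks = {
    "revenue growth (%)": (0.0, 15.0),
    "gross margin (%)": (40.0, 65.0),
    "headcount growth (%)": (0.0, 30.0),
}

for name, value in assumptions.items():
    low, high = benchmarks[name]
    if not low <= value <= high:
        # Probe, don't prescribe: ask the human to defend or revise.
        print(f"You assumed {name} = {value}, outside the typical range "
              f"{low}-{high}. What makes this plan different?")

The point is the interaction style: the system surfaces the assumption and asks the question, leaving the human in control of and responsible for the outcome.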
For me, the first big opportunity is simply the possibility of using vast amounts of clinical data, which AI could turn into the equivalent of expertise and experience. I mentioned that in the past we would tidy up medical records by throwing data away, as doing so was the only way a human could deal with what would otherwise be overload. Now we have everything digitised, but our software is largely about focussing our gaze on what we choose, rather than trying to use the data. In a hospital episode we have vast numbers of assessments, often detailed and frequent, made by huge numbers of people, in a somewhat evolving and iterative way. I am sure there is 'signal' here, but it is largely seen as noise.
That said, without a naturalistic language interface, any messages emerging from the analysis of such data would be very difficult to impart to junior (or even senior) doctors in real time, and sufficiently persuasively to influence real-time decision making. In this way, the AI system would act as coach and mentor for a junior, adding to or replacing what a senior doctor tries to do. I can imagine the professional issues here: does AI replace the clinician or support her? Where does accountability lie when a decision is made with an adverse consequence?
Btw, most medical heuristics or aphorisms are not absolute. "First do no harm" rules out surgery, where the first thing the surgeon does is take a knife to someone (usually after an anaesthetist has assaulted them with a brain poison). AI might be able to handle the juggling of these many considerations more empirically. We know AI is at risk of bias depending on its training data sets, but that is exactly what humans do, except that for the poor, feeble-brained human the data sets are likely to be much smaller and the inferential process far more prone to excessive pattern fitting and prior expectations.