Australia’s chief scientist on enabling public trust in AI
Public trust in AI depends on explainability, data safeguards and cooperation, according to Australia’s chief scientist.
Speaking virtually at the ATxSG conference, Cathy Foley stressed the need to understand how AI and machine-learning systems make decisions.
She also emphasized the importance of maintaining the quality of AI algorithms, opining that larger tech firms hold “an extraordinary amount of power” due to the vast amounts of data they can obtain.
Another concern the chief scientist raised was users’ ability to access AI systems remotely via the cloud.
Regulation will be needed to create a more even playing field, she said. But should Australia turn to regulation to resolve these issues, Foley suggested legislation should balance protecting the public from harm with not stifling innovation.
Foley said getting it wrong could erode public trust, pointing to attempts to introduce genetically modified foods as an example. Australia’s 2001 regulatory scheme for GM foods failed to explicitly cover marketing and trade issues, so individual states were left with the authority to regulate GM crops for market purposes only.
To read the complete article, visit IoT World Today.