One such case comes from Australia, where the State of Victoria's Information Commissioner has ordered the state's Child Protection agency to stop using generative AI services. According to the Information Commissioner, agency staff entered enough personal information into ChatGPT to prepare a report on the risks a particular child would face if he or she lived with a parent who is an alleged sex offender.
The Information Commissioner explained that by using ChatGPT, the staff understated the risks to the child. For example, the report depicted a doll, allegedly used by the father for sexual purposes, as evidence of the parents' efforts to ensure their child had "age-appropriate toys." The Information Commissioner has ordered the Child Protection agency to implement Internet Protocol (IP) blocking and/or Domain Name Server (DNS) blocking to prevent its staff from accessing generative AI tools by 5 November 2024. The ban does not cover the generative AI features built into search engines, which means staff will still be able to access tools like Google's AI Overviews.
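For context on what DNS-level blocking involves: an organization typically points known generative AI domains at a sinkhole address so they never resolve on its network. The Python sketch below is a minimal illustration; the domain list, sinkhole address, and `render_hosts_entries` helper are assumptions for illustration, not details taken from the Commissioner's order.

```python
# A minimal sketch of DNS-level blocking via a hosts-file style blocklist.
# The domain list and sinkhole address are illustrative assumptions,
# not details taken from the Commissioner's order.
BLOCKED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
]

def render_hosts_entries(domains, sinkhole="0.0.0.0"):
    """Map each blocked domain to a sinkhole address, one entry per line."""
    return "\n".join(f"{sinkhole} {domain}" for domain in domains)

if __name__ == "__main__":
    # Entries like these, deployed to an internal DNS resolver or hosts
    # file, make the blocked services unreachable by name.
    print(render_hosts_entries(BLOCKED_DOMAINS))
```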
Why does this matter?
As the Information Commissioner's order notes, the agency's use of ChatGPT is "a real example of the privacy risks associated with GenAI". It highlights the harm that can arise when someone relies on these tools and handles personal information inappropriately. AI tools are probabilistic in nature: if someone types a prompt like "Better late than…", the AI will complete the sentence with "never", because there is a high probability that "never" is the result the user was hoping for, as Nikhil Pahwa, founder-editor of MediaNama, explained at an event earlier this year.
This means that the AI is not optimizing for accuracy when it answers, but generating what the user is most likely looking for. If government agencies rely on AI models, their decisions may therefore be inaccurate, or potentially harmful, especially when dealing with sensitive personal information or critical situations involving vulnerable individuals. Government use of AI tools also poses privacy risks: if an agency feeds people's personal information into an AI chatbot, that information (such as the child's personal information in the Australian case) can end up as part of the AI companies' training datasets.
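To make the "Better late than…" example concrete, the sketch below queries a language model for its next-token probabilities. It is a minimal illustration using the publicly available GPT-2 model via Hugging Face's transformers library; the exact probabilities will vary by model, but a token like " never" typically dominates.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available model (GPT-2) for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model assigns a probability to every possible next token.
inputs = tokenizer("Better late than", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations; in practice " never" dominates.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```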
Government guidance for public service use of AI
In November last year, Australia published guidance on how to use AI tools in the public sector. The guidance identifies two golden rules:
- Public service organizations should assume that any information they feed into an AI model could become public. They should not enter any sensitive, personal, or classified information.
- Public service organizations must be able to explain, justify and take ownership of their advice and decisions.
In addition, the Australian government proposed that public service organizations should disclose when their decisions are based on AI models. They must also recognize the biases inherent in AI tools and, if they rely on AI-generated output, ensure that their decisions are fair and meet community expectations. New Zealand has issued similar advice: public service organizations should not use generative AI for any sensitive data, and should not enter personal data into GenAI tools hosted outside the public service body's network.
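One practical way to follow these rules is to strip personal details from prompts before they leave the organization's network. The sketch below is a minimal illustration; the regex patterns, the `PII_PATTERNS` table, and the `redact` helper are hypothetical, and a real deployment would use a vetted PII-detection library rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for illustration; a production system would use
# a vetted PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "NAME": re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.?\s+[A-Z][a-z]+"),
}

def redact(text: str) -> str:
    """Replace matched personal details with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: scrub a prompt before it is sent to an external GenAI service.
prompt = ("Summarise the case notes for Mr Smith, phone +61 3 9123 4567, "
          "email j.smith@example.com.")
print(redact(prompt))
# -> Summarise the case notes for [NAME], phone [PHONE], email [EMAIL].
```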
India, too, published a discussion paper highlighting responsible AI principles in 2022. The paper states that AI systems must be reliable and have built-in safeguards to protect stakeholders. They should also treat people in similar situations equally and should not discriminate against individuals. Further, the principles state that all individuals' personal data must be secured and protected, with access restricted to authorized personnel. However, these principles do not contain specific guidelines for government agencies, which highlights a gap in India's policy framework.