'Invasion of privacy': Youths more distrustful of AI data systems, B.C. study finds

Developers and youths hold different views on the risks and benefits of artificial intelligence
Ajay Shrestha, a computer science professor at Vancouver Island University, has been researching youth perspectives on artificial intelligence. (News Bulletin file photo)

New research from a B.C. university indicates that youths and AI developers view AI systems very differently – and a greater portion of youths than anticipated don't trust the technology.

The study, called 'Safeguarding Tomorrow’s Data Landscape,' examined how teens and young adults think about artificial intelligence, collecting survey responses from educators, parents, AI professionals and youths aged 16-19. 

"Overall, we can say the risk basically outweighs the benefits in their case," said Ajay Shrestha, computer science professor at Vancouver Island University and the study's researcher. "They don't fully trust." 

The survey used a five-point scale to gauge responses. The area of greatest variation among the groups was perceived risks and benefits. AI professionals, parents and educators rated privacy concerns at 4.3, while youths rated their concern at 4.1. The groups diverged on the benefits of AI, with professionals rating the benefits of data sharing at 4.5, compared with parents at 3.9 and youths at 3.5.

On trust in how systems handle data, AI professionals expressed the highest level at 4.2, followed by parents and educators at 4.1, while youths were "more skeptical" at 3.1.

"There's kind of a disconnect between AI developers and young digital citizens and that was striking. AI professionals … they view the AI profiling or AI tracking sort of thing as kind of like a technical issue that can be fixed with better data management, but with youths, they see all of them as a direct invasion of their privacy."

Shrestha said he was surprised by how uncomfortable the surveyed youths were with sharing their data. In a literature review ahead of the study, he found that many youths in previous reports seemed unconcerned about data collection itself, but were concerned about poor communication. This includes companies burying the details of how a user's data will be used in frequently skipped-over terms and conditions, and closed-source AI code that general users can't review.

In this study, Shrestha said, a significant portion of the youths expressed resistance to sharing their data with AI systems. Many of the younger users also said they were unaware of how their data would be processed.

"They are the future users and decision makers in AI governance, so we want to understand how they feel about AI privacy in general, what concerns they have and what they want to see."

The study includes a series of recommendations for best practices in ethical AI, including informed consent and age-appropriate notices, clear opt-ins, multi-format engagement to improve comprehension, and prompts to review settings or data-sharing preferences.

Shrestha said one of the problems is that modern AI data-sharing consent forms are long, opaque and required for website access.

"It is affecting their trust among the system," he said.

Some of the other recommendations include collection of only essential data points needed for AI functionality, encryption of data at rest and in transit, limited access, privacy dashboards for easy data deletion or download, anonymization, privacy impact assessments and more. 

Looking ahead, Shrestha said it is "imperative" that AI developers listen to the concerns expressed by youths around ethical AI and earn trust through transparency.

"The AI system should be built in a way that the young people know exactly what their data is being used [for], how it is being processed and with whom it is being shared."

Education is another key factor, said the professor, adding that schools have a role to play. The study recommends educators and school administrators review and vet AI-powered software for adherence to privacy laws and best practices. Other recommendations are parental or guardian consent before introducing AI tools in the classroom, lessons teaching students about responsible data sharing and AI ethics, and periodic check-ins with students about their interactions with AI software and any privacy concerns.

"Schools should teach digital literacy in their curriculum. There should be components like on how AI works, what happens to personal data when shared online and how to manage all of the privacy settings effectively," Shrestha said. "If young people understand all the privacy risks, they can advocate for their own digital rights."

'Safeguarding Tomorrow’s Data Landscape' has received about $87,000 through the federal Office of the Privacy Commissioner of Canada. People can find out more online.




About the Author: Jessica Durling

Nanaimo News Bulletin journalist covering health, wildlife and Lantzville council.


