A recent study explored perspectives on the disclosure of artificial intelligence (AI) tool usage within non-profit and grassroots organizations. The findings revealed significant divergence in attitudes toward AI disclosure, largely dependent on each organization's operational context and the vulnerability of its beneficiaries.

Thirty-eight percent of organizations believe disclosure is irrelevant, especially when AI is embedded in existing software or used for internal activities. For low-risk tasks such as idea generation, many deem disclosure unnecessary. Conversely, organizations focused on public campaigning and policy are more inclined to disclose AI use to maintain trust and transparency with their larger audiences.

Participants from different organizations shared their insights:

  1. A member of a community-building organization with no AI policies compared AI tools to everyday aids such as spell-checkers and alarms, suggesting that their use did not necessitate disclosure.
  2. An interviewee from an immigrant and refugee support organization, also without AI policies, felt that the minimal influence of AI on their work (1%) did not require disclosure.
  3. A member of a local grassroots organization said that disclosing AI use could undermine their efforts, since they heavily edit and oversee the AI-produced content.

Additionally, organizations working with vulnerable populations often avoid disclosure to prevent fear of, and resistance to, technology among their beneficiaries. In contrast, organizations in public-facing roles worry that non-disclosure could damage their credibility and trust.

The study highlighted a strong relationship between trust and disclosure, emphasizing that non-disclosure is often seen as an act of care, not deception. It acknowledged the diverse operational contexts of non-profits and the tough decisions they face regarding AI use, suggesting that the future of disclosure will need careful consideration as AI becomes more integrated into organizational tools.
