Weighing the Power of AI Against Its Impact
In the three years since generative artificial intelligence (GenAI) tools became widely available to the public, we have seen an extraordinary rate of growth and change.
Alongside these technological advancements, the AI market has grown rapidly. In September 2024, Bain & Company estimated that the total market for AI-related hardware and software would grow between 40% and 50% annually, reaching between $780 billion and $990 billion by 2027 (Crawford et al., 2024).
Gallup reported in 2025 that regular AI use is growing rapidly among U.S. workers. In the past two years, the percentage of U.S. employees who have used AI in their roles a few times a year or more has nearly doubled, from 21% to 40% (Pendell, 2025).
AI’s upward surge is visible in the nonprofit sector as well. A 2024 report on nonprofits’ adoption of GenAI tools found that 58% of global survey respondents indicated their organizations used GenAI in day-to-day operations. The top use cases included marketing and content creation, fundraising, and program management (Google.org, 2024).
The Johnson Center discussed the AI revolution and its potential implications for philanthropy in 2024, stating, “With cost no longer the primary barrier to entry, nonprofits and foundations who invest the necessary time and capacity into exploring these new tools and commit to using them responsibly will benefit the most from this emerging technology” (Bauer, 2024). This statement still rings true today. While it remains difficult to predict the future implications of AI, philanthropy can and should remain committed to its responsible use.
The responsible use of AI cannot be undertaken without first understanding its current limitations. In 2023, Scientific American published an article explaining how personal information is used to train GenAI models. Author Lauren Leffer described how developers turn to publicly available information on the internet to build large GenAI models. Data viewable in a search engine, such as personal blogs, LinkedIn profiles, images, videos, and personal data from social media sites, are incorporated into the training data for AI tools. AI companies such as OpenAI also fine-tune their models based on user interactions with their chatbots (Leffer, 2023).
Consequently, GenAI results are only as valuable or reliable as the data used to train them. Inaccurate data, personal and societal biases, and private information all shape the results GenAI produces.
Institutions are trying to provide guidance for AI users, in the hope that transparency and caution will become working norms when people interact with this technology.
As GenAI tools see wider use in philanthropy, organizations must exercise caution and account for their potential pitfalls in day-to-day operations.
Alongside addressing issues with data accuracy, biases, and privacy, it is important for organizations to consider the environmental impacts of AI. Harvard Business Review authors Ren and Wierman (2024) write that
the training process for a single AI model, such as a large language model, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. This is roughly equivalent to the annual carbon emissions of hundreds of households in America. (para. 3)
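To make the scale of that comparison concrete, consider a rough back-of-envelope check. The figures here are illustrative assumptions rather than numbers from Ren and Wierman: one widely cited estimate put the training of a single large language model (GPT-3) at roughly 1,300 MWh of electricity, a U.S. grid-average emissions intensity runs about 0.42 metric tons of CO₂ per MWh, and a typical U.S. household’s electricity use produces roughly 4 metric tons of CO₂ per year:

\[
1{,}300 \text{ MWh} \times 0.42 \,\frac{\text{t CO}_2}{\text{MWh}} \approx 550 \text{ t CO}_2,
\qquad
\frac{550 \text{ t CO}_2}{4 \,\text{t CO}_2/\text{household-year}} \approx 140 \text{ household-years}
\]

On these assumptions, a single training run corresponds to the annual electricity emissions of well over one hundred American households, consistent in order of magnitude with the comparison above.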
AI energy demand is predicted to increase tenfold from 2024 to 2026, driving increases in air and water pollution, solid waste, and potentially higher levels of ozone and particulate matter (Ren & Wierman, 2024, para. 4). Especially for funders focused on addressing the impacts of climate change and sustaining healthy ecosystems, in-office AI use may work directly against the organization’s mission. And, as Ren and Wierman state, “In many cases, adverse environmental impacts of AI disproportionately burden communities and regions that are particularly vulnerable to the resulting environmental harms” (para. 11). As office cultures shift toward adopting AI tools, funders with explicit community and human services ties will need to be cognizant of this paradox.
Funders will also need to be aware of the potential impacts that investments in AI will have on grantee partners and nonprofit ecosystems. Sara Herschander, writing in The Chronicle of Philanthropy, states that “only a tiny sliver of philanthropy today goes to such investments in tech governance, even as many major foundations encourage their grantees to experiment — if not outright embrace — new corporate A.I. tools” (2025).
Many funders and nonprofits are already working to respond. Humanity AI was created by a coalition of 10 philanthropic organizations to “[make] sure people and communities beyond Silicon Valley have a stake in the future of artificial intelligence (AI)” (Omidyar Network, 2025). The coalition’s funders committed to a $500 million, five-year investment in five priority areas related to AI: labor and economy, humanities, security, democracy, and education (Omidyar Network, 2025).
Thoughtful AI use policies can go a long way toward addressing these challenges. Relevant policy elements could include allowable and unallowable use cases, acceptable and unacceptable AI tools, how to cite AI usage, how to work with vendors and partners that may have their own AI policies in place, and how to disclose AI usage in contracting documents.
The Technology Association of Grantmakers (TAG) found in its 2024 State of Philanthropic Tech survey that only 30% of foundations report having an AI policy in place (TAG, 2024). TAG also offers a helpful framework for responsible AI adoption that covers ethical, organizational, and technical considerations (TAG, 2023).
The MacArthur Foundation has also published its AI use policy, stating, “The Foundation wishes to harness and responsibly use AI tools to improve creativity, efficiency, and productivity in furtherance of our mission while recognizing its limitations” (MacArthur Foundation, n.d.).
As we look to the future, philanthropic organizations that have policies in place will be in a better position to harness opportunities with AI and prevent irresponsible use. Further, as Rasheeda Childress writes in The Chronicle of Philanthropy, “it’s crucial to start with organizational values; focus on key concerns like privacy, bias, and transparency; and remember that humans, not the technology, should be top of mind in all the work” (Childress, 2025).
Bauer, K. (2024, January 17). The artificial intelligence revolution arrives in philanthropy. 11 trends in philanthropy for 2024. Dorothy A. Johnson Center for Philanthropy at Grand Valley State University. https://johnsoncenter.org/blog/the-artificial-intelligence-revolution-arrives-in-philanthropy/
Childress, R. (2025). How to use A.I. effectively and protect your organization’s reputation and values. The Chronicle of Philanthropy. https://www.philanthropy.com/article/how-to-use-a-i-effectively-and-protect-your-organizations-reputation-and-values
Crawford, D., Wang, J., & Singh, R. (2024). AI’s trillion-dollar opportunity. Bain & Company. https://www.bain.com/insights/ais-trillion-dollar-opportunity-tech-report-2024/
Gomstyn, A., & Jonker, A. (n.d.). Exploring privacy issues in the age of AI. IBM. https://www.ibm.com/think/insights/ai-privacy
Google.org. (2024). Nonprofits and generative AI. https://services.google.com/fh/files/blogs/nonprofits_and_generative_ai.pdf
Herschander, S. (2025). How philanthropy built, lost, and could reclaim the A.I. race. The Chronicle of Philanthropy. https://www.philanthropy.com/news/how-philanthropy-built-lost-and-could-reclaim-the-a-i-race/
Huff, C. (2024). The promise and perils of using AI for research and writing. American Psychological Association. https://www.apa.org/topics/artificial-intelligence-machine-learning/ai-research-writing
Kempe, L. (2024). Navigating the AI employment bias maze: Legal compliance guidelines and strategies. American Bar Association. https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/
Leffer, L. (2023). Your personal information is probably being used to train generative AI models. Scientific American. https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), Article 100779. https://doi.org/10.1016/j.patter.2023.100779
MacArthur Foundation. (n.d.). Use of artificial intelligence. https://www.macfound.org/about/our-policies/artificial-intelligence
Omidyar Network. (2025). Humanity AI launches $500 million commitment to mobilize civil society in shaping the future of AI. https://omidyar.com/update/humanity-ai/
Pendell, R. (2025). AI use at work has nearly doubled in two years. Gallup. https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx
Ren, S., & Wierman, A. (2024). The uneven distribution of AI’s environmental impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Technology Association of Grantmakers. (2024). 2024 state of philanthropic tech: A survey of grantmaking organizations. https://www.tagtech.org/wp-content/uploads/2024/07/2024-PhilTechSurvey-Final.pdf
Technology Association of Grantmakers. (2023). Responsible AI adoption in philanthropy: An initial framework for grantmakers. https://www.tagtech.org/wp-content/uploads/2024/01/AI-Framework-Guide-v1.pdf