Policy on the Use of Generative AI at the Spencer Foundation

1. Introduction

This policy outlines the responsible and ethical use of generative artificial intelligence (AI) technologies across the Spencer Foundation's operations, programs, and activities. The Spencer Foundation recognizes the transformative potential of AI in advancing education research as well as our own philanthropic goals, but we also acknowledge the need for responsible governance to ensure ethical, transparent, and equitable uses and outcomes. We understand that the use of generative artificial intelligence comes with certain risks, such as plagiarism, the reproduction of misinformation, and the perpetuation of systemic inequalities. We have developed the policy guidelines below to balance the potential benefits of artificial intelligence against these risks.

2. Purpose

The purpose of this policy is to:

  • Ensure that the Spencer Foundation—inclusive of our applicants, grantees, reviewers, staff, contractors, and both formal and informal partners and collaborators—is engaging with generative AI technologies in a manner that aligns with our equity values and goals.
  • Establish guidelines to govern the use of AI to prevent misuse and unintended harm.
  • Foster transparency, accountability, and trust in the Foundation.
  • Promote the ethical application of AI across the Foundation’s proposal submission and grant review processes.

3. The Spencer Foundation’s Stance on the Use of Generative AI  

  • It is the policy of the Spencer Foundation to allow the use of generative AI by applicants, grantees, staff, contractors, and collaborators in pursuit of efficiencies, productivity, and innovation, provided such use is in accordance with the guidelines, permitted uses, and restrictions detailed below.
  • It is also the policy of the Spencer Foundation to strictly prohibit the use of generative AI by reviewers when assessing proposals and constructing feedback to applicants. See section 9 for more details.

4. Ethical Guidelines 

All applications of AI will adhere to the following principles:

  • Transparency: The use of generative AI should be clearly disclosed by all parties—in citations, references, footnotes, or appendices, or through an explanatory note to the Foundation (see sections 6 & 7 for additional details). 
  • Accountability: Those who choose to use generative AI in their work remain accountable for both the accuracy and originality of the work they share with, or submit to, the Foundation. 
  • Non-Exploitation: The Foundation is committed to the equitable use of generative AI. In line with this commitment, generative AI should not be used in a way that exacerbates inequities, exploits vulnerable communities, or undermines human dignity. Generative AI tools should not be used to replace or diminish the role of individual or community judgment and human empathy. 
  • Privacy and Data Protection: Parties who use generative AI should be sensitive to privacy as it pertains to themselves, their collaborators, or their study participants. Any information that is uploaded to generative AI systems should be anonymized or deidentified wherever possible to protect individual and community privacy.

5. Permitted Uses of Generative AI

Generative AI may be used in the following areas:

  • Grant Submission and Administration: Applicants and grantees may use generative AI as a resource to support the production of content across all aspects of the grant process, including letters of intent, pre-proposals, full proposal submissions, and interim and final progress reporting. Applicants and grantees may not submit verbatim drafts of AI-generated content; generative AI should serve as an assistive tool, not as a substitute author.
  • Research and/or Evaluation Activities Funded by Spencer: Generative AI may be used to support the production of literature reviews, the development of study designs, the collection and analysis of data, and the synthesis of insights from these activities. 
  • Outreach, Communication, and Dissemination: Generative AI tools may be used to assist with producing written, audio, and/or visual educational materials developed in connection with Spencer-funded research projects, in pursuit of raising awareness of project findings through scholarly, practitioner, or general-public outlets and venues.

6. When to Disclose Use of Generative AI

Artificial intelligence tools exist on a spectrum, both in their capabilities and in how users choose to engage with them. For disclosure purposes, we make a distinction between Assistive AI and Generative AI.

  • Assistive AI: Applicants and grantees may use artificial intelligence programs to enhance the grammar, spelling, and punctuation of their proposals. Assistive AI is commonly applied to the revision and editing process to improve readability. Disclosure of assistive AI is not required. However, applicants and grantees should be aware that many spell-checking and word-processing tools are increasingly incorporating generative AI features.
  • Generative AI: Generative AI is used whenever a program or application produces written content, visuals (such as photos, tables, and graphs), audio, or videos based on a prompt or outline. It is commonly applied during the writing phase to generate content that applicants can incorporate into their letters of intent and proposals or that grantees can incorporate into their progress reports. Applicants and grantees must disclose the use of generative AI.

7. How to Disclose Use of Generative AI

Applicants and grantees are required to disclose the use of generative AI within their letters of intent, pre-proposals, full proposal submissions, and progress reports. When submitting these documents through the Foundation’s online submission system, applicants and grantees will find an AI disclosure checkbox which states “I have read and understood the Spencer Foundation's AI policy and I assert that I [did/did not] use generative AI to produce this letter of intent, pre-proposal, proposal, and progress report.” Applicants and grantees who indicate that they have used generative AI to produce any part of their letters of intent, pre-proposals, proposals, or progress reports are required to provide a brief summary of how and where generative artificial intelligence was used within the documents they have submitted. Applicants and grantees will be prompted to enter their summary, using the text below as a guide, after checking “yes” on the AI checkbox.  

During production of this work, the author(s) utilized [NAME OF TOOL] to help create this [LETTER OF INTENT, PRE-PROPOSAL, PROPOSAL, OR PROGRESS REPORT]. Generative artificial intelligence was used to [DESCRIBE WHERE, HOW, AND WHY GENERATIVE AI WAS USED]. The author(s) reviewed the content produced by this generative AI tool and assert that the content within this document is factually accurate and free of plagiarism. The author(s) take full responsibility for the submitted document. 

To reiterate, generative AI tools cannot be listed as an author. As a result, applicants and grantees are held responsible for the accuracy of all content within their submitted letters of intent, pre-proposals, proposals, and progress reports. Generative AI may create information that is based on inaccurate, outdated, or copyrighted sources. Applicants and grantees will be held accountable even in the case of unintentional plagiarism and/or the unintentional inclusion of false information. The Spencer Foundation reserves the right to reject proposals that display substantial evidence of the use of generative AI, particularly in cases where there has been no disclosure during the submission process. 

8. How Disclosure Will Be Used

Applicants and grantees who disclose the use of generative AI will not be penalized in the review process for doing so, unless the content generated is used inappropriately, produces inaccuracies, or otherwise negatively impacts the proposal. The Spencer Foundation reserves the right to use its discretion in determining whether generative AI tools have been used responsibly by applicants, grantees, and other Spencer partners and collaborators. The disclosure of the use of generative artificial intelligence will also be used for data-tracking purposes. Tracking the use of AI allows the Foundation to understand how often, and for what purposes, applicants and grantees incorporate AI into their work. 

9. Restrictions on the Use of Generative AI

The Spencer Foundation will not use, and will strictly prohibit others from using, generative AI in high-stakes contexts such as the grant review process. The Foundation recruits reviewers based on their expertise within the field of education and their ability to provide topical or methodological feedback on a given proposal. As a result, reviewers are not permitted to use generative artificial intelligence to summarize, analyze, or otherwise assist in evaluating proposals. Uploading or copying and pasting an applicant’s unpublished proposal into a publicly available AI tool compromises the applicant’s intellectual property. It also has the potential to expose applicants’ personal data (e.g., home addresses, phone numbers, salary details). Reviewers who are found to have used generative artificial intelligence in their assessments of proposals or in the feedback they provide to applicants will not be invited back as reviewers, and at the Foundation’s discretion they may be barred from applying for a Spencer grant for a period of five years. 

10. Policy Review and Updates

This policy will be reviewed periodically, at least annually, and updated as necessary to reflect changes in technology, legal requirements, and the Foundation’s goals. Any major changes to the policy will be communicated through this document.

11. Conclusion

Generative AI presents new opportunities to advance the mission of the Foundation, but it also requires careful consideration and oversight to ensure it is used ethically and responsibly. This policy provides a framework for maximizing the benefits of AI while safeguarding against potential risks, ensuring that all AI applications align with the Foundation’s values and objectives. 
