This guide was created to inform Whitman students, staff, and faculty about the growing world of generative AI. It can help everyone learn more about how to use these tools, the ethical considerations they raise, and some of the conversations happening around generative AI, particularly in academia. For students, it offers guidance on when it may or may not be appropriate to use AI in coursework; for faculty, it offers guidance on creating AI-related assignments or on discouraging AI use in the classroom.
Artificial intelligence is "the capacity of computers or other machines to exhibit or simulate intelligent behaviour; software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data" (OED). You already interact with AI in a number of ways, including digital voice assistants (Alexa, Siri, Google Home), Google Maps (traffic reports, weather conditions), Grammarly, recommendation systems (Netflix, Facebook Ads, Amazon), and, in the library, Sherlock and its relevance ranking.
Much of this is predictive AI, which uses machine-learning models to forecast patterns, trends, and events based on large amounts of data (for example, social media feeds). Generative AI, which has received more attention in the last few years, relies on machine-learning models trained to create new data such as text, images, or audio (for example, ChatGPT or DALL-E). Generative AI is the focus of this guide and of much of the public discussion over the last 18 months.
Large Language Models explained with a minimum of math and jargon.
Falsification of data; misrepresentation of another’s work as one’s own (such as cheating on examinations, reports, or quizzes, and purchasing material from the web); misrepresenting the methods used to produce material, including unacknowledged and unauthorized use of technology; plagiarism from the work of others; or the presentation of substantially similar work for different courses (unless authorized to do so), is academic dishonesty and is a serious offense. Unauthorized access, assistance, or collaboration are also forms of academic dishonesty, as is knowingly helping other students cheat or plagiarize.
As with many commercial products, AI tools commonly collect user information, including usage and other data, which may be fed back into the AI's training data. The black-box nature of many AI tools (we often do not know what data was used to train them or how it is processed) also means it can be unclear what data is collected and how it is used.
Some open-source tools may give users opt-in or opt-out options, though they may require account creation.
Further Reading:
Algorithmic bias has been a known concern in technology development for a long time, and the black-box nature of generative AI further exacerbates the problem. Bias enters algorithms in two main ways. First, through the data used to train the artificial intelligence: many tools are trained by scraping large, open websites, where inaccurate, malicious, and biased content is hard to remove. Second, through the people who create the algorithms, whose own biases shape how they design the tool to process and present information.
Further Reading:
Even with such large datasets, AI tools can produce inaccurate information, often called "hallucinations." These can arise from combining discordant sources or from generating output that does not exist, such as a citation that sounds like a journal article someone wrote but is actually a combination of titles of real articles by that author. The risk posed by hallucinations depends on how the output is used; the stakes are far higher for medical or legal purposes, for example. It is always good to confirm the information you receive from a generative AI tool.
Further Reading:
© 2014 Whitman College Penrose Library