Generative AI Guide for Students

Contents

Click on the links below to navigate directly to each topic.


What is Artificial Intelligence?

Artificial intelligence (AI) is the broad concept of machines being able to carry out tasks in an intelligent manner. It is a field made up of many different technologies like machine learning, neural networks, robotics, and natural language processing. AI is the overarching idea that allows machines to mimic human thought and behavior.

Here are some of the ways that it might impact your daily life:

  • Smart assistants like Siri and Alexa help with tasks
  • Apps that know your tastes and give suggestions
  • Home devices that automatically adjust temperature and lighting
  • Social media feeds curated just for you
  • Chatbots that answer questions instantly
  • Maps and GPS for optimized driving routes
  • Online shopping sites that recommend purchases
  • Email spam blockers learning what’s unwanted

Generative AI refers to a type of artificial intelligence that is designed to create new and original content, such as images, music, or text. It uses complex algorithms and models to generate these creations based on patterns and examples it has been trained on.

Watch this introductory video by Wharton Interactive’s Faculty Director Ethan Mollick and Director of Pedagogy Lilach Mollick (01:42 to 05:34):

To learn more about AI, see: BBC News. (2023, July 24). What is AI? A simple guide to help you understand artificial intelligence – BBC News. News. https://www.bbc.co.uk/news/resources/idt-74697280-e684-43c5-a782-29e9d11fecf3


Limitations and Ethical Concerns

As AI capabilities rapidly advance, what potential concerns and pitfalls should give us pause?

AI Hallucinations

What is an AI hallucination? According to Tim Keary, Technology Specialist at Techopedia, “An AI hallucination is where a large language model (LLM) like OpenAI’s GPT-4 or Google PaLM makes up false information or facts that aren’t based on real data or events.” In a nutshell, generative AI will often fabricate information and confidently present it to you as if it were true. So, what do you as a student and critical thinker do about this?

To responsibly and ethically use AI-generated content, you need to be able to evaluate the validity of that content. This is where your own knowledge and expertise in the subject matter come into play, so you can evaluate whether the output is correct and adequately answers the prompt you gave it. Some LLMs allow you to check the citations. For example, the GPT-4o model can be prompted to search the internet when answering your prompt and provide you with links to the websites it used. You can also compare the information it provided against other reputable sources, such as your textbooks, journal articles, trusted websites, or people with expertise in that area, such as your instructor.

Read more from Tim Keary on AI hallucinations: Keary, T. (2024, September 3). AI hallucination: What it is and how to avoid it. Techopedia. https://www.techopedia.com/definition/ai-hallucination

Bias

Because generative AI tools were created by humans, both those who developed the models and those who trained them, they inevitably contain and may perpetuate bias. For example, ChatGPT was developed by an American company, OpenAI, and trained on English data sources (Anders, p. 80). OpenAI even acknowledges inherent issues with bias, stating that its training data, culled in part from information publicly available on the internet, reflects the perspectives of its users, many of whom live in highly developed and wealthier nations. Further, it acknowledges that these connected populations are “mostly U.S.-centric” and skew young and male (OpenAI, GitHub). Therefore, its output likely reflects the data that trained it.

This may be hard to visualize until you see it in action. For example, Daniel Stanford has provided examples of bias in image-generation tools, which demonstrate that unless explicitly told otherwise, such tools can default to “white/lighter skin tones” when asked to provide illustrations of the following professions: nurse, doctor, pilot, and professor. Also, in response to a now-deleted Buzzfeed article showing how AI would imagine Barbie from around the world, The London Interdisciplinary School (LIS) created a video to show how AI image generators can reflect “extreme forms of representational bias” and, as such, may perpetuate racist stereotypes and ultimately cause harm. Related concerns include loss of human agency, embedded biases, and lack of accountability. A thoughtful approach requires acknowledging AI’s limitations, proactively addressing ethical implications, and prioritizing human responsibility and integrity.

Such examples inevitably invite conversation on the ethics of using such tools, which may be an additional consideration as you evaluate if and when to use AI. As always, be on the lookout for language or imagery that appears to reflect a narrow or single perspective on a topic or openly demonstrates bias.

Intellectual Property and Privacy

Many generative AI systems are trained on massive datasets culled from across the internet, which may include users’ personal information without their knowledge or consent. This raises legitimate privacy concerns. As discerning consumers and citizens, we must carefully consider how our data is collected, used, and potentially exposed through these technologies. When using AI systems:

  • Be selective about sharing personal details.
  • Do not upload other people’s personal information or intellectual property to generative AI.
  • Utilize privacy settings whenever possible.
  • Closely review privacy policies and exercise caution with invasive services, erring on the side of protecting your information.
  • Read the Terms of Service. Look for key information in the agreements, such as:
    • How your personal information is collected and how it’s used, shared, and protected.
    • Choices for your personal information, such as: opt-out, delete, and access.
    • User obligations regarding rules, IP (intellectual property), and reporting.
    • Consequences of violations, such as loss of access, liability, or legal action.
  • To stay safe, follow the Reddit Rule: if you wouldn’t post it on Reddit anonymously, don’t put it in an AI tool or system.

Other Ethical Considerations

Here are some other ethical considerations you may want to consider when evaluating how you want to engage with AI. Please note that this list is not exhaustive, and as AI continues to develop, new ethical implications will also arise.

AI and Environmental Impact

Research indicates that training AI systems like GPT-3 can consume vast amounts of clean freshwater and generate high greenhouse gas emissions, raising concerns about sustainability and the responsibility of developers to address the ecological consequences of their work.

Read more about the environmental impact of AI: 

AI and Labor Concerns

AI technologies are disrupting labor markets in a variety of ways. One example is through the “Paradox of Self-Replacing Workers,” where people train the very systems that could replace them, leading to job displacement despite increased productivity. Additionally, many AI firms rely on low-wage, outsourced labor to perform tasks essential to automation, a phenomenon called “ghost work.” While automation aims to cut costs, it often shifts skilled tasks overseas, raising labor rights concerns.

Learn more about AI and Labor:

AI and Warfare Acceleration

AI systems have the potential to accelerate warfare by enhancing military capabilities and decision-making processes. The use of AI in military applications raises alarms about the indiscriminate use of force, the erosion of accountability, and the potential for escalation in conflicts, highlighting the urgent need for regulatory frameworks to manage these risks.

Learn more about AI in Warfare:


How to Use AI in Your Learning

When to use AI

Knowing the vast capabilities and current shortcomings of Generative AI, it’s vitally important to understand when it is and isn’t appropriate to use such tools. As you’ve read, AI can bring with it a number of ethical and moral implications. In order to determine if you should use AI, it’s important to first think critically about what you’re using it for and why.

This flowchart, adapted from “Using ChatGPT in Medical Education” (Ratliff, Nur Sya’ban, Wazir, Haidar, & Keeth), originally conceived by Aleksander Tiulkanov, is a good place to start:

Notice how the above flowchart inserts questions and checkpoints to help you determine whether a tool is safe to use. For example, if you are using a tool for self-assessment or personalized learning, you would likely want any output it generates to be true in order to accurately test your knowledge of a topic or concept (see “Does it matter if the output is true?” above). Additionally, as we’ve already discussed, evaluating an output is a key step in determining its quality, so it’s important to have foundational knowledge of a concept in order to judge accuracy, particularly since we know that AI is prone to hallucinations (see “Do you have expertise to verify that the output is accurate?” above). If you don’t have that expertise, say you are a student who is new to the topic but needs a reliable study aid, then it may be best to avoid the tool until you know for sure that it is reliable OR you have access to a person with that expertise who can help you ascertain the accuracy of the output (e.g., your instructor).
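If you like thinking in code, the flowchart’s checkpoints can be sketched as a simple decision function. This is a hypothetical illustration only; the question wording is paraphrased from Tiulkanov’s chart, and the function name is our own:

```python
def is_it_safe_to_use_ai(matters_if_true: bool,
                         can_verify_accuracy: bool,
                         will_take_responsibility: bool) -> str:
    """Paraphrased sketch of the flowchart's 'safe to use?' checkpoints."""
    if not matters_if_true:
        # Low-stakes uses (e.g., brainstorming) clear the first checkpoint.
        return "Safe to use"
    if not can_verify_accuracy:
        # Without expertise (or an expert to ask), hallucinations can slip by.
        return "Not safe to use"
    if not will_take_responsibility:
        # You, not the tool, are accountable for the final result.
        return "Not safe to use"
    return "Safe to use, after verifying each claim yourself"
```

Notice that every path where the output matters but cannot be verified, or where no one accepts responsibility, ends at “not safe,” which mirrors the flowchart’s emphasis on verification and accountability.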

The risks of over-reliance

As with any technology, it is important to be mindful of judicious use and wary of over-reliance. In fact, OpenAI, the company behind ChatGPT, has warned users about the risks of over-reliance, which it defines as excessive trust and dependency that can lead to “unnoticed mistakes and inadequate oversight” (OpenAI, GPT-4 Technical Report, p. 59).

As we have discussed, having some foundational knowledge of a subject is important for evaluating the accuracy of an output. If as students, you haven’t yet acquired the expertise to recognize if a response is correct and you become highly dependent on such tools to gather information, it may become difficult to discern between fact and misinformation. Moreover, with over-reliance, you may fail to develop the skills needed to work through a problem that these tools can solve for you in seconds. 

When determining when and how to use such tools, you will want to consider how that tool will serve you now and in the future. Is it useful for performing a burdensome and time-consuming task that will enable you to concentrate on more complicated and productive work? Is it giving you an idea springboard from which to generate your own ideas and further discoveries? Or, is it robbing you of time you may need to spend to truly understand the complexities of a problem? 

Altogether, as we’ve discussed, generative AI can be a very useful tool, but it is only one tool in your toolbox and shouldn’t be used to supplant your own critical thinking nor the skills you will need to acquire in order to be successful academically and professionally. 

AI and Academic Integrity 

In some cases, use of Generative AI will be inappropriate and even prohibited. For example, if your instructor has prohibited its use on one or more assignments or in the class as a whole, then you should not use such tools when completing these assignments. Failure to comply could result in consequences outlined in USM’s Academic Honesty Policy and be subject to disciplinary action. This is because the ability to learn the specific course content, learning outcome, or skill would be negatively affected if AI does some of the learning tasks for you. As mentioned earlier, a level of expertise to evaluate any AI output is key to safely and effectively using generative AI, which often means learning the skills and knowledge to gain that expertise without using AI first. 

However, your instructor may allow the use of generative AI tools under certain circumstances. For example, your instructor may allow use of a tool like ChatGPT for the purposes of idea generation or as part of a class activity. If this is the case, you should comply with all requirements for its use, including providing proper citation. Some style guides already provide guidelines for this: ChatGPT Citations | Formats & Examples

If you are unsure as to whether or not you can use these tools to complete work in your course, reach out to your instructor and ask. You might ask questions like the following:

  • I would like to use ChatGPT, Bing, Bard (or another LLM) to generate ideas for a paper or project in this class. Is that okay? If so, how should I cite it? What are the appropriate parameters for its use on this assignment?
  • I sometimes struggle with writing and grammar, and I’m interested in using an AI tool to get proofreading assistance for my paper(s) in this class. Is that acceptable and if so, which tool is allowed? Should I cite the tool and how?
  • I asked ChatGPT a question about a concept we are covering in class, but the answer was different from what is provided in our class resources. Could you help me understand why?

As discussed, these are only a few sample questions. Your questions and concerns may be different.

Remember, it’s always best to be transparent and ask if you don’t know. These are new technologies, and we are all still learning! Very likely, other students in your class will have the same question and a discussion on the topic would benefit everyone.

Tips for getting the best outputs

With generative AI, the response or output you receive is only as good as the prompt you enter. Thus, prompt writing is a skill like any other: it requires practice, repetition, and experimentation to get the desired outcome, and it relies on a set of sub-skills to generate good results. For example, when using language models, it’s important not only to ask the right question but to pose it clearly and succinctly so that the tool can recognize what you are asking of it. Even using the word “please” can be helpful! Who knew netiquette would also be a valuable skill in this context?

Additionally, having foundational knowledge of a topic is another sub-skill toward being able to ask a good question of the tool, as well as being able to evaluate its output. After all, the more you learn about something, the better and more precise your questions get. 

It’s a lot like learning, and as you know, asking the right questions is key to learning any new subject. In fact, we’d suggest that the best way to become proficient with generative AI is to play. Log in to ChatGPT, Bard, Bing, or another tool and ask it a question. Then, evaluate the response. Did the tool generate the outcome you desired? Did it respond correctly and appropriately? Trial and error will help you determine that, but it’s always a good idea to begin through play and experimentation.

Infographic: a prompt travels along an arrow labeled “input” into the language model, and an arrow labeled “output” leads from the language model to the generated text.

In their AI in Education module for students, the University of Sydney identified and defined the four key elements of a good prompt.

Component | Definition
Role | the part played by the model
Task | what you want the model to do
Requirements | the conditions set for the task to be performed
Instructions | how to complete the task

Now, let’s see how each of the four elements play out in a prompt:

Example prompt. Each element is identified in the paragraph following. Prompt reads: You are a nursing student at Rush University and in your first year of the MSN program for non-nurses. Define perfusion and list examples of perfusion. Provide the response in paragraphs and highlight key concepts. Write with academic style writing.

The above prompt defines the role (“nursing student…”), identifies the task (“define perfusion and list examples of perfusion”), tells the model how to perform the task (“provide the response in paragraphs…”), and outlines the requirements for the output (“write with academic style writing”). 
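To make the four elements concrete, here is a small, hypothetical Python helper (the function and parameter names are our own, not from any particular tool) that assembles them into a single prompt like the example above:

```python
def build_prompt(role: str, task: str, instructions: str, requirements: str) -> str:
    """Combine the four elements of a good prompt into one string."""
    return " ".join([
        f"You are {role}.",  # Role: the part played by the model
        task,                # Task: what you want the model to do
        instructions,        # Instructions: how to complete the task
        requirements,        # Requirements: conditions set for the output
    ])

prompt = build_prompt(
    role="a nursing student in your first year of an MSN program for non-nurses",
    task="Define perfusion and list examples of perfusion.",
    instructions="Provide the response in paragraphs and highlight key concepts.",
    requirements="Write with academic style writing.",
)
print(prompt)
```

Separating the elements this way makes it easy to refine one at a time, for example tightening the requirements while leaving the role and task unchanged.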

After the initial prompt, it is your job to evaluate the response. Did it generate something appropriate for your use? Is the formatting correct? Are there any factual errors or hallucinations? Is the tone appropriate? You can then continue to refine the prompt by adding context, such as more background information, added or corrected details about the topic, or clarified formatting and tone requirements. Most of the time, the best output emerges after many rounds of prompt and response.

What if the output gets worse? The AI may have taken an earlier detail down the wrong path and gotten stuck. In these cases, the best option may be to open a fresh chat window and begin again with your prompt, now improved and revised through the earlier back-and-forth process.

Watch this short video clip for tips on prompt engineering (01:54 – 04:05):

Practical Applications to Try

Purpose | Prompt Examples
Feedback and self-assessment | “Provide feedback on the following writing sample: [insert sample]. Specify how well the writing sample meets the following criteria [insert criteria/rubric] and give me suggestions for how I can improve my writing.”
Personalized tutor | “I am a college student trying to improve my knowledge of [subject/task]. As a tutor, ask me a question about [subject/task/topic] and provide feedback on the accuracy and quality of my answers. Follow up by asking additional questions and offering explanations and examples to help me improve.”
Explain/summarize information at different levels of understanding | “Explain the concept of genetic mapping as if I am in fifth grade.”
Proofread writing | “Proofread my writing above. Fix grammar and spelling mistakes. And make suggestions that will improve the clarity of my writing.”
Idea generation | “Act as an expert academic librarian. I’m writing a research paper for a college-level introductory Sociology course and I need help coming up with a topic. I’m interested in topics related to climate change. Please give me a list of 10 topic ideas related to that” (National American University, 2024).

Read more on using AI for idea generation at National American University. (2024, August 14). Generate topics. National American University. https://national.libguides.com/artificial_intelligence/generate_topics

References:


Creative Commons LicenseThis work is licensed by the University of Southern Maine Center for Academic Innovation, adapted from the RUSH University AI Literacy Module for Students by the Center for Teaching Excellence and Innovation (CTEI, Rush University) under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
