AI is in Your Classroom Even if You Didn't Know It
Brief remarks at the first GW faculty conversation about AI and education: "AI is in Your Classroom Even if You Didn't Know It"
Video on YouTube: https://youtu.be/IzoatfXc28Q
or Vimeo (starts at 15:16): https://vimeo.com/790955091#t=916s
January 18, 2023 (GW calendar entry)
Join colleagues from across GW to learn more about how recent advances in Artificial Intelligence (AI) technologies (such as ChatGPT) are now being used in university classrooms, labs, and offices. From AI that writes original papers, essays, and poems to AI that creates art or writes computer code, these technologies are quickly affecting many aspects of higher education. In this initial faculty conversation, we will discuss what each of us should know about these recent advances and how we can grapple with their multiple implications for our teaching, research, and service.
The event is a collaboration of colleagues in the humanities, social sciences, and STEM disciplines, and will focus on the promises and perils of AI in higher education as the first in an ongoing series at GW.
[slide 1] Title
In this definition of AI, we acknowledge that we previously thought that human intelligence was required in many tasks that we now offload to machines. We emphasize that computers are different from humans, and “artificial intelligence” is somewhat of a misnomer, because computers are not really “intelligent” or “smart,” despite the product descriptions out there.
If you see a definition of AI as “the simulation of human intelligence by computers” or that it develops “computer systems that can think, learn and act like humans,” take that with a healthy dose of skepticism. These metaphors and analogies—preferred by the popular science headlines—are unhelpful and exaggerated. The “awesome thinking machine” myth or any suggestion that machines are thinking and becoming human-like should be avoided.
Quote found in:
This text was generated by ChatGPT and is not that bad.
“Perception” in AI includes computer vision, for example, with applications such as image recognition, path planning for automated vehicles, object detection, or face recognition. These systems are already deployed broadly.
“Reasoning” is used in the sense of drawing inferences, and AI capabilities are mostly limited to data-based or statistical inference. (True reasoning involving more than inference is still a challenge for AI.)
You will often read definitions of machine learning along the lines of "the field of study that gives computers the ability to learn without being explicitly programmed." This is not entirely correct: the computers are indeed explicitly programmed, implementing an algorithm that optimizes some function to fit the data, finds the parameters of a model, or otherwise solves a data-based problem.
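To make the point concrete, here is a minimal sketch (with made-up toy data) of what "learning" typically means in practice: the computer is explicitly programmed to find the parameters that minimize an error function over the data. This example uses the classic closed-form least-squares fit of a line.

```python
def fit_line(xs, ys):
    """Explicitly programmed 'learning': find the slope a and intercept b
    of y = a*x + b that minimize the sum of squared errors on the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for the two parameters.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy data that roughly follows y = 2x + 1 (hypothetical, for illustration).
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]
a, b = fit_line(xs, ys)
```

Nothing here "learns" in a human sense; every step is a programmed computation, and the "knowledge" extracted is just two fitted numbers.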
The key metaphor about learning is the idea of turning information into expertise or knowledge. But we should be careful not to take the metaphors too far into anthropomorphism of the machines.
Generative AI uses algorithms to generate new content in the form of text or images, for example.
These models are trained on large amounts of data. They are then able to output new synthetic content that is unique, but consistent with the patterns learned from the data. They can also be fine-tuned for a specific content domain using a new, smaller set of data.
Automatic translation machines are an example application.
Large language models are neural networks that have been trained on a large dataset of text, from books, articles, and websites. The goal is to obtain a model that can analyze input text and generate natural language output. After training on massive amounts of text data, they can be fine-tuned on specific tasks, such as language translation, question answering, and text summarization.
Examples are GPT-3 by OpenAI and BERT by Google. (There are probably a dozen or more such models; many are not yet public.)
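The statistical idea behind these models can be illustrated with a deliberately tiny sketch: record which word follows which in a (made-up) training corpus, then generate new text by repeatedly sampling a plausible next word. Real large language models use neural networks trained on billions of tokens, but the "predict the next token from patterns in the data" framing is the same.

```python
import random
from collections import defaultdict

# A tiny, made-up "training corpus" for illustration only.
corpus = "the model reads text and the model writes text and the model learns"
words = corpus.split()

# "Training": for each word, record the words observed to follow it.
following = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(start, length, seed=0):
    """Generate text by sampling, at each step, a word that followed
    the previous word somewhere in the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:
            break  # no observed continuation; stop generating
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 6))
```

The generated text is new (not a verbatim copy of the corpus) yet entirely consistent with the patterns in the training data, which is exactly the sense in which larger models produce "unique but pattern-consistent" output.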
Some limitations of large language models are:
– lack of context: they are trained on very large data sets but may miss the context of the text they see, which can lead to incorrect output
– no domain knowledge: the training data is broad and may not be sufficient for specific domains, also leading to inaccurate or irrelevant output
– lack of creativity: the output can be bland or merely average
– bias: biases contained in the training data persist in the model
– technical limitations: it currently takes huge amounts of computing power to train these models, and it is very expensive to run them
All these limitations are, of course, being worked on.
Of course, the use of these models in education should be accompanied by proper monitoring and guidance from the teacher, along with clear policies and guidelines for students on their use. With these safeguards in place, the benefits can outweigh the drawbacks, and the models could be a valuable tool for education.
But as Ray Schroeder said in today's Inside Higher Ed: "our learners, as they pursue careers, will do so in an AI-rich environment," so we need to "ensure that our learners have experience with the technologies as well as develop effective practices for their optimal use."
Terence Tao is a professor of mathematics at UCLA, winner of the Fields Medal (often called the Nobel Prize of mathematics) and the Breakthrough Prize in Mathematics, and a MacArthur Fellow. He is sometimes regarded as one of the greatest living mathematicians.
Link to post: https://mathstodon.xyz/@tao/109543141003492779