
AI is in Your Classroom Even if You Didn't Know It

posted on 2023-01-19, 12:01, authored by Lorena A. Barba

Brief remarks at the first GW faculty conversation about AI and education: "AI is in Your Classroom Even if You Didn't Know It"

Video on YouTube: https://youtu.be/IzoatfXc28Q

or Vimeo (starts at 15:16): https://vimeo.com/790955091#t=916s

January 18, 2023 (GW calendar entry)

https://go.gwu.edu/ai4ed


Description

Join colleagues from across GW to learn more about how recent advancements in Artificial Intelligence (AI) technologies, such as ChatGPT, are now being used in university classrooms, labs, and offices. From AI that writes original papers, essays, and poems to AI that creates art or writes computer code, these technologies are quickly impacting many aspects of higher education. In this initial faculty conversation, we will discuss what each of us should know about these recent advancements and how we can grapple with their multiple implications for our teaching, research, and service.


The event is a collaboration among colleagues in the humanities, social sciences, and STEM disciplines, and will focus on the promises and perils of AI in higher education, as the first of an ongoing series at GW.


Presenter notes

[slide 1] Title

[slide 2]

In this definition of AI, we acknowledge that we previously thought that human intelligence was required in many tasks that we now offload to machines. We emphasize that computers are different from humans, and “artificial intelligence” is somewhat of a misnomer, because computers are not really “intelligent” or “smart,” despite the product descriptions out there.


If you see AI defined as “the simulation of human intelligence by computers,” or as developing “computer systems that can think, learn and act like humans,” take that with a healthy dose of skepticism. These metaphors and analogies, favored in popular-science headlines, are unhelpful and exaggerated. Avoid the “awesome thinking machine” myth, or any suggestion that machines are thinking and becoming human-like.


Quote found in:

https://www.forbes.com/sites/peterhigh/2017/10/30/carnegie-mellon-dean-of-computer-science-on-the-future-of-ai/?sh=524ad1152197


[slide 3]

This text was generated by ChatGPT and is not that bad.

“Perception” in AI includes computer vision, for example, with applications such as image recognition, path planning for automated vehicles, object detection, or face recognition. These systems are already deployed broadly.
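
As a concrete illustration, here is a minimal sketch of image recognition with a pretrained network (assuming PyTorch and torchvision are installed; “cat.jpg” is a hypothetical local image):

```python
# Classify one image with a pretrained ResNet-18 (torchvision >= 0.13).
# "cat.jpg" is a hypothetical file; any RGB photo works.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()  # resizing, cropping, normalization
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)

label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)  # e.g., "tabby" for a cat photo
```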

“Reasoning” is used in the sense of drawing inferences, and AI capabilities are mostly limited to data-based or statistical inference. (True reasoning involving more than inference is still a challenge for AI.)


[slide 4]

You will often read definitions of machine learning to the effect of “field of study that gives computers the ability to learn without being explicitly programmed”—this is not entirely correct, because the computers are indeed explicitly programmed: they implement an algorithm that optimizes some function to fit the data, finds parameters for a model, or otherwise solves a data-based problem.

The key metaphor about learning is the idea of turning information into expertise or knowledge. But we should be careful not to take the metaphor so far that we anthropomorphize the machines.

https://doi.org/10.1147/rd.33.0210
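
To make the optimization point concrete, here is a toy sketch in Python (a hypothetical illustration, not an excerpt from any real system): the “learning” is nothing more than an explicitly programmed loop that adjusts two parameters to minimize a squared error on the data.

```python
# Toy "machine learning": explicitly programmed gradient descent
# that fits a line y = w*x + b to noisy data.
import random

# Synthetic data drawn from y = 2x + 1 with a little noise
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0   # model parameters, initially arbitrary
lr = 0.001        # learning rate (step size)

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted model: y = {w:.2f}x + {b:.2f}")  # approaches y = 2x + 1
```

Nothing here “learns” in a human sense; the program simply follows the optimization recipe it was given.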


[slide 5]

Generative AI uses algorithms to generate new content in the form of text or images, for example.

These models are trained on large amounts of data. They are then able to output new synthetic content that is unique, but consistent with the patterns learned from the data. They can also be fine-tuned for a specific content domain using a new, smaller set of data.

Automatic translation machines are an example application.
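
Here is a toy sketch of that idea (deliberately simplistic, nothing like a real generative model in scale): “train” by counting which word follows which in a small text, then sample new word sequences consistent with the learned patterns.

```python
# Toy generative model: learn word-to-word transitions from text,
# then generate new text that follows the same patterns.
import random
from collections import defaultdict

corpus = (
    "ai is in your classroom even if you did not know it "
    "ai is already changing your classroom and your office"
).split()

# "Training": record which words follow each word
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# "Generation": sample a new sequence from the learned transitions
word = "ai"
output = [word]
for _ in range(10):
    if word not in transitions:
        break
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))  # new text mimicking the training patterns
```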


[slide 6]

Large language models are neural networks that have been trained on a large dataset of text from books, articles, and websites. The goal is to obtain a model that can analyze input text and generate natural language output. After training on massive amounts of text data, they can be fine-tuned for specific tasks, such as language translation, question answering, and text summarization.

Examples are GPT-3 by OpenAI and BERT by Google. (There are probably a dozen or more such models; many are not yet public.)
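
For a hands-on feel, a small public model can be run in a few lines. This sketch assumes the Hugging Face transformers library, with GPT-2 standing in for larger models such as GPT-3 that are not openly downloadable:

```python
# Text generation with GPT-2 via the Hugging Face pipeline API.
# Downloads the (public) GPT-2 weights on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence in the classroom"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(result[0]["generated_text"])  # prompt plus generated continuation
```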


[slide 7]

Some limitations of large language models are:

– lack of context: they are trained on very large data sets, but may miss the context of the text they see; this can lead to incorrect output

– no domain knowledge: the training data is broad, and may not be sufficient for specific domains, also leading to inaccurate or irrelevant output

– lack of creativity: the output can be merely average

– bias: the biases contained in the training data persist in the model

– technical limitations: currently it takes huge amounts of computing power to train the models, and it is very expensive to run them

All these limitations are, of course, being worked on.


[slide 8]

Of course, the use of these models in education should be accompanied by proper monitoring and guidance from the teacher, and by clear policies and guidelines for students on their use. With those in place, the benefits can outweigh the drawbacks, and these models could become a valuable tool for education.

But as Ray Schroeder said in today’s Inside Higher Ed: “our learners, as they pursue careers, will do so in an AI-rich environment,” so we need to “ensure that our learners have experience with the technologies as well as develop effective practices for their optimal use.”


[slide 9]

Terence Tao is a professor of mathematics at UCLA, a Fields Medalist (the Fields Medal is often called the Nobel Prize of mathematics), a winner of the Breakthrough Prize in Mathematics, and a MacArthur Fellow. He is sometimes regarded as one of the greatest living mathematicians.

Link to post: https://mathstodon.xyz/@tao/109543141003492779


[slide 10]

https://twitter.com/ibleducation/status/1605626553446141952
