10 Generative SoTL: Exploring AI in Inquiry

Elisa Baniassad

Introduction

The rapid advancement of artificial intelligence (AI) has introduced both challenges and opportunities in education. As SoTL scholars, we are at a crossroads, facing existential questions about our roles, methodologies, and the very relevance of university education. In this chapter, I explore the ways AI intersects with SoTL, from enabling research methodologies to reimagining educational roles and responsibilities. Rather than reacting passively to technological shifts, we must take this opportunity to ask the questions at the heart of education: Who are we? Why are we? Ultimately, this chapter will argue for a proactive and critical engagement with AI in education, ensuring that SoTL remains at the forefront of shaping the future of learning.

This All Feels Familiar: Stories From Software Engineering Education

AI is changing every field. A fundamentally new capability has been introduced into fields that had not been disrupted for a very long time. Yes, technologies have come along that provide assistance, but that assistance was always nicely contained to helping—not to thinking. Now there is something that will think for you, if you ask it properly. And most fields are not used to that kind of new thinking helper.

But actually, this all feels familiar to me, as a computer scientist. In the most basic terms, computer scientists program computers. We do a lot of things, but that is one thing we do. In the beginning, programming a computer meant moving levers, finding literal bugs, then punching cards, then writing simple commands, and then writing complex commands. Computer scientists refer to these changes as the introduction of levels of abstraction and, sometimes, the introduction of indirection—the programmer issues a command, and something indirectly happens. It looks like AI is introducing our next level of programming abstraction.

To the outside observer, these levels of abstraction and indirection might just feel like tech progress—the way that a robot can replace a human at some simple tasks like moving rocks around a plane. But, in fact, in providing programmers with new levels of abstraction, these tools were doing some thinking too—they were making choices and saving us from having to make those choices. The robot is not only moving rocks around—it is playing chess! Where we used to train programmers to be really good at software optimization, a task that involved a lot of thought, understanding, decision making, and judgment, a computer would now do it, and we, for the most part, would not have to worry about it any more. If the newly introduced abstraction was a big enough disruption, with enough staying power, typically a Turing award went along with it.

What that meant was that programming was in a constant state of “what does this mean for us?” Now that we no longer had to worry about some things, what did that free us up to worry about instead? And this is exactly where many people, including those of us in education, and in particular in SoTL, find ourselves. What does this new decider/creator mean for us?

As we have seen since AI emerged in popular use, people feel a tremendous amount of concern. In software engineering, this concern always accompanied new abstractions. The software engineering fear had two distinct and very different angles. The first is highly personal: will my skills be out of date, meaning I will lose my job? We are seeing waves of this in many fields, and especially programming, right now with AI. The second is very practical and grounded in the work itself: loss of command and control of the functioning of the machine. In delegating some of the judgment and decision making to the machine, we programmers give up some of our granular control. You can ask for something new, but that also means you cannot get into the weeds of how the command is carried out the way you used to be able to.

So, with new abstractions there is always this tension—you can speak in bigger structures, and you also cannot speak in smaller components any more. Where once you were able to say “I would like to book a flight at this time, from this airport, to this airport, with no stops in between,” you are now forced to stay at the level of “I would like to go from City A to City B.” There was always real fear that the new power would become a straitjacket or that it would lead programmers down the garden path and, years later, have us all regretting our choices as we looked at systems that were difficult to understand, difficult to maintain, and slow. The academic response was to write viral research papers with catchy titles like “____ considered harmful!”[1]

Objects were one of the most Turing-winning abstractions and brought in a whole new field of thought called object-orientation—they introduced a level of abstraction higher than even the highest of the abstractions they overlaid. Objects and the associated thought paradigm were first introduced in the 60s, and people immediately wondered “what does this mean for us?” Some met the new abstraction with excitement, while others were concerned. “Objects considered harmful” papers proliferated. Certainly, there was a sense that objects could take over and resistance was needed. In fact, it took decades for objects to catch on, and they went through many alterations of their own before they really took hold as usable technology in programming. It took until the late 90s for objects to be practical enough to enter broad industrial use, then another decade for people to realise they did have their drawbacks. Now, I say hesitantly, we are at a good place with objects, more than 50 years in, using the useful parts and ignoring the less efficient aspects that always felt hokey. The associated software processes have also changed and matured, becoming more general and maybe a little less eager and goofy.

All the while, these new abstractions had, and continue to have, impacts on software engineering education—while the field was grappling with what it meant to be a programmer now, educators struggled to figure out what to teach and how to teach it. Was it the responsible thing to teach objects even though barely anyone in industry was using them yet? Or should we have our students leave the congregation hall with all the cool new tech under their belts, ready to embrace the new abstraction and process landscape? Imposter syndrome and insecurity abounded: teaching students usually means an educator has a pretty good understanding of the topic—in the case of new abstractions, this is just impossible.

Engaging in discipline-based education research (DBER) and SoTL within software engineering was a real challenge, with the fear being that your reflection and inquiry might yield findings about something that is no longer relevant a few years down the road! Why bother studying the right way to teach an abstraction that nobody is going to keep using?

With AI, the pace of adoption in the field, and of worker obsolescence, is significantly accelerated. AI came into popular use around two years ago, and we are already at the 30-year mark if we go by the object-orientation timescale: we, at this moment, are struggling to figure out how AI fits into software engineering curricula and how to study its impacts and applications in education without our results going stale almost immediately.

But because of our past in software engineering education, we know how this will likely go, albeit way, way faster. To some extent, the phases follow the path of the stages of grief, which, in a way, makes sense. We are grieving the loss of the solidity of the old practices and norms, and we are forced to embrace a new reality that we did not choose or create.

The first phase is denial, or the sense of “this is likely nothing and will go away.” We are done with that step. We can see that it is in fact something, and that it is not going away. Industry is investing in it, laying off people because of it, and slowly and increasingly requiring our graduates to know how to use it.

The second stage is not exactly anger, but maybe more irritated bargaining—we are here. SoTL’ers in this stage might be tempted to take the position of publishing papers along the lines of “good news, learning the old way is still better.” Yes, learning to program with objects is neat, but learning structured programming (the paradigm that came before) is still a better bet. The SoTL equivalent of “AI considered harmful.” A less software-specific example might be “are AI-assisted student essays better or worse?” This phase can last a very long time, if not forever (when interleaved with the next stage). In the case of objects, some bargaining is still happening, more than 50 years in, even through the phases when we taught it with enthusiasm and now with more maturity, perspective, and restraint. The problem for SoTL and AI is that the technology is so new and unformed that it is hard to ground enduring questions in it, besides using it as a lens through which to examine the act of learning. SoTL’ers at this stage may even hold off on looking at the new tech in teaching and learning because there is just not enough to go on.

The third stage is embracing, but badly: we will teach it, but we will get it somewhat wrong. Maybe this maps to a depressed/depressing acceptance. We will pick the wrong angle on the abstraction, choose the wrong metrics of success, and teach people skills that do not quite end up being the ones that are required long term. Maybe something like “how to prompt the AI to generate good software tests?” (which is a really good question at the time of writing, but may seem quaint, naive, and even moot in a year). We will then perform SoTL and DBER associated with the not-quite-future-proof material. In this phase of object-orientation, software engineering had to adjust its pedagogical structures and associated DBER to accommodate the shift in abstraction by introducing courses on objects and associated practices. Papers in this stage often have the new technology right in the paper title to distinguish the work as part of the new era and paradigm, even while the prior one is alive and relevant. Conferences get named for the technology that arrives on the scene.

Some educators have entered this third stage, but most of us need to wait and see how our disciplines pick up this new technology in order to drive curriculum to meet downstream needs. This is the stage where students start asking irritable questions on forums such as “why aren’t we using the new tech in class?” and, painfully for many of us, pointing out, “my instructor has no idea how to use the new tech.” This is a very perilous time for SoTL because the easiest questions to ask are things like “how do we mitigate hallucinations when teaching students programming?” But what happens to that result in six months when those specific hallucinations are no longer appearing? In fact, many of the most obvious questions will have a very short horizon for generalizability. Perhaps the role of SoTL in this stage is to keep asking interesting questions, and not ones that are tinged with irritation about the new thing or have a built-in expiry date. Asking “do hallucinations help or hinder students learning to code, and what does that say about coding skill acquisition?” is a better offering than “experience report on why hallucinations ruined my students’ experience.” Yes, there is always the fear that students will just not need to learn to write code at all in a couple of years, but at least we will understand something about the way students think and learn that can be transferred to whatever the new reality might be. In this phase, we need to position SoTL to interrogate how best to integrate AI-driven methodologies across all fields, even while those technologies and methodologies are changing.

The final stage is acceptance. We are definitely not here yet, and it may still be several years away, because the prior phase will continue as long as the technology continues to drive forward at its current breathless pace. In the acceptance phase, the dust has settled and the scope of use in the disciplines has been more or less established. That solidity affords a mature view on what to teach, so we can better inquire into how to teach it. We can start asking questions like whether students’ sense of belonging is influenced by the technology and whether this technology changes anything about the way classes should be set up in the first place. In software engineering, we are now writing papers about things like “student belonging in a second-year programming class” without mentioning that the technology being taught in the class is object-oriented. Ultimately, we will stop mentioning the technology in our paper titles because it will have become just the way things are done.

The path from denial to acceptance is not linear—it is now thought of as a tangle, and its course is unique to each individual travelling it. SoTL as a field is currently tangled in all of the phases of denial, irritation, and embracing.

SoTL’s New Abstraction: The Question

Perhaps the earliest obvious upside for SoTL, and maybe motivating the first steps into acceptance, is the role AI can play in analysis. AI tools are already pretty good (the stage before robust) at extracting themes from qualitative data. Recently, my colleagues at the Institute for the Scholarship of Teaching and Learning at the University of British Columbia (UBC) and the Centre for Teaching and Learning at UBC used an AI tool to extract themes from anonymous open-ended responses. They calibrated the tool against data they had already analysed and found that the AI approach resulted in more complete themes than their manual efforts. Furthermore, the AI performed the task in a day, whereas their work had taken weeks.
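
To make this concrete, here is a minimal sketch in Python of what such a theme-extraction step can look like, assuming access to an LLM API (the OpenAI Python SDK is used purely as an example). The model name, prompt wording, and helper function are illustrative assumptions, not a description of the tool my colleagues actually used.

    # A minimal sketch of LLM-assisted theme extraction, for illustration only.
    # Assumes the OpenAI Python SDK and an API key in the environment; the
    # model name and prompt are placeholders, not the tool described above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_themes(responses: list[str], model: str = "gpt-4o") -> str:
        """Ask the model to propose recurring themes across open-ended responses."""
        numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
        prompt = (
            "You are assisting with qualitative analysis of anonymous course "
            "feedback. Identify the recurring themes in the responses below. "
            "For each theme, give a short name, a one-sentence description, "
            "and the numbers of the supporting responses.\n\n" + numbered
        )
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

The calibration step my colleagues performed is the part that matters most: running a pipeline like this on data that has already been coded by hand, and comparing the machine’s themes against the human codebook, is what separates a clever demo from a defensible method.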

A lack of agility has always been a huge blocker to truly interesting questions—we were hindered in scope and breadth by the sheer effort that qualitative analysis entailed. Because of that outlay, we would bound our questions to what was doable in certain timeframes:

  • What can one ask in a semester?
  • What can one ask in the month during summer when we have a moment for SoTL?
  • What analysis can one do across the duration of a PhD?

This brings me back to programming—my grandfather-in-law did his PhD in math in the 70s, using computers. The computer was the size of a room, and he was trying to show that you could use computers to do a certain kind of calculation. There were a lot of reasons this work was hard to do: computers were changing, and the technology was clunky and very slow. He had to book computer time, and the waitlists were long. As such, he had to ask a very specific question about a single calculation because he could not ask a more general question—it would have taken too long! At some tragic point, he tripped and dropped his entire thesis program on the floor—hundreds of punched cards fell in a jumble. Sorting them all back out set him back weeks. But now nobody drops their programs on the floor, so that risk is gone. Nobody needs to book computer time. We can just use computers. And with advances in computing, computer scientists have been able to ask more and more complex questions, and even to invent AI, something that will answer all our questions for us, and maybe start coming up with questions of its own!

SoTL is being fast-tracked from holding a rack of punch cards to having lightning-fast computers always at the ready. You can now speak in the abstraction of questions, instead of having to get into the weeds of determining your answer. With the right data to hand, we can now instantly ask questions like “do students seem more stressed in classes with more granular assessment schedules?” or “do students with higher scores in early assessments engage more, and more meaningfully, on the discussion forums? And if so, what characterises the difference in their engagement?” If we get the prompt right, we might get our answer back in a few minutes. That leaves us open to thinking about the implications of the answer and working through our pedagogical response to those implications. For instance, a current research initiative within a databases course is investigating whether AI can evaluate tone in student discussions and correlate sentiment with learning outcomes. Early results look promising. This abstraction-elevation shifts the paradigm from merely studying student engagement to actively uncovering hidden patterns in their learning experiences.
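
To ground that example, the statistical core of such an inquiry is deliberately mundane once sentiment has been scored. The sketch below is hypothetical: the file and column names are invented, and it assumes an AI rater has already produced a per-student sentiment score alongside an outcome measure.

    # A minimal, hypothetical sketch of correlating forum sentiment with
    # learning outcomes. Real work would need ethics approval, de-identified
    # data, and a validated sentiment measure.
    import pandas as pd
    from scipy.stats import spearmanr

    df = pd.read_csv("forum_sentiment.csv")  # invented file: one row per student

    # Assumed columns: 'mean_sentiment' (e.g., -1 to 1 from an AI rater) and
    # 'final_grade' (a percentage). Spearman suits non-normal, ordinal-ish data.
    rho, p_value = spearmanr(df["mean_sentiment"], df["final_grade"])
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

    # A correlation here is a prompt for pedagogical interpretation, not a
    # causal claim: a positive rho might reflect engagement, confidence, or
    # cohort norms rather than anything the sentiment itself causes.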

AI in SoTL practice is gaining ground. Child Trends reports that education researchers are utilizing AI to predict student outcomes, analyse qualitative data, and conduct literature reviews (Kelly et al., 2024). While these tools offer efficiency, concerns about accuracy, security, plagiarism, and ethics persist. Meanwhile, a systematic review by Ogunleye et al. (2024) highlights the lack of agreed-upon guidelines for using generative AI in higher education. The study calls for interdisciplinary research to develop frameworks and policies that ensure effective and ethical AI integration in teaching and learning.

Of course, AI’s ability to generate insights is not synonymous with understanding—SoTL scholars are still the primary thinkers and question askers in the loop. There is still, and will likely always be, significant expertise needed to know which prompts to give to the machine. We can ask the tool to make suggestions about which analyses to apply, but it still feels tenuous to trust its advice without expert insight.

We are likely to see further proliferation of papers on AI for SoTL, sharing what works well and what does not work yet, when performing SoTL at this new level of abstraction. As AI facilitates new forms of analysis, and speeds up the old ones, it is imperative to robustly refine SoTL methodologies and peer-review the advances.

Common Cause; Unique Impacts

While AI is reshaping education broadly, its effects are not uniform across disciplines. Each field will experience AI-driven disruption in different ways, requiring tailored pedagogical responses from SoTL scholars. Understanding these varied impacts is essential for ensuring that AI integration supports disciplinary needs rather than imposing a one-size-fits-all approach.

Library and information sciences, for instance, will likely undergo a transformation akin to the digital revolution that redefined their field with the advent of computers. Just as library science had to reimagine cataloguing, information retrieval, and research support when digital systems became dominant, AI now presents another seismic shift, automating aspects of knowledge organization and discovery. This raises fundamental questions about the role of librarians as curators of knowledge and the skills that will remain uniquely human. Discipline-specific questions might include “how does AI-driven automation affect students’ ability to engage in independent research?” or “what role should librarians play in guiding AI-assisted inquiry?”

Computer science faces its own challenges, as AI increasingly encroaches on traditional programming tasks. With large language models capable of generating functional code, debugging, and optimizing algorithms, educators must reconsider what foundational skills students need. Should introductory courses focus less on syntax and more on systems thinking, software architecture, and ethical AI deployment? The role of computer scientists may shift from code authorship to higher-level system design and oversight. We have already discussed the kinds of questions that might be asked in the sub-field of software engineering, but computer science broadly is also shifting. SoTL scholars in computer science might end up engaging with questions like “how should programming curricula evolve when AI can generate and debug code?” and “should courses shift toward systems design, AI ethics, or human-computer interaction?”

The sciences, particularly those driven by quantitative methods, are being accelerated by AI’s ability to handle vast datasets, perform combinatorial analyses, and model complex phenomena with unprecedented efficiency. In fields such as biology, chemistry, and physics, AI is streamlining research workflows, suggesting new hypotheses, and automating experimental design. How do students interpret AI-generated scientific models? How can AI best be integrated into laboratory education without undermining traditional scientific rigour? This shift requires educators to place greater emphasis on interpreting AI-generated findings, ensuring students understand the assumptions and limitations behind algorithmic analysis.

Social sciences are experiencing a different kind of disruption. AI’s generative capabilities have profound implications for fields like psychology, sociology, and political science, where qualitative assessment and textual interpretation play a central role. AI tools can simulate human behaviour, analyse vast corpora of text, and even generate new qualitative data. This may force SoTL scholars to ask:

  • How do we teach critical thinking and methodological rigour when AI can produce plausible but flawed narratives?
  • How do we ensure that AI-driven research does not merely reflect the biases of its training data?
  • How do students critically engage with AI-generated qualitative data?
  • What safeguards are needed to ensure AI does not reinforce biases in sociological and psychological research?

The SoTL community can come together as a whole to help ask these questions—some answers will generalise across disciplines, and some will tilt into the discipline-specific. Maintaining interdisciplinarity will help SoTL remain robust to the weaknesses introduced by silos, which can plague communities and slow down the transfer of advancement and insight.

Exploring Existential Questions about Higher Education

As we watch AI emerge into something we can trust to take over large swaths of our executive functioning, we have to ask—where will it end? Can AI do a better job of teaching, or at least a cheaper job, than we educators can?

In the book The Diamond Age: Or, A Young Lady’s Illustrated Primer by Neal Stephenson (2000), a little girl named Nell has parents who are plagued by the problems of poverty and cannot secure a solid education for her. As such, Nell is left to her own devices (literally) for any education she should want to achieve. Through a series of unlikely but tremendously fortunate events, Nell finds herself in possession of a tablet called a Primer, described as an infinite interactive book—Nell can ask the Primer anything, explore anything, and learn anything. The Diamond Age was written in 1995—15 years before tablets and nearly 30 years before GenAI. We still do not quite have what Nell had, but it is getting eerily close—it is not that the tech cannot do what the Primer did in The Diamond Age, it is just that application developers have not quite had the time to do it…yet. By the end of the book, Nell has emerged as a leader and is intelligent and intellectually strong. As SoTL scholars, it behooves us to ask: Would human teachers have done a better job?

The Diamond Age strikes at the heart of a lot of the questions we are currently asking about education in the face of AI. Nell is only able to be educated to a sophisticated extent because of theft and luck (Stephenson, 2000). If education is tied to expensive delivery technology, will there be a divide between the haves and the have-nots? The Primer is completely self-paced and uncurated—Nell is an uncommonly inquisitive child and seems to have an uncanny ability to ask the right next question to lead her to useful and usable outcomes. Can we assume that the same level of wisdom is present in most students? Or is there a risk that choose-your-own-adventure education would lead students deeper and deeper into conspiracy theories, à la social media’s self-reinforcing algorithms?

Recent studies have explored various applications of AI in education, highlighting both its potential benefits and challenges. A report by the U.S. Department of Education (2023) emphasizes that AI can automate actions to support student learning, but educators must ensure these actions comply with laws such as the Individuals with Disabilities Education Act. Similarly, research from the Center on Reinventing Public Education indicates that AI usage in U.S. classrooms is currently limited, with more advantaged suburban districts leading in adoption, raising concerns about equitable access to AI’s benefits in education (Lake, 2024).

We have no data about how students engage with an immersive AI educator, but we have some clues. Students did not flock to massive open online courses (MOOCs) the way we had expected, and MOOCs did not replace educators the way we feared. Why not? I can think of a few reasons, just drawing on my own experience:

  • MOOCs are not choose-your-own-adventures, meaning a student cannot delve into areas of deeper interest or chart their own path through the material.
  • Students cannot circle back if they are still unsure.
  • Students cannot stray out of the lane of the material and explore adjacent topics.

MOOCs are a closed and finite world. AI, by contrast, is infinite, open-ended, and choose-your-own-adventure. But there are some drawbacks of MOOCs that AI may share. MOOCs seem to require a mature commitment from a learner with strong time management skills—ultimately, the students have the freedom to just walk away. And learning in a MOOC can feel lonely—if the online community in the MOOC is quiet and there is no sense of cohort, then a student can feel like they are on their own with the material. We still do not know whether long-term tutelage from an AI would begin to feel closed and lonely and whether students would just walk away as they could and often did with MOOCs. Do we need human connection to deeply learn? Or to keep learning? Do we need someone who will be sad we do not show up to class to keep us showing up to class? These are SoTL questions.

New Forms of Dissemination: Beyond the Paper

Traditional academic dissemination methods—conference presentations, journal articles, and books—are being challenged by AI-driven approaches that allow for new forms of engagement with research findings. One emerging possibility is the use of AI-powered conversational agents, such as custom GPT models, to allow researchers, educators, and the public to explore findings dynamically. Rather than reading static reports, users could engage with a dataset, ask questions about the research, and generate customized insights in real time.

Imagine a research project on student engagement where users could interact with an AI model trained on the dataset, posing questions such as:

  • What were the key themes that emerged?
  • How did students with different backgrounds respond?
  • What factors were most correlated with learning gains?

Instead of flipping through appendices or supplementary materials, users could receive direct, nuanced responses tailored to their inquiries. The portal could allow consumers to fully explore the extent of the researcher’s inquiry but would also catalyse follow-up exploration. One could command “perform their analysis but with my dataset and report on whether we see the same effect” or “given their findings, could you get at this inference they didn’t think of?”
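
As a sketch of how such a portal might be wired up today, under heavy assumptions: the study’s findings are condensed into a grounding document, and a chat loop answers readers’ questions against it. Everything named below (the summary file, the model, the prompt) is illustrative, and a production version would need retrieval over the full, consented dataset plus guardrails against the model inventing results.

    # A minimal, illustrative sketch of a conversational "findings portal".
    # Assumes the OpenAI Python SDK; file and model names are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # In a real portal this would be retrieval over the full (consented,
    # de-identified) dataset; here answers are grounded in a findings summary.
    with open("findings_summary.md") as f:
        findings = f.read()

    history = [{
        "role": "system",
        "content": "Answer questions about this study using ONLY the findings "
                   "below. If they do not address a question, say so rather "
                   "than guessing.\n\n" + findings,
    }]

    while True:
        question = input("Ask about the study (blank to quit): ").strip()
        if not question:
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(answer)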

This would likely deconstruct, to a disturbing extent, the boundaries of what would be considered one’s own contributions to the field. That deconstruction will feel very uncomfortable. Our academic intellectual property has been protected up until now by the natural barrier of the sheer difficulty of doing the work. If the work is easy, then what does that mean? What do tenure and promotion cases look like? What metrics would be used? Would we need a different calculus to measure impact?

However, this shift also raises significant short- and medium-term challenges. While the technology is in flux, how do we ensure that AI-generated responses accurately reflect the nuances of the research? How can we maintain academic rigour in a system where responses may be generated probabilistically? These are critical questions that SoTL scholars must address as AI becomes an integral part of the research dissemination process.

Conclusion

SoTL must not simply react to AI. The time for denial or even passive adaptation is over. It is imperative that SoTL scholars lead conversations about AI’s role in education, ensuring that pedagogical values and ethical considerations are not sacrificed to efficiency, cost effectiveness, or confusion. As AI reshapes the educational landscape, SoTL scholars must be at the forefront of defining what teaching and learning will become and must invest heavily in interrogating the range of its possibilities. We must deliberately chart a course that preserves what is essential, innovates where possible, and asserts a vision for education and inquiry that ensures AI serves human learning rather than replaces it. We can rise above questions rooted in cynicism or skepticism to ask those that help us understand ourselves, our learners, the role of education, and the very form it can take.

References

Kelly, C., Holquist, S., Kelley, S., & Aceves, L. (2024, December 10). Promising applications of AI in education research. Child Trends. https://www.childtrends.org/publications/applications-ai-education-research

Lake, R. (2024, May). AI is coming to U.S. classrooms, but who will benefit? Center on Reinventing Public Education. https://crpe.org/ai-is-coming-to-u-s-classrooms-but-who-will-benefit/

Ogunleye, B., Zakariyyah, K. I., Ajao, O., Olayinka, O., & Sharma, H. (2024). A systematic review of generative AI for teaching and learning practice. Education Sciences, 14(6), 636. https://doi.org/10.3390/educsci14060636

Stephenson, N. (2000). The diamond age: Or, a young lady’s illustrated primer. Spectra.

U.S. Department of Education. (2023). Artificial intelligence and the future of teaching and learning [PDF]. https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf


  1. The first of these offerings was the famous essay “Go To Statement Considered Harmful” by the prominent computing thinker Edsger W. Dijkstra, who, I am proud to say, was my PhD supervisor’s supervisor’s supervisor’s supervisor. Ironically, Dijkstra was arguing for more abstraction in his essay, not less. But because the title was so catchy, people tended to adopt it to argue against new abstractions.