AI in Software Engineering

10 May 2025

I. How we use AI

Despite being the product of many different projects and research teams over time, AI that accepts prompts from users has only become widely available to the public within the past few years. Before universities such as the University of Hawaii and its campuses began regulating it more closely, students were using it to bypass traditional academic responsibilities such as essays or memorization. Whether this will prove an advantage in the long term or a downfall from missing out on manually exercising key skills remains to be seen and is still being debated. However, keeping pace with students using it as a shortcut, educators have been developing creative ways to incorporate it into classrooms, as well as teaching its basics and the ethics of using it efficiently without sacrificing essential learning. For example, in our software engineering course, we were encouraged to register for Github student account benefits, largely because instructors had our access to Github Copilot, integrated into an IDE of choice, in mind. It was intended as a tool to save time otherwise spent writing repetitive lines of code. Rather than directly harming our learning, it would boost our productivity by suggesting contextually appropriate lines to complete whatever thought we were working on, a small chunk at a time.

II. Half-out, half-in

To preface, my experience with AI over the course of the semester in software engineering followed a defined timeline because of when I gained access to certain AI tools. At the beginning of the semester, I requested the Github student account but did not obtain its benefits until approximately halfway through. Because of this gap, I seldom used AI tools until then, with the exception of an occasional question or two to Gemini when I was scrambling under a time constraint (usually in WODs or last-minute assignment submissions). Once I finally could enable the Github Copilot extension for VSCode, it proved half-helpful but also half-intrusive, since toggling it on and off constantly would have been even more tedious. It would give both accurate and unnecessary suggestions, as well as offer formatting corrections and bug fixes with accompanying explanations.
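For anyone who finds the constant suggestions similarly intrusive, VSCode does expose settings that tame Copilot without toggling the whole extension. A minimal sketch of a settings.json fragment, assuming a recent Copilot extension (setting names should be verified against your installed version):

```jsonc
{
  // Disable Copilot per language where it is more distracting than helpful,
  // while leaving it active everywhere else.
  "github.copilot.enable": {
    "*": true,
    "markdown": false,
    "plaintext": false
  },
  // Stop inline "ghost text" from appearing automatically; suggestions can
  // still be requested manually with the inline-suggest trigger keybinding.
  "editor.inlineSuggest.enabled": false
}
```

This keeps the extension installed and signed in, but shifts it from interrupting constantly to appearing only on request.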

There were a plethora of opportunities to use AI throughout ICS 314, since it was indirectly promoted as a way to increase students’ efficiency. The following examples illustrate how well (or how poorly) I integrated these tools into each task:

1.

Aside from the Copilot extension once I got it, I mostly did not see a need to seek AI assistance for the experience WODs, since I preferred to learn to do the work myself before fully leaning on the tools. Granted, the extension was constantly trying to do the most in the background, so I would passively accept suggestions only if they matched what I intended to write anyway. I never used the “Fix using Copilot” option, since I did not trust it to avoid inserting nonsense code into my existing work.

2-3.

For both in-class practice and regular WODs, the time constraint put me under more pressure, so I embraced the Copilot features mentioned above more readily in these contexts. I used “Explain using Copilot” when I was in a pinch and needed to understand the cause of an unforeseen bug and possible fixes. I would then test those proposed solutions to see whether they a) actually resolved the problem and b) matched anything we learned in class. Before obtaining the Github student pack, I also loosely used Gemini for debugging, prompting it with “debug [my code here]”.

4.

I did not use AI to write any of my essays because doing so is not a personal habit of mine. Having been strongly deterred by specific professors in the past with the threat of expulsion, I never reached for the tools for this purpose despite knowing they existed. Comically enough, my personal writing style has faced AI-llegations over time from writing professors, peers, and professionals with reading-intensive careers so many times that there is now a running joke that I am a living, breathing AI. Crazy, right?

5.

For our final project, beyond the basic presence and routine minimal usage of Github Copilot, I did not write any prompts to create code or debug. My heaviest use of AI in the project was creating documentation for elements in our Prisma schema. After reviewing its output, I determined that I could not have summarized or explained our code any more clearly or precisely, so I included it in our final project page’s guide.
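To illustrate the kind of documentation involved: Prisma supports triple-slash comments that attach descriptions directly to models and fields, which tools can then surface as docs. A hypothetical sketch (the model and field names here are invented examples, not our project’s actual schema):

```prisma
/// An account in the system.
model User {
  /// Auto-incrementing primary key.
  id      Int      @id @default(autoincrement())
  /// Unique login email for the account.
  email   String   @unique
  /// Reviews written by this user.
  reviews Review[]
}

/// A user-submitted review, linked to the account that wrote it.
model Review {
  /// Auto-incrementing primary key.
  id        Int      @id @default(autoincrement())
  /// Star rating from 1 to 5.
  rating    Int
  /// Timestamp set automatically when the row is created.
  createdAt DateTime @default(now())
  /// The authoring user; deleting the user cascades to their reviews.
  user      User     @relation(fields: [userId], references: [id], onDelete: Cascade)
  userId    Int
}
```

Writing one plain-English sentence per model and field like this is exactly the sort of repetitive-but-useful task where I found the AI’s output hard to improve on.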

6.

No AI tools specifically gave me tutorials, but after incorporating the Copilot extension I did periodically request explanations of certain errors. These were helpful about 50-70% of the time, and when they were, I learned what was causing the issue and how to resolve or avoid it next time.

7.

I did not use AI to answer a question in class or in Discord, since I was barely active on either front.

8.

I did not use AI to ask or answer a smart-question. Previous responses to questions of similar depth and weight in other courses had not been satisfactory, so I felt it was not yet equipped to handle such multidimensional inquiries.

9.

I did not write any prompts to produce example code because it didn’t occur to me to use AI in this manner.

10.

While I did not use AI to explain the context of existing code, I did use the “Explain using Copilot” function from the extension to gain some insight into the causes of bugs and possible solutions.

11.

Other than accepting suggestions from the Copilot extension, I did not use AI to write code because I don’t trust its accuracy at this point in time.

12.

Documenting code: see #5. Otherwise, I did not use AI for tasks such as writing summaries or generating code comments.

13.

Between a small handful of Gemini debugging requests and “Explain using Copilot”, I would say I used AI frequently enough to reinforce quality assurance.

14.

No other miscellaneous uses of AI in the course came to mind, other than generating humorous images to slap on essay webpages like this one.

III. Not hindering, somewhat helping

I can confidently state that I tailored my AI use specifically to avoid losing out on core learning experiences. I did this by refusing to let the tools write my code for me unless a suggestion was near-perfectly what I had already envisioned for that section; if it was precise enough, I would accept it as an addition to my work. Some of the explanations I asked for went more in-depth than what we had covered in the in-class learning experiences, while others were technically correct but too vague to make a lasting impact on my understanding. Still others were simply reminders of things I had been taught in the course but had already forgotten.

IV. Beyond 314

I do not intend to participate in any upcoming HACC events, but I will be using Copilot with VSCode or other IDEs throughout the rest of my undergrad courses, since it proved convenient and handy… if we disregard its small nuisances and more persistent features. AI tools can improve productivity and free up time for humans to spend their energy on other challenges and projects. In terms of programming, however, it has (as of May 2025) only progressed to a level where it can be reliably adopted by experienced developers who do not find it more taxing to check whether the tool has made mistakes or suggestions that would break their existing system.

AI has already done impressive things across assorted tasks and challenges, proving itself through the latest releases and applications of Alphafold and Evo, which predict protein structures and model DNA sequences, and it can continue to do so to aid genetic disease research along with other causes. In the future, AI could also greatly assist the agricultural industry by predicting weather patterns, helping farmers manage production factors like irrigation levels and schedules, or planning the optimal time to harvest certain crops.

V. Inconvenience, convenience

While AI has some functions that served a legitimate wider purpose in the software engineering course, it is clearly unprepared for our greater expectations of its performance because it has not yet reached ideal consistency milestones. I could keep complaining about how annoying Github Copilot’s incorrect suggestions or vague explanations were, but observing the even more outlandish assumptions peers received from ChatGPT reinforced my disappointment with the current capabilities of generative AI. Some of it works, but it tends to deviate from the conventions taught in the course to the point where its meaning becomes unrecognizable in context.

Once AI’s programming accuracy and understanding improve, it will be qualified to provide more valid and explanatory learning material. In its current state, the most commonly used models can, at best, slip in a bonus tip or two about a concept that your course lectures may have missed.

VI. Manual vs. AI learning

Our instructor also used Github Copilot in the video lectures to autocomplete an in-progress thought, but if the autocompletion was incorrect or became an obstacle that hindered the work, the syntax would be manually edited to reflect developer intentions. Relying too much on AI was discouraged, but experimentation and light familiarity with it were encouraged. It was not interwoven into the class deeply enough to call it an ‘AI-driven course’, but we can still examine the differences between traditional and AI-assisted teaching.

In traditional software engineering education, students would manually program nearly everything, save for code borrowed from templates on platforms like Github. The sheer volume of time spent practicing coding concepts reinforced the skills, which were then committed to long-term and sometimes even muscle memory. Students were also assigned tasks that tested their knowledge of each unit in the course. The assumption is that the core foundations of athletic software engineering will be retained by continuing to assign timed ‘workout’ programming activities and similar methods, so that the craft is not lost for future generations of students.

VII. Can AI beat its own learning curve?

When AI models collectively improve enough to be considered reliable information tools, they can be used to generate lectures from concise, specific prompts. Instructors would also be able to use them to write example code and to create assignments and project ideas. Some instructors around the world may already be doing this, but hopefully using the tools this way will be more credible in a few years. Even once AI has progressed enough for more acceptable use, it should still be treated as supplemental to educators’ central focuses in their courses. They would need to understand it well enough, and have sufficient experience with it, to teach students how to use it best without sacrificing code-writing and comprehension skills.

VIII. Final thoughts

Using AI as part of the software engineering course this semester has been an interesting journey, to say the least. I began with minimal use, flailing around trying to beg Gemini to debug my code properly, then upgraded to integrating Github Copilot into VSCode as an extension, and finally observed classmates doing the absolute most using ChatGPT. While I am optimistic about where AI can take the computer science industry as well as the wider world, I currently view my experiences with using it to program the same way I see Zippy’s: it offers sustenance for my appetite or convenience, but it is not my favorite option or the best for my performance/health. Also, whenever I am not in the mood to deal with its presence, it is there anyway and requires more effort to avoid than to cave in and partake. Existing AI tools have evolved to a stage where they are neither completely unusable nor close to perfect, which leaves plenty of room for the machines to grow and learn what will satisfy their human users.

As much as I have lamented how bothersome and unhelpful most generative AI currently is to incorporate into programming, I do believe it will become an integral part of developers’ education and that it can contribute positively to many situations once it reaches that acceptable level of effectiveness. It would be able to do the heavier lifting for educators when it comes to course development, and its neural networks are already helping researchers make incredible discoveries. Setting aside concerns about insufficient accuracy and environmental ethics, AI has the potential to improve to the point of being applied to many more everyday and larger tasks in life. For now, though, instructors should continue to ensure they understand its current capabilities and how to direct students in utilizing them without letting its output interfere with learning and productivity.