A Critical Review


Practicing writing well

Today I had an assignment for my writing class: the goal was to write a critical review of a journal paper, article, or book. I decided to choose a topic deeply relevant to our time: the use of AI tools in computing education. I think I did a decent job, so I decided to share what I wrote.

Critical Review

This critical review covers a journal paper [LG23] by Sam Lau and Philip Guo, two researchers in the Department of Cognitive Science at the University of California, San Diego. The paper examines the evolving landscape of education in the age of AI code generation and large language models (LLMs). By interviewing a wide variety of programming instructors, the authors give an overview of educators' short-term and long-term plans in response to the rapid rise of AI code generation tools. I personally found this paper very informative, as it explains creative solutions to the evident problems of cheating and of students not understanding the material.

The paper first gives an overview of the many ways software developers and researchers have been using AI code generation tools, most notably code completion, code simplification, debugging support, and conversational bug finding. However, the authors also note critical limitations of these tools, such as inaccuracies and low code quality. In a related article [Guo23], Professor Guo surveys other ways scientists can incorporate AI tools into their existing workflows to enhance productivity and learning. This raises the question: if the use of AI tools is accepted and even promoted in the workplace, shouldn't educators also try to teach their students how to make proper use of this emerging technology?

The main findings of the paper come from short interviews with 20 instructors of introductory computer science classes around the world. To begin with, most professors mentioned that it was difficult to know how much their students were using AI tools, with one TA stating that it almost felt like a "taboo topic" [LG23]. In fact, if a student was using these tools, there was no incentive to tell the professors about it. In contrast, all the educators mentioned having previously heard about or discussed the subject with colleagues. This ranged from casual conversations to long email threads, and in the case of one educator, even the issuance of a new policy banning the use of AI tools in all classes across the university.

Naturally, as anyone could imagine, all educators were concerned about cheating in the short term. This led to different changes to the structure of their courses: some professors increased the weight of in-person written exams, while others tried to ban AI tools in the classroom (as is the case with this current class). In all cases, the participating teachers noted that these were only temporary measures and that stronger policies would need to be put in place in the future. Here is where things get interesting: participants had vastly different opinions on longer-term solutions. Some proposed resisting AI tools, while others wanted to embrace them. The former stressed the importance of learning the fundamentals of programming, while the latter insisted that AI tools could let students focus on design without battling syntax and rote memorization.

As the authors point out, the findings of this paper have several limitations. Although the participants were affiliated with universities spanning all continents, the classes were all taught in English, and the teachers all came from American universities. Another important factor is that the classes studied were limited to introductory university courses, so the findings might not generalize to other levels of education. Still, the paper provides an interesting perspective on computing educators' reactions to the beginning of worldwide adoption of AI tools. The structure of the paper, moving from introducing uses of AI tools, to detailing short- and long-term responses, to concluding with related open research questions, really helps the reader grasp the overall idea the authors were trying to convey. I think what this paper does especially well is present its results objectively, always acknowledging different perspectives, while also relating these concerns to broader issues such as equity, access, and pedagogy.

To conclude, this paper gives an excellent overview of the use of AI tools in education from the perspective of university educators. It presents several ideas that will feel intuitive to the reader, along with many surprising facts about the topic. However, the paper raises far more questions than it answers. For example, how does the use of these AI tools affect software developers and scientists in other industries? What repercussions could decisions in education have on the future landscape of jobs and opportunities? All in all, understanding different perspectives on the use of new information and communications technology (ICT) is a crucial step towards forming one's own opinion on the subject. It will be interesting to see, in hindsight, how accurately these educators predicted the future of education, but alas, as with many things, only time will tell.


[Guo23] Philip J. Guo. Six opportunities for scientists and engineers to learn programming using AI tools such as ChatGPT. Scholar articles, August 2023.

[LG23] Sam Lau and Philip J. Guo. From "Ban it till we understand it" to "Resistance is futile": How university programming instructors plan to adapt as more students use AI code generation and explanation tools such as ChatGPT and GitHub Copilot. In Proceedings of ICER 2023: ACM Conference on International Computing Education Research, August 2023.