I met Raphael Porro, a first-year nursing student at the Islamic University in Uganda, and asked him about his experience using Artificial Intelligence technologies in his studies. Surprisingly, he was not shy about admitting that he uses them.
Porro says the first AI tool he came across was ChatGPT, introduced to him by a friend, and he has not looked back since.
“I often use AI when answering multiple-choice questions. Many of them can be difficult, and going to textbooks takes a lot of time,” Porro explains. “So, I go to AI and simply type in the question, and it provides an answer within a very short time, and I feel very okay.”
His fellow student, Nalukooya Sharifah, a law student, sees things differently. She admits she was introduced to AI tools on her very first day at university, yet she still refuses to rely on them for coursework. “I feel it discourages me from thinking,” she says. “As a future lawyer, you have to think and read many sources.”
Here are two students on the same campus with opposite relationships to the same technology. In the gap between Porro and Nalukooya lies the central challenge facing every university in Uganda today.
The debate over AI use in educational institutions often revolves around cheating versus learning, integrity versus convenience. Several experts, however, argue that this framing is too narrow, even dishonest.
AI is not something that can be resisted by making exam rules stricter or plagiarism policies tougher. It is an infrastructure, the calculator of this generation. The real question is not whether students should use it, but whether our institutions are prepared to teach them how.
Godfrey Kyazze, an education expert and lecturer at African Bible University, points out that when combine harvesters replaced hand ploughing, farmers did not stop farming; they farmed differently. When calculators arrived, mathematics education had to adapt.
“What teachers are saying today, that AI is a threat,” Kyazze observes, “I want to relate it to what farmers possibly said when tractors came.”
Kyazze is right, but the analogy also carries a warning. A farmer who hands over the plough and stops learning the land loses the craft; in the same way, a student who outsources every question to an algorithm stops developing the muscle of thought. The danger is not the tool. The danger is dependency without understanding.
Dr. Adam Ali, the academic registrar of the Islamic University in Uganda, suggests that institutions of higher learning need to encourage both lecturers and students to accept that AI tools like ChatGPT are not the enemy.
“We find it is very important,” he says, “for our lecturers and students to know that it is not wrong to use AI tools to make themselves effective and efficient.”
He further urges a shift away from the traditional model of the lecturer as the sole custodian of knowledge, toward one that equips students with the skills to thrive in a world where information is generated and accessed in real time.
This is not naivety; it is necessity. But it requires more than new school rules: it demands a new way of assessing students and a clear understanding of what a university is really for.
If the purpose of a degree is just to show that a student can read and remember information, then AI has already made that useless. But if a degree is to prove that a person can reason even when things are not clear, judge evidence, defend an argument, and solve problems that do not have direct answers, then AI has not reduced its value at all. Instead, it has made that purpose even clearer.
A 2025 review, available through the U.S. National Center for Biotechnology Information, adds a perspective to this conversation that Ugandan educators cannot afford to ignore: while AI offers genuine advantages in higher education, such as personalised learning and improved communication, it also carries documented risks of loneliness, digital stress, reduced interpersonal skills, and social isolation.
Nalukooya may not be able to explain the science behind her discomfort with AI, but she has sensed something true. Thinking is a practice, and like anything left unused, it weakens. If we build academic systems that reward students for submitting AI-generated work without ever developing the capacity to evaluate, critique, or extend it, we are not producing graduates; we are producing prompts.
What, then, should Ugandan universities actually do?
First, they must stop denying reality. Pretending that students are not using AI, or banning what cannot be policed, only costs institutions credibility. Dr. Ali’s honest acceptance that AI cannot be stopped, only guided, is a far more useful starting point.
Second, assessment must change. Essays and multiple-choice questions that a chatbot can answer in seconds are no longer adequate measures of student ability. Universities need to invest in formats that are genuinely AI-resistant and require the kinds of human judgment that machines cannot replicate: oral defenses, supervised practical work, interactive projects with documented reasoning, and presentations where students must respond in real time to probing questions.
Third, and most importantly, students must be taught to think alongside AI rather than beneath it. The goal is not to forbid the tool but to ensure that the student who uses it remains the author, capable of questioning the output, identifying its errors, and producing something that comes from a mind genuinely engaged.
Authored by Nakhokho Rashid Matselele
Ugandan Journalist, Communication and Media Specialist, Exploring Culture, Politics, Religion, and Media
