Sydney's "two lanes" approach
Two and a half years later, it looks like radical measures are finally on the table.
By now, virtually every lecturer has had to deal with generative AI’s impact on their courses, and they have seen that:
“AI-resistant” assessments remain resistant for less time than it takes to develop them.
Detection is a fool’s errand and an administrative nightmare.
AI can do many things that genuinely benefit students, so excluding it tout court1 is a mistake.
Malpractice procedures are entirely inadequate; most people who cheat using AI get away with it. And most people cheat using AI if they can, a classic example of broken windows theory.
Covid-19
When Covid-19 hit, the government told the sector that most in-person assessment events (exams) could not happen for quite some time.
Things like plexiglass boxes were proposed, but nobody took those seriously. Faced with what was an insurmountable obstacle, the sector adapted—many exams were cancelled in the short term, and then a monumental effort brought the whole sector online. Was it perfect? No. Was it immune from defects? No. Was it fully inclusive? No.
But that’s what we could do at the time. And we did! The sector survived, and I think it was the right choice.
How was the choice made? Well, there wasn’t one. The government made things illegal. Indeed, wishing that Covid-19 did not exist… was not an option. So why is it an option now for AI?
Generative AI makes any kind of fully remote assessment impossible, if one wants to safeguard the main purpose of the assessment exercise (certifying that students have attained the intended learning outcomes). This is exactly how Covid-19 made in-person assessment impossible: for different reasons, yes, but the effect is the same. A branch of the assessment tree, incidentally the very one we were happily developing in case other pandemics occurred, is suddenly unavailable to us, forever.
There are two other big differences between Covid-19 and generative AI: no vaccine will save us this time, and generative AI is something that most people actively want in their lives.
Adapting seems, therefore, even more urgent… yet, 31 months later, very little has happened. Why?
Two lanes: The Sydney policy
It is poetic that the first truly innovative policy2 comes from the University of Sydney, where “Sydney” is the name that Bing’s first chatbot gave itself. They introduced quite a radical policy, which is built on a very simple premise, the one that we have been using here since the start.
I quote: "unsupervised assessments still play an important role in students' learning - but they cannot be used to assure program learning outcomes have been attained"
That’s it. We live in this world now, sorry. We didn’t choose the Covid-19 world either. It’s sink or swim.
1. Secure Assessments
These are invigilated or supervised assessments where generative AI is strictly prohibited. Crucially, these are supervised and completed in person (such as exams, in-semester tests, interactive oral assessments, practical tasks, and so on).
Their purpose is to verify that a student has met key learning outcomes and to provide a safeguarded evidentiary base for progression and certification. Secure assessments include things such as final exams (written, practical, oral), in-semester tests, interactive oral assessments, and supervised tasks during placements. These provide the backbone of assessment integrity.
The University mandates that these secure assessments be used in all units to ensure students demonstrate achievement of learning outcomes independently and authentically. They typically have to account for a substantial share of a student's summative grade (e.g., 30–60%).
2. Open Assessments
These are unsupervised and often scaffolded tasks where the use of AI is permitted, even encouraged, under the guidance of the instructor. Students are required to clearly acknowledge any AI use, and instructors provide structure on how to do so responsibly. These tasks aim to promote learning and creativity, and include things such as essays or creative writing, presentations and case studies, portfolios and reflections, or even online quizzes and dissertations.
The overall philosophy is that open assessments foster learning and skill development—including AI literacy—while secure assessments verify attainment of educational outcomes. The two assessment lanes must coexist, but final certification of knowledge must be rooted in secure tasks.
Implementation Timeline
Sydney's policy rollout occurs in two key stages. From Semester 1, 2025, new rules under the Academic Integrity Policy prohibit AI use in supervised assessments by default, and encourage and permit it in all other forms of assessment.
From Semester 2, 2025, a new university-wide Assessment Framework is introduced. All assessments will be classified as either secure (supervised and AI-prohibited) or open (unsupervised and AI-permitted). Secure assessments will be structurally required across all units, designed to safeguard certification of learning outcomes. Open assessments will instead be reimagined as learning opportunities where AI use is scaffolded and expected to support—but not replace—student work.
Moreover, the whole framework is complemented by university-endorsed tools like Microsoft Copilot, with data protection protocols in place... but, crucially, their usage is not mandated. So, the policy explicitly recognises the absolute, inalienable and inevitable freedom that this technology brings with it.
Radical enough, and simple enough
This is, in my view, exactly the sort of policy other institutions should adopt. Its strength lies in four key areas:
Structural clarity: Every course must have secure components. There are no fully remote secure components. There are no exceptions.
This provides peace of mind to faculty, a consistent message to students, and makes it easy to spot issues. No more abstract debates on how hard an assessment would be to do with AI... AI can do everything, plan accordingly.

Flexibility: Being based on principles, and little more, this approach adapts to the individual needs of each course leader and student cohort. It asks, after all, for little more than minimal safeguards to be maintained... safeguards that, I must add, were at the core of every education system for the past 200 years. Fully remote assessment was a young, novel element... that is, unfortunately, dead on arrival.
Recognition of AI's role: The policy differentiates between the constructive use of AI and its misuse. AI is part of education now, and pretending otherwise only increases risk.
Clarity of enforcement: Because AI use is explicitly permitted or forbidden in each lane, lecturers and students finally know exactly where they stand, and can explore AI usage in the open lane without ambiguity. This lifts the burden of "continuous assessment" or other philosophies that brought with them a need for oppressive surveillance, giving back freedom to lecturers to design tasks that work best for their courses.
This approach recognises that generative AI is a structural, not behavioural, threat. It cannot be addressed through policing alone. Instead, systemic adaptation is necessary. The University of Sydney's model accepts this truth and re-engineers assessment accordingly. I look forward to seeing how this develops.
We really are out of time
One more thing: secure assessment... will soon need a lot more security. We are exactly 0 days away from extremely small AI-enabled gadgets that students will be able to use to cheat at in-person assessment events. China just switched off its AI models for a few days during the national university entrance exams.
AI is not static. It keeps evolving. And we still haven’t deployed a proper answer to November 2022’s ChatGPT. Making education work is a crucial challenge, absolutely essential for the survival of the thinking ability of our species, which may prove fundamental in the future.
So, yes, this policy is radical. So is the shift that the sector needs, if it wants to survive.
But this policy is also measured, implementable, and realistic. It provides a blueprint that, honestly, every institution should copy, especially if the alternative is that another academic year passes us by in inaction.3
Crucially, “AI principles” of most places state this. But then they avoid the question of assessment entirely.
That I have seen! I do have a good Twitter feed, though, so I doubt that other major policy shifts have escaped me. If they have, sorry.
The cover picture was generated by Gemini (so Nintendo should go after them if they believe that it isn’t fair use, which it is). The prompt was “Generate a cover picture for the article "two lanes: assessment approach with AI". Give it a mario kart style where one lane has an in person exam (or a book?) and the other has the internet”.
Appropriately, Luigi was chosen for the unsupervised lane… I hope based on this quite popular meme among Nintendo fans like myself.
Let’s not live in a future where students pass by doing absolutely nothing!