Event: IS Research and AI – Time to Rethink Our Role

03.12.2025, UiA Kristiansand, Norway
AI4Users

On December 3rd we hosted an academic session at UiA – Department of Information Systems, where colleagues gathered to reflect on the role of data and AI in reshaping work and education, and on the place of IS research in this landscape. The session included inspiring and thought-provoking presentations by Olivia Benfeldt (Assistant Professor at CBS) on Data Diplomacy and Sarah Hönigsberg (Assistant Professor at ICN Business School) on GenAI-enabled learning, and finished with a meaningful dialogue in a panel discussion with Olivia Benfeldt, Sarah Hönigsberg and Margunn Aanestad (Professor at UiO), moderated by Polyxeni Vassilakopoulou (Professor at UiA, Project Leader for AI4Users).

The event made for an engaging afternoon that surfaced exciting perspectives and meaningful reflections on how we can leverage AI in ways that strengthen, rather than sideline, human capabilities. The discussions also highlighted the role of Information Systems research in this shifting landscape, and how data diplomacy is becoming an increasingly important lens for understanding how we can promote accountability, build data literacy, and collectively govern data within AI-driven environments.

Dissemination Conference

03.12.2025, UiA Kristiansand, Norway
AI4Users & Digitalisation of Public Welfare Services in Scandinavia

On December 3rd we brought together researchers, public-sector employees (including from NAV and the health sector) and the leadership of UiA – Department of Information Systems for an inspiring dissemination conference, where results were presented from both this project and the project Digitalisation of Public Welfare Services in Scandinavia.

The first part of the programme covered Scandinavian welfare digitalisation and included an overall presentation of the project – from goals to results – by Sara Hofmann (Associate Professor at UiA and project leader); insights into how digital transformation affects roles and work tasks in the public sector, presented by Christian Østergaard Madsen (Associate Professor at the IT University of Copenhagen); and information on how research and practice jointly develop better digital services, presented by Ida Heggertveit & Hanne Höglund Rydén (PhD fellows at UiA).

The second part of the programme covered human-centred AI in public services and included a presentation of the project and its goals and results by Polyxeni Vassilakopoulou (Professor at UiA and project leader); insights into the implementation phase of using AI in the public sector, presented by Elena Parmiggiani (Associate Professor at NTNU); design principles and findings for understandable and responsible use of AI in the public sector, presented by Vilde Elvemo (Researcher at UiA); and benefits, tensions and leadership insights from the use of AI in healthcare, presented by Mari S. Kannelønning (Researcher at UiO).

The conference served as a meeting place between research and practice, and created room for many interesting conversations and discussions. The day concluded with a shared lunch for all participants.

Design Challenge on transparent AI

30.10.2025, Online event with UiA students
Vilde Elvemo & Ingebjørg Gregersen

At the end of October, we carried out a design challenge in which students explored how AI can be applied in a transparent and responsible way when creating exam explanations.

The participants were divided into groups and tasked with designing how exam explanations crafted with support from an AI tool could be presented to students. As part of the challenge, they were given a concrete use case, along with personas and scenarios (see Read more), to guide their design work and ensure a shared understanding of the context and user needs.

After a working session, each group was asked to showcase both their design solutions and their reflections on what should and should not be included, as well as their own views on the use of AI in this context.

This activity offered valuable insight into students’ expectations, preferences, and ideas for how AI support can be used transparently in ways that feel both trustworthy and meaningful to them.

AI-generated illustration (ChatGPT)

Read more
Background

According to the Act relating to Universities and University Colleges (§11-8), students in Norway have the right to request an explanation for their exam grade. When a student requests an explanation, the lecturer must provide it within two weeks.

Lecturers report that this process is time-consuming and that there is a need for more efficient solutions that still preserve the learning value of the explanation. To support this, universities both in and outside of Norway are exploring the use of AI as an assistive tool for drafting exam explanations.

If the universities decide to implement such a tool, several important questions arise, one of them being how these AI-assisted explanations should be presented to students. Since students are required to be transparent about their own use of AI in deliverables, the universities feel an equal obligation to be transparent in return.

However, students have different levels of understanding and expectations regarding AI. Some are comfortable with technical details and prefer in-depth explanations; others find such information confusing or unnecessary and prefer simple, human-like explanations. Some want full transparency; others only want the key takeaway.

To investigate how these challenges could be addressed and what kinds of presentation formats students themselves consider trustworthy, understandable, and useful, we carried out an online design challenge with students from the UiA course IS-117-1 “Introduction to Human-Centered Artificial Intelligence”.

Use case

The AI tool is named WhyGrAId and can be used by lecturers to get support when they need to provide an explanation after evaluating and grading an exam. To get support, they upload:

  • The exam question
  • The grading guide/model answer
  • The student’s answer
  • The grade given
  • Extra instructions to the AI or comments about the assessment, if any

WhyGrAId generates draft text that the lecturer must review and edit before making the final explanation available to the student.
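The upload-and-draft workflow above can be sketched as a minimal data model. WhyGrAId is only described at this level in the use case, so the class and function names below are illustrative assumptions, not the tool's actual interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExplanationRequest:
    """The five inputs a lecturer uploads (names are hypothetical)."""
    exam_question: str
    grading_guide: str                        # grading guide / model answer
    student_answer: str
    grade: str                                # the grade given, e.g. "B"
    extra_instructions: Optional[str] = None  # comments about the assessment, if any

def draft_explanation(request: ExplanationRequest) -> str:
    """Stand-in for the AI drafting step: returns a draft that the
    lecturer must review and edit before releasing it to the student."""
    note = f" Assessor note: {request.extra_instructions}." if request.extra_instructions else ""
    return (
        f"Draft explanation for grade {request.grade}: the answer was "
        f"assessed against the grading guide.{note}"
    )
```

The key design point mirrored here is that the tool's output is only a draft: the lecturer, not the system, produces the final explanation.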

Personas and Scenarios

Some submitted designs

Design for Responsible AI and welfare: 2025 EiT Village

10.01.2025 – 24.01.2025, NTNU Trondheim, Norway
Course teacher (village supervisor): Sofia Papavlasopoulou
Course code: TDT4850

This was the fourth time we ran design activities with students to build new knowledge about the use of AI applications in Norwegian public administration, focusing on users' need to understand the role of AI in welfare delivery processes, the key characteristics of these applications, and the types of data they use, paving the way for human-friendly and trustworthy solutions. Participants worked alongside researchers from the AI4Users project at both the University of Agder (UiA) and NTNU. The purpose of this partnership was to give participants relevant insights, facilitate meaningful discussion, and evaluate proposed measures addressing specific challenges related to the responsible use of AI in delivering welfare services.

EiT is a compulsory course for master's-level study programmes at NTNU. In this course, students from different study programmes come together to solve real challenges through collaboration, reflection, and shared responsibility. It works by placing diverse groups in interdisciplinary villages, where they explore problems, develop ideas, and practise communication, trust building, and constructive conflict resolution. This takes place through written and oral reflections and structured teamwork exercises. Through this process, students discover how their individual strengths become far more powerful when combined with the perspectives of others, and they learn what effective teamwork truly means in practice. The teaching takes place in courses (villages), each with its own focus and challenges.
Read more about EiT on NTNU's website.

Examples of challenges:
– Developing methods for identifying the needs of different groups of citizens and caseworkers.
– Creating solution ideas or prototypes that visualize information in an understandable way.
– Evaluating alternative visualizations and their impact on decision-making processes or public acceptance.
– Increasing the efficiency of decision processes without compromising transparency.
– Respecting legal requirements while building trust in both the systems and the caseworkers who use them.

Timeline

Are AI practitioners ready for AI Fairness? The need for Institutional Work for Early Prioritization of Fairness in AI Practices

15.12.2024 – 18.12.2024, Bangkok, Thailand
Pouria Akbarighatar, Ilias Pappas, Polyxeni Vassilakopoulou

Authors

Pouria Akbarighatar 1

Ilias Pappas 2

Polyxeni Vassilakopoulou 1

1 University of Agder, Faculty of Social Sciences, Department of Information Systems

2 Norwegian University of Science and Technology, Faculty of Information Technology and Electrical Engineering, Department of Computer Science

Event

Title: 45th International Conference on Information Systems, ICIS 2024

Organizer: the Association for Information Systems (AIS)

AI in the Health Services (KI i helsetjenestene)

21.11.2024, Scandic St. Olavs plass, Oslo
Mari Serine Kannelønning

Author

Mari Serine Kannelønning 1,2

  • 1 University of Oslo, Faculty of Mathematics and Natural Sciences, Department of Informatics, DIG Digitalisation, Digital Innovation
  • 2 University Hospital of North Norway, Norwegian Centre for E-health Research

Event

Title: Helsedagen 2024

Organizer: Dagens Medisin