Whenever a revolutionary technical innovation appears in Germany, the first reaction is a call for regulation. This also applies to ChatGPT, which is already being used almost daily, especially by pupils and university students, and which, as a text and language processing program, brings both opportunities and risks. Undoubtedly, it increases the scope for fraud, especially in science. ChatGPT still produces many errors. Yet most researchers consider it a false hope that texts generated with artificial intelligence (AI) will remain technically easy to identify.
Political correspondent in Berlin, responsible for the "Bildungswelten".
Steffen Albrecht, an AI researcher at the Institute for Technology Assessment and Systems Analysis at KIT in Karlsruhe, who wrote a background paper for a hearing of the Committee on Education, Research and Technology Assessment in the German Bundestag this week, points to the unique character of the texts ChatGPT generates. Existing plagiarism-detection software fails here, and new programs are being trained, so far without resounding success. He proposes a kind of watermark: certain patterns interspersed in the text that do not disturb human readers but are recognizable by machine.
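The watermark idea Albrecht describes can be illustrated with a toy sketch. The scheme below is a simplified version of the "green-list" token watermark discussed in the research literature, not the specific proposal from the background paper; all names and parameters are illustrative. A generator prefers words from a pseudorandom "green" subset of the vocabulary seeded by the previous word, and a detector measures how often that bias occurs:

```python
import hashlib
import random


def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary,
    seeded by the previous token (toy token-level watermark)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: fraction of tokens that fall in the green set of their
    predecessor. Watermarked text scores well above the base fraction."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_set(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

Ordinary text lands near the base fraction of 0.5, while text generated with the green-set bias scores close to 1.0; a real detector would turn this count into a statistical significance test rather than a raw fraction.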
When writing scientific texts, AI could help researchers get an overview of the relevant literature or publish in another language. Still, scientific publishers have good reasons to refuse texts written by AI systems: German copyright law presupposes a personal intellectual creation, so only texts created by humans can be protected. Nevertheless, the growing pressure to publish, especially during the qualification phase, could tempt some researchers to have their studies written by an AI system, not to mention term papers and dissertations.
Exam formats need to be changed
The first universities have already responded and changed their examination formats, relying more on in-person examinations than on term papers. In the social sciences and linguistics, however, this is difficult. In law, as in other humanities subjects, ChatGPT could draft counter-arguments to one's own position and thus help train controversial debate. The Higher Regional Court of Stuttgart is currently testing the use of AI in contract review and other routine legal tasks in a pilot project.
In schools, the process of searching for sources and building an argument, that is, the preparatory work for one's own text, could play a much greater role in the future. At the Technical University of Munich, the chair of Enkelejda Kasneci has developed the tool "PEER" (Paper Evaluation and Empowerment Resource), which is intended to support students in writing essays. Students can photograph or upload their text; the AI then examines it and provides personalized feedback with suggestions for improvement. Kasneci herself believes that weaker students in particular can benefit from such tools. But this also means that ChatGPT cannot replace teachers: the learning process needs continuous supervision. Otherwise, what has already been observed with other digital teaching and learning tools will repeat itself: the stronger students benefit enormously, while the weaker ones learn even less effectively. In learning to read, eye-tracking technology in AI-supported textbooks could detect whether children can follow what they are reading. Similar models are conceivable for language deficits and learning disabilities.
Among the greatest risks, AI researchers count not only copyright issues but also the danger that students feed ChatGPT large amounts of personal data over which they then lose control, because the system is operated by a private company in the United States. In addition, users can easily commit copyright infringement if ChatGPT reproduces copyrighted text that is similar or even identical to the original, without the user being able to recognize this.
FDP: Don't be afraid of artificial intelligence
Berlin was the first federal state to publish a handout on dealing with AI in schools, using ChatGPT as an example. It points to the possibilities for self-directed learning and for checking one's own learning progress, but also states that a text generated by ChatGPT and submitted as one's own must in any case be graded as insufficient. Other states, as well as the Conference of Ministers of Education and Cultural Affairs, will follow.
In a position paper on AI in education, the FDP parliamentary group in the Bundestag emphasizes above all the opportunities: "Fear of AI must not determine our actions." However, the FDP overshoots the mark when it assumes that AI-based learning tools will in future primarily take over the transfer of knowledge. They can relieve teachers, but they certainly cannot replace them. More sensible, on the other hand, is the FDP's proposal to use AI for pedagogical diagnostics and performance evaluation in order to identify support needs and ensure objective assessment criteria.
The Liberals strictly reject classifying chat robots as a "high-risk application", as is currently being discussed at the European level. In that case, "their use, for example in schools, would be practically impossible," according to the paper, which is available to the F.A.Z. In diagnostics, for example before school entry or during school entrance examinations, AI applications could be of good service, provided teachers know how to use them. "Ethical and data protection debates must be conducted realistically instead of impractically," the Free Democrats' position paper states.